“AI won’t replace programmers, but it will become an essential tool in their arsenal. It’s about empowering humans to do more, not do less.”- Satya Nadella, CEO of Microsoft
“AI tools enable threats as well as associated cybersecurity measures simultaneously, but ironically AI, however it advances, badly requires cybersecurity to protect itself.”- Author
Besides providing a great boost to productivity and innovation, AI, with its fast-paced and relentless advancement, poses serious threats of job displacements. Can AI replace software engineers and cybersecurity professionals? We will address that in this blog post with the support of recent research studies.
Software Development and AI
Peter Naur, a celebrated computer scientist, wrote “Programming in Theory Building” in 1985, which was later reprinted in his collection of works, “Computing: A Human Activity” in 1992. In that article, he brought out the importance of designing and coding a program and, most importantly, the inextricable elements of the human mind and computer programs. To illustrate, every program is based on a theory as perceived by the human programmer. This theory or philosophy is so native to the original coding team, it is present both in the form of explicit and tacit knowledge. He brilliantly points out through his case studies the complexity of the task of passing such theory to the next team for the update of the same program.
In the case study, he elaborates how an accomplished computer program developed by team A was found to be difficult to update to an additional extension by team B, despite the presence of detailed text and documentation of the program. In short, such clear documentation failed in conveying the theory adopted by team A while developing the code. This fact became evident when team A pointed in its review that team B should have used the inherent facilities instead of additions in the form of patches, which only destroyed the power and simplicity of the program.
Now let’s add a twist to this story. If all the coders of team A have left the organization, team B will not have any recourse to get the program reviewed, resulting in mediocre code and wasted resources collectively for both teams.
Every code has a theory – the living, breathing system providing context and rationale – equivalent to that of the thinking mind in the human body. So, when code is developed by AI, it is not only theory-less, but also lacks the context in which it will be deployed, namely, the organization and the associated expectations of its environment. Worse, it will lead to scores of mediocre developers who might have just depended on ready-made codes from large language models, without struggling to find a solution through their thinking process – the natural growth path of any developer.
Human developers are required to maintain the theoretical framework in line with the business and suitable software architecture. They know that code has a purpose and a lifecycle, and how it should evolve and reflect the thoughts of its programmers. Human developers, nevertheless, will collaborate with AI, but it should be intentional, with only laborious and tedious tasks delegated to AI, retaining the art and craft of human developers. LLMs are useful for mechanical tasks, but the core work of programming – the theory-building exercise that transforms business requirements into software models – will and must remain a deeply human activity.
Syntax vs. Semantics and the Inherent Limitations of LLMs
Syntax is the set of rules for how words are arranged to form grammatically correct sentences, while semantics is the meaning of those words and the sentence as a whole.
LLMs have shown good capability in automated code generation from natural language descriptions and in remediation of code issues. However, code comprises both syntax and semantics components and LLMs, though good in handling syntactic issues, cannot seem to handle program semantics well. Semantics involve implicit assumptions, domain- and program-specific knowledge, and changing it without understanding the implications clearly can lead to compromise of functionality, security or safety. Thus, LLMs cannot replace a software development team’s collective intelligence. The future of AI-enabled software engineering, where both human and machine intelligence co-exist, is shown in the below image:

The Indispensability of Cybersecurity Expertise
In a security incident reported at Amazon in July 2025, the official Amazon Q extension for Visual Studio Code was compromised to include a prompt to wipe the user’s home directory and delete all the AWS resources. Though AWS had issued a statement that its production services were unaffected, it did not address the root cause of the incident, such as not enough human checks and too much reliance placed on AI to check the security of the code. In addition, human developers are severely overworked and there is not enough room for testing or vigilance. This incident points to the importance of dedicated, sufficiently staffed cybersecurity teams.
Cybersecurity and AI: An Interdependent Relationship
AI tools are generated every day and a variety of AI browsers with competing functionalities are available for use. But all these AI functionalities lack protection and come with serious vulnerabilities. Prompt injection attacks have become one of the main challenges and there is no clear solution. While AI on one hand can automate attacks on infrastructure and pose a serious challenge to cybersecurity measures, it also requires cybersecurity to be responsibly implemented. AI cannot replace cybersecurity expertise as they strengthen and complement each other.
It is clear that software engineers and security professionals have vital roles to play and can never be dispensed of despite the advent of AI tools.