Can AI Replace Software Engineers? Vibe Coding Strikes Back
Over the past several years, extensive exploration of large language models (LLMs) and conversations with professionals in various roles have led me to a clear conclusion: AI really will replace software developers in companies. Spoiler: you’re not going to love the reason why.
Before diving into the tech-specific aspects, let me reference Cal Newport’s book, "Deep Work". Newport discusses the almost-lost skill of sustained concentration. He cites research demonstrating how social media destroys our ability to focus. Crucially, he points out that in the early 2000s, during the initial social media boom, management demanded that professionals, including journalists, engage with social media rather than write articles deeply and productively. Toxic work practices destroyed the ability to focus, even for those who didn't voluntarily spend time on social media.
Now, let's shift to programming. Programming involves two key aspects: decision-making and writing code. Imagine a programmer tasked with writing data from a CSV file into a database. At first glance, it seems simple:
Read from the file;
Form an SQL query;
Open a database connection and write into it;
Handle errors like connection interruptions or data mismatches.
A good junior programmer can handle this easily. But there’s a catch: step one only works if the file’s contents fit into RAM. What if they don’t? You could read and write line by line, or batch-process rows. The code becomes more complex, introducing more potential errors and requiring more maintenance. Developers must therefore judge, from context, whether the file-size issue will ever realistically arise. This nuanced decision-making is the intellectual core of programming. Writing the code, once the decisions are made, is pretty straightforward.
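The batched approach above can be sketched in a few lines of Python. This is a minimal illustration under assumptions of my own (SQLite as the target database, a hypothetical `records` table with `name` and `value` columns), not a production-ready loader:

```python
import csv
import sqlite3
from itertools import islice

BATCH_SIZE = 1000  # rows held in memory at any one time


def load_csv(csv_path: str, db_path: str) -> None:
    """Stream a CSV into SQLite in fixed-size batches."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS records (name TEXT, value REAL)"
        )
        with open(csv_path, newline="") as f:
            reader = csv.reader(f)
            next(reader, None)  # skip the header row
            while True:
                # Pull at most BATCH_SIZE rows, so memory use stays bounded
                # no matter how large the file is.
                batch = list(islice(reader, BATCH_SIZE))
                if not batch:
                    break
                try:
                    conn.executemany(
                        "INSERT INTO records (name, value) VALUES (?, ?)",
                        batch,
                    )
                    conn.commit()
                except sqlite3.Error:
                    # One policy choice among many: drop the bad batch
                    # and keep going rather than abort the whole load.
                    conn.rollback()
    finally:
        conn.close()
```

Note how much of the code is exactly the kind of complexity the article describes: the batching loop, the commit/rollback policy, the header handling. Each of those is a decision someone had to make, not something the task statement dictates.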
AI enthusiasts imagined that developers would make these decisions and write the instructions into chat prompts, letting LLMs quickly generate code and saving time. But in practice, the opposite happens. Even senior developers simply dump vague requests like "write a file to a database" into ChatGPT/Claude/Copilot/Cursor, then tweak the generated code. Ironically, machines end up performing the intellectual task – decision-making – while humans do the code cleanup! And when experts stop making decisions in their domain, their skills degrade.
Why are even experienced developers engaging in this self-sabotage? Because our brains naturally prefer to save energy. Developers are tired, distracted by social media, and conditioned by a tech culture that promotes "optimization" over effort. Crucially, management increasingly mandates using AI tools to speed up development. Companies pay for LLM licenses and expect clear productivity gains, so developers face pressure to offload ever more work to AI. This mirrors the destructive spread of social media, with workplace practices undermining human focus and skill.
There is a good study, "AI Meets the Classroom: When Do Large Language Models Harm Learning?". The researchers found that students using ChatGPT to answer questions performed worse on subsequent tasks. Further studies revealed that students who used AI for explanations improved slightly, whereas those who relied on it for ready-made solutions covered more topics superficially. Allowing copy-paste increased reliance on ready-made answers from 40% to 60%. These robust, unbiased studies match real-world observations and existing theories of learning and intellectual discipline – unlike the enthusiastic corporate “AI reports”.
Stack Overflow case
One counterargument developers raise against degradation is Stack Overflow: they claim years of using it haven’t degraded their skills. But firstly, we lack definitive data on that. Secondly, Stack Overflow strictly moderates against "do my work" requests: questions must be abstract enough to help others, so implementing an answer requires active cognitive effort. Stack Overflow usage resembles students who use AI for explanations, not ready-made answers. Current LLM usage in development is predominantly of the "do it for me" variety, which leads to significantly different outcomes.
Another argument, this one from management, is that “degradation doesn’t matter, as developers will soon be entirely replaced by AI”. They assume the AI-growth and human-skill-decline curves will intersect at just the right moment. But let’s realistically assess AI's current capabilities, focusing on production-ready code (PRC). PRC is robust, maintainable code suitable for real-world environments: it handles varied data inputs, meets security standards, and scales. Writing PRC is complex and time-consuming, often an order of magnitude harder than writing prototype code. Current LLMs can’t produce reliable PRC. They’re good at prototypes and scripts, but terrible at maintenance and complex projects.
Could future LLMs overcome these shortcomings? Possibly. But consider this: the decline in Stack Overflow questions is noticeable as developers switch to AI-generated solutions. Yet Stack Overflow was a critical training dataset for these LLMs - structured human questions and curated answers. If developers stop generating such content, what data will AI learn from, especially regarding emerging technologies? Official documentation alone is insufficient; AI needs large-scale human-generated content to evolve effectively. Disrupting Stack Overflow and other developer communities threatens future AI advancement.
Stack Overflow popularity chart
Summarizing the key points:
Developers are degrading by relying heavily on AI.
Aggressive AI adoption in workplaces accelerates this trend.
Without significant breakthroughs, AI can't reliably write production-ready code.
The decline of developer communities further degrades both human and AI capabilities.
Unlike other apocalyptic AI predictions, the risks highlighted here arise from AI failing to overcome hallucinations and continuing to improve only incrementally. Popular discussions typically emphasize dramatic threats like AGI, machine rebellion, and other cyberpunk scenarios, overlooking mundane yet toxic work practices.
Can degradation be prevented, and what are the consequences? Frankly, prevention seems impossible. Big CEOs and managers openly dream of replacing expensive developers with AI, driven by corporate greed, cost-cutting goals, and resentment towards developer-dominated job markets. Layoff decisions are already being made, not conspiratorially, but transparently driven by these incentives.
Consequences include significant setbacks for tech communities, fewer opportunities for new developers, and generational skill gaps. Software quality will gradually decline, but users will adapt, accepting occasional glitches as inevitable. The real threat lies in accumulated minor errors potentially causing cascading failures across global IT infrastructure.
I strongly believe something similar awaits us in many professional areas: disruption and/or degradation of human specialists → less open content made by these people → degradation of AI.
As for products and apps, you should understand that the overall accumulated degradation caused by relying on AI instead of developers will likely happen slowly - slow enough for managers to pretend everything is going according to plan. Yes, software quality will gradually decline, but this isn’t the first time automation has led to a general drop in quality while users were convinced everything was fine. Once upon a time, individually tailored shoes were accessible to a much larger portion of society than they are now. Not so long ago, support was provided by real people rather than mindless chatbots. Now, your weather app might say it’s sunny while you’re standing in the rain. Your smart fridge might order oat milk instead of regular. It’s fine, you’ll get used to it. That’s just how things are everywhere, and soon, you won’t have any alternative.
Conclusion
So, we’re heading into tricky territory. New tools promise speed, savings, and easy solutions, but at a hidden cost. AI is powerful, but relying on it too much can be dangerous. If software engineers stop thinking deeply and let AI handle all the hard parts, their skills will fade. Right now, AI is helpful but far from perfect; it can’t fully replace humans yet. The real risk is that we rush into using AI everywhere, forgetting how important human knowledge and experience really are. We should slow down and find a smart balance between human creativity and AI efficiency. If we don’t, we might face a future full of unreliable software and shaky technology. Humans built the digital world - we shouldn’t let ourselves become strangers in our own creation.
If you want to build a great product crafted by actual humans (experienced software engineers who understand code, context, and quality) - drop us a message. At molfar.io, we blend innovative thinking with sharp execution to help you build fast and smart.