
The vibe-coded trap

Published May 11, 2026

Why AI makes software easier to start and harder to finish

Artificial Intelligence is sitting at the centre of massive hype right now. Much of it is being driven by AI companies, chip manufacturers, and social media influencers claiming new models and autonomous agents can replace entire professions, software engineering and design included. The reality, however, is that AI works far better as an acceleration tool than a replacement for human expertise, even if it's not always being used that way.

While modern AI systems can generate code, build prototypes, and significantly speed up development workflows, they still depend heavily on human direction. That means engineers who understand architecture, system design, trade-offs, scalability, and infrastructure choices. In practice, developers are not being replaced. Their role is shifting from writing every line of code to guiding AI systems: defining structure, reviewing outputs, and making the critical engineering calls that actually matter.


This shift becomes especially visible when you look at real-world AI-generated code. It may appear correct at first glance, but it often hides unnecessary complexity such as redundant helper functions, overly defensive logic, or hard-coded assumptions layered on top of LLM outputs. These inefficiencies are largely invisible to non-technical users, which creates a gap between how good the output looks and how production-ready it actually is.
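To make that concrete, here is a hypothetical sketch (the function names are invented for illustration) of the pattern described above: a redundant helper, overly defensive logic, and a hard-coded fallback, followed by the same behaviour written directly.

```python
# Typical "AI bloat": a helper that re-implements a built-in check,
# a repeated type test, and a silent hard-coded fallback.

def is_valid_string(value):
    # Redundant helper: this just restates isinstance() and truthiness.
    if value is not None and isinstance(value, str) and len(value) > 0:
        return True
    return False

def normalise_username(username):
    # Overly defensive: bad input is silently swallowed rather than
    # surfaced, and "default_user" is a hard-coded assumption.
    if not is_valid_string(username):
        return "default_user"
    return username.strip().lower()

# The same behaviour, written directly, with failures made visible:
def normalise_username_clean(username: str) -> str:
    if not username or not username.strip():
        raise ValueError("username must be a non-empty string")
    return username.strip().lower()
```

Both versions pass a casual glance and a happy-path test; only the second tells you when something has gone wrong upstream, which is exactly the difference a non-technical reviewer will not see.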

That gap widens considerably when AI is used for full product development. Many non-technical founders see rapid early wins: quickly generated APIs, simple apps, and prototypes built with minimal friction. But once systems need iteration, scaling, or long-term maintenance, the cracks start to show. Suddenly there’s inconsistent architecture, poor separation of concerns, and what developers have started calling "AI bloat", where code evolves without any coherent design philosophy underneath it. These are the "vibe-coded" systems: things that work on the surface but fall apart under scrutiny.

Picture a founder who uses AI to spin up a working prototype in a weekend (think authentication, a basic API, and a front end). Three months later, they bring in an engineer to scale it. That engineer will spend the first two weeks not building, but untangling: renaming inconsistent variables, restructuring database calls, removing logic duplicated across five files. The AI got them to a certain point. But it also created a debt that a human had to repay.
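The "logic duplicated across five files" part of that untangling usually looks something like the sketch below (a hypothetical example; the table and function names are invented): the same lookup pasted into several modules with slightly different variable names, then consolidated into one shared function.

```python
# Before (repeated, with small variations, in users.py, orders.py, admin.py):
#   row = db.execute("SELECT * FROM users WHERE email = ?", (e,)).fetchone()
#   usr = conn.execute("SELECT * FROM users WHERE email=?", (email,)).fetchone()
#
# After: one shared function, so a schema change or bug fix happens once.

import sqlite3

def get_user_by_email(conn: sqlite3.Connection, email: str):
    """Single shared lookup, imported by every module that needs a user."""
    return conn.execute(
        "SELECT * FROM users WHERE email = ?", (email,)
    ).fetchone()
```

Nothing about the consolidated version is clever; its value is that the next change to the `users` table touches one function instead of five files.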


This is the tension at the heart of AI-assisted development. It is genuinely brilliant at getting you started. It is much less brilliant at thinking ahead.

There is also a fundamental limitation worth keeping front of mind: AI does not make decisions. It generates options based on probability. Choosing between architectures, cloud providers, scaling strategies, or database designs still requires human judgment. AI can suggest solutions, but it cannot reliably evaluate long-term system impact or business constraints, and that is precisely why skilled engineers remain essential. They are decision-makers, not just implementers.

Interestingly, this has reshaped hiring too. Where companies once needed separate frontend, backend, and DevOps engineers, a single strong engineer with AI support can now often cover multiple layers of the stack. Though it is worth noting: complex systems still demand specialisation and deep expertise.

What this means in practice is that softer, more strategic skills are becoming more valuable, not less: systems thinking, technical communication, the ability to review and critique AI output critically, and knowing when not to use AI at all. Increasingly, these are the skills that will make strong engineers shine.

Here's something the benchmarks won't tell you: real-world differences between AI models are often far less dramatic than the marketing suggests. Performance depends heavily on prompting quality, context, and the expertise of the person using the tool. Even a smaller model used well can outperform a larger one used poorly. AI systems are also inherently non-deterministic, producing different outputs across sessions, and tend to perform best in heavily represented ecosystems like Python and JavaScript, simply because of how they were trained.

Ultimately, AI is best understood as a force multiplier, not an autonomous engineer. It increases speed, lowers the barrier to entry, and genuinely enhances productivity, but it does not replace core engineering thinking.


The real transformation is not that engineers are becoming obsolete. It is that their responsibilities are evolving: less boilerplate, more orchestration, validation, architecture design, and high-level technical decision-making. The differentiator is no longer whether someone uses AI; everyone does. It is how effectively they combine it with strong technical judgment to build systems that are not just fast to create, but stable, scalable, and maintainable for the long run. 

For engineers, that means investing in judgment, not just output. For founders and product teams, it means understanding that AI can accelerate building, but it cannot substitute for the expertise needed to build well.
