05/02/2026
The evolution of AI-assisted software development - By Wil Jones, Technical Director, Propel Tech
Over the past 18 months, platforms for AI-assisted coding have moved at a pace that few areas of enterprise technology can match. What started as experimental tools has rapidly evolved into a new generation of agents capable of reasoning across entire codebases, operating in the cloud, and reshaping how software teams think about productivity, scale and risk.
What’s most interesting is not any single product or provider, but the trajectory: how these platforms have grown, where their real strengths lie, and what that suggests about the future of software delivery.
From open-source experimentation to enterprise focus
The early wave of AI coding agents, such as Cline and Roo Code, largely emerged from the open-source community. These tools were often lightweight, editor-integrated agents that could be instructed to work through a task until completion. Many of them looked similar because they were solving the same early problem: could an AI reason over a real codebase and make coherent changes without constant supervision?
A defining feature of these early platforms was transparency. By open-sourcing their agents, teams could show exactly how decisions were made and how changes were produced. This was important, particularly for enterprise environments, where trust, auditability and control are prerequisites for adoption. Commercialisation wasn’t the immediate goal; proving the concept was.
These tools were ahead of the market for a while, demonstrating that long-running, autonomous code changes were not only possible but practical. They were also, by necessity, model-agnostic: because they sat on top of third-party models, they had to make assumptions about how those models behaved, which set the stage for the next phase.
The shift to model-native platforms
As these ideas gained traction, the centre of gravity moved. Foundation model providers began releasing model-native coding platforms - tools built specifically around the behaviour and strengths of their own models. Examples include GitHub Copilot, OpenAI’s Codex-based tooling, and Anthropic’s Claude Code. Cloud providers followed suit with offerings such as Amazon Q Developer and Google’s Gemini-based developer tools.
This shift matters because when a platform is built by the same organisation that trains the model, it can take advantage of nuances that generic tools simply can’t: prompting strategies, reasoning depth, error handling and tool orchestration can all be tuned to how the model actually behaves in practice.
At the same time, major ecosystem players began to enter the space. GitHub Copilot brought AI assistance directly into mainstream developer workflows. Cloud providers introduced their own developer-focused tools, embedding AI into existing infrastructure and governance models.
The result is that today’s landscape is crowded, but not chaotic. Most serious providers now offer some combination of inline assistance, agent-based workflows and enterprise controls. The differentiation lies less in whether they can write code, and more in how reliably they can operate at scale.
From writing code to understanding systems
One of the most underappreciated shifts in these platforms is their emphasis on context. Early tools treated every request as a fresh interaction, re-analysing large parts of a codebase each time. That approach doesn’t scale well and increases the risk of subtle errors.
Modern platforms increasingly encourage an upfront analysis phase: understanding the architecture, conventions, dependencies and patterns of a system once, and then reusing that understanding across tasks. This turns the agent from a reactive assistant into something closer to a system-aware collaborator.
The implications are significant. Instead of asking an AI to “add a feature”, teams can define guardrails, standards and expectations that persist across workstreams. Over time, the platform begins to reflect how a particular organisation builds software, not just what it builds.
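In practice, this persistent guidance often takes the shape of a project-level instructions file that the agent reads before every task. The fragment below is a hypothetical illustration of the pattern rather than any specific platform’s format (naming varies: Claude Code, for example, reads a CLAUDE.md file, while Cline reads .clinerules); the paths and conventions shown are invented for the example.

```markdown
# Project conventions (read by the coding agent before each task)

## Architecture
- Layered service: controllers call services, services call repositories.
- Never access the database directly from a controller.

## Standards
- All public methods require unit tests; follow the existing patterns in /tests.
- Use the error-handling conventions described in docs/errors.md.

## Guardrails
- Do not modify anything under /infrastructure without flagging it for human review.
- Keep changes scoped to the ticket; no opportunistic refactors.
```

Because the file lives in the repository, these expectations persist across tasks and across agents, rather than being restated in every prompt.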
The rise of cloud-based agents
Another major evolution is the move away from the developer’s local machine as the centre of activity. Several platforms now support agents that can run entirely in the cloud, operating directly against codebases hosted on services like GitHub. In these models, agents can be started, paused and reviewed independently, without relying on a developer’s laptop staying online.
This changes the economics and the ergonomics of development. Work can continue without a human supervising every step, multiple tasks can run in parallel across different codebases and long-running changes no longer block individual developers.
It also lowers the barrier to entry. For simpler applications, it’s becoming possible for people with limited technical backgrounds to assemble functional software by using agents rather than writing code line by line.
Productivity gains and new constraints
Writing thousands of lines of code in minutes is no longer remarkable. What is remarkable is how this flips traditional assumptions: review, validation and testing can now take longer than the initial implementation.
Systems still need people to run them, so understanding the system remains a core responsibility of the developer. The new challenge is maintaining that understanding when agents are writing the majority of the code.
Agents operate with the permissions they’re given. In environments where credentials, infrastructure access and production systems are in play, guardrails are essential.
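The permissions point can be sketched in code. The example below is a hypothetical illustration, not any platform’s actual API: a thin gate that checks each tool invocation an agent requests against an explicit allowlist, and requires a human decision for actions with real-world impact. All names (ToolCall, ALLOWED_TOOLS, and so on) are invented for the sketch.

```python
# Hypothetical sketch: gating an agent's tool calls behind an explicit allowlist.
# ToolCall, ALLOWED_TOOLS etc. are illustrative names, not a real platform API.
from dataclasses import dataclass


@dataclass
class ToolCall:
    tool: str    # e.g. "read_file", "run_tests", "deploy"
    target: str  # what the call operates on


# Only explicitly granted capabilities are allowed; everything else is denied.
ALLOWED_TOOLS = {"read_file", "edit_file", "run_tests"}

# Actions with production impact always require a human decision.
REQUIRES_HUMAN_APPROVAL = {"deploy", "delete_branch"}


def authorise(call: ToolCall, human_approved: bool = False) -> bool:
    """Return True if the agent may execute this call."""
    if call.tool in REQUIRES_HUMAN_APPROVAL:
        return human_approved  # never autonomous for these actions
    return call.tool in ALLOWED_TOOLS


# Routine edits pass; production actions are blocked unless a human signs off.
print(authorise(ToolCall("edit_file", "src/app.py")))   # True
print(authorise(ToolCall("deploy", "production")))      # False
```

The design choice is deny-by-default: the agent starts with nothing, and every capability it holds is one someone consciously granted.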
Why integration will define the next phase
Looking ahead, the most important advances are likely to come from integration rather than raw model performance. We’re already seeing platforms experiment with connections into tools such as issue trackers, documentation systems and test frameworks, as well as early integrations with productivity suites and collaboration platforms.
The logical next step is end-to-end workflows: from structured requirements, to implementation, to automated testing and validation. Alongside that sits a trust challenge: ensuring that actions with real-world impact always include appropriate human oversight.
There’s also a subtler trend at play. As platforms introduce reusable workflows and organisation-wide “skills”, switching costs will increase. Teams won’t just choose a model; they’ll choose an ecosystem.
A technology still finding its shape
What’s striking is how early this journey still feels. The tools are already useful, but the operating models around them are still forming. Standards, patterns and best practices are emerging in real time.
For technology leaders, the opportunity is not to chase every new release, but to understand the direction of travel. These platforms are moving from assistants to agents, from local to cloud-based, and from isolated tools to deeply integrated systems.
The organisations that benefit most will be those that combine experimentation with discipline, embracing capability while being thoughtful about governance, trust and human oversight.
The future of software development won’t be defined by whether AI writes code. It will be defined by how well we design systems where humans and intelligent agents work together, each playing to their strengths.
AI coding platforms: a quick fact file
- What they are: Platforms that combine large language models with tooling that can read, reason over and modify real-world codebases.
- How they’ve evolved: From simple, editor-based assistants to long-running agents capable of operating across entire systems.
- Who’s building them: Open-source communities, foundation model providers, cloud platforms and developer tooling companies.
- Where they run: Increasingly in the cloud, connected directly to codebases rather than individual machines.
- What they’re good at: Rapid implementation, pattern consistency, boilerplate reduction and parallelising work.
- What still needs humans: Requirements, architectural decisions, security, validation and accountability.
- What’s coming next: Deeper integration with requirements, testing and operational tooling, and clearer governance models to match their growing autonomy.
Connect with Wil on LinkedIn.