AI Coding Agents: The Beginning of the Revolution, or the Beginning of the End?

There are moments in technology when a new tool arrives and, almost immediately, feels less like a product and more like a warning.

OpenAI’s Codex and Claude Code are beginning to feel like that.

At first glance, they are simply the latest advance in software development: clever assistants that can write code, inspect files, suggest fixes and turn rough instructions into working applications. Useful, certainly. Impressive, often. Occasionally infuriating.

But spend enough time with them and a more uncomfortable question begins to emerge.

Are we looking at the start of the AI revolution, or the beginning of the end for the way we currently understand work, cost and value?

At LDS, we like to think we operate towards the sharper end of technology development. We have been using tools such as Codex and Claude Code for some time now, and the experience has been both exciting and sobering.

Because these tools do not merely make existing processes faster. They start to challenge the assumptions underneath them.

From Chatbot Parlour Trick to Digital Worker

Most people have now experienced what we might call “ChatGPT tennis.”

It is the modern email exchange in which two people appear to be debating a point, while both are quietly using AI to sharpen their argument, soften their tone, or dismantle the other person’s position with suspiciously well-balanced prose.

One suspects many corporate inboxes are already full of machines politely arguing with one another on behalf of humans who have long since lost interest.

There is something faintly comic about that. But it is also a distraction.

The real power of AI is not in making emails sound more reasonable, or turning a blunt message into something that would survive HR scrutiny. The real power appears when AI stops merely drafting the conversation about work and starts doing the work itself.

A well-written prompt can now create an application, draft a specification, publish a website, analyse a dataset, summarise a contract or respond intelligently to an email. What once required a chain of meetings, requirements documents, technical scoping and delivery resource can now, at least in prototype form, happen in minutes.

That does not mean the output is always correct. It does not mean human judgement has become redundant. It certainly does not mean organisations can hand over the keys and hope for the best.

But it does mean something important has changed.

The distance between an idea and a working version of that idea is collapsing.

AI Is Not Replacing Thought. It Is Replacing Friction.

The most interesting thing about these tools is not that they are intelligent, though they often appear to be.

It is that they remove friction.

For years, good ideas inside organisations have died quietly. Not because they were bad ideas, but because turning them into something real was too slow, too expensive or too dependent on already stretched teams.

A small internal application. A reporting tool. A customer portal. A workflow automation. A proof of concept that might or might not go anywhere.

Traditionally, even modest digital projects carried a certain weight. Someone had to define the requirement. Someone had to design the process. Someone had to build it. Someone had to test it. Someone had to justify why it was worth doing in the first place.

AI does not remove all of that. But it does change the economics of starting.

A capable person with the right tools can now move from thought to prototype at remarkable speed. That is not a marginal improvement. It is a structural one.

The danger, of course, is that speed can be mistaken for certainty. A working prototype is not the same as a production-grade system. Code that runs is not necessarily code that is secure, maintainable or commercially sensible. An AI-generated answer can sound authoritative while being quietly wrong.

The future, then, is not one in which humans disappear from the process. It is one in which human judgement becomes more important, not less.

The question is not whether AI can produce output.

It clearly can.

The question is whether organisations are mature enough to govern what it produces.

Welcome to the Token Economy

Every revolution comes with a bill.

In the AI world, that bill is increasingly measured in tokens.

A token is, in simple terms, one of the units of text that an AI model processes. It might be a word, part of a word, punctuation, or even a space. It is an arbitrary-sounding concept, but it is rapidly becoming one of the core units of cost in the new digital economy.
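To make the economics concrete, here is a minimal sketch of how token-based pricing works in principle. The four-characters-per-token heuristic and the per-thousand-token rates are assumptions for illustration only, not any provider's actual tokeniser or price list.

```python
# Rough illustration of token-based pricing. The chars-per-token
# heuristic and the rates below are assumed values for the sketch.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate; real tokenisers split text into subwords."""
    return max(1, round(len(text) / chars_per_token))

def estimate_cost(prompt: str, response: str,
                  input_rate_per_1k: float = 0.003,    # assumed rate per 1,000 input tokens
                  output_rate_per_1k: float = 0.015) -> float:
    """Cost = input tokens x input rate + output tokens x output rate."""
    cost = (estimate_tokens(prompt) / 1000) * input_rate_per_1k
    cost += (estimate_tokens(response) / 1000) * output_rate_per_1k
    return cost

if __name__ == "__main__":
    prompt = "Summarise this contract clause in plain English. " * 20
    response = "The clause limits liability to direct losses only. " * 100
    print(f"Estimated cost of one exchange: {estimate_cost(prompt, response):.4f}")
```

The point of the arithmetic is not the numbers themselves but the shape of the bill: output is typically priced higher than input, and every retry or expansion of scope adds to both.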

This matters because businesses are used to buying labour and software in familiar ways. They understand salaries, contractors, subscriptions, licences and day rates. They understand, even if imperfectly, what a person costs and what a system costs.

AI blurs those categories.

An AI agent is not quite an employee. It is not quite a contractor. It is not quite software in the old sense either. It consumes resources as it works. It reasons, searches, drafts, rewrites, retries, generates and validates. Each step can carry a cost.

A careless prompt may be cheap. A poorly governed agentic workflow may not be.

That creates a commercial problem many organisations have not yet fully grasped. How do you budget for work when the unit of effort is no longer a day, an hour or a licence, but a stream of machine activity measured in tokens?

How do you control cost when an AI system can keep iterating?

How do you know whether the money spent was productive, wasteful or simply invisible?

In traditional delivery, businesses understand the language of budget overrun, rework and cost of non-conformance. They know what it means when a project takes too long, when a supplier misses the brief, or when poor quality creates additional expense.

AI will not abolish those problems.

It will simply give them new names.

The Cloud Lesson No One Should Forget

There is a useful comparison here with cloud computing.

Over the past decade, organisations moved vast amounts of infrastructure into hyperscale cloud platforms. There were many good reasons for doing so. Cloud offered resilience, flexibility, scalability, security and access to modern tools that would have been difficult to replicate internally.

Yet the financial story was not always as clean as the sales deck suggested.

Many organisations discovered that moving to cloud did not automatically reduce cost. In some cases, it merely changed the shape of the bill. Fixed capital expenditure became variable operational expenditure. Infrastructure became easier to create, but also easier to lose control of. Environments multiplied. Ownership blurred. Consumption rose.

Then came the surprise bills.

AI risks following the same pattern.

The strategic case is compelling. The capability is real. The productivity gains may be significant. But without cost visibility, governance and discipline, organisations may simply recreate the mistakes of cloud migration in a new and more beguiling form.

The question should not be: can AI do this task?

Increasingly, the answer will be yes.

The better question is: should it, at what cost, under whose supervision, and with what accountability?

From Time and Materials to Tokens and Outcomes

Most organisations would be wary of giving a supplier an open-ended brief on a pure time-and-materials basis with no commercial boundaries, no milestones and no clear definition of success.

Yet it is entirely possible to create a similar risk with AI.

A badly designed agentic process can consume tokens, call tools, generate output, encounter errors, retry, refine, expand its scope and continue working long after the commercial value of the exercise has become questionable.
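The commercial boundary the paragraph above calls for can be expressed in a few lines: a hard token budget and a retry ceiling wrapped around the agent loop. This is a minimal sketch, not a real framework; `run_step` is an invented stand-in for whatever model or tool call the agent makes, assumed to report its own token usage.

```python
# Sketch of guard-rails for an agentic loop: a hard token budget and a
# retry ceiling, so the process stops rather than iterating indefinitely.

from dataclasses import dataclass

@dataclass
class Budget:
    max_tokens: int      # hard ceiling on total token consumption
    max_retries: int     # failed attempts allowed before stopping
    tokens_used: int = 0

    def charge(self, tokens: int) -> None:
        """Record consumption; stop the run once the ceiling is breached."""
        self.tokens_used += tokens
        if self.tokens_used > self.max_tokens:
            raise RuntimeError("Token budget exhausted: escalate to a human.")

def run_agent(task, run_step, budget: Budget):
    """Run an agent step until success, budget exhaustion, or retry limit."""
    for _attempt in range(budget.max_retries + 1):
        result, tokens = run_step(task)   # each step reports its token use
        budget.charge(tokens)
        if result is not None:            # success: stop spending and return
            return result
    raise RuntimeError("Retry limit reached without a usable result.")
```

Either exception is the system admitting that the commercial value of continuing has become questionable, which is precisely the judgement an unbounded loop never makes.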

This is not an argument against AI. Quite the opposite.

It is an argument for taking it seriously.

The organisations that succeed with AI will not be those that simply buy the most tools or attach the word “agent” to every process. They will be the organisations that understand the work being performed, the controls required, the cost of the activity and the value of the outcome.

That means asking rather unfashionable questions.

What is the AI actually doing?

What decision is it supporting?

What output is it producing?

What would failure look like?

Who checks the result?

Who carries the risk?

And who is accountable when the system is confidently wrong?

These questions may sound prosaic, but they are exactly the questions that separate useful technology from expensive theatre.

The Taxman Will Eventually Notice

There is a deeper issue still.

If AI begins to perform more of the work currently done by people, what happens to the economic model built around human labour?

Most western societies rely heavily on taxing income. People work, they earn, they pay tax, and that tax helps fund the public realm: healthcare, education, defence, welfare, infrastructure and the machinery of the state.

But what happens if a growing share of productive effort is no longer performed by people in the traditional sense?

If an AI agent writes the code, drafts the proposal, processes the claim, analyses the report or handles the customer query, where has the taxable labour gone?

It may sound fanciful to ask whether there will one day be a token tax, but the question is not as absurd as it first appears. If tokens become a proxy for machine labour, and machine labour displaces taxable human labour, governments will eventually be forced to look at where value is being created and how it should be taxed.

Perhaps AI consumption will be taxed directly. Perhaps companies that automate at scale will face new obligations. Perhaps the entire relationship between work, income and public revenue will need to be reconsidered.

None of this will be simple. Governments are rarely quick to understand new technology, still less to tax it elegantly.

But the issue will not go away.

A society funded by taxing work cannot be indifferent to technologies that change who, or what, does the work.

The Beginning, Not the End

So are tools like Codex and Claude Code the beginning of the AI revolution, or the beginning of the end?

The answer is probably both.

They are the beginning of a new phase in which AI moves from assistant to actor. From drafting text to performing tasks. From clever interface to productive system.

But they may also mark the beginning of the end for a comfortable set of assumptions.

The assumption that software delivery must always be slow and expensive.

The assumption that labour must be human to create value.

The assumption that digital work can be governed using old commercial models.

The assumption that AI is merely another tool to be licensed, deployed and forgotten.

The opportunity is enormous. Organisations will be able to build faster, automate more, test ideas sooner and remove much of the dead weight that slows progress.

But the risks are equally real.

Cost models need to mature. Governance needs to catch up. Boards need to understand not only what AI can do, but what it costs, where it fails and who remains accountable.

The future will not belong to organisations that blindly hand work to machines. Nor will it belong to those that dismiss AI as overhyped novelty.

It will belong to those that can combine ambition with control.

AI is not the end.

But it is the beginning of a world in which work, cost and value are about to be renegotiated.

And that negotiation has already begun.