There’s a tendency in technology to assume that every new problem requires a new model

It rarely does.

Long before AI arrived, organisations had already spent decades working out how to manage risk, change and accountability in complex systems. In IT, that thinking found its way into ITIL. Change Advisory Boards. Continuous improvement. Structures that, at their core, were designed to answer a very human question.

What is about to change, and who is prepared to stand behind it?

For the most part, it worked.

Change was something you could point to. A release. A configuration shift. A decision taken, discussed, approved. Even in faster environments, there was still a moment where responsibility became clear. A record existed. A name sat against it.

AI doesn’t remove that need. It just removes that moment.

Because the change is no longer an event.

It’s a condition.

Agentic systems don’t wait to be deployed before they begin influencing outcomes. They are already inside the process, adjusting, optimising, nudging decisions in ways that don’t announce themselves as “change” in the traditional sense. There is no ticket raised. No CAB meeting scheduled. No clean handover from intent to execution.

Just a continuous stream of small decisions, each one rational in isolation, but collectively capable of shifting outcomes in ways that are harder to see, and harder still to own.

And this is where the old models begin to stretch.

The Change Advisory Board was never really about slowing things down. It was about ensuring that change carried accountability with it. That someone, somewhere, had looked at the risk and accepted it.

But what happens when change is no longer something you can isolate?

You don’t retire the CAB. You ask something different of it.

Instead of approving individual actions, it begins to define the space in which actions are allowed to happen. The boundaries. The intent. The acceptable level of autonomy. It moves upstream, quietly, from reviewing decisions to shaping the conditions under which decisions are made.
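
One way to picture "defining the space in which actions are allowed to happen" is policy-as-code: the board approves an envelope once, and every agent action is checked against it at runtime. This is a minimal sketch under assumed, illustrative names (`PolicyEnvelope`, `within_envelope`, a discount limit), not any particular framework.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyEnvelope:
    """The boundaries a governance board signs off, instead of each action."""
    max_discount_pct: float              # hard limit on autonomous discounting
    allowed_actions: set[str] = field(default_factory=set)  # actions needing no review

def within_envelope(action: str, discount_pct: float, policy: PolicyEnvelope) -> bool:
    """True if the agent may act autonomously; False means escalate to a human."""
    return action in policy.allowed_actions and discount_pct <= policy.max_discount_pct

policy = PolicyEnvelope(max_discount_pct=10.0,
                        allowed_actions={"apply_discount", "send_reminder"})

print(within_envelope("apply_discount", 8.0, policy))   # inside the envelope: True
print(within_envelope("apply_discount", 25.0, policy))  # outside the envelope: False
```

The point is not the mechanism but where accountability sits: a named person approved `policy`, so every autonomous action the check permits still carries their signature.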

And alongside that, something else has to evolve.

Continuous improvement, often the overlooked part of ITIL thinking, starts to matter again. Not as a quarterly exercise, but as a constant act of observation. Watching how systems behave. Not just whether they work, but whether they are still aligned with what the organisation believes to be right.

Because AI systems don’t just execute. They drift. They optimise. They learn patterns that may not have been intended.
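
That kind of drift can be watched for mechanically: compare the recent distribution of an agent's decisions against the distribution that was originally signed off, and flag when the gap exceeds a tolerance. A sketch, with the metric (total variation distance) and threshold chosen purely for illustration:

```python
def total_variation(baseline: dict[str, float], recent: dict[str, float]) -> float:
    """Total variation distance between two decision-frequency distributions."""
    keys = baseline.keys() | recent.keys()
    return 0.5 * sum(abs(baseline.get(k, 0.0) - recent.get(k, 0.0)) for k in keys)

def drifted(baseline: dict[str, float], recent: dict[str, float],
            tolerance: float = 0.15) -> bool:
    """True when observed behaviour has moved further from the approved
    baseline than the organisation agreed to tolerate."""
    return total_variation(baseline, recent) > tolerance

approved = {"approve": 0.70, "refer": 0.25, "decline": 0.05}  # what was signed off
observed = {"approve": 0.88, "refer": 0.07, "decline": 0.05}  # what is happening now

print(drifted(approved, observed))  # True: the system no longer behaves as approved
```

Nothing here failed in the traditional sense; every individual decision was "working". The alert fires because the pattern has quietly moved away from what was agreed.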

Which means governance cannot sit at the edge of the process anymore.

It has to live inside it.

This is the gap many organisations are now feeling, even if they haven’t quite named it yet. The frameworks are still there. The controls still exist. But they were designed for a world where decisions had edges. Where you could draw a line around a moment and say, this is where responsibility sits.

That line is becoming harder to see.

And so the question changes.

Not should this change happen, but who owns the decisions this system is making right now?

That is where the idea of the Digital Trustee starts to feel less like a concept and more like a necessity.

Not as a replacement for what came before, but as its natural evolution. A way of carrying forward the discipline of ITIL into a world where decisions are no longer queued up for approval, but happening continuously, often invisibly, and always at scale.

The principle, in truth, hasn’t moved at all.

Organisations still need to understand what is happening inside their systems. They still need to be able to explain it. And when it matters, they still need to be able to say, without hesitation, who is accountable for the outcome.

What has changed is where that accountability needs to live.

Not in the meeting room, after the fact.

But inside the system, in real time.