Over recent months I have written a good deal about the changing nature of software in the age of Agentic AI. In those reflections I argued that the issue is not simply that software is evolving, but that the assumptions beneath much of the software economy are being exposed. Traditional enterprise systems, and indeed much of the Software-as-a-Service model, were designed around human interaction. The interface, the workflow, the controls, the permissions, the escalation paths: all of it assumed that a person sat in the middle of the process. That world is now beginning to shift.
What is arriving in its place is not merely a smarter application. It is the emergence of digital workers: always-on, increasingly autonomous systems able to reason across tasks, act on goals, orchestrate other tools, and complete meaningful units of work. That is a very different proposition from software as a passive instrument. It means that instead of people using software to do work, software will increasingly do work on behalf of people.
This distinction matters more than many leaders currently appreciate. The question is no longer whether organisations can acquire AI models, copilots or agents. The real question is whether the enterprise environment into which those systems are being introduced is actually fit for purpose. In many cases, it is not. And that, in my judgement, is one of the principal reasons why so many promising AI programmes still struggle to move from pilot to scaled operational reality.
The wrong pitch for the new game
The evidence that this is becoming a structural issue rather than a passing technical inconvenience is now substantial. Stanford’s 2025 AI Index reported that 78 percent of organisations said they were using AI in 2024, up sharply from 55 percent the previous year. Capgemini’s 2025 research on AI agents found that although momentum is clearly building, only 2 percent of organisations had implemented agents at full scale, while 23 percent had launched pilots and the majority were still exploring or preparing. Deloitte’s Tech Trends 2026 makes the underlying point even more directly: many enterprises are discovering that their existing computing strategies were never designed for production-scale AI inference, and that cloud-first assumptions alone are no longer sufficient when the economics and latency of AI workloads begin to bite.
That combination is telling. Adoption is rising. Ambition is rising. Investment is rising. Yet scaled operational success remains limited. This usually indicates that the problem is not enthusiasm, and not even the models themselves. It indicates that the foundations are wrong.
I often frame it in far simpler terms. Trying to deploy digital workers on traditional enterprise architecture is like attempting to play padel on a tennis court, or asking a modern rugby side to perform at pace on a pitch marked and maintained for a completely different game. The lines are there, the surface looks respectable, and the rules appear familiar, but the conditions are fundamentally misaligned with what is required. What once worked perfectly well for structured play, controlled movement, and human-led decision making simply does not translate to an environment where speed, autonomy, and continuous motion define success.
There are four reasons for this. First, compute. Generative and agentic AI place radically different demands on infrastructure from conventional enterprise software. Inference at scale requires sustained access to accelerated compute, often specialised GPUs, increasingly optimised networking, and a cost model that remains manageable when requests are no longer occasional but continuous. Deloitte’s infrastructure work highlights that some enterprises are now seeing AI-related monthly bills in the tens of millions, even as token costs have fallen dramatically, because usage has expanded faster than efficiencies have arrived. In other words, AI at scale can become cheaper per interaction and still vastly more expensive overall.
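The apparent paradox of falling token prices and rising bills is simple arithmetic. A minimal sketch, with entirely hypothetical numbers, shows how a tenfold drop in unit cost can still produce a fivefold increase in total spend once agents move from pilot to production:

```python
# Illustrative, hypothetical numbers only: per-token inference cost falls
# 10x year over year, but agentic usage grows 50x as pilots scale up.
cost_per_1k_tokens_2024 = 0.010   # USD per 1,000 tokens (hypothetical)
cost_per_1k_tokens_2025 = 0.001   # 10x cheaper per interaction
monthly_tokens_2024 = 2e9         # 2 billion tokens: a handful of pilots
monthly_tokens_2025 = 100e9       # 100 billion tokens: agents in production

bill_2024 = monthly_tokens_2024 / 1000 * cost_per_1k_tokens_2024
bill_2025 = monthly_tokens_2025 / 1000 * cost_per_1k_tokens_2025

print(f"2024 monthly bill: ${bill_2024:,.0f}")   # ≈ $20,000
print(f"2025 monthly bill: ${bill_2025:,.0f}")   # ≈ $100,000
```

Cheaper per interaction, five times more expensive overall: this is why leaders need to model the economics of inference at production volumes, not pilot volumes.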
Second, workflow architecture. Human-centred processes were designed around manual review, episodic decision-making, and interfaces intended to guide people step by step. Agents do not operate in that way. They plan, trigger, call, retrieve, update, and escalate across systems. If the workflow itself assumes a person at every junction, then the agent becomes constrained, inefficient, and unreliable. It is not enough to insert AI into an old flow and expect transformation. In many cases the flow itself must be redesigned.
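The contrast can be made concrete with a deliberately simplified sketch. The steps, thresholds and function names below are hypothetical; the point is structural: a human-centred flow pauses at every junction, while an agent-native flow completes routine work and surfaces only the genuine exception:

```python
# Hypothetical invoice-processing flows, for illustration only.

def human_centred_flow(invoice):
    # Every step assumes a person reviews and clicks before continuing.
    steps = ["open_invoice", "check_po_match", "check_amount", "approve"]
    return [(step, "waiting for human") for step in steps]

def agent_native_flow(invoice, escalate_threshold=10_000):
    # The agent retrieves, checks and posts on its own; only a genuine
    # exception reaches a person, with a clear reason attached.
    if invoice["amount"] > escalate_threshold:
        return ("escalated", "human judgement required on high-value exception")
    return ("completed", "matched, validated and posted autonomously")

print(agent_native_flow({"amount": 450}))      # routine work completes itself
print(agent_native_flow({"amount": 25_000}))   # only the exception surfaces
```

Inserting an agent into the first flow would leave it idling at four human gates; re-authoring the flow is what releases the value.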
Third, identity and security. Most enterprise security models are built around human users, their credentials, and the risk patterns associated with human behaviour. Digital workers introduce a new category of actor. They need permissions, role boundaries, audit trails, exception handling, and real-time supervision. They also create new attack surfaces, because the enterprise must now distinguish between authorised machine activity and malicious or compromised machine activity. Security architecture designed only for people will not be enough.
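What a machine identity with an explicit role boundary and an append-only audit trail might look like can be sketched in a few lines. This is an illustrative model, not a reference to any real identity product, and the agent and action names are invented:

```python
import datetime

class AgentIdentity:
    """Hypothetical sketch: a digital worker with a scoped permission
    boundary and an append-only audit trail of every attempted action."""

    def __init__(self, agent_id, allowed_actions):
        self.agent_id = agent_id
        self.allowed_actions = set(allowed_actions)  # explicit role boundary
        self.audit_log = []                          # append-only trail

    def act(self, action, target):
        timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        permitted = action in self.allowed_actions
        # Every attempt is logged, whether permitted or not: the audit
        # trail must capture denied activity as well as authorised activity.
        self.audit_log.append({"when": timestamp, "agent": self.agent_id,
                               "action": action, "target": target,
                               "permitted": permitted})
        if not permitted:
            # Out-of-boundary requests are denied, not silently dropped;
            # a fuller design would escalate this to a human supervisor.
            raise PermissionError(f"{self.agent_id} may not '{action}' on {target}")
        return f"{action} executed on {target}"

# A claims-triage agent may read and annotate, but never approve payment.
worker = AgentIdentity("claims-triage-01", ["read_claim", "annotate_claim"])
worker.act("read_claim", "CLM-1042")
try:
    worker.act("approve_payment", "CLM-1042")
except PermissionError:
    pass
```

The essential discipline is that denial and escalation are first-class events: distinguishing authorised machine activity from compromised machine activity depends on logging both.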
Fourth, governance. This, in my view, is the most underestimated issue of all. It is one thing to govern a human workforce using policies, training, supervisory hierarchies and compliance routines. It is quite another to govern autonomous or semi-autonomous digital labour that can take actions at speed and at scale. Governance for agents must deal with model behaviour, explainability, escalation thresholds, statutory duties, ethical constraints, and the question that ultimately matters most: who is accountable when the machine acts?
What the global evidence is already telling us
These are not theoretical concerns. They are already visible in sectors around the world. Klarna reported that its AI assistant handled two-thirds of customer service chats, carried out work equivalent to around 700 full-time agents, and reduced repeat enquiries while improving resolution times. That is not simply a chatbot story. It is an operating model story. The system works because the surrounding architecture, process design and service model allow it to work.
JPMorgan’s long-standing COiN platform offers a different but equally important lesson. It showed years ago that machine intelligence could analyse complex legal documentation in seconds rather than consume vast quantities of expert manual time. The lesson today is not merely that AI can process contracts. It is that when machine reasoning is embedded into the operating fabric of an institution, the institution itself changes. Human expertise is redeployed upward, not just displaced sideways.
In industry, Siemens has moved beyond generic AI rhetoric and into industrial copilots and AI agents intended to automate parts of engineering and production workflows. In telecommunications, Telefónica reported progress on autonomous network operations, including multiple Level 4 use cases across its group. In public governance, Singapore launched a dedicated Model AI Governance Framework for Agentic AI in January 2026, explicitly recognising that organisations now need governance designed for systems capable of reasoning, planning and acting on behalf of humans. Across all of these examples, the pattern is the same: value comes not from the model in isolation, but from the readiness of the surrounding environment.
This is why I have consistently argued that the debate around AI cannot be reduced to model choice, vendor selection or interface novelty. Those are important, but they are not decisive. Decisive advantage will come from architectural readiness.
We are entering a period in which enterprises will need to think much more carefully about where AI workloads should run, which processes are suitable for machine-led execution, how digital workers are provisioned and supervised, and how operating models are redesigned around human-machine collaboration. Deloitte now talks about a shift from a simplistic cloud-first posture toward a more strategic hybrid model, combining cloud elasticity, on-premises consistency and edge immediacy according to workload need. That is not a technical footnote. It is a strategic signal.
The mature organisations are beginning to understand that AI is not a bolt-on feature. It is a new layer of operational capability that places demands right across the stack. PwC’s 2026 AI research makes a similar point from another angle: the firms capturing the greatest value from AI are far more likely than others to have eliminated outdated and costly applications, systems and infrastructure. In other words, the businesses that achieve better returns are not merely experimenting harder. They are redesigning more deeply.
From tools to digital colleagues
There is also an important cultural dimension to this transition. One of the mistakes I still see in boardrooms is the tendency to treat agents primarily as a substitution technology. That is much too narrow a reading. The more useful frame is augmentation first, autonomy second. Human beings remain essential where judgement, context, creativity, accountability and long-horizon strategic thinking are involved. But between wholly manual work and fully autonomous work lies a vast middle ground in which digital workers can transform productivity, responsiveness and scale.
Microsoft’s 2025 Work Trend Index described the rise of what it called the “Frontier Firm”, in which agents become digital colleagues embedded into teams and workflows. That language is important. It implies that organisational design itself is beginning to change. We are moving toward mixed workforces in which some tasks are executed by people, some by machines, and many through collaboration between the two. Once one accepts that premise, the redesign challenge becomes obvious. We do not need better versions of yesterday’s workflow. We need new workflows built for mixed labour systems.
This has implications for every business function. In customer operations, the future will not be won by replacing every person with a bot, but by designing service architectures in which digital workers handle triage, routine resolution, knowledge retrieval and orchestration, while human teams concentrate on judgement-heavy cases and relationship value. In finance, the issue is not merely automating reconciliations, but creating auditable agent pathways that can operate within policy limits and escalate exceptions cleanly. In supply chains, the opportunity lies in moving from dashboard awareness to machine-supported intervention. In telecoms and infrastructure operations, it lies in combining expert engineers with AI systems that can detect, diagnose and sometimes remediate at machine speed.
Seen properly, the future enterprise is not one in which humans disappear. It is one in which human capability is amplified by reliable digital labour.
What leaders should do now
So what, in practical terms, should leaders prioritise? First, they should stop treating AI readiness as a narrow data science or innovation issue. Agent readiness is an enterprise architecture issue, an operating model issue, a security issue and a governance issue.
Second, they should identify where current infrastructure becomes a bottleneck under real production conditions. A proof of concept often hides the true compute, latency and integration challenges that appear only at scale. Leaders need to understand the economics of inference, not just the excitement of the demo.
Third, they should redesign workflows rather than merely automate them. If the process assumes human clicks, human interpretation and human handoffs at every turn, then agentic value will remain partial. Work has to be re-authored for a human-AI environment.
Fourth, they should create explicit identity, access and supervision models for digital workers. That means machine credentials, policy boundaries, logging, exception management and clear escalation paths.
Fifth, they should build governance that is proportionate to autonomy. Not every agent requires the same level of scrutiny, but every organisation needs a framework that clarifies what an agent may do, what it must never do, when it must ask, and how its actions are reviewed.
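One way to make "may do, must never do, must ask" concrete is a reviewable policy table mapping each class of agent action to a governance tier. The tiers and actions below are purely illustrative, but the structure is the point: the policy is explicit, auditable, and defaults to asking when an action is unrecognised:

```python
# Hypothetical governance policy: each action class maps to exactly one tier.
POLICY = {
    "retrieve_knowledge":   "autonomous",   # may do without review
    "draft_customer_reply": "autonomous",
    "issue_refund_small":   "ask_human",    # must request approval first
    "change_credit_limit":  "ask_human",
    "delete_customer_data": "forbidden",    # must never do
}

def decide(action):
    # Unknown actions default to escalation: a default-deny posture.
    tier = POLICY.get(action, "ask_human")
    if tier == "forbidden":
        return "blocked"
    if tier == "ask_human":
        return "escalated for approval"
    return "executed"

assert decide("retrieve_knowledge") == "executed"
assert decide("delete_customer_data") == "blocked"
```

A framework of this shape scales scrutiny with autonomy: low-risk actions run freely, higher-risk actions require approval, and anything outside the policy is escalated rather than attempted.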
Above all, leaders should remember that the window is open precisely because so many organisations are still in transition. Capgemini’s figures show that large-scale deployment remains rare. That means the race is not over. But it also means that those who use this period to strengthen the foundations may create a durable advantage when digital workers move from novelty to normality.
A final thought
For me, this is one of the most important strategic questions in business today. We have spent the last twenty years optimising enterprises around software consumption. The next phase will be about work execution: who performs it, how it is orchestrated, where accountability sits, and what kind of architecture can sustain it.
That is why I believe the organisations that perform best over the next two to five years will not simply be those that buy the most AI. They will be the ones that rebuild most intelligently around it. They will treat digital workers not as a gadget, but as a new factor of production. They will rethink infrastructure, redesign workflows, modernise governance, and create secure foundations for mixed human-machine operating models.
In earlier writing I suggested that SaaS, while far from disappearing, was becoming vulnerable in a world increasingly shaped by Agentic AI. I would now extend that thought. The vulnerability is not only commercial. It is architectural. Enterprise technology built for human users alone is now being asked to support autonomous digital labour. That is too great a shift to be solved by cosmetic upgrades.
The future belongs to enterprises that are prepared to redesign from the ground up. In the age of digital workers, architecture is no longer the back office of strategy. It is strategy.
Evidence base
• Stanford HAI, AI Index 2025: enterprise AI usage accelerated to 78 percent in 2024.
• Capgemini, Rise of Agentic AI (2025): 23 percent of organisations had launched pilots; 2 percent had reached full-scale deployment.
• Deloitte Tech Trends 2026: many existing enterprise computing strategies are not designed for production-scale AI inference and are shifting toward more strategic hybrid architectures.
• Microsoft Work Trend Index 2025: agents are emerging as digital colleagues in mixed human-agent teams.
• Klarna / OpenAI (2024): AI assistant handled two-thirds of chats and work equivalent to roughly 700 agents.
• Siemens (2025): industrial AI agents and copilots aimed at autonomous engineering and production workflows.
• Telefónica (2026): 12 Level 4 autonomous network use cases across the group.
• Singapore IMDA (2026): dedicated Model AI Governance Framework for Agentic AI.
