• The Hidden Risks of Unsupervised AI Agents

    Why the Real Economic Impact of AI Is Harder to Measure Than You Think.

    Over the past year I have had many conversations with executives, board members, and investors about Agentic AI and the profound changes it promises to bring to organisations. The tone of these discussions is usually enthusiastic, and understandably so.

    We are told that AI agents will unlock new revenue streams, dramatically increase productivity, and automate complex workflows across the enterprise. Marketing teams expect faster campaign creation, customer service leaders expect 24-hour support automation, finance departments expect automated reconciliation, and operations teams expect continuous optimisation. In short, everyone is focused on the upside.

    But there is a question I often ask in boardrooms and strategy sessions that tends to bring the conversation to a pause:

    How do you actually measure the real economic value of AI?

    Because while everyone is excited about the promise of increased revenue and operational efficiency, far fewer organisations are measuring the full economic impact of AI — including the hidden risks that come with deploying autonomous or semi-autonomous AI agents. And those risks can be significant.

    The Problem with Simplistic ROI Thinking

    Most AI business cases presented to CFOs follow a predictable format.

    They focus on two numbers:

    1. Revenue growth
    2. Operational efficiency

    This is a reasonable starting point. AI can absolutely help organisations generate new revenue opportunities and reduce operational costs. But it is only part of the picture. What is often missing from these models is a third and much more complex factor: Intangible Benefits (IB).

    These can be positive — such as improved customer experience, faster innovation, or stronger competitive positioning.

    But they can also be negative, and when negative intangibles occur in the context of AI systems, they can escalate quickly. Before discussing those risks, it helps to introduce a simple framework I often use when discussing AI economics with executive teams.

    A Practical Metric for Measuring AI Value

    One way to frame the discussion with finance leaders — particularly the CFO, who is usually the most sceptical person in the room — is to express the impact of AI in terms of Economic Impact (EI) relative to the organisation’s financial scale.

    The metric I use is the following:

    Economic Impact (EI) = (Δ Revenue + Δ Efficiency + Intangible Benefits) / EBITDAR

    Where:

    • Δ Revenue represents the incremental revenue generated by AI initiatives (Use Cases)
    • Δ Efficiency represents measurable improvements in productivity or cost reduction
    • Intangible Benefits (IB) capture both positive and negative strategic effects
    • EBITDAR represents Earnings Before Interest, Taxes, Depreciation, Amortisation and Restructuring (or Rent), which normalises the impact against the organisation’s operating scale

    Why divide by EBITDAR?

    Because doing so contextualises the Economic Impact (EI) relative to the size of the organisation. A £5 million efficiency gain means something very different to a company with £20 million EBITDAR than it does to one with £500 million.
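
    To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The function name and the figures are illustrative only, reusing the £5 million efficiency example above; a real model would source each term from the finance team.

        def economic_impact(delta_revenue, delta_efficiency,
                            intangible_benefits, ebitdar):
            """EI = (Δ Revenue + Δ Efficiency + Intangible Benefits) / EBITDAR.

            Note that intangible_benefits can be negative when hidden risks
            (data leakage, hallucination, drift) materialise.
            """
            return (delta_revenue + delta_efficiency + intangible_benefits) / ebitdar

        # The same £5m efficiency gain, with no revenue or intangible effects:
        small_firm = economic_impact(0, 5_000_000, 0, 20_000_000)    # 0.25
        large_firm = economic_impact(0, 5_000_000, 0, 500_000_000)   # 0.01
        print(f"Small firm EI: {small_firm:.0%}, large firm EI: {large_firm:.0%}")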

    This framework gives the CFO a common financial language in which to evaluate AI initiatives. But the most important component of the equation is the one that is most frequently ignored: Intangible Benefits (IB).

    The Hidden Side of Intangible Benefits (IB)

    When organisations present AI initiatives internally, intangible benefits are usually framed in positive terms:

    • improved decision-making
    • faster response times
    • enhanced customer experiences
    • stronger brand perception

    All of these are real.

    However, what is often underestimated are the negative intangible impacts that can emerge from poorly supervised AI systems, particularly when organisations begin deploying autonomous AI agents.

    AI agents are powerful because they can act independently — analysing information, making decisions, and executing tasks across multiple systems. But autonomy without governance creates new categories of risk.

    Three deserve careful attention.

    1. Data Leakage

    AI systems depend heavily on data.

    When those systems are connected to internal knowledge bases, customer records, contracts, or intellectual property, the risk of data leakage becomes significant.

    This can occur in multiple ways:

    • sensitive data being exposed through prompts or responses
    • proprietary information being incorporated into external models
    • confidential customer data being accessed or transmitted improperly

    The consequences can range from regulatory breaches to loss of competitive advantage. In highly regulated sectors — such as telecommunications, healthcare, or finance — the reputational damage alone can be considerable.
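
    One common mitigation, sketched below in minimal form, is to redact sensitive values before any prompt leaves the organisation's boundary. The patterns here are illustrative only; production systems would rely on dedicated data loss prevention and PII-detection tooling rather than hand-written regular expressions.

        import re

        # Illustrative patterns only; real deployments use dedicated tooling.
        REDACTION_PATTERNS = {
            "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
            "PHONE": re.compile(r"\b0\d{2,4} ?\d{3,4} ?\d{3,4}\b"),
            "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        }

        def redact(prompt: str) -> str:
            """Replace sensitive values with placeholder tags before the
            prompt is sent to an external model."""
            for label, pattern in REDACTION_PATTERNS.items():
                prompt = pattern.sub(f"[{label}]", prompt)
            return prompt

        print(redact("Customer jane@example.com rang from 020 7946 0958."))
        # -> "Customer [EMAIL] rang from [PHONE]."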

    2. Hallucination and Customer Trust

    Large language models and AI agents can sometimes generate hallucinations — confident but incorrect responses.

    In internal workflows this may simply create inefficiencies.

    In customer-facing systems, however, the consequences can be more serious.

    Imagine an AI agent:

    • giving incorrect billing information
    • misrepresenting product capabilities
    • generating misleading compliance guidance

    The immediate impact is poor customer experience. But the deeper issue is trust erosion.

    Trust, once lost, is extremely difficult to rebuild.
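
    A minimal guardrail, sketched here under my own assumptions about the agent's output format, is to release customer-facing answers only when they are grounded in retrieved source material and above a confidence threshold, escalating everything else to a human.

        from dataclasses import dataclass

        @dataclass
        class AgentAnswer:
            text: str
            supporting_sources: list[str]  # documents the answer cites
            confidence: float              # verifier-estimated score, 0..1

        def release_or_escalate(answer: AgentAnswer, threshold: float = 0.8) -> str:
            """Release only grounded, high-confidence answers; route the
            rest to a human agent rather than risk a confident error."""
            if answer.supporting_sources and answer.confidence >= threshold:
                return answer.text
            return "Let me connect you with a colleague who can confirm this."

        # A fluent but ungrounded billing answer is escalated, not released:
        billing = AgentAnswer("Your plan renews on the 1st.", [], 0.95)
        print(release_or_escalate(billing))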

    3. Model Drift

    AI systems are not static.

    Over time, models can experience drift — where their behaviour gradually deviates from expected performance.

    This may occur because:

    • the underlying data environment changes
    • feedback loops alter model behaviour
    • system updates introduce unintended bias or errors

    If drift is not detected early, the organisation may continue operating under the assumption that AI outputs remain accurate. In reality, decision quality may already be deteriorating.
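
    Drift can be made measurable. The sketch below uses the Population Stability Index (PSI), one widely used signal, to compare live input data against a training-time baseline; the thresholds quoted in the comment are conventional rules of thumb, not guarantees.

        import numpy as np

        def population_stability_index(baseline, live, bins=10):
            """PSI between training-time data and live production data.
            Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
            edges = np.histogram_bin_edges(baseline, bins=bins)
            expected, _ = np.histogram(baseline, bins=edges)
            actual, _ = np.histogram(live, bins=edges)
            e = np.clip(expected / expected.sum(), 1e-6, None)
            a = np.clip(actual / max(actual.sum(), 1), 1e-6, None)
            return float(np.sum((a - e) * np.log(a / e)))

        rng = np.random.default_rng(0)
        baseline = rng.normal(0.0, 1.0, 10_000)  # data the model was built on
        live = rng.normal(0.5, 1.2, 10_000)      # the environment has shifted
        print(f"PSI: {population_stability_index(baseline, live):.3f}")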

    Reputation: The Fragile Asset

    When organisations discuss AI benefits, they often overlook the fact that reputation is one of the most valuable assets any company possesses.

    And reputation behaves asymmetrically: one bad event can undo thousands of positive interactions. I often summarise it in very simple terms:

    One negative event can wipe out 10,000 positive ones.

    In the context of AI, this could be:

    • a widely reported data breach
    • an AI-generated decision perceived as unethical
    • a discriminatory algorithmic outcome
    • a regulatory violation resulting from automated decision-making

    These events do not just affect operations. They affect brand trust, customer loyalty, regulatory scrutiny, and investor confidence. All of which belong squarely within the Intangible Benefits (IB) component of the economic impact equation.

    Why Governance Matters

    None of this should be interpreted as an argument against AI. Far from it.

    AI will undoubtedly become one of the most powerful productivity tools organisations have ever deployed. But the organisations that succeed will not simply deploy AI faster than others. They will deploy it more responsibly and more intelligently.

    That means introducing:

    • strong AI governance frameworks
    • human oversight for critical decisions
    • continuous model monitoring
    • robust data protection mechanisms
    • clear ethical guidelines for AI deployment

    In other words, AI should augment human judgement — not replace it entirely.
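
    As an illustration of what "human oversight for critical decisions" can mean in practice, here is a minimal sketch of an approval gate for agent actions. The action names and thresholds are hypothetical and would come from the organisation's own governance framework.

        from enum import Enum

        class Decision(Enum):
            AUTO_APPROVE = "execute automatically"
            HUMAN_REVIEW = "queue for human approval"
            BLOCK = "refuse and log"

        # Hypothetical policy: real actions and limits would be defined
        # by the organisation's governance framework.
        def gate_agent_action(action: str, value_gbp: float,
                              customer_facing: bool) -> Decision:
            if action in {"delete_records", "change_pricing"}:
                return Decision.BLOCK              # never autonomous
            if customer_facing or value_gbp > 10_000:
                return Decision.HUMAN_REVIEW       # critical: human sign-off
            return Decision.AUTO_APPROVE

        print(gate_agent_action("issue_refund", 250, customer_facing=True))
        # -> Decision.HUMAN_REVIEW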

    The Conversation CFOs Need to Have

    Whenever I present the Economic Impact (EI) equation to executive teams, I emphasise one point. The equation is not just a financial model. It is a governance conversation.

    It forces leadership teams to ask:

    • What new revenue can AI truly create?
    • What measurable efficiencies will it deliver?
    • What positive intangible benefits will it generate?
    • And critically, what negative intangible risks might it introduce?

    Only by considering all four elements together can organisations measure the true economic value of AI. Because if the numerator in the equation includes hidden risks that no one is monitoring, the apparent economic impact may be overstated.

    And when those risks materialise, the consequences can be sudden and severe.

    Final Thoughts

    AI agents will undoubtedly transform how organisations operate. They will create extraordinary opportunities for automation, innovation, and growth. But as with all powerful technologies, the benefits must be balanced with careful governance and realistic economic measurement. The organisations that thrive in the AI era will not be those that chase automation blindly. They will be those that understand both the upside and the downside and measure the true economic impact accordingly.

    Why This Thinking Matters in AI Readiness

    This type of thinking is precisely why I developed my AI Readiness Assessment methodology. Too many organisations approach AI adoption as a technology deployment exercise rather than a strategic capability transformation.

    The purpose of the AI Readiness Assessment is to help organisations understand:

    • where they currently stand with AI maturity
    • how strong their governance and risk frameworks are
    • whether their data foundations are ready for AI deployment
    • how AI initiatives can be measured in terms of real economic impact

    More importantly, it allows organisations to design an AI journey that is measurable, risk-aware, and sustainable. In other words, it helps organisations capture the upside of AI while ensuring the hidden risks — the ones that often sit inside the “Intangible Benefits” component of the equation — are properly understood and managed.

    Because the real challenge of AI is not deploying it.

    The real challenge is deploying it responsibly, strategically, and in a way that strengthens the organisation rather than exposing it to unnecessary risk.

    If you would like to learn more about the AI Readiness Assessment methodology, feel free to contact me directly at: pjm@bolgiaten.com

    I would be delighted to continue the conversation.

  • When Vibe Coding Meets the Real World: Security, Governance and the Rise of S2aaS

    The question is no longer whether AI can generate code. It clearly can. The real question is whether “vibe coded” products can be trusted, governed and secured well enough to be taken seriously inside an enterprise.

    Over the past year, tools such as Claude, OpenAI, Gemini and others have dramatically lowered the barrier to software creation. What many are now calling vibe coding allows founders, product teams and even non-engineers to produce working applications at remarkable speed. Prototypes that once took months can now appear in hours. That is genuinely transformative.

    But it also creates a dangerous illusion. The ability to generate software quickly is not the same as the ability to create software that is secure, resilient, compliant and enterprise ready. In fact, the faster code is created, the more important governance becomes. The risk is not that AI-generated code fails to compile. The risk is that it appears to work while hiding weaknesses that only emerge later under attack, under regulation, or under enterprise scrutiny.

    Where the problem begins

    This is where vibe coding may hit the rocks. Not because the model cannot write code, but because code alone is only one small part of software assurance. Enterprise-grade products require secure architecture, identity controls, dependency management, auditability, testing discipline, provenance, data governance, model risk controls, human accountability and clear operational ownership. None of that is guaranteed simply because an AI assistant can generate a neat application layer.

    Global best practice is already pointing in this direction. NIST’s Secure Software Development Framework profile for generative AI makes clear that AI-assisted development still requires disciplined secure development, validation and supply-chain control. The work of the Open Worldwide Application Security Project (OWASP) on LLM application risk highlights issues such as prompt injection, insecure output handling, data leakage and supply-chain vulnerabilities. The UK’s guidance on secure AI system development and its recent Software Security Code of Practice push the same message: security must be designed in, not bolted on afterwards.
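
    To ground one of those OWASP risks, insecure output handling, the sketch below treats model output as untrusted by default: escaping it before it reaches a browser, and refusing to execute suggested commands that are not on an explicit allow-list. The allow-list entries are hypothetical.

        import html
        import shlex

        ALLOWED_COMMANDS = {"git status", "npm test"}  # hypothetical allow-list

        def render_safely(llm_output: str) -> str:
            """Escape model output before it reaches a browser, so any
            injected markup cannot execute."""
            return html.escape(llm_output)

        def run_if_allowed(llm_suggested_command: str) -> None:
            """Never pipe model output straight into a shell; normalise it
            and check an explicit allow-list first."""
            normalised = " ".join(shlex.split(llm_suggested_command))
            if normalised not in ALLOWED_COMMANDS:
                raise PermissionError(f"Not on allow-list: {normalised!r}")
            # subprocess.run(shlex.split(normalised))  # run only once vetted

        print(render_safely('<img src=x onerror="alert(1)">'))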

    That matters commercially. A great many AI-generated products and services being built today are exciting, useful and investable at the prototype stage, but they are not yet enterprise ready in the full sense of the term. They may lack code provenance, robust access control, explainable governance, secure deployment patterns, red-team testing, policy enforcement and evidence that they can survive procurement due diligence. In other words, there is a widening gap between AI-enabled software creation and enterprise-grade software assurance.

    Why S2aaS could matter

    That gap is precisely where an opportunity emerges. I believe there is a growing market for a Secure Software as a Service model — S2aaS — sitting above or alongside the current generation of agentic and SaaS platforms. The proposition would not simply be to host software, nor merely to generate it faster, but to wrap AI-enabled product development in a governed, continuously monitored, policy-driven security and assurance layer. This would include secure coding controls, architectural review, software bill of materials, vulnerability scanning, secrets management, model governance, compliance mapping, runtime monitoring and board-level assurance reporting.
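
    In code terms, such a layer behaves like a release gate: deployment proceeds only when every control produces passing evidence. The sketch below is a simplified illustration under my own assumptions about what that evidence might contain.

        from dataclasses import dataclass

        @dataclass
        class AssuranceEvidence:
            sbom_present: bool     # software bill of materials generated
            critical_vulns: int    # findings from vulnerability scanning
            secrets_found: int     # findings from secrets scanning
            human_signoff: bool    # an accountable owner approved release

        def release_gate(evidence: AssuranceEvidence):
            """Deploy only when every control in the policy passes."""
            failures = []
            if not evidence.sbom_present:
                failures.append("missing SBOM")
            if evidence.critical_vulns > 0:
                failures.append(f"{evidence.critical_vulns} critical vulnerabilities")
            if evidence.secrets_found > 0:
                failures.append("hard-coded secrets detected")
            if not evidence.human_signoff:
                failures.append("no accountable sign-off")
            return (not failures, failures)

        ok, reasons = release_gate(AssuranceEvidence(True, 2, 0, True))
        print("deploy" if ok else "blocked: " + ", ".join(reasons))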

    In practical terms, S2aaS could become the trust fabric for the vibe coding economy. Start-ups could build at speed, but within a managed security and governance envelope. Mid-sized firms could adopt AI-generated internal tools without carrying the full burden of building a mature software assurance capability themselves. Large enterprises could accelerate innovation while retaining procurement-grade evidence, audit trails and risk visibility. Regulators and boards would be more likely to support innovation if they can see that clear control frameworks exist around it.

    Beyond Agentic AI versus SaaS

    This is also why the debate between Agentic AI and traditional SaaS may be missing a deeper point. The next battleground may not simply be who automates more work. It may be who can deliver trusted automation at scale. In that world, S2aaS starts to look less like a niche service and more like SaaS 2.0: software delivery fused with security, governance, compliance and assurance by design.

    My conclusion

    The conclusion is straightforward. Vibe coding is real, powerful and economically important. But on its own it is not enough for serious enterprise deployment. The winners in the next phase of the market may not be those who generate the most code the fastest. They may be the organisations that make AI-generated software trustworthy, governable and insurable. That is where value migrates once the first excitement fades.

    So yes, I believe there is an opportunity here. The space between AI-generated software and enterprise trust is not a minor implementation issue. It is a strategic market gap. And for advanced security and governance organisations prepared to package that capability as a service, S2aaS could prove to be one of the most important commercial categories to emerge from the age of AI-assisted software development.

    Reference points informing the argument

    • NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (2024).

    • NIST AI Risk Management Framework (AI RMF).

    • OWASP Top 10 for LLM Applications 2025.

    • NCSC / CISA / partner agencies: Guidelines for Secure AI System Development.

    • UK Government, Code of Practice for the Cyber Security of AI (2025).

    • UK Government, Software Security Code of Practice (2025).

    • European Commission, General-Purpose AI Code of Practice (2025).

  • Agentic AI vs SaaS: Is This the Beginning of the End — or the Next Evolution?

    Over the past few months, I have been asked the same provocative question again and again: “Will Agentic AI be the nail in the coffin for SaaS?” It’s a good question. But I think it’s the wrong one.

    The real question is this: Will Agentic AI expose which SaaS companies actually own real value — and which ones were simply renting convenience in the cloud?

    For the past two decades SaaS has been one of the most successful business models in technology. Subscription revenue, predictable cash flow, scalable delivery, and strong margins made it incredibly attractive to founders and investors alike. But a large portion of SaaS value has historically been built around user interfaces, workflow routing, dashboards, form entry and seat-based licences. In other words, SaaS often organised work rather than actually doing the work.

    Agentic AI changes that equation.

    Agentic AI systems can plan, execute and manage multi-step workflows autonomously. Instead of humans navigating multiple software tools, AI agents can increasingly complete tasks themselves — resolving support tickets, updating CRM records, generating reports, reconciling invoices, or coordinating procurement processes. In short, the interface layer that defined much of SaaS may no longer be the centre of gravity. That doesn’t mean SaaS disappears. But it does mean the economic model behind many SaaS companies is now under scrutiny.

    The companies that survive this shift will not be those that simply provide software. They will be those that control data, own critical workflows, operate in trusted domains, and can price based on outcomes rather than user seats. This is not the death of software. It is the transition from SaaS 1.0 to something much more autonomous.

    The Venture Capital Perspective

    From a venture capital perspective, software investment is not slowing down — but the type of software being funded is changing rapidly. AI companies accounted for the majority of venture capital investment in 2025, with roughly 61% of global VC funding going into AI-related companies [1]. Enterprise adoption is also accelerating quickly. One report found that 76% of enterprise AI deployments were purchased solutions rather than internally built systems [2]. In other words, investors are still enthusiastic about software businesses. They are simply shifting their capital toward AI-native platforms, vertical AI applications and agent-enabled workflow systems.

    What venture capitalists are becoming more cautious about is traditional SaaS that sits in the middle of a workflow but does not own the underlying data, decision logic, or automation layer. If an AI agent can orchestrate work across multiple tools, the value of those tools changes dramatically. The key question VCs now ask founders is simple: Why will your software still matter when AI agents can do the work themselves?

    Private Equity’s View

    Private equity investors are approaching the issue with characteristic pragmatism. Technology remains one of the most active sectors for private equity investment. Tech deals represented around 22% of North American private equity transactions in early 2025, and funds still hold hundreds of billions in undeployed capital targeting technology assets [3]. But the classic private equity SaaS playbook is under pressure. For years, PE firms could acquire a promising SaaS company, rely on rapid market expansion, drive revenue growth, and benefit from multiple expansion. Historically, the majority of value creation in technology buyouts came from revenue growth and valuation increases rather than operational improvements [3].

    Today that strategy looks more fragile. Higher interest rates, slower SaaS growth curves, and the disruptive potential of AI are forcing PE firms to become more selective. They are increasingly focused on companies that can use AI to improve margins, automate operations, and deepen product differentiation. In other words, private equity is not abandoning SaaS. It is simply demanding that SaaS businesses evolve into AI-enabled platforms with durable competitive advantages.

    The Family Office Perspective

    Family offices provide a particularly interesting perspective because their investment horizons are often longer and their capital structures more flexible. Most family offices already have some exposure to artificial intelligence. One report suggested that around 86% of family offices now have AI exposure, primarily through public market investments [4]. At the same time, around 65% intend to increase their focus on AI-related investments in the coming years [5].

    However, family offices are also becoming more cautious about valuations and private market liquidity. Despite this caution, both AI and SaaS continue to attract significant family office capital. In fact, venture deal values involving family offices more than doubled for both AI/ML and SaaS companies between 2023 and 2025, even though the total number of deals declined [6]. What this tells us is that family offices are concentrating capital into fewer, higher-quality opportunities rather than retreating from the sector entirely.

    They are asking the same question as other investors: Does this software business still matter in a world where intelligent agents are everywhere?

    My Conclusion

    So, will Agentic AI be the nail in the coffin for SaaS? For weak SaaS businesses, possibly yes. Companies with shallow product differentiation, limited data advantages and purely seat-based pricing models may find their value proposition eroded as automation expands. But for strong software companies, Agentic AI is not a coffin — it is a catalyst. It pushes the industry toward outcome-based software, deeper automation, and products that sit closer to real economic activity rather than simply organising information. The companies that win in the next decade will not be those that simply manage workflows. They will be the ones whose systems actually perform the work, control the data, and deliver measurable outcomes.

    Serious investors are not turning away from software. They are simply becoming less tolerant of SaaS businesses that cannot explain why they will still matter in an AI-native world. And that may ultimately be the healthiest thing that could happen to the software industry.

    References

    [1] OECD – Venture Capital Investments in Artificial Intelligence Through 2025

    [2] Menlo Ventures – State of Generative AI in the Enterprise Report

    [3] Bain & Company – Global Technology Report 2025

    [4] Goldman Sachs – Family Office Investment Insights Report

    [5] J.P. Morgan – Global Family Office Report 2026

    [6] PwC – Global Family Office Deals Study 2025