Author: Paul Morrissey

  • Rethinking Cyber Defense Across Multiple Attack Surfaces

    Whenever technology evolves, cyber threats evolve alongside it. The arrival of autonomous and agentic artificial intelligence is accelerating that evolution in ways that many organisations are only beginning to understand. The real shift is not simply the automation of attacks, but the emergence of penetration at scale across multiple attack surfaces.

    In practical terms, this means attackers will increasingly be able to automate the entire attack cycle—from reconnaissance and vulnerability discovery to credential compromise, data extraction, and deception-based intrusion. AI systems can simultaneously probe identities, applications, networks, cloud environments and human decision-makers. The result is not a single attack vector but a coordinated campaign that unfolds across an organisation’s entire digital ecosystem.

    This represents a profound departure from the traditional model of cyber intrusion. Historically, human attackers focused their attention on a limited number of targets, investing time in reconnaissance before launching an intrusion. Artificial intelligence changes that equation dramatically. Autonomous tools can continuously scan for vulnerabilities across thousands or millions of potential targets, learning from each interaction and refining their approach in real time.

    The implication is clear: the future threat environment is defined by scale, persistence and simultaneous pressure across multiple attack surfaces.

    Penetration at AI Scale

    Human cybercriminals have historically been constrained by time and operational capacity. Identifying vulnerable systems, crafting convincing phishing campaigns, or attempting credential theft required careful manual effort. AI-enabled systems remove many of these constraints.

    Autonomous tools can perform reconnaissance continuously, mapping attack surfaces across identities, APIs, cloud infrastructure, and enterprise systems. They can generate and test thousands of phishing messages, automatically adapt social engineering techniques, and exploit exposed credentials within minutes of discovery.

    The attack does not occur in a single place. Instead, it unfolds across multiple surfaces simultaneously:

    • Identity systems such as authentication platforms and privileged accounts
    • Cloud infrastructure and software-as-a-service environments
    • APIs and interconnected digital services
    • AI models and data pipelines themselves
    • Human users targeted through increasingly convincing deception

    This is what penetration at scale looks like: not one entry point, but many potential openings tested continuously until one succeeds.

    And once access is achieved, AI-driven tools may accelerate lateral movement, privilege escalation and data discovery far more quickly than human attackers could manage. Sensitive data can be identified, aggregated and exfiltrated automatically, while malicious software can be inserted to enable future exploitation.

    At the same time, organisations themselves are rapidly deploying AI agents across their operations—from customer service and internal knowledge management to supply chains and decision support. While these systems deliver clear efficiency gains, they also introduce new vulnerabilities and attack surfaces that traditional cybersecurity frameworks were not designed to address.

    In particular, researchers have highlighted the risk of prompt injection attacks, data poisoning, model manipulation and agent misalignment. These vulnerabilities allow malicious actors to manipulate AI systems themselves, turning internal automation tools into potential attack vectors.

    In short, the defensive environment is becoming more complex at the same moment that offensive capability is becoming more automated.

    A New Cybersecurity Landscape

    We are therefore entering a new phase of cybersecurity where defence must operate at the same scale and speed as AI-enabled threats. Reactive models of cybersecurity—where incidents are analysed and mitigated after detection—will increasingly struggle to keep pace with automated attacks unfolding in real time.

    Governments and regulators are already recognising this shift. Emerging initiatives such as AI risk management frameworks, secure AI system development guidance, and new cybersecurity standards are being developed to help organisations manage these risks. The direction of travel is clear: cybersecurity must become more proactive, predictive and resilient.

    For businesses, this means developing a cybersecurity playbook designed specifically for the AI era.

    A Cybersecurity Playbook for the Agentic Era

    Every organisation should now be developing a strategic framework that prepares it for penetration attempts occurring simultaneously across multiple attack surfaces.

    The first element of such a playbook is governance. Organisations deploying AI systems must implement clear policies defining how those systems operate, what data they can access, and how their actions are monitored. Robust identity and access management is essential, alongside detailed logging and audit mechanisms capable of tracking both human and machine decision-making.
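
    To make that requirement concrete, the sketch below shows one possible shape for an audit record covering AI agent actions, written in Python. The field names, the record_agent_action helper and the append-only JSONL store are illustrative assumptions rather than any particular platform's schema.

    ```python
    # Illustrative sketch only: one possible structure for auditing AI agent actions.
    # Field names and the append-only JSONL store are assumptions, not a product schema.
    import json
    import uuid
    from dataclasses import dataclass, asdict, field
    from datetime import datetime, timezone

    @dataclass
    class AgentActionRecord:
        agent_id: str                 # which AI agent (or human principal) acted
        action: str                   # what was done, e.g. "read_customer_record"
        resource: str                 # what it was done to
        authorised_by: str            # policy or human approval that permitted the action
        data_scopes: list[str] = field(default_factory=list)  # data the agent could access
        record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
        timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def record_agent_action(record: AgentActionRecord, log_path: str = "agent_audit.jsonl") -> None:
        """Append the action to an append-only audit log for later review."""
        with open(log_path, "a", encoding="utf-8") as log:
            log.write(json.dumps(asdict(record)) + "\n")

    # Example: log an agent reading billing data under a named policy.
    record_agent_action(AgentActionRecord(
        agent_id="billing-assistant-01",
        action="read_customer_record",
        resource="crm://customers/12345",
        authorised_by="policy:billing-read-only",
        data_scopes=["billing", "contact_details"],
    ))
    ```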

    Second, incident response strategies must evolve. Traditional response processes assume that human analysts investigate threats and then take action. When attacks unfold at machine speed, that model becomes increasingly impractical.

    Defensive systems will need automated containment capabilities capable of isolating compromised services, revoking credentials, and limiting lateral movement in real time. This raises an important governance question for leadership teams: when should automated systems be authorised to take disruptive action in order to protect the organisation?

    In many cases, cybersecurity platforms will need authority to shut down systems or restrict operations temporarily to prevent wider compromise. Determining where those boundaries lie will become a critical leadership decision in the coming years.
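
    As a simple illustration of where such boundaries might be encoded, the following sketch shows a policy check that either executes a containment action automatically or escalates it for human approval. The action names, confidence thresholds and the respond function are hypothetical; a real platform would draw these values from its own policy engine.

    ```python
    # Minimal sketch of policy-gated automated containment.
    # Thresholds, action names and the helper function are illustrative assumptions.

    # Actions the platform may take without waiting for a human, by confidence threshold.
    AUTO_APPROVED = {
        "revoke_credentials": 0.7,   # allowed automatically at or above this confidence
        "isolate_service": 0.9,      # more disruptive, so a higher bar
        "shut_down_system": 1.1,     # effectively never automatic; always escalated
    }

    def respond(action: str, confidence: float, target: str) -> str:
        """Take the action automatically if policy allows, otherwise escalate to a human."""
        threshold = AUTO_APPROVED.get(action, 1.1)
        if confidence >= threshold:
            return f"AUTO: executed {action} on {target} (confidence {confidence:.2f})"
        return f"ESCALATED: {action} on {target} needs human approval (confidence {confidence:.2f})"

    print(respond("revoke_credentials", 0.85, "svc-account-17"))  # AUTO
    print(respond("shut_down_system", 0.95, "payments-api"))      # ESCALATED
    ```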

    Third, organisations must prioritise workforce awareness. AI-powered deception techniques—including deepfake audio, synthetic video, and highly personalised phishing—are becoming increasingly sophisticated. Security awareness cannot remain confined to IT departments; it must become a universal organisational capability.

    Employees need training to recognise emerging forms of manipulation and to understand the role they play in maintaining cyber resilience. Just as importantly, training programmes must evolve continuously as new attack techniques emerge.

    Finally, organisations must remain aligned with emerging standards and frameworks. Cybersecurity policies that remain static will quickly become obsolete in a rapidly evolving threat environment. Continuous review against global best practices ensures that defensive strategies remain current.

    The Strategic Message

    If there is one central message for business leaders, it is this: the emergence of AI-enabled penetration at scale across multiple attack surfaces represents more than simply another cybersecurity threat.

    It represents a transformation of the entire threat landscape.

    Defensive strategies built for a slower, more predictable era of cyber intrusion are no longer sufficient. Organisations must now prepare for a world in which attacks occur continuously, adapt dynamically, and operate simultaneously across infrastructure, software, identities, data and human behaviour.

    In such an environment, cybersecurity resilience depends not only on stronger tools but on stronger strategy.

    The organisations that succeed will be those that recognise the scale of this transformation early, rethink their security playbooks, and build defences capable of operating at the same speed and scale as the threats they face.

  • The Hidden Risks of Unsupervised AI Agents

    Why the Real Economic Impact of AI Is Harder to Measure Than You Think.

    Over the past year I have had many conversations with executives, board members, and investors about Agentic AI and the profound changes it promises to bring to organisations. The tone of these discussions is usually enthusiastic, and understandably so.

    We are told that AI agents will unlock new revenue streams, dramatically increase productivity, and automate complex workflows across the enterprise. Marketing teams expect faster campaign creation, customer service leaders expect 24-hour support automation, finance departments expect automated reconciliation, and operations teams expect continuous optimisation. In short, everyone is focused on the upside.

    But there is a question I often ask in boardrooms and strategy sessions that tends to bring the conversation to a pause:

    How do you actually measure the real economic value of AI?

    Because while everyone is excited about the promise of increased revenue and operational efficiency, far fewer organisations are measuring the full economic impact of AI — including the hidden risks that come with deploying autonomous or semi-autonomous AI agents. And those risks can be significant.

    The Problem with Simplistic ROI Thinking

    Most AI business cases presented to CFOs follow a predictable format.

    They focus on two numbers:

    1. Revenue growth
    2. Operational efficiency

    This is a reasonable starting point. AI can absolutely help organisations generate new revenue opportunities and reduce operational costs. But it is only part of the picture. What is often missing from these models is a third and much more complex factor: Intangible Benefits (IB).

    These can be positive — such as improved customer experience, faster innovation, or stronger competitive positioning.

    But they can also be negative, and when negative intangibles occur in the context of AI systems, they can escalate quickly. Before discussing those risks, it helps to introduce a simple framework I often use when talking through AI economics with executive teams.

    A Practical Metric for Measuring AI Value

    One way to frame the discussion with finance leaders — particularly the CFO, who is usually the most sceptical person in the room — is to express the impact of AI in terms of Economic Impact (EI) relative to the organisation’s financial scale.

    The metric I use is the following:

    Economic Impact (EI) = (Δ Revenue + Δ Efficiency + Intangible Benefits) / EBITDAR

    Where:

    • Δ Revenue represents the incremental revenue generated by AI initiatives (Use Cases)
    • Δ Efficiency represents measurable improvements in productivity or cost reduction
    • Intangible Benefits (IB) capture both positive and negative strategic effects
    • EBITDAR represents Earnings Before Interest, Taxes, Depreciation, Amortisation and Restructuring (or Rent), which effectively normalises the organisation’s operating scale

    Why divide by EBITDAR?

    Because doing so contextualises the Economic Impact (EI) relative to the size of the organisation. A £5 million efficiency gain means something very different to a company with £20 million EBITDAR than it does to one with £500 million.
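
    A short worked example makes the normalisation visible. The figures below are invented purely for illustration: two companies achieve the same absolute gains, but the resulting Economic Impact differs by more than an order of magnitude.

    ```python
    # Worked example of the Economic Impact (EI) metric with invented figures.
    def economic_impact(delta_revenue: float, delta_efficiency: float,
                        intangible_benefits: float, ebitdar: float) -> float:
        """EI = (Δ Revenue + Δ Efficiency + Intangible Benefits) / EBITDAR."""
        return (delta_revenue + delta_efficiency + intangible_benefits) / ebitdar

    # Same absolute gains for both firms: £3m revenue, £5m efficiency, -£1m net intangibles.
    small_firm = economic_impact(3_000_000, 5_000_000, -1_000_000, ebitdar=20_000_000)
    large_firm = economic_impact(3_000_000, 5_000_000, -1_000_000, ebitdar=500_000_000)

    print(f"£20m EBITDAR firm:  EI = {small_firm:.2%}")   # 35.00%
    print(f"£500m EBITDAR firm: EI = {large_firm:.2%}")   # 1.40%
    ```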

    This framework gives the CFO a common financial language in which to evaluate AI initiatives. But the most important component of the equation is the one that is most frequently ignored: Intangible Benefits (IB).

    The Hidden Side of Intangible Benefits (IB)

    When organisations present AI initiatives internally, intangible benefits are usually framed in positive terms:

    • improved decision-making
    • faster response times
    • enhanced customer experiences
    • stronger brand perception

    All of these are real.

    However, what is often underestimated is the negative intangible impact that can emerge from poorly supervised AI systems, particularly when organisations begin deploying autonomous AI agents.

    AI agents are powerful because they can act independently — analysing information, making decisions, and executing tasks across multiple systems. But autonomy without governance creates new categories of risk.

    Three deserve careful attention.

    1. Data Leakage

    AI systems depend heavily on data.

    When those systems are connected to internal knowledge bases, customer records, contracts, or intellectual property, the risk of data leakage becomes significant.

    This can occur in multiple ways:

    • sensitive data being exposed through prompts or responses
    • proprietary information being incorporated into external models
    • confidential customer data being accessed or transmitted improperly

    The consequences can range from regulatory breaches to loss of competitive advantage. In highly regulated sectors — such as telecommunications, healthcare, or finance — the reputational damage alone can be considerable.
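
    One basic preventative control is to redact obvious identifiers before text leaves the organisation's boundary. The sketch below illustrates the idea with a few simple patterns; these are deliberately crude assumptions, and a production deployment would rely on dedicated data-loss-prevention tooling rather than hand-rolled regular expressions.

    ```python
    # Simplistic illustration of redacting obvious identifiers before text is sent
    # to an external model. The patterns are examples only and will not catch everything.
    import re

    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "UK_PHONE": re.compile(r"\b(?:\+44|0)(?:\d[ -]?){9,10}\b"),
        "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(text: str) -> str:
        """Replace matches of each pattern with a labelled placeholder."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Customer jane.doe@example.com on 020 7946 0958 disputes a charge."
    print(redact(prompt))
    ```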

    2. Hallucination and Customer Trust

    Large language models and AI agents can sometimes generate hallucinations — confident but incorrect responses.

    In internal workflows this may simply create inefficiencies.

    In customer-facing systems, however, the consequences can be more serious.

    Imagine an AI agent:

    • giving incorrect billing information
    • misrepresenting product capabilities
    • generating misleading compliance guidance

    The immediate impact is poor customer experience. But the deeper issue is trust erosion.

    Trust, once lost, is extremely difficult to rebuild.
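
    A common mitigation is to verify customer-facing claims against a system of record before they are released. The sketch below is hypothetical: a drafted billing answer is compared with the authoritative balance and routed to a human when the figures disagree. The data structures and the release_or_escalate function are invented for illustration.

    ```python
    # Hypothetical sketch: check an AI-drafted billing answer against the system of record
    # before it reaches the customer. The data structures here are assumptions.
    from decimal import Decimal

    BILLING_SYSTEM = {"ACC-1001": Decimal("42.50")}   # stand-in for the real billing system

    def release_or_escalate(account: str, drafted_amount: Decimal) -> str:
        """Release the drafted answer only if it matches the authoritative balance."""
        actual = BILLING_SYSTEM.get(account)
        if actual is None or drafted_amount != actual:
            return f"ESCALATE: drafted £{drafted_amount} does not match records for {account}"
        return f"RELEASE: balance £{actual} confirmed for {account}"

    print(release_or_escalate("ACC-1001", Decimal("42.50")))   # matches, safe to send
    print(release_or_escalate("ACC-1001", Decimal("420.50")))  # hallucinated figure, escalate
    ```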

    3. Model Drift

    AI systems are not static.

    Over time, models can experience drift — where their behaviour gradually deviates from expected performance.

    This may occur because:

    • the underlying data environment changes
    • feedback loops alter model behaviour
    • system updates introduce unintended bias or errors

    If drift is not detected early, the organisation may continue operating under the assumption that AI outputs remain accurate. In reality, decision quality may already be deteriorating.
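
    Detecting drift early usually means comparing current behaviour against a validated baseline. The sketch below uses the population stability index, one common drift measure, on two sets of model scores; the bin count and the 0.2 alert threshold are widely used conventions rather than fixed rules, and the scores themselves are synthetic.

    ```python
    # Illustrative drift check using the population stability index (PSI).
    # Bin count and the 0.2 alert threshold are common conventions, not fixed rules.
    import numpy as np

    def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
        """Compare two score distributions; a larger PSI means more drift."""
        edges = np.histogram_bin_edges(baseline, bins=bins)
        base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
        curr_pct = np.histogram(current, bins=edges)[0] / len(current)
        # Avoid log(0) in sparsely populated bins.
        base_pct = np.clip(base_pct, 1e-6, None)
        curr_pct = np.clip(curr_pct, 1e-6, None)
        return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

    rng = np.random.default_rng(0)
    baseline_scores = rng.normal(0.50, 0.10, 5_000)   # scores when the model was validated
    current_scores = rng.normal(0.58, 0.12, 5_000)    # scores observed in production today

    psi = population_stability_index(baseline_scores, current_scores)
    print(f"PSI = {psi:.3f}  ->  {'investigate drift' if psi > 0.2 else 'stable'}")
    ```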

    Reputation: The Fragile Asset

    When organisations discuss AI benefits, they often overlook the fact that reputation is one of the most valuable assets any company possesses.

    And reputation behaves asymmetrically. One bad event can wipe out thousands of positive interactions. I often summarise it in very simple terms:

    One negative event can wipe out 10,000 positive ones.

    In the context of AI, this could be:

    • a widely reported data breach
    • an AI-generated decision perceived as unethical
    • a discriminatory algorithmic outcome
    • a regulatory violation resulting from automated decision-making

    These events do not just affect operations. They affect brand trust, customer loyalty, regulatory scrutiny, and investor confidence. All of which belong squarely within the Intangible Benefits (IB) component of the economic impact equation.

    Why Governance Matters

    None of this should be interpreted as an argument against AI. Far from it.

    AI will undoubtedly become one of the most powerful productivity tools organisations have ever deployed. But the organisations that succeed will not simply deploy AI faster than others. They will deploy it more responsibly and more intelligently.

    That means introducing:

    • strong AI governance frameworks
    • human oversight for critical decisions
    • continuous model monitoring
    • robust data protection mechanisms
    • clear ethical guidelines for AI deployment

    In other words, AI should augment human judgement — not replace it entirely.

    The Conversation CFOs Need to Have

    Whenever I present the Economic Impact (EI) equation to executive teams, I emphasise one point. The equation is not just a financial model. It is a governance conversation.

    It forces leadership teams to ask:

    • What new revenue can AI truly create?
    • What measurable efficiencies will it deliver?
    • What positive intangible benefits will it generate?
    • And critically, what negative intangible risks might it introduce?

    Only by considering all four elements together can organisations measure the true economic value of AI. Because if the numerator in the equation includes hidden risks that no one is monitoring, the apparent economic impact may be overstated.

    And when those risks materialise, the consequences can be sudden and severe.

    Final Thoughts

    AI agents will undoubtedly transform how organisations operate. They will create extraordinary opportunities for automation, innovation, and growth. But as with all powerful technologies, the benefits must be balanced with careful governance and realistic economic measurement. The organisations that thrive in the AI era will not be those that chase automation blindly. They will be those that understand both the upside and the downside and measure the true economic impact accordingly.

    Why This Thinking Matters in AI Readiness

     This type of thinking is precisely why I developed my AI Readiness Assessment methodology. Too many organisations approach AI adoption as a technology deployment exercise rather than a strategic capability transformation.

    The purpose of the AI Readiness Assessment is to help organisations understand:

    • where they currently stand with AI maturity
    • how strong their governance and risk frameworks are
    • whether their data foundations are ready for AI deployment
    • how AI initiatives can be measured in terms of real economic impact

    More importantly, it allows organisations to design an AI journey that is measurable, risk-aware, and sustainable. In other words, it helps organisations capture the upside of AI while ensuring the hidden risks — the ones that often sit inside the “Intangible Benefits” component of the equation — are properly understood and managed.

    Because the real challenge of AI is not deploying it.

    The real challenge is deploying it responsibly, strategically, and in a way that strengthens the organisation rather than exposing it to unnecessary risk.

    If you would like to learn more about the AI Readiness Assessment methodology, feel free to contact me directly at: pjm@bolgiaten.com

    I would be delighted to continue the conversation.

  • When Vibe Coding Meets the Real World: Security, Governance and the Rise of S2aaS

    The question is no longer whether AI can generate code. It clearly can. The real question is whether “vibe coded” products can be trusted, governed and secured well enough to be taken seriously inside an enterprise.

    Over the past year, tools such as Claude, OpenAI’s GPT models, Gemini and others have dramatically lowered the barrier to software creation. What many are now calling vibe coding allows founders, product teams and even non-engineers to produce working applications at remarkable speed. Prototypes that once took months can now appear in hours. That is genuinely transformative.

    But it also creates a dangerous illusion. The ability to generate software quickly is not the same as the ability to create software that is secure, resilient, compliant and enterprise ready. In fact, the faster code is created, the more important governance becomes. The risk is not that AI-generated code fails to compile. The risk is that it appears to work while hiding weaknesses that only emerge later under attack, under regulation, or under enterprise scrutiny.

    Where the problem begins

    This is where vibe coding may hit the rocks. Not because the model cannot write code, but because code alone is only one small part of software assurance. Enterprise-grade products require secure architecture, identity controls, dependency management, auditability, testing discipline, provenance, data governance, model risk controls, human accountability and clear operational ownership. None of that is guaranteed simply because an AI assistant can generate a neat application layer.

    Global best practice is already pointing in this direction. NIST’s Secure Software Development Framework profile for generative AI makes clear that AI-assisted development still requires disciplined secure development, validation and supply-chain control. The Open Worldwide Application Security Project’s (OWASP) work on LLM application risk highlights issues such as prompt injection, insecure output handling, data leakage and supply-chain vulnerabilities. The UK’s guidance on secure AI system development and its recent Software Security Code of Practice push the same message: security must be designed in, not bolted on afterwards.

    That matters commercially. A great many AI-generated products and services being built today are exciting, useful and investable at the prototype stage, but they are not yet enterprise ready in the full sense of the term. They may lack code provenance, robust access control, explainable governance, secure deployment patterns, red-team testing, policy enforcement and evidence that they can survive procurement due diligence. In other words, there is a widening gap between AI-enabled software creation and enterprise-grade software assurance.

    Why S2aaS could matter

    That gap is precisely where an opportunity emerges. I believe there is a growing market for a Secure Software as a Service model — S2aaS — sitting above or alongside the current generation of agentic and SaaS platforms. The proposition would not simply be to host software, nor merely to generate it faster, but to wrap AI-enabled product development in a governed, continuously monitored, policy-driven security and assurance layer. This would include secure coding controls, architectural review, software bill of materials, vulnerability scanning, secrets management, model governance, compliance mapping, runtime monitoring and board-level assurance reporting.
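
    To make the idea tangible, the sketch below shows a hypothetical release gate of the kind an S2aaS layer might run before promoting an AI-generated service. The evidence fields and pass criteria are invented for illustration and do not describe any existing product.

    ```python
    # Hypothetical S2aaS release gate: checks that assurance evidence exists before a
    # generated service is promoted. Evidence fields and pass criteria are invented.

    REQUIRED_EVIDENCE = {
        "sbom_present": "software bill of materials attached to the release",
        "secrets_scan_clean": "no hard-coded credentials detected",
        "critical_vulns_zero": "no unresolved critical vulnerabilities",
        "owner_assigned": "a named human accountable for operating the service",
    }

    def release_gate(evidence: dict[str, bool]) -> tuple[bool, list[str]]:
        """Return (passed, list of missing controls) for a candidate release."""
        missing = [desc for key, desc in REQUIRED_EVIDENCE.items() if not evidence.get(key, False)]
        return (len(missing) == 0, missing)

    candidate = {
        "sbom_present": True,
        "secrets_scan_clean": True,
        "critical_vulns_zero": False,   # one critical finding still open
        "owner_assigned": True,
    }

    passed, missing = release_gate(candidate)
    print("PROMOTE" if passed else "BLOCK: " + "; ".join(missing))
    ```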

    In practical terms, S2aaS could become the trust fabric for the vibe coding economy. Start-ups could build at speed, but within a managed security and governance envelope. Mid-sized firms could adopt AI-generated internal tools without carrying the full burden of building a mature software assurance capability themselves. Large enterprises could accelerate innovation while retaining procurement-grade evidence, audit trails and risk visibility. Regulators and boards would be more likely to support innovation if they can see that clear control frameworks exist around it.

    Beyond Agentic AI versus SaaS

    This is also why the debate between Agentic AI and traditional SaaS may be missing a deeper point. The next battleground may not simply be who automates more work. It may be who can deliver trusted automation at scale. In that world, S2aaS starts to look less like a niche service and more like SaaS 2.0: software delivery fused with security, governance, compliance and assurance by design.

    My conclusion

    My conclusion is therefore straightforward. Vibe coding is real, powerful and economically important. But on its own it is not enough for serious enterprise deployment. The winners in the next phase of the market may not be those who generate the most code the fastest. They may be the organisations that make AI-generated software trustworthy, governable and insurable. That is where value migrates once the first excitement fades.

    So yes, I believe there is an opportunity here. The space between AI-generated software and enterprise trust is not a minor implementation issue. It is a strategic market gap. And for advanced security and governance organisations prepared to package that capability as a service, S2aaS could prove to be one of the most important commercial categories to emerge from the age of AI-assisted software development.

    Reference points informing the argument

    • NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (2024).

    • NIST AI Risk Management Framework (AI RMF).

    • OWASP Top 10 for LLM Applications 2025.

    • NCSC / CISA / partner agencies: Guidelines for Secure AI System Development.

    • UK Government, Code of Practice for the Cyber Security of AI (2025).

    • UK Government, Software Security Code of Practice (2025).

    • European Commission, General-Purpose AI Code of Practice (2025).