• Beyond the Collapsing Pyramid

    Why AI will make great consulting more valuable, not less — and why Bolgiaten’s AI Maturity Assessment is becoming an essential boardroom tool.

    The old consulting pyramid was built on leverage. The next generation of consulting will be built on judgment, governance, enterprise design, and the human leadership needed to turn AI from a tool into a transformation.

    For decades, the consulting business was built on a familiar structure: a broad base of junior analysts and associates feeding insight upward to a narrow band of partners and senior advisers. That model rewarded scale. Firms could deploy teams of smart graduates to gather data, build decks, perform benchmarking, document processes, and power the analysis behind recommendations. It was efficient, profitable, and deeply entrenched.

    Artificial intelligence is now breaking that structure apart.

    The market has been quick to notice the obvious part of the story: work once assigned to junior consultants can increasingly be completed faster, cheaper, and often more consistently by AI-enabled tools. Research synthesis, first-draft presentations, pattern recognition, market scanning, scenario generation, and parts of due diligence no longer require the same labor model they did even two years ago. In professional services, this is not a marginal productivity gain. It is a structural shock.

    Yet this is only half the truth. The deeper truth is more important for clients, advisers, and firms deciding what kind of business they want to become. The same force that is eroding the old consulting pyramid is creating a much larger market for a new kind of consultancy: one built on judgment, enterprise architecture, governance, change leadership, and the disciplined translation of AI capability into operating reality.

    This is the paradox at the heart of consulting’s AI moment. AI destroys low-level advisory work while simultaneously expanding the need for high-value advisory work.

    The New Scarcity Is Not Analysis. It Is Integration.

    The analytical scarcity that once justified large consulting teams is fading. What organizations increasingly lack is not information, but the ability to integrate AI safely, strategically, and at scale. Many enterprises now have pilots, proofs of concept, and isolated use cases. Far fewer have an enterprise-wide model that links AI strategy to governance, process redesign, workforce capability, data readiness, risk controls, and measurable commercial outcomes.

    That gap is where the next generation of consulting value sits.

    Recent global research points to the same conclusion from different angles. McKinsey has reported that while almost all companies are investing in AI, only a tiny minority describe themselves as genuinely mature in adoption, and the major barriers are leadership alignment, operating change, and scaling discipline rather than employee enthusiasm alone. NIST’s AI Risk Management Framework reinforces that AI deployment is not simply a technical issue but a governance and lifecycle challenge. The OECD’s AI Principles and its recent work on enterprise adoption likewise emphasize trustworthy governance, human-centered design, transparency, and capability-building as prerequisites for durable value creation. In Europe, the phased implementation of the EU AI Act is pushing organizations to translate AI ambition into documented controls, accountability, literacy, and risk-based operating practices.

    Taken together, these developments point to a simple reality: enterprises do not need more AI theatre. They need AI orchestration.

    This is why senior advisory work is becoming more valuable. The enterprise challenge is no longer “Can AI do this task?” It is now “How should this business redesign itself so that AI creates measurable value without creating unmanaged risk, fragmented workflows, regulatory exposure, or employee resistance?”

    That question cannot be answered by a chatbot alone.

    From Project Work to Enterprise Transformation

    The strongest global practice is moving beyond isolated use cases towards enterprise transformation. Leading organizations are not treating AI as a bolt-on technology layer. They are redesigning decision flows, clarifying governance, upgrading data foundations, defining accountable ownership, and investing in AI literacy across both executives and delivery teams.

    In practical terms, best practice now rests on six connected disciplines.

    First, strategy. High-performing organizations are explicit about where AI will create value and where it will not. They prioritize a small number of mission-critical business outcomes rather than chasing dozens of disconnected experiments.

    Second, operating model. AI needs a home inside the organization. That means clear sponsorship, role definition, investment logic, model ownership, and a decision-rights framework that prevents innovation from becoming chaos.

    Third, data and technology foundations. AI maturity is constrained by the quality, accessibility, and governance of enterprise data. No amount of enthusiasm compensates for poor metadata, fragmented systems, or weak integration architecture.

    Fourth, governance and trust. Responsible AI is no longer a compliance side note. It is a business requirement. Firms need controls around model risk, human oversight, security, auditability, third-party tools, and policy compliance. This is especially urgent for regulated sectors and for organizations operating across jurisdictions.

    Fifth, workforce and change. The organizations that succeed treat AI adoption as a human transformation. They redesign roles, reallocate work, retrain managers, and engage employees early. Change management is not the packaging around the transformation; it is the transformation.

    Sixth, value realization. Mature adopters define metrics in advance. They measure cycle-time reduction, cost-to-serve, quality uplift, revenue impact, risk reduction, and adoption depth. Without this discipline, AI becomes another innovation story rather than a business result.

    Every one of these domains is advisory-intensive. None can be solved by technology procurement alone. This is why consulting is not disappearing. It is being re-priced around deeper capability.

    Why the Old Pyramid Is Collapsing

    The traditional consulting pyramid assumed that clients would continue paying for labor-intensive analytical assembly. That assumption no longer holds. If AI can compress work that once took five analysts and two weeks into a few hours of guided review, then the economics of leverage change dramatically. Clients will be less willing to fund armies of junior staff producing outputs that can now be generated, compared, and refined by machines.

    This does not mean junior talent becomes irrelevant. It means the apprenticeship model must change. Tomorrow’s consultants will need stronger problem framing, industry context, facilitation, governance awareness, and data fluency much earlier in their careers. The premium will shift away from producing slides and toward shaping decisions.

    For consulting firms, this creates a stark strategic choice. They can defend the old model and watch margins erode, or they can redesign around senior expertise, domain-led teams, AI-enabled delivery, and repeatable transformation frameworks. The winners will not be those with the largest bench. They will be those with the clearest method for helping clients move from experimentation to enterprise maturity.

    The Bolgiaten Proposition: AI Maturity Assessment as a Strategic Entry Point

    This is exactly why Bolgiaten’s AI Maturity Assessment is not a nice-to-have diagnostic. It is an essential executive instrument.

    Most enterprises are currently trapped between ambition and execution. Boards want AI value. Business units want faster tools. Risk teams want assurance. IT wants standardization. HR worries about capability and workforce impact. Legal and compliance want clarity on obligations. Everyone is right, but very few organizations have a common picture of where they actually stand.

    An AI Maturity Assessment solves that problem.

    At its best, such an assessment gives leadership a clear, evidence-based view of current capability across the dimensions that matter most: strategy, governance, data readiness, technology architecture, operating model, workforce capability, responsible AI controls, and value realization. It reveals where the enterprise is genuinely ready, where it is exposed, where investment should be prioritized, and what sequence of actions will unlock scale.

    For Bolgiaten, this creates a compelling market proposition.

    First, it establishes a trusted advisory entry point. Instead of selling abstract AI transformation, Bolgiaten can begin with a structured diagnosis grounded in enterprise reality.

    Second, it converts uncertainty into a roadmap. Clients do not simply receive a score; they receive a staged transformation pathway tied to business outcomes, risk posture, and organizational readiness.

    Third, it creates board-level relevance. AI has now moved into the language of competitiveness, resilience, compliance, and workforce redesign. An assessment translates technical noise into executive decisions.

    Fourth, it opens downstream consulting opportunities. Once maturity gaps are visible, the follow-on demand becomes clear: governance frameworks, operating model redesign, use-case prioritization, AI policy development, vendor evaluation, workforce capability building, and enterprise change management.

    In other words, the assessment is both a client value tool and a consultancy growth engine.

    Why This Is a Massive Consultancy Opportunity

    The opportunity is massive because nearly every medium and large enterprise now needs the same sequence of support. They need to understand their AI maturity. They need to prioritize use cases. They need to redesign processes. They need to establish governance. They need to upskill leaders and teams. They need to embed trust, compliance, and accountability. And they need to prove measurable value.

    That demand is horizontal across industries and vertical within them. Financial services, telecoms, public sector, logistics, infrastructure, energy, health, and professional services all face the same core challenge: AI cannot remain a pilot portfolio. It must become an enterprise capability.

    This is precisely the territory where seasoned consulting earns its keep. The work is cross-functional, politically sensitive, operationally complex, and deeply human. It requires facilitation, judgment, pattern recognition, and the ability to move senior stakeholders from fragmented enthusiasm to coordinated action.

    That is why the future consultancy will look different. It will be smaller at the base, stronger at the center, and far more valuable at the top. It will use AI aggressively in delivery, but it will sell wisdom, not labor. It will package diagnostics, roadmaps, governance architectures, and transformation methods. It will blend technology fluency with organizational design and change capability.

    The Bottom Line

    The consulting industry is not facing extinction. It is facing selection.

    The firms under pressure are those still organized around work that AI now performs adequately. The firms that will grow are those that understand AI as a force that raises the premium on human judgment. As analytical work becomes automated, the value migrates upward to synthesis, leadership, architecture, governance, and change.

    The pyramid is collapsing. But what rises from its foundations will be something more strategic and more durable: a professional services model built not on scale, but on wisdom; not on volume, but on vision.

    And in that new model, tools such as Bolgiaten’s AI Maturity Assessment will become indispensable. They provide the starting point every serious enterprise now needs: an honest view of readiness, a practical route to maturity, and a disciplined bridge from AI ambition to enterprise performance.

    That is not simply a service offering. It is the gateway to the next great consultancy market.

    Bolgiaten offers a free one-hour consultation with Professor Paul Morrissey to discuss these and other related AI issues across your organization. To request one, please email PJM@bolgiaten.com

  • Rethinking Cyber Defense Across Multiple Attack Surfaces

    Whenever technology evolves, cyber threats evolve alongside it. The arrival of autonomous and agentic artificial intelligence is accelerating that evolution in ways that many organisations are only beginning to understand. The real shift is not simply the automation of attacks, but the emergence of penetration at scale across multiple attack surfaces.

    In practical terms, this means attackers will increasingly be able to automate the entire attack cycle—from reconnaissance and vulnerability discovery to credential compromise, data extraction, and deception-based intrusion. AI systems can simultaneously probe identities, applications, networks, cloud environments and human decision-makers. The result is not a single attack vector but a coordinated campaign that unfolds across an organisation’s entire digital ecosystem.

    This represents a profound departure from the traditional model of cyber intrusion. Historically, human attackers focused their attention on a limited number of targets, investing time in reconnaissance before launching an intrusion. Artificial intelligence changes that equation dramatically. Autonomous tools can continuously scan for vulnerabilities across thousands or millions of potential targets, learning from each interaction and refining their approach in real time.

    The implication is clear: the future threat environment is defined by scale, persistence and simultaneous pressure across multiple attack surfaces.

    Penetration at AI Scale

    Human cybercriminals have historically been constrained by time and operational capacity. Identifying vulnerable systems, crafting convincing phishing campaigns, or attempting credential theft required careful manual effort. AI-enabled systems remove many of these constraints.

    Autonomous tools can perform reconnaissance continuously, mapping attack surfaces across identities, APIs, cloud infrastructure, and enterprise systems. They can generate and test thousands of phishing messages, automatically adapt social engineering techniques, and exploit exposed credentials within minutes of discovery.

    The attack does not occur in a single place. Instead, it unfolds across multiple surfaces simultaneously:

    • Identity systems such as authentication platforms and privileged accounts
    • Cloud infrastructure and software-as-a-service environments
    • APIs and interconnected digital services
    • AI models and data pipelines themselves
    • Human users targeted through increasingly convincing deception

    This is what penetration at scale looks like: not one entry point, but many potential openings tested continuously until one succeeds.

    And once access is achieved, AI-driven tools may accelerate lateral movement, privilege escalation and data discovery far more quickly than human attackers could manage. Sensitive data can be identified, aggregated and exfiltrated automatically, while malicious software can be inserted to enable future exploitation.

    At the same time, organisations themselves are rapidly deploying AI agents across their operations—from customer service and internal knowledge management to supply chains and decision support. While these systems deliver clear efficiency gains, they also introduce new vulnerabilities and attack surfaces that traditional cybersecurity frameworks were not designed to address.

    In particular, researchers have highlighted the risk of prompt injection attacks, data poisoning, model manipulation and agent misalignment. These vulnerabilities allow malicious actors to manipulate AI systems themselves, turning internal automation tools into potential attack vectors.

    In short, the defensive environment is becoming more complex at the same moment that offensive capability is becoming more automated.

    A New Cybersecurity Landscape

    We are therefore entering a new phase of cybersecurity where defence must operate at the same scale and speed as AI-enabled threats. Reactive models of cybersecurity—where incidents are analysed and mitigated after detection—will increasingly struggle to keep pace with automated attacks unfolding in real time.

    Governments and regulators are already recognising this shift. Emerging initiatives such as AI risk management frameworks, secure AI system development guidance, and new cybersecurity standards are being developed to help organisations manage these risks. The direction of travel is clear: cybersecurity must become more proactive, predictive and resilient.

    For businesses, this means developing a cybersecurity playbook designed specifically for the AI era.

    A Cybersecurity Playbook for the Agentic Era

    Every organisation should now be developing a strategic framework that prepares it for penetration attempts occurring simultaneously across multiple attack surfaces.

    The first element of such a playbook is governance. Organisations deploying AI systems must implement clear policies defining how those systems operate, what data they can access, and how their actions are monitored. Robust identity and access management is essential, alongside detailed logging and audit mechanisms capable of tracking both human and machine decision-making.

    Second, incident response strategies must evolve. Traditional response processes assume that human analysts investigate threats and then take action. When attacks unfold at machine speed, that model becomes increasingly impractical.

    Defensive systems will need automated containment capabilities capable of isolating compromised services, revoking credentials, and limiting lateral movement in real time. This raises an important governance question for leadership teams: when should automated systems be authorised to take disruptive action in order to protect the organisation?

    In many cases, cybersecurity platforms will need authority to shut down systems or restrict operations temporarily to prevent wider compromise. Determining where those boundaries lie will become a critical leadership decision in the coming years.
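    One way to make those boundaries explicit is to encode them as a decision-rights policy: a mapping from incident severity to the containment actions an automated platform may take without a human approver. The sketch below is purely illustrative; the severity tiers, action names, and the choice to keep system shutdown with a human at every tier are assumptions each organisation would set for itself, not a recommended standard.

```python
# Illustrative decision-rights sketch for automated containment.
# Tiers, action names, and boundaries are hypothetical assumptions;
# every organisation must draw these lines for itself.

AUTO_ALLOWED = {
    "low":      {"alert"},
    "medium":   {"alert", "revoke_credentials"},
    "high":     {"alert", "revoke_credentials", "isolate_service"},
    "critical": {"alert", "revoke_credentials", "isolate_service"},
    # In this sketch, shutting down a business system always
    # requires a human approver, even at "critical" severity.
}

def authorise(action: str, severity: str) -> str:
    """Return 'auto' if the platform may act alone, else 'escalate'."""
    return "auto" if action in AUTO_ALLOWED.get(severity, set()) else "escalate"

print(authorise("revoke_credentials", "high"))   # acts immediately
print(authorise("shutdown_system", "critical"))  # waits for a human
```

    Writing the policy down in this form forces the leadership conversation the article describes: every cell in the mapping is a boundary someone must own.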

    Third, organisations must prioritise workforce awareness. AI-powered deception techniques—including deepfake audio, synthetic video, and highly personalised phishing—are becoming increasingly sophisticated. Security awareness cannot remain confined to IT departments; it must become a universal organisational capability.

    Employees need training to recognise emerging forms of manipulation and to understand the role they play in maintaining cyber resilience. Just as importantly, training programmes must evolve continuously as new attack techniques emerge.

    Finally, organisations must remain aligned with emerging standards and frameworks. Cybersecurity policies that remain static will quickly become obsolete in a rapidly evolving threat environment. Continuous review against global best practices ensures that defensive strategies remain current.

    The Strategic Message

    If there is one central message for business leaders, it is this: the emergence of AI-enabled penetration at scale across multiple attack surfaces represents more than simply another cybersecurity threat.

    It represents a transformation of the entire threat landscape.

    Defensive strategies built for a slower, more predictable era of cyber intrusion are no longer sufficient. Organisations must now prepare for a world in which attacks occur continuously, adapt dynamically, and operate simultaneously across infrastructure, software, identities, data and human behaviour.

    In such an environment, cybersecurity resilience depends not only on stronger tools but on stronger strategy.

    The organisations that succeed will be those that recognise the scale of this transformation early, rethink their security playbooks, and build defences capable of operating at the same speed and scale as the threats they face.

  • The Hidden Risks of Unsupervised AI Agents

    Why the Real Economic Impact of AI Is Harder to Measure Than You Think.

    Over the past year I have had many conversations with executives, board members, and investors about Agentic AI and the profound changes it promises to bring to organisations. The tone of these discussions is usually enthusiastic, and understandably so.

    We are told that AI agents will unlock new revenue streams, dramatically increase productivity, and automate complex workflows across the enterprise. Marketing teams expect faster campaign creation, customer service leaders expect 24-hour support automation, finance departments expect automated reconciliation, and operations teams expect continuous optimisation. In short, everyone is focused on the upside.

    But there is a question I often ask in boardrooms and strategy sessions that tends to bring the conversation to a pause:

    How do you actually measure the real economic value of AI?

    Because while everyone is excited about the promise of increased revenue and operational efficiency, far fewer organisations are measuring the full economic impact of AI — including the hidden risks that come with deploying autonomous or semi-autonomous AI agents. And those risks can be significant.

    The Problem with Simplistic ROI Thinking

    Most AI business cases presented to CFOs follow a predictable format.

    They focus on two numbers:

    1. Revenue growth
    2. Operational efficiency

    This is a reasonable starting point. AI can absolutely help organisations generate new revenue opportunities and reduce operational costs. But it is only part of the picture. What is often missing from these models is a third and much more complex factor: Intangible Benefits (IB).

    These can be positive — such as improved customer experience, faster innovation, or stronger competitive positioning.

    But they can also be negative. And when negative intangibles occur in the context of AI systems, they can escalate quickly. Before discussing those risks, it helps to introduce a simple framework I often use when discussing AI economics with executive teams.

    A Practical Metric for Measuring AI Value

    One way to frame the discussion with finance leaders — particularly the CFO, who is usually the most sceptical person in the room — is to express the impact of AI in terms of Economic Impact (EI) relative to the organisation’s financial scale.

    The metric I use is the following:

    Economic Impact (EI) = (Δ Revenue + Δ Efficiency + Intangible Benefits) / EBITDAR

    Where:

    • Δ Revenue represents the incremental revenue generated by AI initiatives (Use Cases)
    • Δ Efficiency represents measurable improvements in productivity or cost reduction
    • Intangible Benefits (IB) capture both positive and negative strategic effects
    • EBITDAR represents Earnings Before Interest, Taxes, Depreciation, Amortisation and Restructuring (or Rent), which effectively normalises the organisation’s operating scale

    Why divide by EBITDAR?

    Because doing so contextualises the Economic Impact (EI) relative to the size of the organisation. A £5 million efficiency gain means something very different to a company with £20 million EBITDAR than it does to one with £500 million.
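    The scale effect above can be sketched in a few lines. The figures below are the article's illustrative ones (a £5 million efficiency gain against £20 million and £500 million EBITDAR); the zero values for revenue and intangible benefits are simplifying assumptions for the example only.

```python
# Minimal sketch of the Economic Impact (EI) metric.
# All figures are illustrative, not drawn from any real engagement.

def economic_impact(delta_revenue, delta_efficiency, intangible_benefits, ebitdar):
    """EI = (Δ Revenue + Δ Efficiency + IB) / EBITDAR."""
    return (delta_revenue + delta_efficiency + intangible_benefits) / ebitdar

# The same £5m efficiency gain viewed at two different operating scales:
small_firm = economic_impact(0, 5_000_000, 0, 20_000_000)
large_firm = economic_impact(0, 5_000_000, 0, 500_000_000)

print(f"EI at £20m EBITDAR:  {small_firm:.2%}")   # 25.00%
print(f"EI at £500m EBITDAR: {large_firm:.2%}")   # 1.00%
```

    Note that a negative IB term — a data breach, a trust failure — subtracts directly from the numerator, which is exactly why the intangible component cannot be left out of the model.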

    This framework gives the CFO a common financial language in which to evaluate AI initiatives. But the most important component of the equation is the one that is most frequently ignored: Intangible Benefits (IB).

    The Hidden Side of Intangible Benefits (IB)

    When organisations present AI initiatives internally, intangible benefits are usually framed in positive terms:

    • improved decision-making
    • faster response times
    • enhanced customer experiences
    • stronger brand perception

    All of these are real.

    However, the negative intangible impacts that can emerge from poorly supervised AI systems are often underestimated, particularly when organisations begin deploying autonomous AI agents.

    AI agents are powerful because they can act independently — analysing information, making decisions, and executing tasks across multiple systems. But autonomy without governance creates new categories of risk.

    Three deserve careful attention.

    1. Data Leakage

    AI systems depend heavily on data.

    When those systems are connected to internal knowledge bases, customer records, contracts, or intellectual property, the risk of data leakage becomes significant.

    This can occur in multiple ways:

    • sensitive data being exposed through prompts or responses
    • proprietary information being incorporated into external models
    • confidential customer data being accessed or transmitted improperly

    The consequences can range from regulatory breaches to loss of competitive advantage. In highly regulated sectors — such as telecommunications, healthcare, or finance — the reputational damage alone can be considerable.

    2. Hallucination and Customer Trust

    Large language models and AI agents can sometimes generate hallucinations — confident but incorrect responses.

    In internal workflows this may simply create inefficiencies.

    In customer-facing systems, however, the consequences can be more serious.

    Imagine an AI agent:

    • giving incorrect billing information
    • misrepresenting product capabilities
    • generating misleading compliance guidance

    The immediate impact is poor customer experience. But the deeper issue is trust erosion.

    Trust, once lost, is extremely difficult to rebuild.

    3. Model Drift

    AI systems are not static.

    Over time, models can experience drift — where their behaviour gradually deviates from expected performance.

    This may occur because:

    • the underlying data environment changes
    • feedback loops alter model behaviour
    • system updates introduce unintended bias or errors

    If drift is not detected early, the organisation may continue operating under the assumption that AI outputs remain accurate. In reality, decision quality may already be deteriorating.
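    In practice, early drift detection can be as simple as comparing a model's recent performance against the baseline recorded at deployment and alerting when the gap exceeds an agreed tolerance. A minimal sketch follows; the baseline, window size, and tolerance are assumed values that a real monitoring programme would calibrate per model.

```python
from collections import deque

class DriftMonitor:
    """Flags drift when rolling accuracy falls below baseline minus tolerance.

    Illustrative sketch only: baseline, window, and tolerance are
    assumptions a real monitoring programme would set deliberately.
    """

    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool):
        self.outcomes.append(1 if correct else 0)

    def drifting(self) -> bool:
        if not self.outcomes:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

monitor = DriftMonitor(baseline_accuracy=0.90)
for _ in range(80):
    monitor.record(True)    # model performing as expected
for _ in range(20):
    monitor.record(False)   # quality quietly degrading
print("Drift detected:", monitor.drifting())
```

    The point of the sketch is the organisational one made above: without some explicit check like this, the enterprise simply assumes outputs remain accurate while decision quality deteriorates.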

    Reputation: The Fragile Asset

    When organisations discuss AI benefits, they often overlook the fact that reputation is one of the most valuable assets any company possesses.

    And reputation behaves asymmetrically. One bad event can wipe out thousands of positive interactions. I often summarise it in very simple terms:

    One negative event can wipe out 10,000 positive ones.

    In the context of AI, this could be:

    • a widely reported data breach
    • an AI-generated decision perceived as unethical
    • a discriminatory algorithmic outcome
    • a regulatory violation resulting from automated decision-making

    These events do not just affect operations. They affect brand trust, customer loyalty, regulatory scrutiny, and investor confidence. All of which belong squarely within the Intangible Benefits (IB) component of the economic impact equation.

    Why Governance Matters

    None of this should be interpreted as an argument against AI. Far from it.

    AI will undoubtedly become one of the most powerful productivity tools organisations have ever deployed. But the organisations that succeed will not simply deploy AI faster than others. They will deploy it more responsibly and more intelligently.

    That means introducing:

    • strong AI governance frameworks
    • human oversight for critical decisions
    • continuous model monitoring
    • robust data protection mechanisms
    • clear ethical guidelines for AI deployment

    In other words, AI should augment human judgement — not replace it entirely.

    The Conversation CFOs Need to Have

    Whenever I present the Economic Impact (EI) equation to executive teams, I emphasise one point. The equation is not just a financial model. It is a governance conversation.

    It forces leadership teams to ask:

    • What new revenue can AI truly create?
    • What measurable efficiencies will it deliver?
    • What positive intangible benefits will it generate?
    • And critically, what negative intangible risks might it introduce?

    Only by considering all four elements together can organisations measure the true economic value of AI. Because if the numerator in the equation includes hidden risks that no one is monitoring, the apparent economic impact may be overstated.

    And when those risks materialise, the consequences can be sudden and severe.

    Final Thoughts

    AI agents will undoubtedly transform how organisations operate. They will create extraordinary opportunities for automation, innovation, and growth. But as with all powerful technologies, the benefits must be balanced with careful governance and realistic economic measurement. The organisations that thrive in the AI era will not be those that chase automation blindly. They will be those that understand both the upside and the downside and measure the true economic impact accordingly.

    Why This Thinking Matters in AI Readiness

    This type of thinking is precisely why I developed my AI Readiness Assessment methodology. Too many organisations approach AI adoption as a technology deployment exercise rather than a strategic capability transformation.

    The purpose of the AI Readiness Assessment is to help organisations understand:

    • where they currently stand with AI maturity
    • how strong their governance and risk frameworks are
    • whether their data foundations are ready for AI deployment
    • how AI initiatives can be measured in terms of real economic impact

    More importantly, it allows organisations to design an AI journey that is measurable, risk-aware, and sustainable. In other words, it helps organisations capture the upside of AI while ensuring the hidden risks — the ones that often sit inside the “Intangible Benefits” component of the equation — are properly understood and managed.

    Because the real challenge of AI is not deploying it.

    The real challenge is deploying it responsibly, strategically, and in a way that strengthens the organisation rather than exposing it to unnecessary risk.

    If you would like to learn more about the AI Readiness Assessment methodology, feel free to contact me directly at: pjm@bolgiaten.com

    I would be delighted to continue the conversation.