• From Intellectual Gymnastics to Enterprise Value


    Why AI Readiness and the AI Canvas Matter

    For much of the last decade, enterprise conversations about artificial intelligence have been dominated by one deceptively simple question: which AI model is the best? Benchmarks, leaderboards, and release announcements have reinforced the belief that superior models automatically translate into superior business outcomes. In practice, this mindset has become one of the biggest obstacles to realizing real value from AI.

    The uncomfortable truth is that most organizations do not fail at AI because they choose the wrong model. They fail because they deploy AI without organizational readiness, economic clarity, and governance discipline. The result is a familiar pattern: fragmented pilots, impressive demonstrations, and very little sustained business impact. This is the point at which AI devolves into intellectual gymnastics—clever, technically interesting, and strategically hollow.

    The shift now required is not technological but managerial. AI must be treated as a governed, value-producing capability, not a collection of experiments. Two methodologies are central to making that shift real: a rigorous AI Readiness Assessment and the disciplined use of the AI Canvas.


    Why “Best Model” Thinking Breaks Down

    By 2026, most leading AI models are broadly comparable for general-purpose tasks. The differentiator is no longer raw capability, but fitness for purpose. Just as organizations do not hire employees solely based on IQ scores, they should not select AI systems based on headline benchmarks alone.

    Different business functions demand different AI characteristics. Finance and risk functions require traceability, explainability, and strong controls. Marketing and innovation teams may tolerate higher variability in exchange for creativity and speed. High-stakes environments demand restraint; low-stakes environments reward experimentation.

    The real leadership challenge, therefore, is not selecting a single “best” model, but orchestrating a portfolio of capabilities aligned to tasks, risks, and outcomes. That orchestration cannot happen in a vacuum. It requires organizational readiness and economic discipline—precisely where most AI programs falter.


    The Real Failure Mode: Fragmented Experimentation

    Inside many enterprises, AI activity is widespread but shallow. Teams run pilots in isolation, data foundations are inconsistent, accountability is blurred, and governance arrives too late—often after something has gone wrong.

    This fragmentation creates three structural problems. First, value is impossible to measure consistently, making it hard to distinguish promising initiatives from expensive distractions. Second, risk accumulates invisibly across the organization, particularly around data privacy, bias, and third-party dependencies. Third, boards and executives lack a coherent view of AI posture, leaving them unable to govern what they cannot see.

    This is not a tooling problem. It is a readiness problem.


    AI Readiness as a Board-Level Discipline

    An AI Readiness Assessment addresses a fundamental question: is the organization structurally, culturally, and operationally capable of scaling AI safely and profitably? This is not about technical maturity alone. It is about whether strategy, governance, people, data, platforms, and risk management are aligned to support AI as a core capability.

    A comprehensive readiness model evaluates maturity across dimensions such as strategy and value alignment, governance and ethics, workforce skills, data foundations, model lifecycle assurance, platform resilience, third-party risk, and monitoring. Crucially, it establishes clear maturity levels, allowing organizations to move beyond vague ambition toward evidence-based progress.

    For boards and senior executives, this creates a single, integrated view of AI readiness. It provides a common language for discussing AI posture, investment priorities, and risk appetite. It also enables benchmarking—both internally over time and externally against peers—turning AI governance into a measurable, repeatable discipline rather than a reactive compliance exercise.
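
    To make the idea of maturity levels and benchmarking concrete, the minimal sketch below shows one way dimension-level maturity ratings could be rolled up into a single, trackable readiness score. The dimension names, weights and the 1–5 scale are illustrative assumptions, not a prescribed readiness model.

```python
# Illustrative sketch only: dimension names, weights and the 1-5 maturity
# scale are assumptions, not a prescribed readiness model.

READINESS_DIMENSIONS = {
    # dimension: weight (weights sum to 1.0)
    "strategy_and_value_alignment": 0.15,
    "governance_and_ethics": 0.15,
    "workforce_skills": 0.10,
    "data_foundations": 0.15,
    "model_lifecycle_assurance": 0.15,
    "platform_resilience": 0.10,
    "third_party_risk": 0.10,
    "monitoring": 0.10,
}

def readiness_score(maturity: dict[str, int]) -> float:
    """Weighted average maturity on a 1 (ad hoc) to 5 (optimised) scale."""
    return sum(READINESS_DIMENSIONS[d] * maturity[d] for d in READINESS_DIMENSIONS)

def weakest_dimensions(maturity: dict[str, int], threshold: int = 2) -> list[str]:
    """Dimensions at or below the threshold: candidates for foundational investment."""
    return [d for d, level in maturity.items() if level <= threshold]

# Example: an organisation strong on ambition but weak on foundations.
assessment = {
    "strategy_and_value_alignment": 4,
    "governance_and_ethics": 2,
    "workforce_skills": 3,
    "data_foundations": 2,
    "model_lifecycle_assurance": 2,
    "platform_resilience": 3,
    "third_party_risk": 2,
    "monitoring": 1,
}

print(round(readiness_score(assessment), 2))   # 2.4 out of 5
print(weakest_dimensions(assessment))          # where scaling would be premature
```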

    Most importantly, readiness assessments link AI ambition to organizational reality. They force difficult but necessary conversations: Are we overreaching? Are we underinvesting in foundations? Are we exposing ourselves to risks we do not understand? Without this clarity, scaling AI is not transformation—it is chance.


    The AI Canvas: From Ideas to Economically Defensible Use Cases

    If AI readiness answers whether the organization can scale AI, the AI Canvas answers whether a specific AI use case is worth pursuing in the first place.

    The AI Canvas is deliberately problem-first. It begins not with technology, but with a clearly articulated business problem, expressed without implying a solution. This distinction matters. “Reduce customer churn” is a business problem. “Build a churn prediction model” is a premature technical answer.

    From there, the Canvas forces early consideration of performance thresholds, data availability, operational constraints, ethical implications, and—critically—economic impact. It asks uncomfortable questions upfront: What level of accuracy is actually required to create value? What is the cost of being wrong? What assumptions underpin the business case, and how sensitive are outcomes to those assumptions?

    This discipline transforms AI ideation. It replaces enthusiasm with scrutiny, and replaces intuition with quantified judgment. Use cases that survive this process are not just technically feasible; they are economically and operationally defensible.


    Economic Impact as a Governance Control

    One of the most powerful aspects of the AI Canvas is its insistence on explicit economic impact modelling. This is not post-hoc ROI justification. It is pre-build economic reasoning.

    By estimating direct benefits, indirect benefits, and total costs before significant investment, organizations gain a baseline against which performance can be measured and governed. Economic impact becomes a control mechanism, not a marketing slide.

    For boards, this is decisive. Expressing AI impact in terms that relate to EBITDA or operating performance allows AI initiatives to be assessed alongside other capital allocation decisions. It also strengthens accountability. If a use case cannot articulate its expected economic contribution within a reasonable confidence range, it should not progress—regardless of how impressive the technology appears.

    This approach also surfaces risk early. Wide uncertainty ranges in economic assumptions highlight where further validation is required, whether in data quality, process design, or regulatory interpretation. In this way, economic modelling becomes inseparable from risk management.
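
    As a minimal illustration of what pre-build economic reasoning can look like, the sketch below expresses a hypothetical use case as benefit and cost ranges rather than point estimates, so that wide uncertainty is visible before anything is built. All figures and categories are assumptions for illustration only.

```python
# Illustrative pre-build economic sketch: all figures and ranges are
# assumptions for a hypothetical use case, not real data.

def net_annual_impact(direct_benefit, indirect_benefit, total_cost):
    """Each argument is a (low, high) annual range; returns the net impact range."""
    low = direct_benefit[0] + indirect_benefit[0] - total_cost[1]   # worst case
    high = direct_benefit[1] + indirect_benefit[1] - total_cost[0]  # best case
    return low, high

# Hypothetical churn-reduction use case (annual figures).
direct = (400_000, 900_000)      # retained revenue from reduced churn
indirect = (50_000, 200_000)     # lower acquisition spend, better forecasting
cost = (250_000, 400_000)        # build, run, data remediation, oversight

low, high = net_annual_impact(direct, indirect, cost)
print(f"Net annual impact range: {low:,} to {high:,}")
# A wide or partly negative range is a governance signal: validate the
# assumptions (data quality, adoption, process change) before committing to build.
```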


    From Experiments to Orchestration

    When AI readiness assessments and the AI Canvas are used together, organizations undergo a subtle but profound shift. AI stops being a series of disconnected experiments and becomes a managed portfolio of capabilities.

    Readiness provides the enterprise-wide foundation: governance, skills, platforms, and oversight. The Canvas provides the use-case-level rigor: problem clarity, economic justification, and risk awareness. Together, they enable orchestration—matching the right capabilities to the right problems within clearly defined guardrails.

    This portfolio-based approach also reduces dependency risk. Organizations become less vulnerable to single-model failure, vendor lock-in, or sudden regulatory change. They gain resilience by design.


    What Serious Organizations Do Differently

    Organizations that consistently extract value from AI share a common trait: they treat AI as a strategic discipline, not an innovation sideshow. They invest in readiness before scale. They require economic clarity before build. They involve governance voices early, not as an afterthought. And they equip boards with the tools and language needed to exercise informed oversight.

    This does not slow innovation. On the contrary, it accelerates it by eliminating waste, reducing rework, and focusing effort where value is real.


    Conclusion: AI Without Discipline Is Not Strategy

    The era of AI as intellectual gymnastics is ending. As AI becomes embedded in the operational core of organizations, tolerance for unfocused experimentation will diminish. Leaders will be judged not by how much AI they deploy, but by the value, resilience, and trustworthiness of the systems they build. AI Readiness Assessments and the AI Canvas are not academic frameworks. They are the infrastructure of serious AI adoption. Organizations that embrace them will move from playing with AI to performing with it—consistently, responsibly, and at scale.

  • AI Agents in an AI-Native World: A Governance-First Guide


    Preface

    This article reflects my perspective on the responsible adoption of AI agents within the context of TM Forum’s AI & Data mission, the AI Governance Toolkit, the Data Governance Framework, the AI-Native Blueprint and my own private consultancy work – in particular the Security & Governance and Agentic AI workstreams.

    As Global Ambassador for AI/ML Governance at TM Forum, my focus is on helping Communications Service Providers (CSPs), ecosystem partners and adjacent industries (government and enterprise) move beyond isolated proofs of concept towards safe, secure and scalable AI-native operations. That means anchoring every innovation – including the current wave of agentic AI – in clear governance, trusted data, transparent decisioning, and demonstrable business value.

    

    The purpose of this article is pragmatic. It offers a simple, governance-aligned pathway for leaders and practitioners who want to begin experimenting with AI agents today, without compromising on control, compliance or trust. It connects the practical ‘how’ of getting started with agents to the ‘why’ and ‘so what’ of TM Forum’s AI Governance assets and the AI-Native Blueprint Security & Governance approach.

    Professor Paul Morrissey
    Global Ambassador – AI/ML Governance, TM Forum & Chairman, Bolgiaten Limited.


    AI Agents in an AI-Native World: A Governance-First Guide

    AI agents are rapidly emerging as one of the most powerful patterns in the AI landscape.

    They can plan, decide, act and collaborate across systems in ways that move us beyond simple chatbots or fixed automation. For Communications Service Providers, Digital Service Players and the whole enterprise business domain, this is directly aligned with the shift towards AI-native operations and Open Digital Architecture (ODA).

    However, the fact that something is technically possible does not make it operationally wise. TM Forum’s AI Governance Toolkit, Data Governance Framework and AI & Data Governance project all emphasise the same point: AI at scale must be governed by design, not bolted on as an afterthought. That is even more important when we move into the world of agentic AI – systems that can invoke tools, call APIs, read and write data, and trigger real-world actions with relatively little human intervention.

    In parallel, the TM Forum AI-Native Blueprint defines the foundational capabilities, principles and operational enablers needed to embed AI natively into CSP architectures and operations. Its workstreams – including Agentic AI, Security & Governance, Data Architecture and AI Operations – provide a coherent industry blueprint for how agents, data and models should be designed, secured and governed end-to-end.

    In this context, the practical question is: how do we start working with AI agents in a way that is aligned with these Global Best Practice frameworks, delivers measurable value, and does not introduce uncontrolled risk? The rest of this article provides a five-step, governance-first roadmap to do exactly that.


    From Automation to Agentic AI in an AI-Native Architecture

    Traditional software automates well-defined tasks by following static, hard-coded rules. It is fast and reliable within the boundaries we specify, but it does not reason or adapt.

    Conversational AI systems – the first wave of large language model (LLM) chatbots – advanced this model by interpreting natural language prompts. They excel at answering questions and generating content but still tend to operate in a single-turn or narrow multi-turn paradigm: you ask, they respond.

    Agentic AI changes this. Agents can:

    • Understand and align to high-level goals rather than just isolated instructions.
    • Break goals into sub-tasks, plan multi-step workflows and adapt as they go.
    • Orchestrate tools, APIs, data sources and enterprise systems.
    • Operate continuously and contextually rather than in a ‘one-and-done’ mode.

    Within an AI-native architecture, agents become first-class citizens. They sit alongside traditional services and ODA components, consuming data products, invoking intent-based APIs and collaborating with human operators. This is why governance is non-negotiable: agents are not just answering questions, they are influencing – and sometimes executing – real business operations.

    

    For this reason, I always advise clients and their partners to think of agents not as ‘virtual workers’ to be substituted for people, but as governed capabilities that augment human judgment and organisational intelligence. The role of TM Forum’s AI and data assets is to make that augmentation safe, observable and accountable.


    A Five-Step, Governance-Aligned Roadmap for Getting Started with Agents

    To cut through the hype and complexity, it helps to start small and structured. The following five steps are designed to be simple enough for immediate experimentation, but robust enough to map directly to TM Forum’s AI Governance, Data Governance and AI-Native Blueprint principles.


    Step 1: Define a Governed, Value-Driven Use Case

    The first decision is not which platform to use, but which problem to solve – and whether solving it with agents is both valuable and governable.

    Start with a task that is:

    • Repetitive and time-consuming.
    • Well understood by your domain experts.
    • Measurable in terms of cost, time or quality.
    • Low to moderate risk if something goes wrong.

    Typical examples in a CSP or digital enterprise context include:


    • Generating periodic performance or sales summaries from existing data.
    • Turning meeting transcripts or tickets into structured actions and follow-up communications.
    • Triaging customer requests, routing them and suggesting responses.
    • Monitoring competitor, network or market signals and producing concise insight briefs.

    From a TM Forum governance perspective, even at this early stage you should be asking:


    • What is the business objective for this agentic use case?
    • Which KPIs or value levers will we use to measure success?
    • What are the risks – operational, regulatory, reputational, security – and who owns them?

    This aligns directly with the AI Governance Toolkit and AI Risk Atlas thinking: value and risk must be framed together from the outset, not separately or sequentially. The AI Canvas is a useful instrument for this purpose.
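
    For teams that want to capture these questions in a reviewable form, the sketch below shows one possible lightweight record for a candidate agentic use case. The field names, example values and suitability rule are illustrative assumptions, not a formal AI Canvas or TM Forum schema.

```python
from dataclasses import dataclass

# Illustrative sketch: a lightweight, reviewable record for a candidate agentic
# use case. Field names and example values are assumptions, not a formal
# AI Canvas or TM Forum schema.

@dataclass
class AgenticUseCase:
    business_objective: str
    kpis: list[str]                      # value levers defined up front
    risks: dict[str, str]                # risk -> accountable owner
    risk_level: str                      # "low" | "moderate" | "high"
    human_escalation_required: bool = True

    def is_suitable_pilot(self) -> bool:
        """A first agentic pilot should be measurable and low-to-moderate risk."""
        return bool(self.kpis) and self.risk_level in {"low", "moderate"}

ticket_triage = AgenticUseCase(
    business_objective="Reduce average handling time for routine support tickets",
    kpis=["mean time to triage", "misrouting rate", "cost per ticket"],
    risks={"mis-routing of regulatory complaints": "Head of Customer Operations"},
    risk_level="moderate",
)

print(ticket_triage.is_suitable_pilot())  # True: measurable, moderate risk, escalation on
```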


    Step 2: Choose Tools That Fit an AI-Native, Governed Architecture

    Once the use case is clear, the next step is to select the right implementation path. There are two broad approaches:

    1. Visual and low-code agent platforms
      These allow non-specialists to design agents and workflows using natural language and simple configuration. They are ideal for early experimentation, provided they can integrate with your security, identity and data governance controls.
    2. Code-first and framework-based approaches
      Frameworks such as LangChain, AutoGen or CrewAI give engineering teams greater control over how agents plan, call tools, manage memory and interact with each other. They are well suited to embedding agents deeply into ODA-conformant architectures and AI-Native design patterns.

    Whichever route you choose, align it with the AI-Native Blueprint and TM Forum’s Open Digital Framework:

    • Treat agents as components within a governed architecture, not side-channel experiments.
    • Use standardised APIs and intent interfaces wherever possible.
    • Ensure platform choices do not bypass your existing security and compliance controls.

    In practice, this often means:

    • Integrating agent platforms with enterprise identity and access management.
    • Ensuring data access is mediated via governed data products and catalogues.
    • Designing for observability: logs, traces, events and metrics that can feed into SecOps and AIOps.

    The AI-Native Blueprint Security & Governance workstream is precisely about standardising this kind of end-to-end approach so that agentic innovation never sits outside your control plane.
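
    To ground this, the framework-agnostic sketch below shows the shape of an agent treated as a governed component: every tool call passes through an access check and is written to an audit log that can feed SecOps and AIOps. The class, function names and wiring are illustrative assumptions, not the API of LangChain, AutoGen, CrewAI or any specific platform.

```python
import logging
from typing import Callable

# Framework-agnostic sketch: names and structure are illustrative assumptions,
# not the API of LangChain, AutoGen, CrewAI or any other platform.

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

class GovernedToolRegistry:
    """Tools an agent may invoke, each gated by an access-control check."""

    def __init__(self, is_permitted: Callable[[str, str], bool]):
        self._tools: dict[str, Callable[..., str]] = {}
        self._is_permitted = is_permitted   # e.g. backed by enterprise IAM

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self._tools[name] = fn

    def invoke(self, agent_id: str, name: str, **kwargs) -> str:
        if not self._is_permitted(agent_id, name):
            audit_log.warning("DENIED %s -> %s", agent_id, name)
            raise PermissionError(f"{agent_id} may not call {name}")
        audit_log.info("CALL %s -> %s args=%s", agent_id, name, kwargs)
        return self._tools[name](**kwargs)

# Example wiring: a single read-only tool, permitted for one agent identity.
def lookup_ticket(ticket_id: str) -> str:
    return f"Ticket {ticket_id}: status=open, priority=medium"

registry = GovernedToolRegistry(is_permitted=lambda agent, tool: agent == "triage-agent")
registry.register("lookup_ticket", lookup_ticket)
print(registry.invoke("triage-agent", "lookup_ticket", ticket_id="42"))
```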


    Step 3: Prepare and Govern the Data the Agent Will Use

    No agent is better than the data and signals it consumes. TM Forum’s Data Governance Framework is very clear on the need for ethical, secure and accountable use of data. For agentic use cases, this translates into a few practical checks:

    1. Discover and classify the data

    • Where does the data live (CRM, billing, network, OSS/BSS, data lake, external feeds)?
    • What is its sensitivity level (customer-identifiable, commercially sensitive, public)?
    • Who is accountable for its quality and use?

    2. Ensure lawful, policy-aligned access

    • Does the agent’s access comply with your data governance policies, privacy regulations and customer expectations?
    • Are there role-based or purpose-based access controls that must be enforced?

    3. Clean and standardise

    • Remove obviously duplicate, obsolete or corrupted records.
    • Align basic formats (dates, IDs, naming conventions) so that the agent’s reasoning is not derailed by noise.

    4. Start with a safe subset

    • For early experimentation, use sampled or synthetic data where possible.
    • Avoid directly connecting agents to production-critical or highly sensitive datasets before the patterns are well understood.

    From a TM Forum standpoint, you should treat data not as an afterthought but as a governed product. Your agents should consume data via well-defined, catalogued data products that carry metadata on lineage, quality and policy constraints. This makes it far easier to prove compliance and to diagnose issues when behaviour is unexpected.
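
    A minimal sketch of how catalogued data-product metadata might gate what an early-stage agent may read is shown below. The field names, sensitivity labels and pilot-phase rule are illustrative assumptions rather than a formal Data Governance Framework construct.

```python
from dataclasses import dataclass

# Illustrative sketch: field names and policy rules are assumptions showing how
# catalogued data-product metadata can gate what an early-stage agent may read.

@dataclass
class DataProduct:
    name: str
    owner: str                    # accountable for quality and use
    sensitivity: str              # "public" | "commercial" | "customer_identifiable"
    approved_purposes: set[str]   # purpose-based access control
    synthetic_available: bool = False

def agent_may_consume(product: DataProduct, purpose: str, pilot_phase: bool) -> bool:
    """Early pilots avoid customer-identifiable data unless a synthetic variant exists."""
    if purpose not in product.approved_purposes:
        return False
    if pilot_phase and product.sensitivity == "customer_identifiable":
        return product.synthetic_available
    return True

billing_summary = DataProduct(
    name="billing_monthly_summary",
    owner="Finance Data Steward",
    sensitivity="commercial",
    approved_purposes={"revenue_reporting", "sales_summary"},
)

print(agent_may_consume(billing_summary, "sales_summary", pilot_phase=True))   # True
print(agent_may_consume(billing_summary, "churn_modelling", pilot_phase=True)) # False: purpose not approved
```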


    Step 4: Design the Agentic Workflow with Trust, Risk and Security Controls

    With the data and tools in place, the next step is to design the agentic workflow itself – not just in terms of tasks, but also in terms of controls.

    At a functional level, define:

    • Inputs – the triggers that cause the agent to act (a new ticket, a status change, an event from the network, a customer query).
    • Tasks – the steps the agent will take (retrieve data, call tools, analyse, summarise, recommend, draft).
    • Outputs – the artefacts or actions produced (a report, an email draft, a classification, an API call, an alert).

    Now overlay the governance and security lens:

    Trust and explainability
    – Can we explain, at a business level, what the agent is doing and why?

    – Are its recommendations traceable back to data and policies we understand?

    Risk controls and guardrails
    – Are there thresholds beyond which the agent must escalate to a human?

    – Are high-risk actions (credits, discounts, configuration changes, access changes) always human-approved?

    Security and privacy
    – Are secrets, credentials and tokens managed securely and never exposed in prompts or logs?

    – Are prompts and responses monitored for data leakage, policy violations or adversarial behaviour?

    TM Forum’s AI assets, together with broader security frameworks, can be used as guardrails here. They encourage an approach where agentic workflows are designed with observability, assurance and controllability built in. Think of each agent as having a ‘control envelope’ – a clearly defined scope of authority, visibility and accountability that can be tested and audited.
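
    The sketch below illustrates one way such a ‘control envelope’ could be expressed in code: a small set of rules deciding when an agent’s proposed action must be escalated to a human. The action names, thresholds and confidence cut-off are illustrative assumptions, not a prescribed control set.

```python
# Illustrative sketch of a 'control envelope': action names, thresholds and the
# confidence cut-off are assumptions, not a prescribed control set.

HIGH_RISK_ACTIONS = {"apply_credit", "change_network_config", "grant_access"}
CREDIT_AUTO_APPROVE_LIMIT = 50.0   # currency units; above this, a human decides

def requires_human_approval(action: str, amount: float = 0.0, confidence: float = 1.0) -> bool:
    """Return True when the agent must escalate instead of acting autonomously."""
    if action in HIGH_RISK_ACTIONS and action != "apply_credit":
        return True        # configuration and access changes: always human-approved
    if action == "apply_credit" and amount > CREDIT_AUTO_APPROVE_LIMIT:
        return True        # small goodwill credits may auto-apply, larger ones may not
    return confidence < 0.8  # low-confidence recommendations always escalate

# The agent drafts the action; the control envelope decides who executes it.
print(requires_human_approval("apply_credit", amount=20.0, confidence=0.95))  # False: within envelope
print(requires_human_approval("apply_credit", amount=200.0))                  # True: above credit limit
print(requires_human_approval("change_network_config"))                       # True: always escalated
```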


    Step 5: Iterate, Assure and Scale – Never ‘Set and Forget’

    If you have chosen a focused use case, it should be relatively straightforward to compare the agent’s performance against the previous manual or semi-automated process. But in an AI-native, governed environment, iteration is not just about performance – it is about assurance.

    Use this stage to:

    Validate value
    – Is the agent genuinely saving time, reducing errors or improving experience?

    – Are there measurable uplifts in the KPIs you defined in Step 1?

    Validate safety and compliance
    – Review logs to understand how the agent is reasoning and which tools it is invoking.

    – Check for policy breaches, hallucinations, mis-routings or unexpected behaviours.

    Calibrate guardrails
    – Tighten or loosen thresholds for human approval depending on observed behaviour.

    – Refine prompts, constraints and escalation paths.

    Once you have a stable, trustworthy pattern, you can begin to scale:

    • Apply the same pattern to adjacent processes or channels.
    • Introduce multiple specialised agents that collaborate, each with a clearly defined scope.
    • Integrate outputs into your broader AIOps, SecOps and assurance environment.

    Crucially, scaling must not mean relinquishing control. TM Forum’s AI Governance and Responsible AI initiatives emphasise continuous monitoring and post-production oversight as essential components of any serious AI programme. Agentic systems are no exception – they require ongoing operational governance, not one-time sign-off.
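
    As a minimal sketch of what this kind of ongoing assurance can look like in practice, the example below computes escalation and policy-breach rates from simple agent run records. The record format, baseline and interpretation are illustrative assumptions.

```python
# Illustrative sketch: record fields and the baseline are assumptions, showing how
# periodic assurance can be computed from agent run logs rather than anecdotes.

runs = [
    {"escalated": False, "policy_breach": False, "handling_seconds": 45},
    {"escalated": True,  "policy_breach": False, "handling_seconds": 160},
    {"escalated": False, "policy_breach": True,  "handling_seconds": 50},
    {"escalated": False, "policy_breach": False, "handling_seconds": 40},
]

def assurance_summary(runs: list[dict], baseline_seconds: float = 300.0) -> dict:
    n = len(runs)
    return {
        "escalation_rate": sum(r["escalated"] for r in runs) / n,
        "breach_rate": sum(r["policy_breach"] for r in runs) / n,
        "avg_time_saved_s": baseline_seconds - sum(r["handling_seconds"] for r in runs) / n,
    }

print(assurance_summary(runs))
# Any non-zero breach rate triggers a guardrail review before the pattern is scaled;
# a rising escalation rate may mean thresholds are too tight, a falling one too loose.
```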


    Security, Governance and the AI-Native Security & Governance Stream

    The AI-Native Blueprint recognises that security and governance are not separate from innovation – they are enablers of safe innovation. For agentic AI, this translates into several practical imperatives:

    Align with established security frameworks
    – Map agentic risks and controls to recognised standards rather than inventing everything from scratch.

    – Treat LLMs, tools and agents as assets in your security architecture, with defined owners and controls.

    Establish clear trust boundaries
    – Define which systems and zones agents may interact with, and under what conditions.

    – Use gateways, policy engines and API management to enforce those boundaries.

    Instrument for observability and incident response
    – Ensure that agent activity feeds into your monitoring and incident response processes.

    – Treat agentic misuse, prompt injection or data leakage as security incidents, not just ‘model quirks’.

    Govern models and prompts as first-class artefacts
    – Maintain model, prompt and configuration lineage: who changed what, when and why.

    – Apply change management and testing to agent updates just as you would to production software.

    In other words, the Security & Governance stream of the AI-Native Blueprint is not an add-on; it is the fabric within which safe agentic innovation happens. CSPs that get this right will be able to adopt agents faster, with greater confidence and far less risk of regulatory or customer backlash.
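
    The sketch below illustrates one lightweight way to treat prompts and configurations as first-class, auditable artefacts by recording who changed what, when and why. The structure and field names are illustrative assumptions, not a mandated lineage schema.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch: structure and field names are assumptions, showing how
# prompt and configuration changes can be recorded as auditable artefacts.

lineage_log: list[dict] = []

def record_change(artefact: str, content: str, author: str, reason: str) -> dict:
    entry = {
        "artefact": artefact,   # e.g. "triage-agent/system-prompt"
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
        "author": author,
        "reason": reason,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    lineage_log.append(entry)
    return entry

record_change(
    artefact="triage-agent/system-prompt",
    content="You are a ticket triage assistant. Never disclose customer data.",
    author="j.smith",
    reason="Tightened data-disclosure constraint after assurance review",
)
print(json.dumps(lineage_log[-1], indent=2))   # who changed what, when and why
```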


    Keeping Humans in the Loop: Culture, Skills and Accountability

    One of the biggest risks with agents is not technical – it is human. If we start to treat agents as autonomous employees rather than governed tools, two things happen:

    • We over-trust their outputs and under-invest in oversight.
    • Our people begin to feel displaced rather than augmented.

    TM Forum’s work on AI Governance and Responsible AI consistently reinforces the human dimension:

    • Clear accountability – there is always a named human owner for each use case and each agentic workflow.
    • Skills and literacy – teams must understand both the power and the limitations of agents.
    • Transparency – users and customers should know when they are interacting with an agent and what that implies.

    In practice, this means designing your operating model so that:

    • Agents handle the repetitive, structured, automatable parts of a process.
    • Humans focus on edge cases, empathy, negotiation, judgment and strategy.
    • Feedback from human operators is systematically captured to improve and govern the agent over time.

    The goal is not to remove humans from the loop, but to move them to the right part of the loop – overseeing, steering and enriching the system rather than manually repeating tasks that an agent can perform more efficiently.


    Conclusion: The Agentic Future is AI-Native, Governed and Human-Centred

    AI agents are not a theoretical curiosity; they are already reshaping how digital businesses operate. For CSPs and the broader ecosystem, they sit at the intersection of AI-native architecture, data products, automation and customer experience.

    But there is a choice to be made. We can either deploy agents as ad hoc experiments – fast but fragile, powerful but poorly governed – or we can adopt them within the governance, security and architectural principles embodied in TM Forum’s AI & Data mission, AI Governance Toolkit, Data Governance Framework and AI-Native Blueprint.

    The five-step roadmap outlined here is intended as a practical starting point:

    1. Define a governed, value-driven use case.
    2. Choose tools that fit an AI-native, governed architecture.
    3. Prepare and govern the data the agent will use.
    4. Design the agentic workflow with trust, risk and security controls.
    5. Iterate, assure and scale – never ‘set and forget’.

    Those who take this governance-first path will not only move faster, they will move with confidence – able to demonstrate to boards, regulators, partners and customers that their agentic innovation is safe, explainable, ethical and aligned with long-term value creation.

    The agentic revolution is already underway. The question is not whether it will arrive, but whether we will shape it deliberately and responsibly. TM Forum’s frameworks, toolkits and blueprints exist to ensure that we do.

  • The AI Revolution in Cinema: Hollywood Faces a New Era


    Artificial intelligence is ushering in a transformative era for the global film industry. Once a realm dominated by major studios, expert technicians, and high-budget productions, the landscape is changing rapidly. Today, AI-driven tools are enabling individuals with modest skills and basic technology to create high-quality, cinematic content. This democratisation of production is not only disrupting traditional workflows—it’s challenging the very foundation of Hollywood’s long-held dominance.

    Hollywood’s Role as a Creative Powerhouse

    For over a hundred years, Los Angeles has served as the headquarters of a global entertainment empire. According to the Motion Picture Association, the film and television sector directly employs over 165,000 people in the region and supports more than 2 million jobs nationwide. In 2024, global box office revenues from Hollywood releases surpassed $30 billion, and that figure represents just a slice of a much larger revenue ecosystem including licensing, merchandising, music, and streaming services.

    Hollywood’s impact is not just economic—it’s cultural. Its productions shape global narratives, influence fashion and trends, and act as a platform for new technologies. While movie theatre attendance has declined by 40% since 2019, the rise of streaming platforms has provided alternative models for content delivery and monetisation.

    An Industry Fueled by Technological Progress

    The film industry has always embraced innovation. From the early days of silent films to the latest in CGI and immersive sound technologies, Hollywood has consistently adapted to enhance storytelling. Each technological leap has typically been additive—bringing new tools, expanding creative boundaries, and increasing opportunities for workers in the industry.

    However, artificial intelligence represents a different kind of advancement—one that may subtract rather than add.

    AI Reshapes Creative Work

    Unlike previous innovations, AI has the potential to replace rather than support human roles. Tools powered by AI are now capable of generating music, writing scripts, editing video, designing visuals, and completing post-production work—often faster and at lower cost than human professionals. These advancements are already impacting job availability across the entertainment ecosystem.

    The threat is not hypothetical. In 2023, both the Writers Guild of America and the Screen Actors Guild staged major strikes, pressing for restrictions on the use of AI in content creation. Their efforts resulted in limited agreements, highlighting the growing influence of AI in the industry.

    The Challenge of Maintaining Relevance

    Perhaps the biggest question AI raises is not about employment but about relevance. Can Hollywood continue to matter in an age where anyone can produce compelling content?

    Since its launch in 2005, YouTube has shown how platforms can shift control from centralised studios to individual creators. AI is accelerating that trend by removing the need for advanced skills and expensive equipment. Tools like Midjourney, Google Veo3, and Kling now enable users to create short but high-quality video sequences using only plain text prompts. Although these tools currently produce brief clips, rapid improvements suggest that full-length films may soon be within reach of everyday creators.

    A case in point: during the 2025 NBA Finals, a national commercial created entirely with AI aired on television. The project was completed in three days at a cost of $2,000—compared to the weeks and hundreds of thousands of dollars a traditional production would have required.

    A Glimpse Into the Future

    Imagine a world where anyone can make a feature-length film by simply describing a plot. AI systems could handle everything—from scriptwriting to directing, acting, editing, and scoring. Dislike a movie’s ending? A revised version could be generated instantly.

    Startups like Fable’s Showrunner are already providing platforms for users to create episodic content driven by AI. These innovations hint at a future where interactive, user-tailored narratives become commonplace.

    For independent creators and small businesses, the opportunity is unprecedented. But for traditional studios, the road ahead demands reinvention. Without it, the relevance and scale of Hollywood as we know it could dramatically diminish.

    Skeptics may argue that AI lacks the imagination and depth of human creativity. That remains to be seen. Today’s AI can already analyze and mimic the styles of acclaimed directors and writers, empowering users to generate content in their image.

    Hollywood’s Crossroads—and a Warning to Other Industries

    Hollywood is now at a crossroads. If it hopes to retain its leadership, it must innovate boldly and reimagine its role in an AI-driven creative economy. Other industries should pay close attention: the disruption playing out in the entertainment sector offers a preview of what lies ahead for them.

    The age of intelligent content creation is here. The question is—who will lead, and who will be left behind?