• AI Agents in an AI-Native World: A Governance-First Guide


    Preface

    This article reflects my perspective on the responsible adoption of AI agents within the context of TM Forum’s AI & Data mission, the AI Governance Toolkit, the Data Governance Framework, the AI-Native Blueprint and my own private consultancy work – in particular the Security & Governance and Agentic AI workstreams.

    As Global Ambassador for AI/ML Governance at TM Forum, my focus is on helping Communications Service Providers (CSPs), ecosystem partners and adjacent industries (government and enterprise) move beyond isolated proofs of concept towards safe, secure and scalable AI-native operations. That means anchoring every innovation – including the current wave of agentic AI – in clear governance, trusted data, transparent decisioning, and demonstrable business value.

    

The purpose of this article is pragmatic. It offers a simple, governance-aligned pathway for leaders and practitioners who want to begin experimenting with AI agents today, without compromising on control, compliance or trust. It connects the practical ‘how’ of getting started with agents to the ‘why’ and ‘so what’ of TM Forum’s AI Governance assets and the AI-Native Blueprint Security & Governance approach.

    Professor Paul Morrissey
Global Ambassador – AI/ML Governance, TM Forum, and Chairman, Bolgiaten Limited.


    AI Agents in an AI-Native World: A Governance-First Guide

    AI agents are rapidly emerging as one of the most powerful patterns in the AI landscape.

    They can plan, decide, act and collaborate across systems in ways that move us beyond simple chatbots or fixed automation. For Communications Service Providers, Digital Service Players and the whole enterprise business domain, this is directly aligned with the shift towards AI-native operations and Open Digital Architecture (ODA).

    However, the fact that something is technically possible does not make it operationally wise. TM Forum’s AI Governance Toolkit, Data Governance Framework and AI & Data Governance project all emphasise the same point: AI at scale must be governed by design, not bolted on as an afterthought. That is even more important when we move into the world of agentic AI – systems that can invoke tools, call APIs, read and write data, and trigger real-world actions with relatively little human intervention.

    In parallel, the TM Forum AI-Native Blueprint defines the foundational capabilities, principles and operational enablers needed to embed AI natively into CSP architectures and operations. Its workstreams – including Agentic AI, Security & Governance, Data Architecture and AI Operations – provide a coherent industry blueprint for how agents, data and models should be designed, secured and governed end-to-end.

    In this context, the practical question is: how do we start working with AI agents in a way that is aligned with these global best-practice frameworks, delivers measurable value, and does not introduce uncontrolled risk? The rest of this article provides a five-step, governance-first roadmap to do exactly that.


    From Automation to Agentic AI in an AI-Native Architecture

    Traditional software automates well-defined tasks by following static, hard-coded rules. It is fast and reliable within the boundaries we specify, but it does not reason or adapt.

    Conversational AI systems – the first wave of large language model (LLM) chatbots – advanced this model by interpreting natural language prompts. They excel at answering questions and generating content but still tend to operate in a single-turn or narrow multi-turn paradigm: you ask, they respond.

    Agentic AI changes this. Agents can:

    • Understand and align to high-level goals rather than just isolated instructions.
    • Break goals into sub-tasks, plan multi-step workflows and adapt as they go.
    • Orchestrate tools, APIs, data sources and enterprise systems.
    • Operate continuously and contextually rather than in a ‘one-and-done’ mode.
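    The capabilities above can be made concrete with a minimal, framework-agnostic sketch of the agent loop: a goal is decomposed into sub-tasks and each sub-task is executed via a tool, with the observations collected for the next step. All names here (`plan`, `Tool`, `run_agent`) are illustrative, not any specific framework's API; in a real agent an LLM would produce the decomposition.

```python
# Minimal, framework-agnostic sketch of an agent loop: decompose a goal
# into sub-tasks, execute each via a tool, collect observations.
# All names are illustrative, not a specific framework's API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    run: Callable[[str], str]


def plan(goal: str) -> list[str]:
    # In a real agent an LLM would produce this decomposition;
    # here it is hard-coded for illustration.
    return [f"gather data for: {goal}", f"summarise findings for: {goal}"]


def run_agent(goal: str, tools: dict[str, Tool]) -> list[str]:
    observations = []
    for step in plan(goal):
        # Route each sub-task to the appropriate tool.
        tool = tools["search"] if step.startswith("gather") else tools["summarise"]
        observations.append(tool.run(step))
    return observations


# Stub tools standing in for real APIs and data sources.
tools = {
    "search": Tool("search", lambda q: f"results({q})"),
    "summarise": Tool("summarise", lambda q: f"summary({q})"),
}
```

Even at this toy scale, the loop makes the governance point visible: every action the agent takes passes through a named tool, which is exactly where controls can be attached.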

    Within an AI-native architecture, agents become first-class citizens. They sit alongside traditional services and ODA components, consuming data products, invoking intent-based APIs and collaborating with human operators. This is why governance is non-negotiable: agents are not just answering questions, they are influencing – and sometimes executing – real business operations.

    

For this reason, I always advise clients and their partners to think of agents not as ‘virtual workers’ to be substituted for people, but as governed capabilities that augment human judgment and organisational intelligence. The role of TM Forum’s AI and data assets is to make that augmentation safe, observable and accountable.


    A Five-Step, Governance-Aligned Roadmap for Getting Started with Agents

    To cut through the hype and complexity, it helps to start small and structured. The following five steps are designed to be simple enough for immediate experimentation, but robust enough to map directly to TM Forum’s AI Governance, Data Governance and AI-Native Blueprint principles.


    Step 1: Define a Governed, Value-Driven Use Case

    The first decision is not which platform to use, but which problem to solve – and whether solving it with agents is both valuable and governable.

    Start with a task that is:

    • Repetitive and time-consuming.
    • Well understood by your domain experts.
    • Measurable in terms of cost, time or quality.
    • Low to moderate risk if something goes wrong.

    Typical examples in a CSP or digital enterprise context include:


    • Generating periodic performance or sales summaries from existing data.
    • Turning meeting transcripts or tickets into structured actions and follow-up communications.
    • Triaging customer requests, routing them and suggesting responses.
    • Monitoring competitor, network or market signals and producing concise insight briefs.

    From a TM Forum governance perspective, even at this early stage you should be asking:


    • What is the business objective for this agentic use case?
    • Which KPIs or value levers will we use to measure success?
    • What are the risks – operational, regulatory, reputational, security – and who owns them?

    This aligns directly with the AI Governance Toolkit and AI Risk Atlas thinking: value and risk must be framed together from the outset, not separately or sequentially. The AI Canvas is a useful instrument for framing them in this way.


    Step 2: Choose Tools That Fit an AI-Native, Governed Architecture

    Once the use case is clear, the next step is to select the right implementation path. There are two broad approaches:

    1. Visual and low-code agent platforms
      These allow non-specialists to design agents and workflows using natural language and simple configuration. They are ideal for early experimentation, provided they can integrate with your security, identity and data governance controls.
    2. Code-first and framework-based approaches
      Frameworks such as LangChain, Autogen or CrewAI give engineering teams greater control over how agents plan, call tools, manage memory and interact with each other. They are well suited to embedding agents deeply into ODA-conformant architectures and AI-Native design patterns.

    Whichever route you choose, align it with the AI-Native Blueprint and TM Forum’s Open Digital Framework:

    • Treat agents as components within a governed architecture, not side-channel experiments.
    • Use standardised APIs and intent interfaces wherever possible.
    • Ensure platform choices do not bypass your existing security and compliance controls.

    In practice, this often means:

    • Integrating agent platforms with enterprise identity and access management.
    • Ensuring data access is mediated via governed data products and catalogues.
    • Designing for observability: logs, traces, events and metrics that can feed into SecOps and AIOps.
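    One way to realise the observability point is to mediate every agent tool call through a thin wrapper that emits a structured audit event which SecOps and AIOps pipelines can consume. The sketch below is illustrative only; the names (`AuditLog`, `governed_call`) and event schema are assumptions, not a TM Forum or vendor API.

```python
# Illustrative sketch: mediate agent tool calls through a wrapper that
# records a structured audit event for each invocation.
# AuditLog, governed_call and the event fields are assumed names.

import json
import time
from typing import Any, Callable


class AuditLog:
    """Collects audit events as JSON lines; a real system would ship these
    to a SIEM or observability platform."""

    def __init__(self) -> None:
        self.events: list[str] = []

    def record(self, event: dict) -> None:
        self.events.append(json.dumps(event))


def governed_call(tool_name: str, fn: Callable[..., Any],
                  audit: AuditLog, agent_id: str, *args: Any) -> Any:
    """Invoke a tool on the agent's behalf, emitting an audit event."""
    started = time.time()
    result = fn(*args)
    audit.record({
        "agent": agent_id,
        "tool": tool_name,
        "args": [str(a) for a in args],
        "duration_s": round(time.time() - started, 3),
    })
    return result
```

Because every invocation flows through one choke point, identity, policy checks and rate limits can later be enforced in the same wrapper without touching the agents themselves.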

    The AI-Native Blueprint Security & Governance workstream is precisely about standardising this kind of end-to-end approach so that agentic innovation never sits outside your control plane.


    Step 3: Prepare and Govern the Data the Agent Will Use

    No agent is better than the data and signals it consumes. TM Forum’s Data Governance Framework is very clear on the need for ethical, secure and accountable use of data. For agentic use cases, this translates into a few practical checks:

    1. Discover and classify the data

    • Where does the data live (CRM, billing, network, OSS/BSS, data lake, external feeds)?
    • What is its sensitivity level (customer-identifiable, commercially sensitive, public)?
    • Who is accountable for its quality and use?

    2. Ensure lawful, policy-aligned access

    • Does the agent’s access comply with your data governance policies, privacy regulations and customer expectations?
    • Are there role-based or purpose-based access controls that must be enforced?

    3. Clean and standardise

    • Remove obviously duplicate, obsolete or corrupted records.
    • Align basic formats (dates, IDs, naming conventions) so that the agent’s reasoning is not derailed by noise.

    4. Start with a safe subset

    • For early experimentation, use sampled or synthetic data where possible.
    • Avoid directly connecting agents to production-critical or highly sensitive datasets before the patterns are well understood.

    From a TM Forum standpoint, you should treat data not as an afterthought but as a governed product. Your agents should consume data via well-defined, catalogued data products that carry metadata on lineage, quality and policy constraints. This makes it far easier to prove compliance and to diagnose issues when behaviour is unexpected.
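    The data-product idea can be sketched as a small record that carries ownership, sensitivity and policy metadata, with a deny-by-default, purpose-based access check enforced before an agent may read it. The field names and purposes below are illustrative assumptions, not a prescribed schema.

```python
# Illustrative sketch: a catalogued data product carrying governance
# metadata, with purpose-based access enforced before an agent reads it.
# Field names and purpose labels are assumptions for illustration.

from dataclasses import dataclass, field


@dataclass
class DataProduct:
    name: str
    owner: str                      # accountable data owner
    sensitivity: str                # e.g. "public", "commercial", "customer-pii"
    allowed_purposes: set[str] = field(default_factory=set)

    def grant_access(self, agent_purpose: str) -> bool:
        # Deny by default: access is granted only for purposes the
        # data owner has explicitly listed.
        return agent_purpose in self.allowed_purposes


billing = DataProduct(
    name="billing-summaries",
    owner="finance-data-office",
    sensitivity="commercial",
    allowed_purposes={"monthly-report", "triage"},
)
```

The value of the pattern is less the code than the contract: an agent can only consume data whose owner, sensitivity and permitted purposes are on record, which is what makes compliance provable after the fact.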


    Step 4: Design the Agentic Workflow with Trust, Risk and Security Controls

    With the data and tools in place, the next step is to design the agentic workflow itself – not just in terms of tasks, but also in terms of controls.

At a functional level, define:

    • Inputs – the triggers that cause the agent to act (a new ticket, a status change, an event from the network, a customer query).
    • Tasks – the steps the agent will take (retrieve data, call tools, analyse, summarise, recommend, draft).
    • Outputs – the artefacts or actions produced (a report, an email draft, a classification, an API call, an alert).

    Now overlay the governance and security lens:

    Trust and explainability
    – Can we explain, at a business level, what the agent is doing and why?

    – Are its recommendations traceable back to data and policies we understand?

    Risk controls and guardrails
    – Are there thresholds beyond which the agent must escalate to a human?

    – Are high-risk actions (credits, discounts, configuration changes, access changes) always human-approved?

    Security and privacy
    – Are secrets, credentials and tokens managed securely and never exposed in prompts or logs?

    – Are prompts and responses monitored for data leakage, policy violations or adversarial behaviour?

    TM Forum’s AI assets, together with broader security frameworks, can be used as guardrails here. They encourage an approach where agentic workflows are designed with observability, assurance and controllability built in. Think of each agent as having a ‘control envelope’ – a clearly defined scope of authority, visibility and accountability that can be tested and audited.
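    The ‘control envelope’ can be sketched as a simple routing function: every proposed action is checked against the agent's declared scope, and actions that are inherently high-risk or exceed a value threshold are escalated to a human approver rather than executed. The action names and the threshold are illustrative assumptions.

```python
# Illustrative sketch of a control envelope: scope check first, then
# escalation of high-risk or high-value actions to a human approver.
# Action names and the threshold are assumptions for illustration.

# Actions that always require human approval, regardless of value.
HIGH_RISK = {"apply_credit", "change_config", "grant_access"}


def route_action(action: str, amount: float, allowed: set[str],
                 approval_threshold: float = 100.0) -> str:
    """Decide whether a proposed agent action runs, escalates, or is refused."""
    if action not in allowed:
        return "rejected: outside agent scope"
    if action in HIGH_RISK or amount > approval_threshold:
        return "escalated: human approval required"
    return "executed"
```

Keeping the envelope as explicit data (an allowed-action set plus thresholds) rather than buried logic is what makes it testable and auditable: tightening a guardrail in Step 5 becomes a configuration change, not a code rewrite.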


    Step 5: Iterate, Assure and Scale – Never ‘Set and Forget’

    If you have chosen a focused use case, it should be relatively straightforward to compare the agent’s performance against the previous manual or semi-automated process. But in an AI-native, governed environment, iteration is not just about performance – it is about assurance.

Use this stage to:

    Validate value
    – Is the agent genuinely saving time, reducing errors or improving experience?

    – Are there measurable uplifts in the KPIs you defined in Step 1?

    Validate safety and compliance
    – Review logs to understand how the agent is reasoning and which tools it is invoking.

    – Check for policy breaches, hallucinations, mis-routings or unexpected behaviours.

    Calibrate guardrails
    – Tighten or loosen thresholds for human approval depending on observed behaviour.

    – Refine prompts, constraints and escalation paths.

    Once you have a stable, trustworthy pattern, you can begin to scale:

    • Apply the same pattern to adjacent processes or channels.
    • Introduce multiple specialised agents that collaborate, each with a clearly defined scope.
    • Integrate outputs into your broader AIOps, SecOps and assurance environment.

    Crucially, scaling must not mean relinquishing control. TM Forum’s AI Governance and Responsible AI initiatives emphasise continuous monitoring and post-production oversight as essential components of any serious AI programme. Agentic systems are no exception – they require ongoing operational governance, not one-time sign-off.


    Security, Governance and the AI-Native Security & Governance Stream

    The AI-Native Blueprint recognises that security and governance are not separate from innovation – they are enablers of safe innovation. For agentic AI, this translates into several practical imperatives:

    Align with established security frameworks
    – Map agentic risks and controls to recognised standards rather than inventing everything from scratch.

    – Treat LLMs, tools and agents as assets in your security architecture, with defined owners and controls.

    Establish clear trust boundaries
    – Define which systems and zones agents may interact with, and under what conditions.

    – Use gateways, policy engines and API management to enforce those boundaries.

    Instrument for observability and incident response
    – Ensure that agent activity feeds into your monitoring and incident response processes.

    – Treat agentic misuse, prompt injection or data leakage as security incidents, not just ‘model quirks’.

    Govern models and prompts as first-class artefacts
    – Maintain model, prompt and configuration lineage: who changed what, when and why.

    – Apply change management and testing to agent updates just as you would to production software.
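    Governing prompts as first-class artefacts can be as simple as an append-only version history recording who changed what, when and why. The record structure below is a minimal sketch under that assumption; a production system would back it with a registry and change-approval workflow.

```python
# Illustrative sketch: prompts as versioned artefacts with lineage
# (who changed what, when and why). The record structure is an
# assumption, not a prescribed standard.

import datetime
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptVersion:
    prompt_id: str
    version: int
    text: str
    changed_by: str
    reason: str
    changed_at: str


# Append-only history; a real system would persist this in a registry.
history: list[PromptVersion] = []


def update_prompt(prompt_id: str, text: str,
                  changed_by: str, reason: str) -> PromptVersion:
    """Record a new prompt version, auto-incrementing per prompt_id."""
    version = 1 + max(
        (p.version for p in history if p.prompt_id == prompt_id), default=0
    )
    record = PromptVersion(
        prompt_id, version, text, changed_by, reason,
        datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )
    history.append(record)
    return record
```

With lineage in place, rolling an agent back to a known-good prompt, or answering an auditor's "why did its behaviour change on this date?", becomes a lookup rather than an investigation.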

    In other words, the Security & Governance stream of the AI-Native Blueprint is not an add-on; it is the fabric within which safe agentic innovation happens. CSPs that get this right will be able to adopt agents faster, with greater confidence and far less risk of regulatory or customer backlash.


    Keeping Humans in the Loop: Culture, Skills and Accountability

    One of the biggest risks with agents is not technical – it is human. If we start to treat agents as autonomous employees rather than governed tools, two things happen:

    • We over-trust their outputs and under-invest in oversight.
    • Our people begin to feel displaced rather than augmented.

    TM Forum’s work on AI Governance and Responsible AI consistently reinforces the human dimension:

    • Clear accountability – there is always a named human owner for each use case and each agentic workflow.
    • Skills and literacy – teams must understand both the power and the limitations of agents.
    • Transparency – users and customers should know when they are interacting with an agent and what that implies.

    In practice, this means designing your operating model so that:

    • Agents handle the repetitive, structured, automatable parts of a process.
    • Humans focus on edge cases, empathy, negotiation, judgment and strategy.
    • Feedback from human operators is systematically captured to improve and govern the agent over time.

    The goal is not to remove humans from the loop, but to move them to the right part of the loop – overseeing, steering and enriching the system rather than manually repeating tasks that an agent can perform more efficiently.


    Conclusion: The Agentic Future is AI-Native, Governed and Human-Centred

    AI agents are not a theoretical curiosity; they are already reshaping how digital businesses operate. For CSPs and the broader ecosystem, they sit at the intersection of AI-native architecture, data products, automation and customer experience.

    But there is a choice to be made. We can either deploy agents as ad hoc experiments – fast but fragile, powerful but poorly governed – or we can adopt them within the governance, security and architectural principles embodied in TM Forum’s AI & Data mission, AI Governance Toolkit, Data Governance Framework and AI-Native Blueprint.

    The five-step roadmap outlined here is intended as a practical starting point:

    1. Define a governed, value-driven use case.
    2. Choose tools that fit an AI-native, governed architecture.
    3. Prepare and govern the data the agent will use.
    4. Design the agentic workflow with trust, risk and security controls.
    5. Iterate, assure and scale – never ‘set and forget’.

    Those who take this governance-first path will not only move faster, they will move with confidence – able to demonstrate to boards, regulators, partners and customers that their agentic innovation is safe, explainable, ethical and aligned with long-term value creation.

    The agentic revolution is already underway. The question is not whether it will arrive, but whether we will shape it deliberately and responsibly. TM Forum’s frameworks, toolkits and blueprints exist to ensure that we do.

  • The AI Revolution in Cinema: Hollywood Faces a New Era


    Artificial intelligence is ushering in a transformative era for the global film industry. Once a realm dominated by major studios, expert technicians, and high-budget productions, the landscape is changing rapidly. Today, AI-driven tools are enabling individuals with modest skills and basic technology to create high-quality, cinematic content. This democratisation of production is not only disrupting traditional workflows—it’s challenging the very foundation of Hollywood’s long-held dominance.

    Hollywood’s Role as a Creative Powerhouse

    For over a hundred years, Los Angeles has served as the headquarters of a global entertainment empire. According to the Motion Picture Association, the film and television sector directly employs over 165,000 people in the region and supports more than 2 million jobs nationwide. In 2024, global box office revenues from Hollywood releases surpassed $30 billion, and that figure represents just a slice of a much larger revenue ecosystem including licensing, merchandising, music, and streaming services.

    Hollywood’s impact is not just economic—it’s cultural. Its productions shape global narratives, influence fashion and trends, and act as a platform for new technologies. While movie theatre attendance has declined by 40% since 2019, the rise of streaming platforms has provided alternative models for content delivery and monetisation.

    An Industry Fueled by Technological Progress

    The film industry has always embraced innovation. From the early days of silent films to the latest in CGI and immersive sound technologies, Hollywood has consistently adapted to enhance storytelling. Each technological leap has typically been additive—bringing new tools, expanding creative boundaries, and increasing opportunities for workers in the industry.

    However, artificial intelligence represents a different kind of advancement—one that may subtract rather than add.

    AI Reshapes Creative Work

    Unlike previous innovations, AI has the potential to replace rather than support human roles. Tools powered by AI are now capable of generating music, writing scripts, editing video, designing visuals, and completing post-production work—often faster and at lower cost than human professionals. These advancements are already impacting job availability across the entertainment ecosystem.

    The threat is not hypothetical. In 2023, both the Writers Guild of America and the Screen Actors Guild held significant strikes, pressing for restrictions on the use of AI in content creation. Their efforts resulted in limited agreements, highlighting the growing influence of AI in the industry.

    The Challenge of Maintaining Relevance

    Perhaps the biggest question AI raises is not about employment but about relevance. Can Hollywood continue to matter in an age where anyone can produce compelling content?

    Since its launch in 2005, YouTube has shown how platforms can shift control from centralised studios to individual creators. AI is accelerating that trend by removing the need for advanced skills and expensive equipment. Tools like Midjourney, Google Veo3, and Kling now enable users to create short but high-quality video sequences using only plain text prompts. Although these tools currently produce brief clips, rapid improvements suggest that full-length films may soon be within reach of everyday creators.

    A case in point: during the 2025 NBA Finals, a national commercial created entirely with AI aired on television. The project was completed in three days at a cost of $2,000—compared to the weeks and hundreds of thousands of dollars a traditional production would have required.

    A Glimpse Into the Future

    Imagine a world where anyone can make a feature-length film by simply describing a plot. AI systems could handle everything—from scriptwriting to directing, acting, editing, and scoring. Disliked a movie ending? A revised version could be generated instantly.

    Startups like Fable’s Showrunner are already providing platforms for users to create episodic content driven by AI. These innovations hint at a future where interactive, user-tailored narratives become commonplace.

    For independent creators and small businesses, the opportunity is unprecedented. But for traditional studios, the road ahead demands reinvention. Without it, the relevance and scale of Hollywood as we know it could dramatically diminish.

    Skeptics may argue that AI lacks the imagination and depth of human creativity. That remains to be seen. Today’s AI can already analyze and mimic the styles of acclaimed directors and writers, empowering users to generate content in their image.

    Hollywood’s Crossroads—and a Warning to Other Industries

    Hollywood is now at a crossroads. If it hopes to retain its leadership, it must innovate boldly and reimagine its role in an AI-driven creative economy. Other industries should pay close attention: the disruption playing out in the entertainment sector offers a preview of what lies ahead for them.

    The age of intelligent content creation is here. The question is—who will lead, and who will be left behind?

  • Rediscovering Trust in the Age of AI: A Call to Action for Humanity


    In Dante’s Divine Comedy, the poet journeys through realms of despair and redemption, seeking truth amidst uncertainty. Like Dante, humanity now stands at a crossroads, navigating the labyrinthine rise of artificial intelligence. At Bolgiaten, our mission resonates with this journey: to illuminate the path where data, technology, and human insight converge, fostering clarity, connection, and trust in an age shadowed by doubt. As we leverage AI and Earth Observation to solve global challenges, we are reminded that, like Dante’s journey, ours is not merely technical but profoundly human—a quest to restore the fragile bonds that make us whole.

    In the swirling storm of technological advancement, artificial intelligence (AI) stands as both an achievement and a reckoning. It has transformed industries, redefined possibilities, and ignited imaginations. Yet, it has also brought humanity to an unsettling crossroads, raising questions about the very fabric of our existence: trust.

    Trust is not just an abstract concept but the foundation upon which relationships, societies, and civilizations are built. It is the invisible contract that binds individuals, communities, and institutions, enabling cooperation and progress. Without trust, even the most advanced technology cannot save us from the disarray of suspicion and alienation. In this critical juncture of AI’s rise, we must grapple with the erosion of trust—not as a technological failure, but as a human challenge.

    The Trust Crisis in the Age of AI

    AI has introduced unparalleled levels of uncertainty into our lives. It influences the decisions of governments, shapes public opinion, and powers the platforms through which we connect and communicate. Algorithms decide which news we see, which job candidates are shortlisted, and which medical treatments are recommended. Yet, for all its power, AI operates as a black box for most people—a mysterious, unaccountable force.

    This opacity breeds doubt. How can we trust decisions made by systems we neither understand nor control? When AI gets it wrong—discriminating against minorities, spreading misinformation, or amplifying harmful ideologies—it feels less like a glitch and more like a betrayal. But the true betrayal lies elsewhere: in the misuse of AI by those who design, deploy, and benefit from it.

    It’s not AI that deceives us, but humans using AI to deceive. The erosion of trust in this era is not solely about the technology; it is about the intentions behind it. Who decides how AI is used, and for whose benefit? And how do we hold those people accountable?

    The Alienation of the Algorithmic Age

    AI has not only shifted how we make decisions but also how we relate to one another. In a world mediated by machines, the directness of human connection feels increasingly out of reach. Social media, powered by AI algorithms, has redefined “friendship” and “community,” often turning them into commodities. The platforms that promised to bring us closer have, in many ways, driven us apart, replacing empathy with echo chambers and dialogue with division.

    This alienation goes beyond personal relationships. Institutions that once commanded trust—governments, media, and corporations—now feel distant and unapproachable, obscured by layers of algorithmic decision-making. The result is a society where scepticism is the default, and cynicism thrives. Trust, once a shared foundation, becomes fragmented, leaving individuals feeling unmoored and isolated.

    Rebuilding Trust: A Human Endeavor

    The crisis of trust in the age of AI is not a technological problem but a human one. It is not AI that has failed us but our own stewardship of its potential. To rebuild trust, we must look inward and ask hard questions about our values, intentions, and priorities. Technology, for all its power, is a tool—a reflection of those who wield it. If trust is to be restored, it must begin with us.

    A. Transparency and Accountability
    Trust flourishes in the light of transparency. Organisations deploying AI must prioritise openness about how their systems work, what data they use, and the potential risks involved. Accountability mechanisms should be robust and accessible, ensuring that those harmed by AI have recourse. Transparency is not just a technical requirement; it is a moral imperative.

    B. Ethical Leadership
    The leaders shaping AI policy and development have a profound responsibility. Their choices will define whether AI serves humanity or exploits it. Ethical leadership means prioritizing long-term societal well-being over short-term gains, ensuring that AI aligns with values like fairness, inclusivity, and respect for human dignity.

    C. Education and Empowerment
    A key driver of mistrust is the knowledge gap between those who understand AI and those who do not. Bridging this gap requires widespread education initiatives, equipping people with the tools to critically evaluate AI’s role in their lives. Empowering individuals to engage with AI responsibly fosters trust through understanding.

    D. Reclaiming Human Connection
    While AI can enhance our lives, it should never replace the human connections that define us. Trust begins in the simple act of looking each other in the eye, unmediated by screens or algorithms. As we embrace technology, we must also prioritize spaces for authentic human interaction—moments where trust can grow naturally and meaningfully.

    A Test of Humanity

    The rise of AI is not a test of technology but a test of humanity. It challenges us to reflect on what we value and how we act. Will we allow technology to erode the fragile threads of trust that bind us, or will we rise to the occasion, using AI to strengthen rather than weaken those threads?

    The answer lies in our choices. Trust is not something we can demand from others or from machines—it is something we must cultivate, protect, and earn. It is a shared responsibility, requiring effort and intention from individuals, organizations, and societies. We must ask ourselves: What kind of world do we want to create? One where trust is a relic of the past, or one where it becomes the foundation of a brighter, more connected future? The choice is ours to make—not AI’s.

    Rediscovering the Fragile Threads of Trust

    As we navigate this era of rapid technological change, let us not lose sight of the most essential lesson: Trust is not broken by machines but by the hands that operate them. It is up to us to reclaim the ability to trust, to see beyond the algorithms and find the humanity in each other.

    The journey to rediscover trust will not be easy, but it is necessary. It begins with honesty, accountability, and a commitment to our shared values. It requires us to remember that, in the end, trust is not a technological problem—it is a profoundly human one.

    In the End

    Dante’s journey teaches us that even in the darkest moments, trust and purpose can guide us toward redemption. At Bolgiaten, we embrace this lesson as our guiding principle. As we blend cutting-edge technology with human ingenuity, our mission is to confront the challenges of this AI-driven age with transparency, accountability, and integrity. Like Dante emerging into the light, we believe that by rediscovering trust—both in each other and in our tools—we can shape a future where technology serves humanity, not divides it, and where our shared values shine brighter than any algorithm.