AI Agents in an AI-Native World: A Governance-First Guide

Preface

This article reflects my perspective on the responsible adoption of AI agents within the context of TM Forum’s AI & Data mission, the AI Governance Toolkit, the Data Governance Framework and the AI-Native Blueprint – in particular its Security & Governance and Agentic AI workstreams – as well as my own private consultancy work.

As Global Ambassador for AI/ML Governance at TM Forum, my focus is on helping Communications Service Providers (CSPs), ecosystem partners and adjacent industries (government and enterprise) move beyond isolated proofs of concept towards safe, secure and scalable AI-native operations. That means anchoring every innovation – including the current wave of agentic AI – in clear governance, trusted data, transparent decisioning, and demonstrable business value.



The purpose of this article is pragmatic. It offers a simple, governance-aligned pathway for leaders and practitioners who want to begin experimenting with AI agents today, without compromising on control, compliance or trust. It connects the practical ‘how’ of getting started with agents to the ‘why’ and ‘so what’ of TM Forum’s AI Governance assets and the AI-Native Blueprint Security & Governance approach.

Professor Paul Morrissey
Global Ambassador – AI/ML Governance, TM Forum, and Chairman, Bolgiaten Limited


AI Agents in an AI-Native World: A Governance-First Guide

AI agents are rapidly emerging as one of the most powerful patterns in the AI landscape.

They can plan, decide, act and collaborate across systems in ways that move us beyond simple chatbots or fixed automation. For Communications Service Providers, digital service providers and the wider enterprise domain, this aligns directly with the shift towards AI-native operations and Open Digital Architecture (ODA).

However, the fact that something is technically possible does not make it operationally wise. TM Forum’s AI Governance Toolkit, Data Governance Framework and AI & Data Governance project all emphasise the same point: AI at scale must be governed by design, not bolted on as an afterthought. That is even more important when we move into the world of agentic AI – systems that can invoke tools, call APIs, read and write data, and trigger real-world actions with relatively little human intervention.

In parallel, the TM Forum AI-Native Blueprint defines the foundational capabilities, principles and operational enablers needed to embed AI natively into CSP architectures and operations. Its workstreams – including Agentic AI, Security & Governance, Data Architecture and AI Operations – provide a coherent industry blueprint for how agents, data and models should be designed, secured and governed end-to-end.

In this context, the practical question is: how do we start working with AI agents in a way that aligns with these global best-practice frameworks, delivers measurable value and does not introduce uncontrolled risk? The rest of this article provides a five-step, governance-first roadmap to do exactly that.


From Automation to Agentic AI in an AI-Native Architecture

Traditional software automates well-defined tasks by following static, hard-coded rules. It is fast and reliable within the boundaries we specify, but it does not reason or adapt.

Conversational AI systems – the first wave of large language model (LLM) chatbots – advanced this model by interpreting natural language prompts. They excel at answering questions and generating content but still tend to operate in a single-turn or narrow multi-turn paradigm: you ask, they respond.

Agentic AI changes this. Agents can:

  • Understand and align to high-level goals rather than just isolated instructions.
  • Break goals into sub-tasks, plan multi-step workflows and adapt as they go.
  • Orchestrate tools, APIs, data sources and enterprise systems.
  • Operate continuously and contextually rather than in a ‘one-and-done’ mode.

Within an AI-native architecture, agents become first-class citizens. They sit alongside traditional services and ODA components, consuming data products, invoking intent-based APIs and collaborating with human operators. This is why governance is non-negotiable: agents are not just answering questions, they are influencing – and sometimes executing – real business operations.



For this reason, I always advise clients and their partners to think of agents not as ‘virtual workers’ that substitute for people, but as governed capabilities that augment human judgment and organisational intelligence. The role of TM Forum’s AI and data assets is to make that augmentation safe, observable and accountable.


A Five-Step, Governance-Aligned Roadmap for Getting Started with Agents

To cut through the hype and complexity, it helps to start small and structured. The following five steps are designed to be simple enough for immediate experimentation, but robust enough to map directly to TM Forum’s AI Governance, Data Governance and AI-Native Blueprint principles.


Step 1: Define a Governed, Value-Driven Use Case

The first decision is not which platform to use, but which problem to solve – and whether solving it with agents is both valuable and governable.

Start with a task that is:

  • Repetitive and time-consuming.
  • Well understood by your domain experts.
  • Measurable in terms of cost, time or quality.
  • Low to moderate risk if something goes wrong.

Typical examples in a CSP or digital enterprise context include:


  • Generating periodic performance or sales summaries from existing data.
  • Turning meeting transcripts or tickets into structured actions and follow-up communications.
  • Triaging customer requests, routing them and suggesting responses.
  • Monitoring competitor, network or market signals and producing concise insight briefs.

From a TM Forum governance perspective, even at this early stage you should be asking:


  • What is the business objective for this agentic use case?
  • Which KPIs or value levers will we use to measure success?
  • What are the risks – operational, regulatory, reputational, security – and who owns them?

This aligns directly with the AI Governance Toolkit and AI Risk Atlas thinking: value and risk must be framed together from the outset, not separately or sequentially. The AI Canvas is a useful tool for framing them in this way.


Step 2: Choose Tools That Fit an AI-Native, Governed Architecture

Once the use case is clear, the next step is to select the right implementation path. There are two broad approaches:

  1. Visual and low-code agent platforms
    These allow non-specialists to design agents and workflows using natural language and simple configuration. They are ideal for early experimentation, provided they can integrate with your security, identity and data governance controls.
  2. Code-first and framework-based approaches
    Frameworks such as LangChain, AutoGen or CrewAI give engineering teams greater control over how agents plan, call tools, manage memory and interact with each other. They are well suited to embedding agents deeply into ODA-conformant architectures and AI-Native design patterns.
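To make the code-first option concrete, the core loop that such frameworks manage for you can be sketched in plain Python. This is a framework-agnostic illustration only: `call_llm`, the ticket tool and the ticket ID are hypothetical placeholders, not the API of any specific library.

```python
# Minimal, framework-agnostic sketch of the plan -> act -> observe loop
# that agent frameworks implement. All names here are illustrative.

def lookup_ticket(ticket_id: str) -> str:
    """Stand-in for a governed API call (e.g. a trouble-ticket system)."""
    return f"Ticket {ticket_id}: customer reports intermittent broadband drops."

TOOLS = {"lookup_ticket": lookup_ticket}  # explicit allow-list of tools

def call_llm(goal: str, history: list[str]) -> dict:
    """Placeholder for an LLM call that returns either a tool request
    or a final answer. A real agent would send `history` as context."""
    if not history:
        return {"action": "lookup_ticket", "input": "T-1042"}
    return {"action": "final", "input": "Summary: likely line fault; escalate to field ops."}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):            # hard step budget = a basic guardrail
        decision = call_llm(goal, history)
        if decision["action"] == "final":
            return decision["input"]
        tool = TOOLS[decision["action"]]  # only allow-listed tools can run
        observation = tool(decision["input"])
        history.append(observation)       # feed the result back to the planner
    return "Step budget exceeded - escalating to a human."

print(run_agent("Triage ticket T-1042"))
```

Even at sketch level, two governance hooks are visible: the explicit tool allow-list and the hard step budget. Frameworks add planning, memory and multi-agent coordination on top of this loop, which is why they suit deeper architectural integration.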

Whichever route you choose, align it with the AI-Native Blueprint and TM Forum’s Open Digital Framework:

  • Treat agents as components within a governed architecture, not side-channel experiments.
  • Use standardised APIs and intent interfaces wherever possible.
  • Ensure platform choices do not bypass your existing security and compliance controls.

In practice, this often means:

  • Integrating agent platforms with enterprise identity and access management.
  • Ensuring data access is mediated via governed data products and catalogues.
  • Designing for observability: logs, traces, events and metrics that can feed into SecOps and AIOps.
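The observability point above can be made concrete with a small sketch: every agent action emits a structured audit event that downstream SecOps and AIOps pipelines can consume. The field names are illustrative assumptions, not a TM Forum or industry schema.

```python
# Sketch: structured audit events for every agent action, so monitoring
# and incident-response pipelines can consume them. Field names are
# illustrative, not a defined standard.
import json
import time
import uuid

def audit_event(agent_id: str, action: str, target: str, outcome: str) -> str:
    event = {
        "event_id": str(uuid.uuid4()),   # unique, traceable event identifier
        "timestamp": time.time(),
        "agent_id": agent_id,            # which agent acted
        "action": action,                # tool or API invoked
        "target": target,                # system or data product touched
        "outcome": outcome,              # e.g. success / denied / escalated
    }
    return json.dumps(event)             # ship to your log pipeline as JSON

line = audit_event("billing-summary-agent", "read", "dp.billing.monthly", "success")
print(line)
```

The design choice worth noting is that events are emitted per action, not per session: this is what makes it possible to reconstruct exactly which tools and data products an agent touched when behaviour is later questioned.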

The AI-Native Blueprint Security & Governance workstream is precisely about standardising this kind of end-to-end approach so that agentic innovation never sits outside your control plane.


Step 3: Prepare and Govern the Data the Agent Will Use

No agent is better than the data and signals it consumes. TM Forum’s Data Governance Framework is very clear on the need for ethical, secure and accountable use of data. For agentic use cases, this translates into a few practical checks:

1. Discover and classify the data

  • Where does the data live (CRM, billing, network, OSS/BSS, data lake, external feeds)?
  • What is its sensitivity level (customer-identifiable, commercially sensitive, public)?
  • Who is accountable for its quality and use?

2. Ensure lawful, policy-aligned access

  • Does the agent’s access comply with your data governance policies, privacy regulations and customer expectations?
  • Are there role-based or purpose-based access controls that must be enforced?

3. Clean and standardise

  • Remove obviously duplicate, obsolete or corrupted records.
  • Align basic formats (dates, IDs, naming conventions) so that the agent’s reasoning is not derailed by noise.

4. Start with a safe subset

  • For early experimentation, use sampled or synthetic data where possible.
  • Avoid directly connecting agents to production-critical or highly sensitive datasets before the patterns are well understood.

From a TM Forum standpoint, you should treat data not as an afterthought but as a governed product. Your agents should consume data via well-defined, catalogued data products that carry metadata on lineage, quality and policy constraints. This makes it far easier to prove compliance and to diagnose issues when behaviour is unexpected.
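A minimal sketch of that idea: a catalogued data product carries governance metadata (owner, sensitivity, lineage, permitted purposes), and the agent's access is gated on its declared purpose. The labels, purposes and product names below are illustrative assumptions, not a TM Forum data-product schema.

```python
# Sketch: a governed data product with metadata, plus a purpose-based
# access check enforced before an agent may read it. All names and
# sensitivity labels are illustrative.
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    owner: str                           # accountable data owner
    sensitivity: str                     # e.g. "public", "commercial", "pii"
    allowed_purposes: set = field(default_factory=set)
    lineage: str = ""                    # where the data came from

def grant_access(dp: DataProduct, purpose: str) -> bool:
    """Purpose-based access control: deny by default."""
    return purpose in dp.allowed_purposes

tickets = DataProduct(
    name="dp.care.tickets",
    owner="care-ops",
    sensitivity="pii",
    allowed_purposes={"triage", "quality-reporting"},
    lineage="CRM -> data lake -> curated product",
)

assert grant_access(tickets, "triage")          # agent's declared purpose
assert not grant_access(tickets, "marketing")   # out-of-purpose use is denied
```

Because the metadata travels with the product, the same record that mediates access also supports compliance evidence and diagnosis: lineage and ownership are queryable at the point of use.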


Step 4: Design the Agentic Workflow with Trust, Risk and Security Controls

With the data and tools in place, the next step is to design the agentic workflow itself – not just in terms of tasks, but also in terms of controls.

At a functional level, define:

  • Inputs – the triggers that cause the agent to act (a new ticket, a status change, an event from the network, a customer query).
  • Tasks – the steps the agent will take (retrieve data, call tools, analyse, summarise, recommend, draft).
  • Outputs – the artefacts or actions produced (a report, an email draft, a classification, an API call, an alert).

Now overlay the governance and security lens:

Trust and explainability
– Can we explain, at a business level, what the agent is doing and why?

– Are its recommendations traceable back to data and policies we understand?

Risk controls and guardrails
– Are there thresholds beyond which the agent must escalate to a human?

– Are high-risk actions (credits, discounts, configuration changes, access changes) always human-approved?

Security and privacy
– Are secrets, credentials and tokens managed securely and never exposed in prompts or logs?

– Are prompts and responses monitored for data leakage, policy violations or adversarial behaviour?

TM Forum’s AI assets, together with broader security frameworks, can be used as guardrails here. They encourage an approach where agentic workflows are designed with observability, assurance and controllability built in. Think of each agent as having a ‘control envelope’ – a clearly defined scope of authority, visibility and accountability that can be tested and audited.
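The control envelope idea can be sketched directly: an explicit scope of authority plus a human-approval gate for high-risk actions. The action names and thresholds below are illustrative assumptions, chosen only to show the pattern of deny-by-default with escalation.

```python
# Sketch: a 'control envelope' for an agent - a defined scope of authority
# with a human-approval gate for high-risk actions. Action names and the
# threshold value are illustrative assumptions.
from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"issue_credit", "change_config", "grant_access"}

@dataclass
class ControlEnvelope:
    allowed_actions: set
    credit_limit: float = 0.0            # max value the agent may act on alone

def decide(envelope: ControlEnvelope, action: str, value: float = 0.0) -> str:
    if action not in envelope.allowed_actions:
        return "denied"                  # outside the envelope entirely
    if action in HIGH_RISK_ACTIONS and value > envelope.credit_limit:
        return "escalate_to_human"       # in scope, but above the threshold
    return "execute"

env = ControlEnvelope(allowed_actions={"draft_reply", "issue_credit"},
                      credit_limit=10.0)

assert decide(env, "draft_reply") == "execute"
assert decide(env, "issue_credit", value=50.0) == "escalate_to_human"
assert decide(env, "change_config") == "denied"
```

Because the envelope is declared as data rather than buried in prompts, it can be reviewed, tested and audited independently of the agent's reasoning, which is precisely what makes the scope of authority demonstrable.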


Step 5: Iterate, Assure and Scale – Never ‘Set and Forget’

If you have chosen a focused use case, it should be relatively straightforward to compare the agent’s performance against the previous manual or semi-automated process. But in an AI-native, governed environment, iteration is not just about performance – it is about assurance.

Use this stage to:

Validate value
– Is the agent genuinely saving time, reducing errors or improving experience?

– Are there measurable uplifts in the KPIs you defined in Step 1?

Validate safety and compliance
– Review logs to understand how the agent is reasoning and which tools it is invoking.

– Check for policy breaches, hallucinations, mis-routings or unexpected behaviours.

Calibrate guardrails
– Tighten or loosen thresholds for human approval depending on observed behaviour.

– Refine prompts, constraints and escalation paths.

Once you have a stable, trustworthy pattern, you can begin to scale:

  • Apply the same pattern to adjacent processes or channels.
  • Introduce multiple specialised agents that collaborate, each with a clearly defined scope.
  • Integrate outputs into your broader AIOps, SecOps and assurance environment.

Crucially, scaling must not mean relinquishing control. TM Forum’s AI Governance and Responsible AI initiatives emphasise continuous monitoring and post-production oversight as essential components of any serious AI programme. Agentic systems are no exception – they require ongoing operational governance, not one-time sign-off.


Security, Governance and the AI-Native Security & Governance Stream

The AI-Native Blueprint recognises that security and governance are not separate from innovation – they are enablers of safe innovation. For agentic AI, this translates into several practical imperatives:

Align with established security frameworks
– Map agentic risks and controls to recognised standards rather than inventing everything from scratch.

– Treat LLMs, tools and agents as assets in your security architecture, with defined owners and controls.

Establish clear trust boundaries
– Define which systems and zones agents may interact with, and under what conditions.

– Use gateways, policy engines and API management to enforce those boundaries.

Instrument for observability and incident response
– Ensure that agent activity feeds into your monitoring and incident response processes.

– Treat agentic misuse, prompt injection or data leakage as security incidents, not just ‘model quirks’.

Govern models and prompts as first-class artefacts
– Maintain model, prompt and configuration lineage: who changed what, when and why.

– Apply change management and testing to agent updates just as you would to production software.

In other words, the Security & Governance stream of the AI-Native Blueprint is not an add-on; it is the fabric within which safe agentic innovation happens. CSPs that get this right will be able to adopt agents faster, with greater confidence and far less risk of regulatory or customer backlash.


Keeping Humans in the Loop: Culture, Skills and Accountability

One of the biggest risks with agents is not technical – it is human. If we start to treat agents as autonomous employees rather than governed tools, two things happen:

  • We over-trust their outputs and under-invest in oversight.
  • Our people begin to feel displaced rather than augmented.

TM Forum’s work on AI Governance and Responsible AI consistently reinforces the human dimension:

  • Clear accountability – there is always a named human owner for each use case and each agentic workflow.
  • Skills and literacy – teams must understand both the power and the limitations of agents.
  • Transparency – users and customers should know when they are interacting with an agent and what that implies.

In practice, this means designing your operating model so that:

  • Agents handle the repetitive, structured, automatable parts of a process.
  • Humans focus on edge cases, empathy, negotiation, judgment and strategy.
  • Feedback from human operators is systematically captured to improve and govern the agent over time.

The goal is not to remove humans from the loop, but to move them to the right part of the loop – overseeing, steering and enriching the system rather than manually repeating tasks that an agent can perform more efficiently.


Conclusion: The Agentic Future is AI-Native, Governed and Human-Centred

AI agents are not a theoretical curiosity; they are already reshaping how digital businesses operate. For CSPs and the broader ecosystem, they sit at the intersection of AI-native architecture, data products, automation and customer experience.

But there is a choice to be made. We can either deploy agents as ad hoc experiments – fast but fragile, powerful but poorly governed – or we can adopt them within the governance, security and architectural principles embodied in TM Forum’s AI & Data mission, AI Governance Toolkit, Data Governance Framework and AI-Native Blueprint.

The five-step roadmap outlined here is intended as a practical starting point:

  1. Define a governed, value-driven use case.
  2. Choose tools that fit an AI-native, governed architecture.
  3. Prepare and govern the data the agent will use.
  4. Design the agentic workflow with trust, risk and security controls.
  5. Iterate, assure and scale – never ‘set and forget’.

Those who take this governance-first path will not only move faster, they will move with confidence – able to demonstrate to boards, regulators, partners and customers that their agentic innovation is safe, explainable, ethical and aligned with long-term value creation.

The agentic revolution is already underway. The question is not whether it will arrive, but whether we will shape it deliberately and responsibly. TM Forum’s frameworks, toolkits and blueprints exist to ensure that we do.
