
  • AI, Creativity, and the Next Rights Settlement: Why We Must Build the Future Without Hollowing Out the Artists




    I’ve spent much of my professional life watching industries change when a new “general‑purpose” technology arrives. Telecoms did it with digitisation and the smartphone. Media did it with streaming. Now the creative industries are doing it with generative AI — tools that can draft, compose, visualise, summarise, mimic and remix at a scale that would have sounded implausible a few years ago. 

    When I speak with artists, producers, commissioners, publishers, and the engineers building these systems, I hear two truths at once. First: AI is expanding what creative people can do. Second: the current economics and governance of AI risk extracting value from the creative ecosystem faster than it can replenish itself. The optimistic story and the cautionary story are both real. The question is whether we can hold on to the upside while fixing the terms of trade. 

    A vivid example captures the moment. When will.i.am and Mercedes‑Benz set out to re‑imagine the electric driving experience, they built a system where music can be separated into components — drums, melody, vocals, synth — and then recomposed in real time using live signals from the vehicle: acceleration, braking, steering and suspension travel. The result isn’t a playlist; it’s an adaptive soundtrack shaped by the way you drive. Projects like MBUX Sound Drive are a clue: AI’s most interesting creative applications are rarely about replacing people. They’re about new formats that weren’t previously possible. 

    That kind of work depends on people comfortable living in two worlds at once: code and culture. One of the most compelling thinkers I’ve read at this intersection is Manon Dave, who leads the Future World Design team within BBC Research & Development — a remit focused on what “public service creativity” becomes in an age of AI, immersive media and creator economies. 

    Spending time listening to and reading people like Dave shifts how you think about AI. It’s not a single tool; it’s a new layer of capability. Used well, it compresses the distance between idea and execution. It lowers the cost of iteration. It expands the palette. It gives you a collaborator that never runs out of patience — a sounding board you can ask for ten variations, then a hundred more in a different style. For early adopters, that matters most at the exact points where creative work often stalls: writer’s block, a sonic idea you can’t quite capture, a concept that needs “one more angle” to land.

    This is where the public debate sometimes misses the point. Too much of it is framed as “will AI replace creators?” In most real creative workflows, replacement is not the right model. Collaboration is. Contemporary pop is commonly written by teams; major productions involve dozens of specialist roles. Creative work is already multi‑author. AI becomes another participant — but one whose contribution must be governed and accounted for if we want the ecosystem to remain fair. 

    Historical analogies help us stay calm, but they don’t let us be complacent. When the synthesizer arrived, it provoked predictable anxiety. When Auto‑Tune became mainstream, it was treated as scandalous by some and indispensable by others. In time, both technologies became part of the standard toolkit, and the world didn’t end. What audiences ultimately rewarded was taste, originality and emotional truth. 

    Generative AI differs from prior creative technologies in one crucial respect: how it learns. A synthesizer doesn’t need millions of recordings to be ingested. Auto‑Tune doesn’t require training on the back catalogue of human voices. Generative models, by contrast, are built by training on large datasets — and those datasets often contain copyrighted works. That’s why rights, consent and attribution aren’t side issues. They are the central issues. 

    If AI becomes a system that can ingest the world’s creative output, learn from it, and then compete with it — while creators have no practical way to see what was taken, no practical way to license it, and no practical way to be paid — the long‑term result is a slow hollowing out. We get more content, cheaper content, faster content — and fewer sustainable careers to create the next generation of high‑quality work. 

    We can already see the same tension in journalism, where publishers argue that large‑scale scraping and reuse by AI systems is undermining the economics of original reporting. When major UK news organisations coordinate publicly to push for standards around consent, attribution and licensing, that is a signal that the basic value exchange is breaking down. 

    At the same time, we have to engage honestly with the arguments on the other side. AI developers — and some policymakers — claim broad access to data is necessary for innovation; that training is “transformative” rather than substitutive; and that heavy disclosure requirements could slow progress or expose commercial secrets. In the United States, at least one significant court ruling has leaned toward the view that training on copyrighted books can be fair use in certain circumstances, even while condemning the storage of pirated copies — a reminder that the legal landscape is contested and evolving. 

    So what do we do? I think we need to treat “AI and creativity” as three problems with three kinds of remedies. 

    The first is the fun one: keep building genuinely new formats — work that is additive rather than extractive. Sound Drive is interesting because it’s about interaction, not imitation. The same is true of experiments that make audio more immersive, make education more adaptive, or make accessibility features more powerful. In a BBC context, the most interesting question isn’t “can a model write a script?” It’s “what does public service storytelling look like when information can be contextual, conversational and responsive — and when audiences can participate rather than merely receive?” A modern re‑imagining of Ceefax for the age of conversational systems isn’t about replacing journalists. It’s about adding a layer of context that helps audiences make sense of what they’re already watching, without destroying the shared experience of watching together. 

    The second is the “boring plumbing”: attribution, provenance and authenticity. If we can’t say where media came from, how it was edited, and what tools were used, trust collapses — and with it, the ability to pay creators for verified work. That’s why open provenance standards such as C2PA matter. They are not a silver bullet, but they are the kind of infrastructure that makes a healthier ecosystem possible in a world of cheap synthetic media. 
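
    To make the “plumbing” concrete, here is a minimal sketch of the idea behind a provenance manifest: bind a content hash to a claimed edit history and sign the record, so tampering with either the media or its history invalidates the signature. This illustrates the concept only, not the C2PA format itself, which defines a far richer, certificate-based structure; all names and the signing key below are placeholders.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-a-real-key"  # placeholder; real systems use X.509 certificates


def build_manifest(media_bytes: bytes, tool: str, edits: list[str]) -> dict:
    """Bind a content hash to an edit history, then sign the record.

    Any change to the media or its claimed history breaks the signature,
    which is the core idea behind provenance manifests.
    """
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "tool": tool,
        "edits": edits,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim


manifest = build_manifest(b"...media bytes...", tool="photo-editor/1.2",
                          edits=["crop", "colour-grade"])
print(json.dumps(manifest, indent=2))
```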

    The third is the hard one: an enforceable rights settlement for training data and downstream use, built on four basics — meaningful consent, workable transparency, scalable remuneration, and accountability across the value chain. 

    If those principles feel demanding, consider the alternative. Without them, we will drift into a market where a small number of platform companies capture most of the value, while creative labour is treated as an unpriced input. That outcome is not inevitable — but it will happen by default if we don’t actively design against it. 

    I’m also wary of the lazy claim that AI will “level the playing field” automatically. It can, but only under certain conditions. AI gives superpowers to people who already have taste, craft and domain knowledge. A strong writer uses it to explore structure and argument faster. A skilled producer uses it to audition sonic ideas and refine arrangement choices. A great designer uses it to test composition and iterate. But when the foundation isn’t there, you often get a glossy imitation: technically passable, emotionally empty, instantly forgettable. In a market flooded with that kind of output, genuine skill becomes more valuable — but only if the economics of skill remain viable. 

    I’m cautiously optimistic about the next decade. Entertainment will become more adaptive. Interfaces will become more personalised. Media will become more conversational. The best experiences will be those that treat AI as a co‑pilot, not an author — a system that helps humans do more human things, not less. 

    But optimism is not a plan. A plan requires institutions — broadcasters, publishers, labels, collecting societies, regulators, standards bodies, and responsible AI developers — to align on foundations: workable licensing models, provenance standards embedded into tools and platforms, and transparency requirements that don’t collapse under lobbying. Above all, we need to make it easy for a creator — not just a major corporation — to set the terms under which their work can be used. 

    The best future is one where creators can experiment with AI freely, where new forms flourish, and where rights are respected not as an afterthought but as a design constraint. If we get that right, AI will not be the end of creativity. It will be the beginning of a new creative era — one that rewards imagination and craftsmanship while ensuring the people who make culture can still make a living from it. 

  • The Strategic Role of Human Annotators in Telecommunications AI and Data Governance



    The telecommunications sector sits at the heart of national infrastructure. It connects citizens, enterprises, governments and critical services in real time. As operators accelerate their adoption of artificial intelligence across networks, customer operations, security, and enterprise services, the quality and integrity of underlying data become mission-critical. In my view, AI readiness in telecoms does not begin with algorithms. It begins with understanding the data itself: what it represents, how it behaves, where it originates, and how it is governed.

    AI systems are now embedded across network optimisation, predictive maintenance, fraud detection, customer experience management, churn prediction, spectrum allocation, and even autonomous network management. These use cases depend on vast streams of structured and unstructured information: call detail records, network performance counters, OSS/BSS data, customer interaction logs, geospatial feeds, IoT telemetry, and security event streams. Without disciplined human interpretation of this information, organisations risk automating confusion at scale.

    This is where human annotators play a pivotal role. Human annotation is the structured process of adding context, classification, interpretation and corrective feedback to raw data so that AI systems learn the right signals. It is not a peripheral activity. It is a core operational control within a mature data governance framework.
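
    As a concrete illustration of annotation doubling as a governance record, here is a minimal sketch of what a single annotation might capture. The field names and taxonomy are assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class AnnotationRecord:
    """One human judgement about one raw record, kept audit-ready."""
    source_system: str      # e.g. an OSS fault log (illustrative)
    record_id: str
    label: str              # drawn from a controlled taxonomy, not free text
    taxonomy_version: str   # lets you re-audit after the taxonomy changes
    annotator_id: str
    rationale: str          # why this label, in the annotator's own words
    sensitive: bool         # privacy classification applied at labelling time
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


example = AnnotationRecord(
    source_system="OSS-fault-log",
    record_id="evt-20240101-0042",
    label="radio_interference",
    taxonomy_version="fault-taxonomy-v3",
    annotator_id="annot-117",
    rationale="RSSI degradation precedes the drop; no billing hold on the account",
    sensitive=False,
)
print(asdict(example))
```

    Because every record carries its annotator, taxonomy version and rationale, the same artefact serves both model training and later audit.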

    Understanding the data before training the model

    Telecommunications data is complex, noisy and highly interdependent. A single customer event may span multiple systems: CRM platforms, billing engines, network management systems and external partner feeds. A dropped call may be caused by radio interference, device configuration, congestion, or even a billing restriction. If we do not first understand the semantic meaning of each data element, we cannot responsibly train AI models to act on it.

    Human annotators help organisations interpret what the data actually represents. They validate definitions, identify inconsistencies across systems, and reconcile conflicting signals. They ensure that performance metrics are aligned to operational reality and that anomalies are not simply artefacts of system integration errors. In short, they prevent models from learning the wrong lessons.
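
    One small example of “identifying inconsistencies across systems”: a check that flags cells where two systems disagree about dropped-call counts beyond a tolerance, routing the discrepancy to an annotator rather than straight into a training set. The system names, counts and tolerance are illustrative.

```python
def flag_discrepancies(oss_counts: dict[str, int],
                       cdr_counts: dict[str, int],
                       tolerance: float = 0.05) -> list[str]:
    """Flag cells where two systems disagree on dropped-call counts
    by more than `tolerance`, for human review before training."""
    flagged = []
    for cell in sorted(set(oss_counts) | set(cdr_counts)):
        a, b = oss_counts.get(cell, 0), cdr_counts.get(cell, 0)
        baseline = max(a, b, 1)  # avoid division by zero on quiet cells
        if abs(a - b) / baseline > tolerance:
            flagged.append(cell)
    return flagged


# Illustrative counts from two systems that should agree but do not.
oss = {"cell-001": 120, "cell-002": 80, "cell-003": 15}
cdr = {"cell-001": 118, "cell-002": 52, "cell-003": 15}
print(flag_discrepancies(oss, cdr))  # ['cell-002'] -> route to an annotator
```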

    In telecoms, misunderstanding data can have serious consequences. An AI model that incorrectly flags legitimate network traffic as fraudulent could disrupt customer services. A predictive maintenance model trained on poorly interpreted fault codes could misallocate engineering resources. An automated customer resolution system that misunderstands sentiment or intent could escalate complaints rather than resolve them. Human annotation mitigates these risks by embedding contextual judgement into the training process.

    Human annotation as a governance control

    From a governance standpoint, human annotators operationalise policy. Data governance frameworks define ownership, data standards, quality thresholds, privacy constraints, retention rules, and auditability requirements. However, those controls only become real when applied to live datasets. Human annotators ensure consistent taxonomies are applied, sensitive information is correctly classified, and edge cases are treated in accordance with regulatory and ethical standards.

    Telecommunications operators are subject to strict regulatory oversight, including privacy obligations, lawful intercept requirements, cybersecurity mandates, and increasingly, AI accountability frameworks. When AI systems influence customer pricing, service eligibility, fraud detection, or network prioritisation, organisations must be able to evidence how those systems were trained and validated. Human annotation creates the documentation, review artefacts, and quality assurance records that make such accountability possible.

    Reinforcement learning and operational alignment

    Modern AI systems in telecoms increasingly rely on structured human feedback to refine performance. Whether evaluating chatbot responses, ranking network optimisation recommendations, or assessing automated ticket resolutions, human annotators compare outputs, highlight inaccuracies, and recommend improvements. This reinforcement process ensures that models remain aligned with operational policies and customer expectations.
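
    A minimal sketch of how such structured feedback can be aggregated: reviewers record pairwise preferences between candidate outputs, and simple win rates become a signal for model selection or reward modelling. The model names and records are hypothetical.

```python
from collections import Counter


def win_rates(judgements: list[dict]) -> dict[str, float]:
    """Aggregate pairwise human preferences into per-model win rates.

    Each judgement records which of two candidate outputs the reviewer
    preferred; the aggregate becomes a simple model-selection signal.
    """
    wins, appearances = Counter(), Counter()
    for j in judgements:
        appearances[j["model_a"]] += 1
        appearances[j["model_b"]] += 1
        wins[j["preferred"]] += 1
    return {m: wins[m] / appearances[m] for m in appearances}


judgements = [
    {"model_a": "ticket-bot-v1", "model_b": "ticket-bot-v2", "preferred": "ticket-bot-v2"},
    {"model_a": "ticket-bot-v1", "model_b": "ticket-bot-v2", "preferred": "ticket-bot-v2"},
    {"model_a": "ticket-bot-v1", "model_b": "ticket-bot-v2", "preferred": "ticket-bot-v1"},
]
print(win_rates(judgements))  # v2 preferred in 2 of 3 comparisons
```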

    Importantly, this cannot be fully automated. Telecommunications environments are dynamic. Network architectures evolve, product portfolios expand, regulatory interpretations change, and threat landscapes shift. Human judgement is required to recognise when historical patterns no longer reflect current reality. Annotators serve as a continuous calibration mechanism between AI outputs and operational truth.

    The skill set required in telecom environments

    The nature of telecom data means that effective human annotators must possess more than generic analytical capability. They require domain knowledge: understanding radio access networks, core infrastructure, OSS/BSS architectures, roaming agreements, billing logic, and service-level agreements. They must interpret KPIs such as latency, packet loss, throughput, call setup success rates, and mean time to repair within operational context.
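
    As a small example of reading a KPI from its underlying counters, here is call setup success rate (CSSR) derived from attempt and success counts. The counter names and figures are illustrative, and real definitions vary by vendor.

```python
def call_setup_success_rate(attempts: int, successes: int) -> float:
    """CSSR = successful setups / total attempts (counter names vary by vendor)."""
    if attempts == 0:
        return float("nan")  # no traffic is not the same as perfect success
    return successes / attempts


# Illustrative hourly counters for one cell.
attempts, successes = 10_000, 9_870
print(f"CSSR: {call_setup_success_rate(attempts, successes):.2%}")  # 98.70%
```

    An annotator who understands the counters behind the ratio can spot when a “perfect” CSSR is really an artefact of zero traffic or a broken feed.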

    In addition, they need strong data literacy. They must understand how structured tables relate to unstructured logs, how time-series data behaves, and how errors propagate through integrated systems. Critical thinking is essential, particularly when evaluating AI-generated insights that may appear statistically valid but operationally flawed.

    Equally important is governance fluency. Annotators in telecoms must understand privacy classifications, customer consent boundaries, cross-border data transfer constraints, and cybersecurity handling procedures. They must document decisions clearly and consistently, ensuring traceability from raw data through to model output.

    Human oversight in autonomous networks

    As the industry moves toward autonomous and self-optimising networks, tolerance for error decreases dramatically. AI systems may dynamically reroute traffic, adjust power levels, prioritise slices in 5G environments, or trigger automated remediation actions. If these decisions are based on poorly interpreted data, the impact can scale across millions of users within seconds.

    Human annotators provide the assurance layer. They identify ambiguous patterns, review automated decisions, validate training sets, and stress-test outputs against real-world scenarios. Their role is not to slow innovation, but to ensure that innovation is trustworthy.

    Building AI readiness in telecommunications

    In my view, AI readiness in telecoms requires three foundational commitments.

    First, organisations must invest in understanding their data estates before deploying AI at scale. That includes clear metadata management, consistent definitions across systems, lineage tracking, and measurable quality controls.

    Second, human annotation capabilities must be embedded within the operating model. This means structured guidelines, calibration sessions, quality sampling, peer review mechanisms, and integration with governance artefacts such as audit trails and compliance reporting.
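
    Calibration and quality sampling presuppose a measurable notion of agreement. One standard measure for two annotators is Cohen’s kappa, sketched below with illustrative labels.

```python
from collections import Counter


def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Agreement between two annotators, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1:
        return 1.0  # both annotators always agree by construction
    return (observed - expected) / (1 - expected)


a = ["fraud", "ok", "ok", "fraud", "ok", "ok"]
b = ["fraud", "ok", "fraud", "fraud", "ok", "ok"]
print(round(cohens_kappa(a, b), 2))  # 0.67: substantial but imperfect agreement
```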

    Third, leadership must recognise that human oversight is not a temporary bridge until automation improves. It is a permanent design principle in high-stakes environments.

    Conclusion

    Telecommunications operators are custodians of critical infrastructure. As AI becomes embedded across network operations and customer services, the margin for error narrows. Models can process data at extraordinary speed, but they do not inherently understand context, regulatory nuance, or operational complexity.

    Human annotators bridge that gap. They translate raw signals into governed knowledge. They ensure that AI systems are trained on correctly interpreted data. They embed accountability into the development lifecycle. And they provide the disciplined judgement required to deploy AI safely at scale.

    For the telecommunications sector, the message is clear: before we automate decisions, we must first understand the data. And that understanding depends on structured human insight, rigorous governance, and a mature approach to AI readiness.

  • From Task Disassembly to Startup Advantage: Building AI-Native Businesses the Right Way


    How AI’s task-level automation reshapes work – and how founders can turn that shift into an unfair advantage.


    The basic message: AI is not replacing jobs; it is unbundling them

    Most debate about AI and employment asks, “Will AI replace my job?” A more useful question is, “Which parts of my job are being automated, and what happens to the work that remains?”

    Organizations don’t truly buy “job titles”. They buy outcomes, and outcomes are produced by tasks. Jobs are convenient bundles of tasks that used to fit together because tools, data access, and coordination costs made that the most efficient arrangement.

    Generative AI pries those bundles apart. Some tasks become near-instant (drafting copy, summarizing performance, generating a first-pass analysis). Others shift from “doing” to “supervising” (reviewing, editing, approving, escalating). What remains is the role’s “internal logic”: setting direction, choosing constraints, judging trade-offs, and taking accountability.

    That is why many job descriptions now feel out of date. They still list production tasks as human requirements, even when software can do much of the execution. The gap is evidence that the internal structure of work is changing faster than roles, incentives, and training frameworks.


    Why organizations struggle: role fragmentation without redesign creates hidden risk

    When tasks disappear one by one, companies often don’t register the overall change. Headcount stays the same, reporting lines remain, and the org chart looks stable – but the day-to-day work shifts underneath. People quietly take on new responsibilities that aren’t reflected in job specs or evaluation criteria.

    This “silent role drift” creates predictable failure modes: measuring effort instead of outcomes; rewarding legacy production skills instead of higher-value judgment; and automating opportunistically (“because it’s possible”) rather than strategically (“because it improves value creation”).

    For founders, the same fragmentation is an opening. Whenever an old system is being pulled apart, there is space to rebuild it for the new reality: modular tasks, AI-augmented execution, and explicit human accountability where it matters.


    A founder’s lens: treat industries as workflows, not org charts

    If AI breaks jobs into tasks, startup opportunities look less like “AI for marketing” and more like “a tool that transforms one painful workflow step”. The fastest wins come from narrow tasks with clear inputs and outputs, where an order-of-magnitude improvement is easy to prove.

    This lens also fixes a common founder mistake: starting with the model (“we can generate X”) rather than the outcome (“customers lose Y hours and Z revenue because X is slow or error-prone”). In the AI era, prototypes are cheap; choosing the right problem, buyer, and adoption path is hard.

    So, the strategic starting point is market architecture: who feels the pain, who controls budget, and what must be true for switching to happen. Only then should you decide what to automate, what to augment, and what must remain human-led for quality, brand, or compliance reasons.


    A practical playbook: build from problem to market to route to market – then product

    1) Start with a business problem that is task-shaped

    Strong AI startup ideas map to a concrete task: “turn messy input into a structured output that a decision depends on.” Examples include triage, extraction, reconciliation, monitoring, and drafting.
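
    A minimal sketch of such a task-shaped contract: messy text in, structured fields out, with a built-in human checkpoint when a key field is missing. The regex extraction is a stand-in; in production the same contract might be filled by a model. Field names and patterns are assumptions.

```python
import re


def triage_ticket(text: str) -> dict:
    """Turn a messy inbound message into the structured fields a routing
    decision depends on. The contract (dict out) matters more than the
    extraction technique (regex here, a model in production)."""
    account = re.search(r"\bACC-\d{6}\b", text)
    urgency = "high" if re.search(r"\b(outage|down|urgent)\b", text, re.I) else "normal"
    return {
        "account_id": account.group(0) if account else None,
        "urgency": urgency,
        "needs_human": account is None,  # missing key field -> human checkpoint
    }


print(triage_ticket("URGENT: site down since 9am, account ACC-204417, please call"))
# {'account_id': 'ACC-204417', 'urgency': 'high', 'needs_human': False}
```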

    Look for tasks with five properties:

    • High frequency (daily/weekly, not quarterly).
    • High cost of delay or error (margin, churn, risk, or blocked revenue).
    • Clear inputs and outputs (even if the inputs are messy).
    • Measurable improvement (time, accuracy, throughput, or cost).
    • A natural human checkpoint (review/approval) to manage risk.

    Notice what is missing: the algorithm. If the task is valuable and well-scoped, the technology choice becomes an engineering decision – not the business thesis.

    2) Validate the addressable market in buyer terms, not user terms

    In task-disassembled workplaces, “user” and “buyer” often diverge. Analysts and agents may use the tool, while a functional leader, COO, CFO, CIO, or risk owner buys it. Your market sizing needs both layers:

    • User-level: how many people do the task, how often, and how much time is spent today?
    • Buyer-level: which budget category funds it, and what Economic Impact (EI) threshold does the buyer require?

    A strong early signal is when the buyer can price the pain in business language (revenue leakage, working-capital delay, regulatory exposure), not just “it’s annoying.”
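
    A worked sketch of sizing at both layers, with every figure a placeholder: user-level cost of the task today, and buyer-level economic impact net of the tool’s price.

```python
# User-level: who does the task and how much time it absorbs today.
people, hours_per_week, loaded_rate = 40, 6, 55         # hypothetical figures
annual_cost = people * hours_per_week * 52 * loaded_rate

# Buyer-level: what the tool must clear to be worth buying.
time_saved = 0.5                                        # assume a 50% reduction
annual_price = 60_000                                   # hypothetical price
economic_impact = annual_cost * time_saved - annual_price

print(f"Task cost today: ${annual_cost:,.0f}/yr")       # $686,400/yr
print(f"Net impact at 50% time saved: ${economic_impact:,.0f}/yr")
print(f"Multiple on price: {annual_cost * time_saved / annual_price:.1f}x")  # 5.7x
```

    If the buyer-level multiple is thin at generous assumptions, the task is probably not the right wedge, however elegant the technology.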

    3) Define route to market before you write serious code

    A credible route to market answers four questions:

    • Adoption path: self-serve, team tool, or enterprise workflow change?
    • Trust path: how will you prove accuracy, safety, and compliance for this task?
    • Integration path: what must connect on day one, and what can wait?
    • Expansion path: once you win the first task, what adjacent tasks can you grow into?

    This prevents “solution drift” – endless pilot features without a repeatable sales motion – and it shapes onboarding, pricing, and your security posture from the outset.

    4) Package and price around outcomes, not tokens

    Because AI costs can be variable, early-stage teams sometimes price on usage (tokens, calls, minutes). Buyers rarely think that way. They buy reduced cycle time, fewer errors, higher conversion, or lower risk. Whenever possible, tie pricing to a unit the business already manages: per case, per claim, per ticket, per shipment, per report, or per seat with usage guardrails.

    Outcome-oriented packaging also forces strategic clarity. If you can’t specify what “one unit of value” looks like, you will struggle to make a clean offer and your sales cycle will drift into custom consulting. The goal is a product that can be bought repeatedly, not a project that must be re-sold from scratch each time.

    5) Then build the product – and treat low technical debt as strategy

    Once problem, market, and route to market are clear, technology becomes your compounding advantage. Technical debt is uniquely expensive in AI products because models, data, and expectations shift quickly. The goal is speed without fragility: ship improvements rapidly without breaking reliability, trust, or costs.

    Design choices that reduce debt (and increase defensibility) include:

    • Clear task boundaries: each capability is modular, testable, and replaceable.
    • Data discipline: log inputs/outputs, track versions, and build feedback loops.
    • Evaluation and monitoring: define production metrics (accuracy, latency, cost, escalation rate) and measure them continuously.
    • Fallbacks and governance: when confidence is low, route to a human or safer baseline and record why (see the sketch after this list).
    • Security by default: isolate customer data, implement least-privilege access, and be explicit about retention and training use.
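
    A minimal sketch of that fallback-and-governance bullet, assuming a single confidence score per output: accept above a floor, escalate below it, and log the decision either way so the audit trail exists by construction. The threshold, names and fields are illustrative.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
CONFIDENCE_FLOOR = 0.80  # illustrative threshold, tuned per task and risk


def route(case_id: str, answer: str, confidence: float, model_version: str) -> dict:
    """Accept the model output only above the confidence floor;
    otherwise escalate to a human, logging the decision either way."""
    decision = {
        "case_id": case_id,
        "model_version": model_version,
        "confidence": confidence,
        "route": "auto" if confidence >= CONFIDENCE_FLOOR else "human_review",
        "at": datetime.now(timezone.utc).isoformat(),
    }
    logging.info(json.dumps(decision))  # audit trail: why each case went where
    return {**decision, "answer": answer if decision["route"] == "auto" else None}


route("case-9114", "Refund approved under policy 4.2",
      confidence=0.64, model_version="v0.9.3")  # -> escalated to human review
```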

    This is where the technology-driven entrepreneur gains an edge. With strong interfaces, tests, and observability, you can change models, improve retrieval, or fine-tune components without rewriting the product – while rushed competitors get trapped in regressions, rising costs, and security retrofits.


    Productising “internal logic”: automate preparation, not accountability

    The “internal logic” idea is a helpful product design rule. The most adoptable tools don’t try to remove the accountable decision-maker. They automate the preparation around decisions: gathering evidence, structuring options, highlighting anomalies, and drafting recommendations that a human can accept, edit, or reject.

    This aligns with procurement reality (leaders want accountability), reduces adoption friction (people still recognize their judgment in the loop), and creates a natural moat (your system improves as it learns domain patterns and captures feedback from real decisions).


    The “task wedge” pattern: start small to earn the right to expand

    A reliable AI startup pattern is to win a single wedge task that is narrow enough to deploy quickly but connected enough to expand. Automate one step (for example, summarizing and routing inbound requests in a call centre) and prove a measurable improvement. That earns trust, integration access, and data.

    From there, expand into adjacent tasks that share the same workflow context: suggested replies, quality assurance, knowledge gap detection, risk flags, or reporting. You become a platform because you started as the best tool for one task – and you built the architecture to add the next task without destabilizing the system.

    Over time, customers stop describing you as “the AI tool” and start describing you as “how we run this workflow now”. That shift – from feature to default operating layer – is where durable enterprise value is created.


    The core thesis for founders

    AI is changing work by disassembling roles into tasks. Organizations experience the disruption as unmanaged fragmentation. Entrepreneurs can treat it as a blueprint for rebuilding workflows: automate what is repeatable, augment what benefits from speed, and protect human accountability where judgment matters.

    The winning sequence is market-first, then product: understand the business problem in task-level detail; validate the addressable market in buyer terms; design a realistic route to market; then build the technology with minimal technical debt. Maintainable speed – the ability to evolve quickly without breaking trust – is the compounding advantage in an AI-native world.

    Acknowledgement: This article was inspired by themes from “AI Is Breaking Jobs Into Tasks, And That Changes Everything” by Bernard Marr.