From Tools to Teammates: AI’s Creative Upside — and the Rights Reckoning We Can’t Avoid
I’ve spent much of my professional life watching industries change when a new “general‑purpose” technology arrives. Telecoms did it with digitisation and the smartphone. Media did it with streaming. Now the creative industries are doing it with generative AI — tools that can draft, compose, visualise, summarise, mimic and remix at a scale that would have sounded implausible a few years ago.
When I speak with artists, producers, commissioners, publishers, and the engineers building these systems, I hear two truths at once. First: AI is expanding what creative people can do. Second: the current economics and governance of AI risk extracting value from the creative ecosystem faster than it can replenish itself. The optimistic story and the cautionary story are both real. The question is whether we can hold on to the upside while fixing the terms of trade.
A vivid example captures the moment. When will.i.am and Mercedes‑Benz set out to re‑imagine the electric driving experience, they built a system where music can be separated into components — drums, melody, vocals, synth — and then recomposed in real time using live signals from the vehicle: acceleration, braking, steering and suspension travel. The result isn’t a playlist; it’s an adaptive soundtrack shaped by the way you drive. Projects like MBUX Sound Drive are a clue: AI’s most interesting creative applications are rarely about replacing people. They’re about new formats that weren’t previously possible.
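To make the idea concrete, here is a deliberately toy sketch of an adaptive mixer in the spirit of Sound Drive. Everything in it is an assumption for illustration: the signal names, ranges, and stem mappings are invented, not Mercedes-Benz's or will.i.am's actual system. The point is only the shape of the format: driving signals become per-stem volume levels, recomputed continuously.

```python
def clamp(x, lo=0.0, hi=1.0):
    """Keep a gain within the 0..1 range."""
    return max(lo, min(hi, x))

def stem_gains(accel, brake, steering):
    """Map normalised driving signals (0..1; steering may be -1..1)
    to per-stem volume levels. Hypothetical mappings for illustration."""
    return {
        "drums":  clamp(0.3 + 0.7 * accel),          # harder acceleration -> heavier drums
        "vocals": clamp(1.0 - 0.6 * brake),          # braking ducks the vocal
        "synth":  clamp(0.4 + 0.6 * abs(steering)),  # cornering brightens the synth
        "melody": 1.0,                               # melody anchors the track
    }

# A gentle right-hand bend under acceleration:
print(stem_gains(accel=0.8, brake=0.0, steering=0.2))
```

A real system would, of course, crossfade these gains over time and sync changes to the bar line rather than applying them instantly; but even this toy version shows why the result is a format, not a playlist.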
That kind of work depends on people comfortable living in two worlds at once: code and culture. One of the most compelling thinkers I’ve read at this intersection is Manon Dave, who leads the Future World Design team within BBC Research & Development — a remit focused on what “public service creativity” becomes in an age of AI, immersive media and creator economies.
Spending time listening to and reading people like Dave shifts how you think about AI. It’s not a single tool; it’s a new layer of capability. Used well, it compresses the distance between idea and execution. It lowers the cost of iteration. It expands the palette. It gives you a collaborator that never runs out of patience — a sounding board you can ask for ten variations, then a hundred more in a different style. For early adopters, that matters most at the exact points where creative work often stalls: writer’s block, a sonic idea you can’t quite capture, a concept that needs “one more angle” to land.
This is where the public debate sometimes misses the point. Too much of it is framed as “will AI replace creators?” In most real creative workflows, replacement is not the right model. Collaboration is. Contemporary pop is commonly written by teams; major productions involve dozens of specialist roles. Creative work is already multi‑author. AI becomes another participant — but one whose contribution must be governed and accounted for if we want the ecosystem to remain fair.
Historical analogies help us stay calm, but they don’t let us be complacent. When the synthesizer arrived, it provoked predictable anxiety. When Auto‑Tune became mainstream, it was treated as scandalous by some and indispensable by others. In time, both technologies became part of the standard toolkit, and the world didn’t end. What audiences ultimately rewarded was taste, originality and emotional truth.
Generative AI differs from prior creative technologies in one crucial respect: how it learns. A synthesizer doesn’t need millions of recordings to be ingested. Auto‑Tune doesn’t require training on the back catalogue of human voices. Generative models, by contrast, are built by training on large datasets — and those datasets often contain copyrighted works. That’s why rights, consent and attribution aren’t side issues. They are the central issues.
If AI becomes a system that can ingest the world’s creative output, learn from it, and then compete with it — while creators have no practical way to see what was taken, no practical way to license it, and no practical way to be paid — the long‑term result is a slow hollowing out. We get more content, cheaper content, faster content — and fewer sustainable careers to create the next generation of high‑quality work.
We can already see the same tension in journalism, where publishers argue that large‑scale scraping and reuse by AI systems is undermining the economics of original reporting. When major UK news organisations coordinate publicly to push for standards around consent, attribution and licensing, that is a signal that the basic value exchange is breaking down.
At the same time, we have to engage honestly with the arguments on the other side. AI developers — and some policymakers — claim broad access to data is necessary for innovation; that training is “transformative” rather than substitutive; and that heavy disclosure requirements could slow progress or expose commercial secrets. In the United States, at least one significant court ruling has leaned toward the view that training on copyrighted books can be fair use in certain circumstances, even while condemning the storage of pirated copies — a reminder that the legal landscape is contested and evolving.
So what do we do? I think we need to treat “AI and creativity” as three problems with three kinds of remedies.
The first is the fun one: keep building genuinely new formats — work that is additive rather than extractive. Sound Drive is interesting because it’s about interaction, not imitation. The same is true of experiments that make audio more immersive, make education more adaptive, or make accessibility features more powerful. In a BBC context, the most interesting question isn’t “can a model write a script?” It’s “what does public service storytelling look like when information can be contextual, conversational and responsive — and when audiences can participate rather than merely receive?” A modern re‑imagining of Ceefax for the age of conversational systems isn’t about replacing journalists. It’s about adding a layer of context that helps audiences make sense of what they’re already watching, without destroying the shared experience of watching together.
The second is the “boring plumbing”: attribution, provenance and authenticity. If we can’t say where media came from, how it was edited, and what tools were used, trust collapses — and with it, the ability to pay creators for verified work. That’s why open provenance standards such as C2PA matter. They are not a silver bullet, but they are the kind of infrastructure that makes a healthier ecosystem possible in a world of cheap synthetic media.
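The core mechanism behind standards like C2PA can be stated in a few lines: a manifest describing the asset's origin and edit history is cryptographically bound to the media bytes, so any tampering is detectable. The sketch below is a simplified illustration of that idea only — it uses a bare SHA-256 hash binding and omits the signed certificates, claim chains, and embedding rules that the actual C2PA specification defines.

```python
import hashlib

def make_manifest(asset_bytes, tool, actions):
    """Build a toy provenance record: a hash binding plus an edit history.
    (Real C2PA manifests are signed and embedded in the file itself.)"""
    return {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "tool": tool,        # which software produced this version
        "actions": actions,  # human-readable list of edits applied
    }

def verify_manifest(asset_bytes, manifest):
    """True only if the media bytes still match the hash in the manifest."""
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["asset_sha256"]

photo = b"...image bytes..."
m = make_manifest(photo, tool="ExampleEditor 1.0", actions=["crop", "colour-grade"])
print(verify_manifest(photo, m))         # untouched asset -> True
print(verify_manifest(photo + b"x", m))  # any modification -> False
```

Even this stripped-down version shows why provenance is "plumbing": it doesn't judge content, it just makes origin and editing history checkable — which is the precondition for paying creators for verified work.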
The third is the hard one: an enforceable rights settlement for training data and downstream use, built on four basics — meaningful consent, workable transparency, scalable remuneration, and accountability across the value chain.
If those principles feel demanding, consider the alternative. Without them, we will drift into a market where a small number of platform companies capture most of the value, while creative labour is treated as an unpriced input. That outcome is not inevitable — but it will happen by default if we don’t actively design against it.
I’m also wary of the lazy claim that AI will “level the playing field” automatically. It can, but only under certain conditions. AI gives superpowers to people who already have taste, craft and domain knowledge. A strong writer uses it to explore structure and argument faster. A skilled producer uses it to audition sonic ideas and refine arrangement choices. A great designer uses it to test composition and iterate. But when the foundation isn’t there, you often get a glossy imitation: technically passable, emotionally empty, instantly forgettable. In a market flooded with that kind of output, genuine skill becomes more valuable — but only if the economics of skill remain viable.
I’m cautiously optimistic about the next decade. Entertainment will become more adaptive. Interfaces will become more personalised. Media will become more conversational. The best experiences will be those that treat AI as a co‑pilot, not an author — a system that helps humans do more human things, not less.
But optimism is not a plan. A plan requires institutions — broadcasters, publishers, labels, collecting societies, regulators, standards bodies, and responsible AI developers — to align on foundations: workable licensing models, provenance standards embedded into tools and platforms, and transparency requirements that don’t collapse under lobbying. Above all, we need to make it easy for a creator — not just a major corporation — to set the terms under which their work can be used.
The best future is one where creators can experiment with AI freely, where new forms flourish, and where rights are respected not as an afterthought but as a design constraint. If we get that right, AI will not be the end of creativity. It will be the beginning of a new creative era — one that rewards imagination and craftsmanship while ensuring the people who make culture can still make a living from it.
