• Do Start-ups Still Need a CTO in the Age of AI? Yes — But Timing Now Matters More Than Ever. 

    One of the most interesting questions I am hearing from founders at the moment is this: 

    “With tools such as OpenAI, Claude and Gemini capable of generating code, debugging applications, and helping design software architecture, do start-ups still need a CTO? Or has technology effectively become a flexible resource — something you simply rent when required?” 

    It is a fair question, and one that reflects how profoundly the technology landscape has changed over the past two years. 

    My answer is nuanced. If your idea depends on technology to demonstrate a proof of concept, build a product, or scale a platform business, then you absolutely need CTO capability somewhere in the organisation. But that does not necessarily mean you should appoint a traditional CTO on day one. 

    That distinction is becoming increasingly important for founders. 

    The fundamental mistake many people make in this discussion is confusing software production with technology leadership. Tools such as OpenAI’s GPT models, Anthropic’s Claude and Google’s Gemini are becoming extremely capable engineering assistants. They can generate code, help structure applications, review security issues, and accelerate development cycles dramatically. 

    For early-stage founders, this means something important: the journey from idea to prototype has never been shorter. With the right prompts and some disciplined thinking, founders can now get surprisingly far in building proof-of-concept systems without maintaining a large internal engineering team. 

    In many cases this allows start-ups to validate their ideas faster and with far less capital than was historically required. 

    But there is an important caveat. 

    AI tools can help build software, but they do not provide technical judgement. They do not carry responsibility for architectural decisions. They do not worry about how a platform will behave under scale, whether the data model creates strategic value, or whether a design choice today will create catastrophic technical debt two years later. 

    That is where CTO capability remains critical. 

    A good CTO does not simply write code. A good CTO makes decisions about what should be built, what should never be built, how systems should be structured, how security and resilience are embedded, and how technology aligns with the business strategy of the company. 

    Those are leadership responsibilities, not coding tasks. 

    However, there is another dimension founders need to think carefully about — and one that is often overlooked in the excitement of early-stage building. 

    Bringing in a CTO very early in a company’s life has consequences. 

    First, there is the question of equity. Early technical co-founders are often granted significant ownership stakes in the company. In many cases that can mean 20–30% of the company’s equity being allocated before the business model, market validation, or funding strategy has been fully proven. If the technical capability required in the early stages could have been achieved through modern AI tools combined with outsourced engineering, founders may find they have diluted themselves far earlier than necessary. 

    Second, there is the very real risk of founder conflict. A commonly cited estimate, popularised by Harvard’s Noam Wasserman, is that roughly 65% of high-potential technology start-ups fail primarily because of co-founder conflict. 

    Co-founder relationships are complex. When a technical co-founder joins very early, before the product direction, market focus, and organisational culture have fully formed, there is often significant potential for strategic disagreement. Differences in pace, product philosophy, funding strategy, or simply personality can become structural tensions within the business. Many start-ups fail not because the idea is wrong, but because the founding team fractures. 

    Introducing that dynamic prematurely can be a risk founders should think about carefully. 

    None of this means that technical leadership is unimportant. Quite the opposite. In technology-driven businesses, strong technical leadership eventually becomes essential. But the timing and structure of that leadership have become more flexible. 

    In today’s environment there are several models that founders can consider. 

    The first is AI-assisted prototyping combined with outsourced development. This can allow a founder to move quickly from idea to proof of concept without immediately committing to a full-time CTO or large engineering team. 

    The second is the use of a fractional CTO or senior technical adviser — someone who provides architectural guidance and oversight while the product and business model are still being validated. 

    The third model, of course, remains the traditional technical co-founder, but it is one that founders should enter into with clarity about long-term alignment and the implications for ownership and control. 

    What has changed is not the need for technology leadership. What has changed is the economics of early-stage software development. 

    AI tools have dramatically lowered the cost and speed of building early systems. They have made it possible for founders to explore ideas, test workflows, and validate customer needs without committing immediately to permanent technical leadership structures. 

    But they have not eliminated the need for someone who understands the deeper implications of technology choices. 

    If your business ultimately depends on scale, reliability, security, data advantage, or defensible intellectual property, then CTO-level thinking will eventually become indispensable. 

    The key lesson for founders today is therefore simple. 

    Do not confuse the ability to generate code with the ability to build a technology company. 

    AI tools are extraordinary accelerators. They can compress months of work into days and allow small teams to produce remarkable outputs. But technology leadership — the judgement that determines how systems are designed, how they evolve, and how they support the long-term strategy of the business — remains a fundamentally human responsibility. 

    So the real answer to the question is this: 

    Yes, technology start-ups still need CTO capability. 

    But thanks to the emergence of powerful AI development tools, founders now have far greater freedom to decide when that capability becomes permanent — and how much equity, control, and risk they are prepared to attach to that decision. 

  • The Importance of Board Disagreements

    Corporate boards exist at the heart of modern governance. They sit between ownership and management, responsible for ensuring that organisations are directed and controlled in ways that create long-term value while protecting the interests of stakeholders. The board’s responsibilities include oversight of strategy, monitoring performance and risk, and ensuring accountability to shareholders, regulators and society at large. Directors are therefore not merely advisers to management; they are stewards of the enterprise and must exercise independent judgement in the interests of the organisation’s future.

    In practice, this responsibility requires boards to do far more than simply endorse the views of executives. The board’s purpose is to challenge, test and refine management thinking. Good governance depends on maintaining a clear distinction between those who run the company day-to-day and those who oversee its direction. Executives manage operations, while the board provides oversight, strategic guidance and accountability, ensuring that management decisions are aligned with the long-term interests of the company and its stakeholders.

    One of the most misunderstood aspects of board effectiveness is the role of disagreement. Many people unfamiliar with governance assume that a well-functioning board should be harmonious and unified. In reality the opposite is often true. Healthy disagreement is not a sign of dysfunction but of engagement. When directors bring different perspectives, experiences and expertise into the room, debate becomes a powerful tool for better decision-making. Research on boardroom dynamics shows that “vigorous dissent” around strategic issues improves decision quality and helps boards avoid groupthink.

    The danger of excessive consensus is that it can allow the status quo to persist unchallenged. Organisations, particularly successful ones, can easily fall into patterns of thinking that go unquestioned over time. Boards are uniquely positioned to disrupt this complacency. Non-executive directors and chairs are deliberately placed one step removed from daily management so that they can bring independence of thought and a broader perspective. Their role is to ask difficult questions: Why are we pursuing this strategy? What risks are we overlooking? What alternative options should be considered?

    Throughout my own career as a Chair and Non-Executive Director across multiple organisations, I have repeatedly seen how constructive disagreement strengthens decision-making. Boards are composed of individuals with different backgrounds, sectors of experience and personal insights. When those perspectives collide respectfully, they force deeper analysis and more robust conclusions. The best boardrooms I have been part of were not silent or overly polite; they were intellectually demanding environments where directors felt confident enough to question assumptions and challenge the executive team.

    This dynamic is essential because boards carry responsibilities that extend beyond shareholders alone. Directors must consider the impact of decisions on employees, customers, suppliers, communities and other stakeholders. Modern corporate governance frameworks emphasise the duty of directors to act in good faith and in the best interests of the company while balancing the expectations of multiple stakeholder groups. Such complexity inevitably generates differing viewpoints. A strategy that benefits shareholders in the short term may carry risks for employees or long-term sustainability. Debate in the boardroom allows those competing considerations to be surfaced and evaluated properly.

    The role of the Chair is particularly important in managing this process. Encouraging disagreement does not mean allowing conflict to become personal or destructive. Effective chairs create an environment where directors feel able to express opposing views while maintaining respect and trust among board members. Governance research distinguishes between “task conflict,” which focuses on differing ideas and strategies, and “relationship conflict,” which becomes personal and damaging. The challenge is to foster the former while preventing the latter.

    In practice, this often means structuring discussions carefully and ensuring that every voice in the room is heard. Some directors are naturally more vocal than others, and the Chair must ensure that quieter members are invited into the debate. Diverse boards—whether in terms of professional background, gender, nationality or sector experience—tend to generate richer discussions precisely because they bring different mental models to the table. Diversity, therefore, is not only a social or ethical consideration but also a governance advantage.

    Yet disagreement is only the first step. Ultimately, a board must reach decisions. One of the defining features of effective governance is the ability of directors to debate vigorously and then unite behind a collective conclusion. Once a board decision is made, it becomes the responsibility of all directors to support that outcome publicly, even if individual members initially held different views. This principle of collective responsibility ensures that management receives clear direction and that the organisation benefits from decisive leadership.

    This pattern—robust debate followed by unified commitment—is one I have observed repeatedly across boards in different sectors. The discussions may be intense, the perspectives strongly held, and the analysis detailed. But when the process is conducted professionally and respectfully, the final outcome is almost always stronger than any single viewpoint brought into the room at the beginning.

    In an era of increasing complexity—technological disruption, regulatory change, sustainability pressures and geopolitical uncertainty—the importance of strong board governance has never been greater. Boards must guide organisations through uncertain terrain while safeguarding long-term value and stakeholder trust. To do this effectively, they must resist the temptation of easy consensus.

    The most effective boards are those where disagreement is not feared but welcomed. When directors challenge each other and the executive team with intellectual rigour, the board fulfils its true purpose: ensuring that decisions are examined from multiple perspectives and that the organisation moves forward with clarity and confidence. In that sense, disagreement in the boardroom is not a weakness. It is one of governance’s greatest strengths.

  • AI, Creativity, and the Next Rights Settlement: Why We Must Build the Future Without Hollowing Out the Artists

    I’ve spent much of my professional life watching industries change when a new “general‑purpose” technology arrives. Telecoms did it with digitisation and the smartphone. Media did it with streaming. Now the creative industries are doing it with generative AI — tools that can draft, compose, visualise, summarise, mimic and remix at a scale that would have sounded implausible a few years ago. 

    When I speak with artists, producers, commissioners, publishers, and the engineers building these systems, I hear two truths at once. First: AI is expanding what creative people can do. Second: the current economics and governance of AI risk extracting value from the creative ecosystem faster than it can replenish itself. The optimistic story and the cautionary story are both real. The question is whether we can hold on to the upside while fixing the terms of trade. 

    A vivid example captures the moment. When will.i.am and Mercedes‑Benz set out to re‑imagine the electric driving experience, they built a system where music can be separated into components — drums, melody, vocals, synth — and then recomposed in real time using live signals from the vehicle: acceleration, braking, steering and suspension travel. The result isn’t a playlist; it’s an adaptive soundtrack shaped by the way you drive. Projects like MBUX Sound Drive are a clue: AI’s most interesting creative applications are rarely about replacing people. They’re about new formats that weren’t previously possible. 
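    The idea behind an adaptive soundtrack of this kind can be sketched in a few lines: separate a track into stems, then continuously map live driving signals onto per-stem volume levels. The sketch below is purely illustrative — the real MBUX Sound Drive implementation is proprietary, and every name and mapping rule here (`Telemetry`, `stem_gains`, the specific weightings) is a hypothetical assumption, not a description of the actual system.

    ```python
    # Illustrative sketch only: mapping live vehicle telemetry to per-stem gains.
    # Names and weightings are invented for this example, not the real design.
    from dataclasses import dataclass


    @dataclass
    class Telemetry:
        accel: float        # 0.0-1.0, normalised acceleration
        brake: float        # 0.0-1.0, normalised braking force
        steering: float     # -1.0-1.0, steering angle (left negative)
        suspension: float   # 0.0-1.0, normalised suspension travel


    def clamp(x: float) -> float:
        """Keep a gain within the valid 0.0-1.0 range."""
        return max(0.0, min(1.0, x))


    def stem_gains(t: Telemetry) -> dict[str, float]:
        """Recompose the track by scaling each stem from driving signals."""
        return {
            "drums":  clamp(0.3 + 0.7 * t.accel),          # harder acceleration, bigger beat
            "melody": clamp(1.0 - 0.6 * t.brake),          # braking pulls the melody back
            "vocals": clamp(0.8 - 0.4 * abs(t.steering)),  # cornering ducks the vocal
            "synth":  clamp(0.2 + 0.8 * t.suspension),     # rougher road, denser texture
        }


    # Example frame: brisk acceleration on a moderately rough road.
    gains = stem_gains(Telemetry(accel=0.9, brake=0.0, steering=0.1, suspension=0.5))
    ```

    In a real system the mapping would be far more musical — tempo-synchronised, smoothed over time, and tuned by composers — but the core loop is the same: telemetry in, a recomposed mix out, many times per second.
    
    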

    That kind of work depends on people comfortable living in two worlds at once: code and culture. One of the most compelling thinkers I’ve read at this intersection is Manon Dave, who leads the Future World Design team within BBC Research & Development — a remit focused on what “public service creativity” becomes in an age of AI, immersive media and creator economies. 

    Spending time listening to and reading people like Dave shifts how you think about AI. It’s not a single tool; it’s a new layer of capability. Used well, it compresses the distance between idea and execution. It lowers the cost of iteration. It expands the palette. It gives you a collaborator that never runs out of patience — a sounding board you can ask for ten variations, then a hundred more in a different style. For early adopters, that matters most at the exact points where creative work often stalls: writer’s block, a sonic idea you can’t quite capture, a concept that needs “one more angle” to land. 

    This is where the public debate sometimes misses the point. Too much of it is framed as “will AI replace creators?” In most real creative workflows, replacement is not the right model. Collaboration is. Contemporary pop is commonly written by teams; major productions involve dozens of specialist roles. Creative work is already multi‑author. AI becomes another participant — but one whose contribution must be governed and accounted for if we want the ecosystem to remain fair. 

    Historical analogies help us stay calm, but they don’t let us be complacent. When the synthesizer arrived, it provoked predictable anxiety. When Auto‑Tune became mainstream, it was treated as scandalous by some and indispensable by others. In time, both technologies became part of the standard toolkit, and the world didn’t end. What audiences ultimately rewarded was taste, originality and emotional truth. 

    Generative AI differs from prior creative technologies in one crucial respect: how it learns. A synthesizer doesn’t need millions of recordings to be ingested. Auto‑Tune doesn’t require training on the back catalogue of human voices. Generative models, by contrast, are built by training on large datasets — and those datasets often contain copyrighted works. That’s why rights, consent and attribution aren’t side issues. They are the central issues. 

    If AI becomes a system that can ingest the world’s creative output, learn from it, and then compete with it — while creators have no practical way to see what was taken, no practical way to license it, and no practical way to be paid — the long‑term result is a slow hollowing out. We get more content, cheaper content, faster content — and fewer sustainable careers to create the next generation of high‑quality work. 

    We can already see the same tension in journalism, where publishers argue that large‑scale scraping and reuse by AI systems is undermining the economics of original reporting. When major UK news organisations coordinate publicly to push for standards around consent, attribution and licensing, that is a signal that the basic value exchange is breaking down. 

    At the same time, we have to engage honestly with the arguments on the other side. AI developers — and some policymakers — claim broad access to data is necessary for innovation; that training is “transformative” rather than substitutive; and that heavy disclosure requirements could slow progress or expose commercial secrets. In the United States, at least one significant court ruling has leaned toward the view that training on copyrighted books can be fair use in certain circumstances, even while condemning the storage of pirated copies — a reminder that the legal landscape is contested and evolving. 

    So what do we do? I think we need to treat “AI and creativity” as three problems with three kinds of remedies. 

    The first is the fun one: keep building genuinely new formats — work that is additive rather than extractive. Sound Drive is interesting because it’s about interaction, not imitation. The same is true of experiments that make audio more immersive, make education more adaptive, or make accessibility features more powerful. In a BBC context, the most interesting question isn’t “can a model write a script?” It’s “what does public service storytelling look like when information can be contextual, conversational and responsive — and when audiences can participate rather than merely receive?” A modern re‑imagining of Ceefax for the age of conversational systems isn’t about replacing journalists. It’s about adding a layer of context that helps audiences make sense of what they’re already watching, without destroying the shared experience of watching together. 

    The second is the “boring plumbing”: attribution, provenance and authenticity. If we can’t say where media came from, how it was edited, and what tools were used, trust collapses — and with it, the ability to pay creators for verified work. That’s why open provenance standards such as C2PA matter. They are not a silver bullet, but they are the kind of infrastructure that makes a healthier ecosystem possible in a world of cheap synthetic media. 

    The third is the hard one: an enforceable rights settlement for training data and downstream use, built on four basics — meaningful consent, workable transparency, scalable remuneration, and accountability across the value chain. 

    If those principles feel demanding, consider the alternative. Without them, we will drift into a market where a small number of platform companies capture most of the value, while creative labour is treated as an unpriced input. That outcome is not inevitable — but it will happen by default if we don’t actively design against it. 

    I’m also wary of the lazy claim that AI will “level the playing field” automatically. It can, but only under certain conditions. AI gives superpowers to people who already have taste, craft and domain knowledge. A strong writer uses it to explore structure and argument faster. A skilled producer uses it to audition sonic ideas and refine arrangement choices. A great designer uses it to test composition and iterate. But when the foundation isn’t there, you often get a glossy imitation: technically passable, emotionally empty, instantly forgettable. In a market flooded with that kind of output, genuine skill becomes more valuable — but only if the economics of skill remain viable. 

    I’m cautiously optimistic about the next decade. Entertainment will become more adaptive. Interfaces will become more personalised. Media will become more conversational. The best experiences will be those that treat AI as a co‑pilot, not an author — a system that helps humans do more human things, not less. 

    But optimism is not a plan. A plan requires institutions — broadcasters, publishers, labels, collecting societies, regulators, standards bodies, and responsible AI developers — to align on foundations: workable licensing models, provenance standards embedded into tools and platforms, and transparency requirements that don’t collapse under lobbying. Above all, we need to make it easy for a creator — not just a major corporation — to set the terms under which their work can be used. 

    The best future is one where creators can experiment with AI freely, where new forms flourish, and where rights are respected not as an afterthought but as a design constraint. If we get that right, AI will not be the end of creativity. It will be the beginning of a new creative era — one that rewards imagination and craftsmanship while ensuring the people who make culture can still make a living from it.