In his King’s Speech last month, Charles III outlined Labour’s plans to “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. While not quite the firm commitment to AI legislation many expected, it marks a clear shift away from the previous government’s “wait-and-see” approach.
More recently, Peter Kyle, the tech secretary, told major technology companies that the AI bill – expected later this year – would focus on making existing voluntary agreements between companies and the government legally binding. The bill will target “ChatGPT-style models” and turn the AI Safety Institute into an arm’s length government body, the Financial Times reported.
It’s a laudable goal, to be sure. But as the government embarks on this legislative journey, we must ask ourselves a fundamental question: do we even know what we’re talking about when we say “AI”?
It may seem a daft query. After all, AI is everywhere these days, from the chatbots handling our customer service queries to the algorithms recommending our next binge-watch. Yet, as a recent MIT Technology Review article states, “AI has come to mean all things to all people, splitting the field into fandoms. It can feel as if different camps are talking past one another, not always in good faith.”
This lack of consensus is more than just academic navel-gazing. It has real-world implications for regulating this technology and, crucially, for how businesses – especially the UK’s vibrant startup ecosystem – can innovate within these boundaries.
Consider the EU’s AI Act, which was approved in March and came into force on 1 August. It defines AI systems as “machine-based systems designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment”. It then categorises AI applications into four risk levels: minimal, limited, high, and unacceptable.
It’s a start, but is it enough? Does it capture the full scope of what AI could become?
Futurist Gerd Leonhard offered a thought-provoking perspective on this question when I interviewed him a couple of days after the King’s Speech. He pointed out that as AI advances across cognitive tasks – and especially if it reaches artificial general intelligence (AGI), something he reckons is only a handful of years away – it would be like “a meteor coming down from above, closing in on our days of being the superior species with knowledge”.
This view aligns with historian Yuval Noah Harari’s argument that we should view AI not as artificial intelligence but as “alien intelligence” – evolving at breakneck speed in ways fundamentally different from human cognition. Building on these ideas, Leonhard raises a vital question: “Should we value intelligence over humanity?”
Leonhard’s concern echoes Stuart Russell’s definition of intelligence as “having the power to shape the world in your interest”. If intelligence is the power to shape the world in one’s own interest, why would we hand ever more of it to a machine that might shape the world in its interest rather than ours?
These varying viewpoints highlight the complexity of Labour’s task as the governing party drafts AI legislation. How do we create a regulatory framework flexible enough to accommodate rapid technological change yet robust enough to protect against potential risks?
The stakes for UK businesses, particularly our AI startups, couldn’t be higher. A definition that is too rigid could stifle innovation, putting our companies at a disadvantage globally. Conversely, an overly broad or vague definition might create regulatory uncertainty, deterring investment and growth.
Recent developments in the US underscore the urgency of this task. According to a Washington Post report in mid-July, Donald Trump’s camp is drafting plans for a “Manhattan Project on AI” to advance US interests in the technology. This includes creating industry-led agencies to study AI models and protect them from foreign powers, with a section ominously titled “Make America First in AI”.
Such moves highlight the geopolitical dimensions of AI development and the potential for an AI arms race. As Leonhard warns: “The wolf you feed is the wolf that wins.” We must ensure we’re feeding the right wolf – one that prioritises human values and ethical considerations alongside technological advancement.
As this legislative process moves forward, policymakers must engage in robust dialogue with AI researchers, ethicists, and business leaders. We need to develop a shared vocabulary and understanding of AI that can form the foundation of effective regulation.
Keir Starmer’s pledge is a step in the right direction, but the real work lies ahead. We must ensure that our AI legislation is built on a solid definitional foundation that can adapt as the technology evolves. Only then can we harness AI’s benefits while mitigating its risks.
For UK businesses, the message is clear: engage in this process. Your voices, your experiences, and your concerns need to be heard. The future of AI in Britain isn’t just about algorithms and data sets. It’s about people, innovation, and the kind of society we want to build.
As we navigate, and build, this new world, let’s make sure we’re all speaking the same language. The success of our AI sector – and perhaps our society – depends on it.