AI legislation was officially announced in Wednesday's King's Speech, as Keir Starmer's Labour government moves away from the wait-and-see approach of the previous Conservative administration.
During the general election campaign, Labour made a handful of promises to take legislative action to ensure AI safety. The King's confirmation of this, made during the State Opening of Parliament, is the first concrete step towards the UK establishing a new AI regulatory regime.
What do we know so far?
In the King’s Speech, Charles III said his government would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.
This isn't an explicit commitment to introducing an AI bill, as was previously expected. However, it does indicate that Labour is looking to get the ball rolling on a complex area that will have ramifications for the UK's AI startups.
Labour’s manifesto included a short reference to its AI plans. Ahead of the general election, the party said it intended to introduce “binding regulation on the handful of companies developing the most powerful AI models”.
The manifesto also said Labour would ban the “creation of sexually explicit deepfakes”.
Tech Secretary Peter Kyle has previously expanded on Labour's position on AI. In an interview with BBC News while shadow tech secretary, Kyle said a Labour government would impose a "statutory code" requiring firms developing AI to share safety test data with the government and its AI Safety Institute.
This would be a stricter approach than that of the previous government, which relied on a voluntary, non-binding agreement from tech firms on AI safety.
Under the Conservatives, the AI Safety Institute received information from some AI developers; however, there was no legal requirement for companies such as OpenAI and Microsoft to give it, or the rest of the government, access to their safety information.
In February, Kyle spoke at a policy event hosted by industry body techUK, where he said Labour would create a “regulatory innovation office” to encourage greater speed and adaptability to new technologies from regulators.
Chief Secretary to the Treasury Darren Jones previously told UKTN that existing regulators “don’t have the capacity” to oversee AI regulation and lack “formal coordination”.
Many questions remain about the details, notably whether the government will back open-source requirements on AI models, and the legislative timeline for the bill.
How does the EU AI Act fit in?
Details on the UK’s proposed AI legislation remain limited. However, officials will likely be looking closely at the EU’s AI Act, which was approved in March and provides binding rules for AI developers.
The EU AI Act is structured into four levels of risk: minimal, limited, high and unacceptable. AI use in the unacceptable category, including intentional misinformation, social scoring and web scraping of facial images, is banned outright.
The UK has existing legal frameworks for certain areas covered by the recent EU law, notably the use of facial recognition technology outside of law enforcement.
It is possible that the UK’s AI bill may borrow from elements of the EU AI Act, such as the requirement for developers to maintain detailed logs of safety testing shared with regulators.
What does the tech sector think?
Much of the tech industry, including those working in AI, is pleased that legislative progress is being made, though few expect it to be implemented quickly.
“It’s clear that we do not have answers up front and there is a level of risk. However, the fact that we are talking about it – that research into AI explainability continues – and that legislation is being drafted: these are all encouraging,” said Snowflake’s principal data strategist Jennifer Belissent.
However, as with the regulation of any advanced technology, there are fears that binding rules could stifle innovation. Luminance CEO Eleanor Lightbody argued that the "multifaceted" nature of AI means blanket regulations will not be effective.
“There is a breadth of AI technology and varying applications of large language models. A one-size-fits-all approach to AI regulation risks being rigid, and given the pace of AI development, quickly outdated,” Lightbody said.
Ekaterina Almasque, general partner at tech VC OpenOcean, pointed out that while the “previous government’s ‘light touch’ approach had its merits,” legislation in the UK is needed as other international bodies develop their own systems.
Almasque said if the UK aligns its legislation to some degree with the EU and US “it can promote interoperable reporting systems and offer a clear roadmap for AI companies operating within the UK”.
The post King’s Speech: UK AI legislation plans explained appeared first on UKTN.