The European Union’s delight in being the first to regulate AI anywhere in the world may well see a less-than-happy ending.
Much like the fable of the tortoise and the hare, the EU hare has rushed into detailed regulation on something it really doesn’t – and couldn’t – understand.
Although creating such a mammoth piece of regulation is quite an achievement, winning this race may not serve the EU well in the long run. The UK’s tortoise approach to AI regulation, which recognises it as a pro-innovation marathon, may well deliver the UK a ‘late-mover advantage’.
As with the proverbial hare, rushing to regulate the risk of AI before we have a clear understanding of AI and associated risk seems more than likely to end badly.
The UK is understandably proud of its global third place in AI businesses, but must face up to the fact that it is a distant third, a very long way behind the US and China. If the UK hopes even to catch a glimpse of its rivals’ heels, it needs to take a pro-innovation approach by making access to AI as open and democratised as possible.
Its circa £4bn of investment in AI is a drop in the ocean compared with the Big Tech companies. Meta alone will spend $40bn on AI this year, roughly ten times the UK government’s commitment, and the collective investment in AI by the Big Tech companies will be an eye-watering multiple of this.
Where the UK does lead is on open source in AI. A recent OpenUK report found that the fastest-growing open repositories for AI worldwide are UK-owned. Here there is no room for the UK to be a tortoise: it must act fast to leverage this advantage, avoid over-regulation and back innovation as much as possible – from stemming talent flight to capital spending and enabling grassroots skills.
Understanding risk
Even if small companies on the European continent are able to innovate and build AI, their likelihood of breaking into heavily regulated markets like the EU will be slim. Compliance is so complex that only a few firms have the skilled staff to achieve it, and the effect is regulatory capture.
Practically, providers must meet the regulations in their contractual relations with their customers by bearing the risk of regulatory non-compliance, and so must be able to stand behind the contractual indemnities customers will expect. Bearing that risk requires funding at a scale that is simply not available to new market entrants, and the insurance they would need is not available at any price.
An indemnity is only as good as its wording and meaning, and as the depth of the pockets – or the insurance backing the pockets – of the business that grants it. For Big Tech, the lack of insurance is no inhibitor to commercialising tech products: they simply self-insure. Microsoft, for example, is already offering a level of indemnity to customers of its GPT-4-based services.
Lessons from the past
Back in the late 1990s and early 2000s, the then-new distance selling and ecommerce regulation in Europe, the US and the UK was fit for purpose. In the case of the US’s Digital Millennium Copyright Act, or DMCA, its name time-stamped it. Whilst fit for purpose at the turn of the millennium, keeping those laws up to date as the online world evolved has proved a significant challenge. The waterfall approach of the era is reflected in the technology and contracts of the day, but two decades later, in a world of agile technology, that legislation could not flex to meet new needs.
As a consequence, we have seen a handful of tech companies, and the people behind them, positioned to control our digital present. The question for the government is whether it will learn from this. Will it be bold in shaping tech policy and regulation for our AI world to enable innovation and take the pro-innovation approach the UK is so keen on, or will it fail to learn the lesson of the last twenty years of technology evolution and simply repeat the mistakes of the past?
To avoid repeating these mistakes, clarity is required, along with policy and lawmakers willing to be decisive. Being decisive also includes knowing when not to act.
Baroness Stowell’s recent letter to Michelle Donelan, the tech secretary, revealed a lot about the push and pull on AI policy happening in Westminster. A pressing and often overlooked issue for AI is clarity on the use of publicly available copyrighted work. If LLMs are to be trained in the UK – or not – the government needs to make a decision on copyright and clarify whether publicly available data may be scraped.
Either way, the AI code of conduct, which has been delayed by a year, must now be delivered by DSIT – whoever is in government – as soon as practically possible. This is a necessary decision and requires the application of “political heft”, as the Baroness described it.
Where next
Sarah Munby, the Permanent Secretary at DSIT, recently gave a lecture in which she bravely opened with an honest statement about AI: “I don’t know”. The statement was both accurate and honest, as frankly none of us knows, or could know, where AI will evolve as a technology, or at what pace.
It is the duty of our government and lawmakers to protect citizens and, at the same time, to safeguard our digital innovation and economy. This necessitates balancing exponential risk against the real and present danger of regulatory capture – which happens when over-regulation creates a compliance landscape in which only large companies with big compliance teams, or plenty of money to buy in skills, can thrive.
Undoubtedly we do need UK regulators to deal with AI’s challenges, copyright among the most pressing. Yet a way forward will come through embracing Munby’s honest approach, building a flexible and open AI regime that grows competence across the economy, and treating AI like every other digital tool.
Let’s see if that takes the UK’s third-place tortoise over the line in the marathon that AI is going to become.
Amanda Brock is the CEO of OpenUK.
The post The UK has a late-mover advantage in AI regulation appeared first on UKTN.