The UK Government has been accused of being out of step with the rest of the world on the regulation of artificial intelligence, amid calls for the development of advanced AI systems more powerful than OpenAI’s GPT-4 to be paused to halt what has been branded a “dangerous” arms race.
Yesterday UK ministers set out a new white paper on AI regulation, with the aim of driving responsible innovation and maintaining public trust in the technology. But they ruled out establishing a new central regulator for the technology, instead preferring to split responsibility among existing bodies.
The white paper (‘A pro-innovation approach to AI regulation’) notes that the UK’s AI industry is already well developed, employing more than 50,000 people and contributing £3.7bn to the economy last year.
The Government claims the “light touch” approach set out in the white paper “will help create the right environment for artificial intelligence to flourish safely in the UK”.
Existing regulators, including the Health & Safety Executive, Equality & Human Rights Commission and Competition & Markets Authority, will be tasked with building their own approaches to suit the way their respective sectors are using artificial intelligence.
Science, Innovation & Technology Secretary Michelle Donelan said: “The pace of AI development is staggering, so we need to have rules to make sure it is developed safely. Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”
But not everyone is convinced this is the correct approach.
Mishcon de Reya partner and technology lawyer Ashley Williams said the white paper could be neatly summarised in a single statement: no new legislation, no new regulator.
He added: “For the UK approach to really work, it is important to acknowledge that some regulators will be under-resourced and lack AI experience to really deliver. Others may be too heavy-handed in their approach without a clear steer on how they should implement the framework.
“Supporting regulators will be critical in making this approach workable and ensuring specific sector guidance is issued in a timely manner with real cooperation across the regulators. Regulators will be supported by a centralised function which will require substantive investment in terms of resource and expertise.”
Jacob Gatley, a solicitor at law firm BDB Pitmans, added: “While the Government seeks to foster a trailblazing AI sector, it is difficult to look past the fact that there is a fundamental regulatory lacuna.
“Specifically, how UK regulators will monitor AI development, how the existing statutory framework applies to data stored and utilised by AI programmes, and whether regulatory bodies such as the HSE and ICO require legislative and financial support so they are properly equipped to guide and police AI development.”
Gatley said there is also a concern that the UK could prove an anomaly rather than a market leader, as the US, China and the EU are already putting AI-specific laws in place.
The EU, for example, is preparing its AI Act to govern how the technology is used across the bloc, with companies that violate its rules facing fines of up to €30m or 6% of global annual turnover, whichever is greater.
The UK white paper coincides with the publication of an open letter, signed by more than 1,100 tech industry figures, demanding that the development of advanced AI systems be paused.
The letter, published by non-profit campaign group the Future of Life Institute, states: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.
“We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Co-signatories include top AI professors Stuart Russell and Yoshua Bengio, the co-founders of Apple, Pinterest and Skype, the founder of AI start-up Stability AI, Elon Musk and executives from Microsoft, Google, Amazon, Meta and Alphabet-owned DeepMind.
The letter follows a stampede of AI launches over the past five months, including Microsoft-backed OpenAI’s ChatGPT in November and GPT-4, a more sophisticated model released this month that now underpins the premium version of the chatbot.
The letter calls for the creation of shared safety protocols, audited by independent experts, to “ensure that systems adhering to them are safe beyond a reasonable doubt”, and warns that “AI systems with human-competitive intelligence can pose profound risks to society and humanity”.