Govt warned AI ‘light touch’ will cause weighty issues

The Government must have an urgent rethink on its proposed “light touch” regulation of artificial intelligence in the UK, because consumers have severely limited legal protections to seek redress when AI goes wrong and makes a discriminatory decision.

So says a new report by think tank the Ada Lovelace Institute – named after the daughter of poet Lord Byron who was a trailblazer for women in maths and science – which has made 18 recommendations it wants included in the Data Protection & Digital Information Bill (No 2), currently going through Parliament.

The report suggests a range of solutions and protections for the UK to implement, including establishing an AI ombudsman to regulate disputes, similar to the financial and energy sectors; enabling civil society groups like unions and charities to be a part of regulatory processes; expanding the definition of “AI safety”; and ensuring that existing GDPR and intellectual property laws are enforced.

Ada Lovelace Institute UK public policy lead Matt Davies told The Standard: “If you’re a business and you make an important decision about an individual’s access to products or services like mortgages or loans using AI, or you’re an employer and you terminate someone’s employment because AI makes a decision about their productivity — at the moment, that’s prohibited by law; there has to be human oversight.

“Instead, there will be an expectation that safeguards are in place. That’s changing in the draft legislation: instead of the burden of proof being on the organisation to show it didn’t do this, the burden of proof is now on the individual.”

Among other things, the researchers warn that international agreements are unlikely to be effective in making AI safer and preventing harm unless they are underpinned by “robust domestic regulatory frameworks” able to shape corporate incentives and, in particular, AI developer behaviour.

The report also highlights the need to avoid speculative claims about AI systems. Rather than panicking about “existential risks” such as the idea that AI could kill mankind in just two years, it argues, harms can be addressed by working more closely with AI developers as they devise new products.

A Department for Science, Innovation & Technology spokesman said: “As set out in our AI White Paper, our approach to regulation is proportionate and adaptable, allowing us to manage the risks posed by AI whilst harnessing the enormous benefits the technology brings.

“The Data Protection & Digital Information Bill preserves protections around automated decision-making. The existing safeguards will continue to apply to all relevant use of data, and ensure individuals are provided with information about automated decisions, can challenge them, and have such decisions corrected, where appropriate.”
