Consumers are becoming increasingly alarmed about the rise of artificial intelligence, and are demanding tougher regulation and greater accountability for the technology, which is now widely used by industries from marketing, financial services and healthcare to education, manufacturing and transport.
A new survey of 2,000 UK adults by Fountech.ai has revealed that over three-fifths (64%) of consumers want more regulation to make AI safer, and even more (69%) insist a human being should always be monitoring and checking decisions that are made by the technology.
Those aged over 55 appear more sceptical, with almost three quarters (73%) keen to see additional guidelines introduced to improve safety standards. This is in comparison to just over half (53%) of those aged between 18 and 34 who held this view.
When questioned about the chances of AI making a miscalculation, 45% of UK adults believe it is harder to forgive mistakes made by machines than it is to forgive human mistakes. This figure is similar across all demographics.
Elsewhere, Britons also want to see companies take more accountability. Some 72% of people believe companies that develop AI should be held responsible for any mistakes that the technology makes. At 81%, those aged over 55 were the most likely to hold this view, while at 60%, millennials were the least likely to agree.
Currently, there is no specific legislation covering AI, although the European Commission has pledged to act. Commission president Ursula von der Leyen vowed to introduce GDPR-style laws to regulate AI during her first 100 days in charge, but 202 days in, there is still no sign of them. In the UK, the technology is governed by the Data Protection Act 2018.
Fountech.ai founder Nikolas Kairinos said: “We are increasingly relying on AI solutions to power decision making, whether that is improving the speed and accuracy of medical diagnoses, or improving road safety through autonomous vehicles. As a non-living entity, people naturally expect AI to function faultlessly, and the results of this research speak for themselves: huge numbers of people want to see enhanced regulation and greater accountability from AI companies.
“It is reasonable for people to harbour concerns about systems that can operate entirely outside human control. AI, like any other modern technology, must be regulated to manage risks and ensure stringent safety standards. That said, the approach to regulation should be a delicate balancing act.
“AI must be allowed room to make mistakes and learn from them; it is the only way that this technology will reach new levels of perfection. While lawmakers may need to refine responsibility for AI’s actions as the technology advances, over-regulating AI risks impeding the potential for innovation with AI systems that promise to transform our lives for the better.”
The study comes as the Information Commissioner’s Office has revealed it is joining forces with the Office of the Australian Information Commissioner (OAIC) for a joint investigation into the personal information handling practices of Clearview AI Inc, focusing on the company’s use of “scraped” data and biometrics of individuals.
Clearview’s facial recognition app allows users to upload a photo of an individual and match it to photos of that person collected from the Internet. It then links to where the photos appeared. It is reported that the AI system includes a database of more than 3 billion images that Clearview claims to have taken from various social media platforms and other websites.
A report by Buzzfeed earlier this year claimed that a number of UK law enforcement agencies had registered with Clearview, including the Metropolitan Police and the National Crime Agency as well as other regional police forces. At the time, the Met denied it had used Clearview’s services.
The move has raised a few eyebrows among data protection experts, who question what powers the ICO and OAIC have over the US company.