The Information Commissioner’s Office has waded into the row over the regulation of artificial intelligence, insisting it will come down hard on firms which abuse personal data in developing AI systems, despite the Government’s pledge to pursue a light touch approach.
Last week, UK ministers set out a new whitepaper on AI rules, with the aim of driving responsible innovation and maintaining public trust in the technology. But they dismissed the need for a new central regulator for the technology, instead preferring to split responsibility among existing bodies.
The whitepaper (‘A pro-innovation approach to AI regulation’) noted that the UK’s AI industry is already well developed, employing more than 50,000 people and contributing £3.7bn to the economy last year.
The move coincided with the publication of an open letter – signed by more than 1,100 tech industry executives – demanding that the development of advanced AI systems be paused.
And it seems the ICO is determined to set down a marker.
The regulator’s executive director of regulatory risk, Stephen Almond, has warned that organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach. “This isn’t optional,” he stressed. “If you’re processing personal data, it’s the law.”
Almond added that data protection law still applies when the personal information that is being processed comes from publicly accessible sources.
He then laid out eight questions that those developing or using generative AI that processes personal data need to ask themselves.
1. What is your lawful basis for processing personal data? If you are processing personal data, you must identify an appropriate lawful basis, such as consent or legitimate interests.
2. Are you a controller, joint controller or a processor? If you are developing generative AI using personal data, you have obligations as the data controller. If you are using or adapting models developed by others, you may be a controller, joint controller or a processor.
3. Have you prepared a Data Protection Impact Assessment (DPIA)? You must assess and mitigate any data protection risks via the DPIA process before you start processing personal data. Your DPIA should be kept up to date as the processing and its impacts evolve.
4. How will you ensure transparency? You must make information about the processing publicly accessible unless an exemption applies. If it does not take disproportionate effort, you must communicate this information directly to the individuals the data relates to.
5. How will you mitigate security risks? In addition to personal data leakage risks, you should consider and mitigate the risks of model inversion, membership inference, data poisoning and other forms of adversarial attack.
6. How will you limit unnecessary processing? You must collect only the data that is adequate to fulfil your stated purpose. The data should be relevant and limited to what is necessary.
7. How will you comply with individual rights requests? You must be able to respond to people’s requests for access, rectification, erasure or other information rights.
8. Will you use generative AI to make solely automated decisions? If so – and these have legal or similarly significant effects (e.g. major healthcare diagnoses) – individuals have further rights under Article 22 of the UK GDPR.
The ICO has flagged up its own guidance and regulatory sandbox. It said it is also piloting a Multi-Agency Advice Service – run with its partners in the Digital Regulation Cooperation Forum (the ICO, the Competition & Markets Authority, Ofcom and the Financial Conduct Authority) – for digital innovators who need joined-up advice from multiple regulators.
However, Almond stressed: “As the data protection regulator, we will be asking these questions of organisations that are developing or using generative AI. We will act where organisations are not following the law and considering the impact on individuals.
“There really can be no excuse for getting the privacy implications of generative AI wrong. We’ll be working hard to make sure that organisations get it right.”