The Information Commissioner’s Office has reiterated its warning to organisations that they must not ignore data protection risks associated with launching AI-driven tools, despite concluding an investigation into Snapchat’s “My AI” chatbot by taking no action against the company.
The issue first emerged in October last year, when, following an investigation, the regulator slapped a preliminary enforcement notice on Snapchat owners Snap Inc and Snap Group UK over a potential failure to properly assess the privacy risks posed by the GenAI chatbot.
The notice set out the steps which the Commissioner “may” have required, subject to Snap’s representations on the preliminary notice. If the final enforcement notice had been adopted, Snap could have been forced to stop processing data in connection with ‘My AI’, meaning it would have been barred from offering the product to UK users until it had carried out an adequate risk assessment.
However, the ICO insists its investigation resulted in Snap taking “significant steps” to carry out a more thorough review of the risks posed by ‘My AI’ and demonstrate that it had implemented appropriate mitigations.
The regulator said it is now satisfied that Snap has undertaken a risk assessment relating to ‘My AI’ that is compliant with data protection law, although the ICO stressed it will continue to monitor the rollout of ‘My AI’ and how emerging risks are addressed.
ICO executive director of regulatory risk Stephen Almond said: “Our investigation into ‘My AI’ should act as a warning shot for industry.
“Organisations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.
“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”
The ICO said the final Commissioner’s decision in this case will be published in the coming weeks.
The warning comes as the regulator has confirmed it is “making enquiries with Microsoft” over a new feature that can take screenshots of a user’s laptop every few seconds.
Microsoft says Recall, which will store encrypted snapshots locally on your computer, is exclusive to its forthcoming Copilot+ PCs.
But the regulator says it is contacting Microsoft for more information on the safety of the product, although Microsoft maintains it is committed to privacy and security.
In a statement, the tech giant said: “Recall data is only stored locally and not accessed by Microsoft or anyone who does not have device access,” adding that any would-be hacker would need to gain physical access to the device, unlock it and sign in before they could access saved screenshots.
But an ICO spokesperson said firms must “rigorously assess and mitigate risks to people’s rights and freedoms” before bringing any new products to market, adding: “We are making enquiries with Microsoft to understand the safeguards in place to protect user privacy.”