
The move follows widespread condemnation of X when the platform’s account for the Grok AI tool was used to mass-produce 3 million nudified images of girls and women in just 11 days. The standalone Grok app was also used to generate sexualised deepfakes.
Musk initially said critics were looking for “any excuse for censorship”. Within days, however, X said it would “geoblock” the ability of users “to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X” in countries where it was illegal.
It is not known how the move has affected advertising on the site.
The ICO said the creation and circulation of such content raises serious concerns under UK data protection law and presents a risk of significant potential harm to the public.
These concerns relate to whether personal data has been processed lawfully, fairly and transparently, and whether appropriate safeguards were built into Grok’s design and deployment to prevent the generation of harmful manipulated images using personal data.
The ICO added that where those safeguards fail, individuals lose control of their personal data in ways that expose them to serious harm. Examining these risks is central to its role in protecting people’s rights and holding organisations to account as they design and deploy AI technology.
ICO executive director of regulatory risk and innovation William Malcolm commented: “The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualised images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this. Losing control of personal data in this way can cause immediate and significant harm. This is particularly the case where children are involved.
“Our role is to address the data protection concerns at the centre of this, while recognising that other organisations also have important responsibilities. We are working closely with Ofcom and international regulators to ensure our roles are aligned and that people’s safety and privacy are protected. We will continue to work in partnership as part of our coordinated efforts to create trust in UK digital services.
“Our investigation will assess whether XIUC and X.AI have complied with data protection law in the development and deployment of the Grok services, including the safeguards in place to protect people’s data rights. Where we find obligations have not been met, we will take action to protect the public.”
X and xAI announced measures to counter the abuses, but regulatory and legal investigations continue.
Just yesterday (February 3) the French offices of X were raided by the Paris prosecutor’s cyber-crime unit, as part of its investigation into alleged unlawful data extraction and complicity in the possession of child pornography.
Meanwhile, Ofcom has set out the next steps in its investigation into X, and the limitations of the UK’s Online Safety Act in relation to AI chatbots.
Ofcom maintains it was one of the first regulators in the world to act on the reports, and it launched a formal investigation on January 12 into whether the company had done enough to assess and mitigate the risk of this imagery spreading on its social media platform, and to take it down quickly where it was identified.
Since then, X has said it has implemented measures to try to address the issue, and Ofcom has been in close contact with the ICO.
Its investigation remains ongoing. However, it is not investigating xAI, as not all chatbot activities are covered by the Online Safety Act, the legislation that covers sites such as X. If a chatbot interacts with one individual and no other users, for example, it is not within the scope of the Act.
Porn sites, however, are covered by the Act, leaving Ofcom with a potential route to widen its investigation into whether xAI complied with rules requiring the age-gating of pornographic content.
Meanwhile, a cross-party group of MPs led by Labour’s Anneliese Dodds has written to the technology secretary Liz Kendall, urging the Government to introduce AI legislation to prevent a repeat of the Grok scandal. The proposed legislation would require AI developers to thoroughly assess the risks posed by their products before they are released.
Dodds commented: “The scandal would not have happened in the first place if proper testing and risk assessment had been undertaken. This episode shows existing safeguards are not sufficient.”
A Department for Science, Innovation & Technology spokesperson said: “We have strengthened the Online Safety Act so services have to take proactive action to tackle this content. And we will ban the supply of tools designed to create non-consensual intimate images – targeting the problem at its source.”