Industry in peril as Schrems declares war on ChatGPT

The man who has almost single-handedly been responsible for bringing US tech giants to heel over their privacy practices has now set his sights firmly on ChatGPT, amid claims that the AI tool is in breach of GDPR because it provides false information about people that cannot be corrected.

Having already forced Meta to change its advertising practices – twice – as well as shooting down two transatlantic data transfer pacts, Max Schrems’ privacy organisation, NOYB, has become the first to file a GDPR complaint against the artificial intelligence tool.

Quite how this will pan out for the marketing industry is still to be determined, but with a new IAB Europe report claiming over 90% of marketers are already using or experimenting with GenAI, a legal challenge will send shockwaves through the sector.

NOYB maintains that GDPR requires information about individuals to be accurate and that people have full access to the information stored, as well as to its source. Yet ChatGPT parent company OpenAI openly admits that it is unable to correct inaccurate information, cannot say where the data comes from, and cannot say what data ChatGPT stores about individual people.

The organisation claims OpenAI is well aware of this problem but does not seem to care. Instead, OpenAI simply argues that “factual accuracy in large language models remains an area of active research”.

The organisation has now filed its GDPR complaint with the Austrian data protection regulator, the Datenschutzbehörde (DSB), calling on it to investigate OpenAI’s data processing and the measures taken to ensure the accuracy of personal data processed in the context of the company’s large language models.

It has also asked the DSB to order OpenAI to comply with the complainant’s access request and to bring its processing in line with the GDPR. Last but not least, NOYB has asked the authority to impose a fine to ensure future compliance. The case is likely to be dealt with via EU cooperation.

The crux of NOYB’s argument is that the tool only generates “responses to user requests by predicting the next most likely words that might appear in response to each prompt”. In other words, however extensive the company’s training data, there is currently no way to guarantee that ChatGPT is actually showing users factually correct information. Indeed, generative AI tools are known to regularly “hallucinate”, meaning they simply make up answers.
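To see the mechanism NOYB describes in miniature, consider the following purely illustrative Python sketch of greedy next-word prediction. The probability table and the generated sentence are invented for this article – real large language models use neural networks trained on billions of documents, and this is not OpenAI’s code – but the principle is the same: the model emits the most probable continuation, with nothing checking whether the result is true.

# Illustrative toy model only - not OpenAI's code. For each word, a
# made-up probability table of likely next words.
TOY_MODEL = {
    "<start>":     {"The": 0.9, "A": 0.1},
    "The":         {"complainant": 1.0},
    "complainant": {"was": 1.0},
    "was":         {"born": 1.0},
    "born":        {"on": 1.0},
    "on":          {"12": 0.6, "3": 0.4},  # plausible-sounding, unverified
    "12":          {"May": 1.0},
    "3":           {"June": 1.0},
    "May":         {"<end>": 1.0},
    "June":        {"<end>": 1.0},
}

def generate(model, max_words=10):
    """Greedily pick the most likely next word until <end>."""
    word, output = "<start>", []
    for _ in range(max_words):
        word = max(model[word], key=model[word].get)
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate(TOY_MODEL))  # -> "The complainant was born on 12 May"

Nothing in the generation loop consults a source or verifies a fact; the output is simply the statistically most likely string of words, which is exactly why such systems can state a false date of birth with complete confidence.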

NOYB argues that while inaccurate information may be tolerable when a student uses ChatGPT to help them with their homework, it is unacceptable when it comes to information about individuals.

Since 1995, EU law has required that personal data be accurate. Currently, this is enshrined in Article 5 GDPR. Individuals also have a right to rectification under Article 16 GDPR if data is inaccurate, and can request that false information be deleted. In addition, under the “right of access” in Article 15, companies must be able to show which data they hold on individuals and what the sources are.

NOYB data protection lawyer Maartje de Graaf said: “Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences. It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals.

“If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

To illustrate the issue, NOYB cites the complainant in its case against OpenAI. When asked for the complainant’s date of birth, ChatGPT repeatedly provided incorrect information instead of telling users that it did not have the necessary data.

Even though the date of birth ChatGPT provided is incorrect, OpenAI refused the complainant’s request to rectify or erase the data, arguing that correcting it was not possible.

OpenAI says it can filter or block data on certain prompts (such as the name of the complainant), but only by blocking all information about the complainant rather than just the inaccurate details. The company also failed to adequately respond to the complainant’s access request.

Although GDPR gives users the right to ask companies for a copy of all personal data that is processed about them, OpenAI failed to disclose any information about the data processed, its sources or recipients.

De Graaf added: “The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used, to at least have an idea about the sources of information. It seems that with each ‘innovation’, another group of companies thinks that its products don’t have to comply with the law.”

OpenAI has yet to comment on the case.

In April last year, the European Data Protection Board (EDPB) set up a task force on ChatGPT to coordinate national efforts, although so far no action has been taken.
