LinkedIn has become the latest tech giant to cave in to pressure over its artificial intelligence training programme following an intervention from the UK’s Information Commissioner’s Office, although there is a growing backlash against data regulators amid claims they risk stifling the market.
LinkedIn, which is owned by Microsoft, joins Meta, X and Google in either suspending or scrapping their programmes on the back of claims they have been using user data without consent.
Privacy organisation NOYB, backed by Austrian lawyer Max Schrems, has also lodged complaints about ChatGPT pioneer OpenAI, claiming it is in breach of GDPR.
The crux of NOYB’s argument is that, while the company has extensive training data, there is currently no way to guarantee that ChatGPT is actually showing users factually correct information. On the contrary, generative AI tools are known to regularly “hallucinate”, meaning they simply make up answers.
In response to LinkedIn’s move, which brings it in line with the EU, ICO executive director of regulatory risk Stephen Almond said: “We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users. We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO.
“In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset.
“We will continue to monitor major developers of generative AI, including Microsoft and LinkedIn, to review the safeguards they have put in place and ensure the information rights of UK users are protected.”
However, Meta is now mounting a fightback against what it claims is “inconsistent regulatory decision making” and has persuaded more than 50 companies – including Ericsson, SAP and Spotify – to join the battle.
In an open letter coordinated by Meta, and published as an advert in the Financial Times, the companies argue that Europe’s excessive bureaucracy and characteristic lack of urgency in regulating AI mean companies risk falling behind other regions in seizing the opportunities presented by the technology.
The missive states: “Europe has become less competitive and less innovative compared to other regions and it now risks falling further behind in the AI era due to inconsistent regulatory decision making.”
Highlighting a number of specific issues, the letter goes on: “The first are developments in ‘open’ models that are made available without charge for everyone to use, modify and build on, multiplying the benefits and spreading social and economic opportunity.
“Open models strengthen sovereignty and control by allowing organisations to download and fine-tune the models wherever they want, removing the need to send their data elsewhere.
“The second are the latest ‘multimodal’ models, which operate fluidly across text, images and speech and will enable the next leap forward in AI. The difference between text-only models and multimodal is like the difference between having only one sense and having all five of them.
“Frontier-level open models – based on text or multimodal – can turbocharge productivity, drive scientific research and add hundreds of billions of euros to the European economy.”
On a less technical level, the letter laments the uncertainty over what data can be used to train AI models, created by interventions from the EU’s numerous data protection authorities, led by the Irish DPC, which oversees many of the tech giants. That uncertainty, it claims, will in turn mean those models lack Europe-specific training data.
The letter concludes: “We need harmonised, consistent, quick and clear decisions under EU data regulations that enable European data to be used in AI training for the benefit of Europeans. Decisive action is needed to help unlock the creativity, ingenuity and entrepreneurialism that will ensure Europe’s prosperity, growth and technical leadership.”
Neither the European Commission nor the European Data Protection Board – the ultimate data privacy overlord made up of all EU regulators – has yet commented on the claims.
Related stories
Google faces GDPR probe over PaLM2 AI programme
Musk bows to pressure to ditch ‘illegal’ AI training plan
Schrems guns for Musk over X ‘illegal’ AI training data
Mass GDPR complaints force Meta to pause AI data grab
Meta’s mega AI data grab sparks mass GDPR complaint
Germans back fight against ChatGPT data inaccuracies
Industry in peril as Schrems declares war on ChatGPT