
While there is no new legislation on the horizon – with the Government and the House of Lords currently locking horns over the Data (Use & Access) Bill – the regulator has launched a new AI and biometrics strategy, which aims to ensure organisations are developing and deploying new technologies lawfully, supporting them to innovate and grow while protecting the public.
While critics will no doubt point out that this is all too little, too late, the move comes as new research reveals that the public expect to understand exactly when and how AI-powered systems affect them, and they are concerned about the consequences when these technologies go wrong – for example, if facial recognition technology (FRT) is used inaccurately, or a flawed automated decision impacts their job application.
Over half (54%) of people surveyed shared concerns that the use of FRT by police would infringe on their right to privacy.
Information Commissioner John Edwards said: “Our personal information powers the economy, bringing new opportunities for organisations to innovate with AI and biometric technologies.
“But to confidently engage with AI-powered products and services, people need to trust their personal information is in safe hands. It is our job as the regulator to scrutinise emerging technologies – agentic AI, for example – so we can make sure effective protections are in place, and personal information is used in ways that both drive innovation and earn people’s trust.”
The ICO is focusing on uses of AI and biometrics that are prevalent today and could benefit people’s everyday lives, but which cause the most concern – and carry the greatest potential for harm – if misused.
There are a number of measures the ICO is planning, which it insists will provide organisations with certainty and the public with reassurance.
First up, the ICO will review the use of automated decision making (ADM) systems by the recruitment industry and work with early adopters in central government such as the Department for Work & Pensions.
The regulator will also conduct audits and produce guidance on the lawful, fair and proportionate use of facial recognition technology (FRT) by police forces.
In addition, it will set clear expectations for protecting people’s personal information when it is used to train generative AI foundation models.
Next up, the ICO will develop a statutory code of practice for organisations developing or deploying AI responsibly, to support innovation while safeguarding privacy, and, finally, scrutinise emerging AI risks and trends, such as the rise of agentic AI, with systems becoming increasingly capable of acting autonomously.
Edwards added: “The same data protection principles apply now as they always have – trust matters and it can only be built by organisations using people’s personal information responsibly.
“Public trust is not threatened by new technologies themselves, but by reckless applications of these technologies outside of the necessary guardrails. We are here, as we were 40 years ago, to make compliance easier and ensure those guardrails are in place.”

