ICO launches pre-emptive strike against emotional AI

The Information Commissioner’s Office has warned firms over the use of “emotional analysis” technologies, claiming it will come down hard on any organisation found not to have acted responsibly, especially by targeting vulnerable people.

The pre-emptive strike against the likes of gaze tracking, sentiment analysis, facial movements, gait analysis, heartbeats, facial expressions and skin moisture might appear premature to some.

Quite what has prompted the move is not known as even the ICO concedes debate is still raging about whether these technologies actually work.

Examples the ICO cites include potentially monitoring the physical health of workers through wearable screening tools, or using visual and behavioural methods, including body position, speech, and eye and head movements, to register students for exams, although real-life examples remain scarce.

The technology relies on collecting, storing and processing a range of personal data, including subconscious behavioural or emotional responses, and in some cases, special category data.

The regulator maintains this kind of data use is far riskier than traditional biometric technologies used to verify or identify a person, such as the fingerprint and voice recognition already widely adopted by the financial services industry.

The ICO claims that because algorithms are not yet sufficiently developed to detect emotional cues, there is a risk of systemic bias, inaccuracy and even discrimination.

In a blog post, deputy commissioner Stephen Bonner said: “Developments in the biometrics and emotion AI market are immature. They may not work yet, or indeed ever.

“While there are opportunities present, the risks are currently greater. At the ICO, we are concerned that incorrect analysis of data could result in assumptions and judgements about a person that are inaccurate and lead to discrimination.

“The only sustainable biometric deployments will be those that are fully functional, accountable and backed by science. As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area.

“The ICO will continue to scrutinise the market, identifying stakeholders who are seeking to create or deploy these technologies, and explaining the importance of enhanced data privacy and compliance, whilst encouraging trust and confidence in how these systems work.”

The regulator has confirmed it is currently developing guidance on the wider use of biometric technologies, including facial, fingerprint and voice recognition, which are already successfully used in industry.

The biometric guidance, which is due to be published in spring 2023, will aim to further help businesses, as well as highlight the importance of data security. Biometric data is unique to an individual and is difficult or impossible to change should it ever be lost, stolen or inappropriately used.

This guidance will also have people at its core, drawing on public dialogues held in partnership with the Ada Lovelace Institute and the British Youth Council. These will explore public perceptions of biometric technologies and gather opinions on how biometric data is used.

Even so, the ICO has also warned that it is difficult to apply data protection law when technology such as gaze tracking or fingerprint recognition “could be deployed by a camera at a distance to gather verifiable data on a person without physical contact with any system being required”. Gathering consent from every single passenger passing through a station would be all but impossible.

Related stories
Clearview AI gets £7.5m fine; is facial recognition dead?
Clearview AI faces £17m fine for abusing UK data laws
Facial recognition tech slapped down in privacy rulings
Clearview AI whacked with multiple GDPR complaints
CMA widens probe into use of ‘murky’ online algorithms
Consumers call for AI crackdown as ICO begins probe
Authorities split on how to regulate surging AI market