The fight for AI fairness and inclusivity has only just begun

Artificial intelligence can easily identify minority groups using proxies. Take my experience as an example: I’m a former rugby player and ex-Navy member who enjoys health and fitness. My Instagram profile is basic, featuring just one picture of a cartoon character. When I signed up a year ago, I started following bodybuilders, American pit bulls, Staffordshire bull terriers, and Joan Collins. Based on this activity, Instagram’s AI algorithm deduced I ‘licked the wrong side of the stamp’ and began sending me ads for gay personal products. What clued them in? Was it my heartthrob Nick Walker or my interest in Joan Collins?

This scenario exemplifies precise targeting rather than explicit bias, but it raises an important question: how can we prevent bias in AI-driven data selection?

AI’s data mining and selection processes have the potential to amplify biases related to age, race, gender, and other factors. Simply omitting variables like gender, age, and race from the dataset does not eliminate bias. Sophisticated AI models can still identify the most suitable subjects for a marketing campaign through indirect indicators, perpetuating existing biases.
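To make the proxy problem concrete, here is a minimal sketch using entirely synthetic data and a hypothetical proxy feature. The group labels, the "postcode score", and the targeting threshold are all invented for illustration; the point is only that a rule which never sees the protected attribute can still select the two groups at very different rates, because the proxy correlates with group membership.

```python
# Illustrative sketch (synthetic data, hypothetical features): even when a
# protected attribute is dropped from the dataset, a targeting rule can
# still treat groups differently through a correlated proxy variable.
import random

random.seed(0)

# Synthetic customers: 'group' is the protected attribute (never shown to
# the targeting rule); 'postcode_score' is a proxy that correlates with it,
# mimicking real-world signals such as neighbourhood or followed accounts.
customers = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    postcode_score = random.gauss(0.6 if group == "A" else 0.4, 0.1)
    customers.append({"group": group, "postcode_score": postcode_score})

# A naive targeting rule that sees only the proxy, not the group.
def targeted(c):
    return c["postcode_score"] > 0.5

def selection_rate(group):
    members = [c for c in customers if c["group"] == group]
    return sum(targeted(c) for c in members) / len(members)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
print(f"Group A targeted: {rate_a:.0%}")
print(f"Group B targeted: {rate_b:.0%}")
```

Even though the rule never references `group`, group A ends up targeted far more often than group B, which is exactly the indirect-indicator effect described above.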

As AI becomes more advanced and pervasive, concerns about potential dangers and bias are growing. Princeton computer science professor Olga Russakovsky recently highlighted in an interview with the New York Times that AI bias extends beyond gender and race. AI systems are developed by humans who inherently possess biases. She said: “AI researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities.”

The limited experiences of AI developers may explain why speech-recognition AI often fails to understand certain dialects and accents, or why companies sometimes overlook the consequences of a chatbot impersonating notorious figures. Developers and businesses must exercise greater caution to avoid recreating powerful biases and prejudices that can put minority populations at risk.

Addressing bias in data selection is complex. The primary goal of AI and machine learning in marketing is to identify and target the most profitable customers, which can inherently produce biased outcomes. Insisting on a representative sample of the entire population, by contrast, would amount to targeting at random, diminishing the campaign’s effectiveness.

Finding a solution to this issue is challenging, and it may not be entirely clear how to tackle bias in AI data selection. Nonetheless, it is essential to continue striving for greater fairness and inclusivity in AI systems. Kudos to Instagram for their clever targeting, but the broader implications of AI bias demand our attention and action.

Bill Portlock is founder and CEO of Metrix Data Science