Why diversity is the secret ingredient in effective AI

Artificial intelligence and machine learning are often touted as the way to automate a wide range of tasks, from predicting what someone’s going to buy to approving bank loans. But they aren’t immune to the sorts of biases (liminal or subliminal) that can make an organisation less efficient.
This is just one of the topics that we will be exploring at a series of roundtables at CLX, the new immersive and interactive programme with the world’s best content creators from Cannes Lions and Medialink.
In fact, AI can amplify prejudices, despite the best intentions. Take, for example, fintech. By relying on AI, many fintech companies have aimed to automate the process of underwriting loans, often without any in-person interaction between a lender and borrower. At least in theory, AI should be less prone to making prejudicial lending decisions based on factors like race than a human would be, making these loans less discriminatory and more fair.
That’s not how it actually plays out. A study last year from UC Berkeley concluded that while so-called algorithmic lending made Latinx and African-American borrowers more likely to be accepted for loans, they paid interest rates up to 9 points higher than comparable borrowers. To quote Senator Elizabeth Warren, who has launched an inquiry into the matter: “The algorithms used by fintech lenders are as discriminatory as loan officers.”
This isn’t by design, of course. The people who programmed these algorithms meant well. But these examples show the ways in which AI can amplify biases we don’t even necessarily know we have.
This is where the lack of diversity among coders can become a serious liability for a company looking to embrace AI. It has been documented that the technology sector is way behind in diversity: women hold only 25% of computing jobs. Black and Latinx employees account for less than 10% of all Silicon Valley employees as of 2016, and while Asians are better represented, they are much less likely than their white peers to attain leadership roles.
With mostly white men deciding how the gears turn, it shouldn’t be much of a surprise when AI spits out decisions that skew towards people similar to its coders. Which is why true diversity is a superpower for any company looking to embrace AI.
To teach a machine to be diverse, it needs diverse teachers. There need to be as many different people at the table as possible, using their own unique viewpoints and experience to talk through and challenge one another’s assumptions. As with a creative project, working with a set of diverse people is the only way to get the calculus right.
And do not mistake what true diversity means here. Yes, racial, sexual, and gender diversity are all extremely important. But it’s also important for companies entering the AI space to factor in different professional viewpoints when they are deciding how to automate decision making. The same goes for ethical diversity.
Consider the famous AI trolley problem: if an automated car has to choose between swerving right and hitting a baby, swerving left and hitting an elderly woman, or plowing straight ahead and killing its passengers, what’s the “correct” decision for the AI to make? A lawyer in your company might well have a different answer to that question than a coder.
AI isn’t a single tool; it’s a broad collection of algorithmic methods and APIs used to automate decision making. But all of those methods are ultimately based upon a human somewhere, plugging logic into a machine. Humans need to tell a system how to make their decisions: what factors are important, which are incidental, and how to form a calculus from them.
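To make that point concrete, here is a minimal, purely hypothetical sketch of the kind of human-authored "calculus" described above. The factors, weights, and threshold are invented for illustration; no real lender's model is being described. Notice that every discriminatory outcome the system could produce traces back to choices a person made.

```python
# Hypothetical illustration: a human chooses the factors, the weights,
# and the cutoff. The "AI" simply executes those choices at scale.

def loan_score(applicant: dict) -> float:
    # A person decided these are the factors that matter...
    weights = {
        "income": 0.5,
        "credit_history_years": 0.3,
        "zip_code_risk": -0.2,  # proxy variables like this can smuggle in bias
    }
    # ...and how to combine them into a single number.
    return sum(weights[k] * applicant.get(k, 0.0) for k in weights)

def approve(applicant: dict, threshold: float = 50.0) -> bool:
    # A person also decided where to draw the approval line.
    return loan_score(applicant) >= threshold

applicant = {"income": 80.0, "credit_history_years": 10.0, "zip_code_risk": 30.0}
print(loan_score(applicant))  # prints 37.0
print(approve(applicant))     # prints False
```

A diverse team reviewing even a toy rule like this might ask questions a homogeneous one would not: is postcode a stand-in for race? Whose income distribution was the threshold tuned on?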
Most AI systems are a bit of a black box. We don’t really understand how the decisions inside these algorithms are made, or why. That’s why diversity is so important.
The companies that will ultimately gain most from the AI revolution will be the ones most willing to probe their biases from as broad a spectrum of human perspective as possible. It’s the only way to make sure that the black box of AI is transparent.

Marc Maleh is group vice-president of technology and emerging experiences at Huge