Each month brings us new ways in which artificial intelligence has run spectacularly awry. The latest – and now most high-profile – example is the A-Level exam results fiasco in the UK.
According to reports, the algorithm took into account factors that appeared to unfairly penalise students from poorer backgrounds. In short, the algorithm was biased. Of course, the truth is a little more complex and, surprisingly, it provides a good lesson for data-driven marketers.
First, let’s get some terminology out of the way. When the media talk about AI, they really mean machine learning: a data science technique where algorithms analyse data to answer a query, learning as they go. A perfect example is your Netflix recommendations becoming better aligned with your personal preferences.
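To make the idea concrete, here is a toy sketch of "learning from data" in the recommendation setting. It is not how Netflix actually works – the ratings, the films and the simple similarity measure are all invented for illustration – but it shows an algorithm producing answers purely from the patterns in the data it is given.

```python
# Toy user-based recommendation: suggest unwatched films by borrowing
# scores from users with similar tastes. All data here is made up.
import numpy as np

# Rows = users, columns = films; 0 means "not yet watched".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def recommend(user_idx, ratings, top_n=1):
    """Rank the films a user has not seen, weighted by similar users' ratings."""
    norms = np.linalg.norm(ratings, axis=1)
    # Cosine similarity between the target user and everyone else.
    sims = ratings @ ratings[user_idx] / (norms * norms[user_idx] + 1e-9)
    sims[user_idx] = 0  # ignore self-similarity

    # Predicted scores: other users' ratings weighted by similarity.
    predicted = sims @ ratings / (sims.sum() + 1e-9)
    unwatched = ratings[user_idx] == 0
    candidates = np.where(unwatched, predicted, -np.inf)
    return np.argsort(candidates)[::-1][:top_n]

print(recommend(0, ratings))  # suggests the film user 0 has not yet seen
```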
One of the virtues of this approach is that once you create the algorithm and feed it data, it will go on its merry way, producing results with little further human intervention.
So where does this go wrong? If we accept that no one intentionally sets out to create an algorithm that discriminates against a group of people, the problem lies either in the data or in how the algorithm was designed and applied.
Data is nearly always biased. It is, after all, a reflection of society. If there are inequalities linked to different groups, they will be hidden in the numbers. Even if you strip out factors that are morally repugnant or uncomfortable to include, algorithms can infer them from other data points and reproduce them in the results.
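One way to see this is a simple "proxy check". The sketch below uses entirely synthetic data and hypothetical feature names (postcode band, school type): if an ordinary classifier can recover a sensitive attribute you deliberately excluded from the remaining columns, then any model trained on those columns can effectively use that attribute anyway.

```python
# Proxy check: can the excluded attribute be predicted from the features
# we kept? All data is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical sensitive attribute we intend to leave out of the model.
group = rng.integers(0, 2, n)

# Hypothetical "neutral" features that happen to correlate with the group.
postcode_band = group + rng.normal(0, 0.5, n)
school_type = group * 0.8 + rng.normal(0, 0.5, n)
X = np.column_stack([postcode_band, school_type])

X_train, X_test, g_train, g_test = train_test_split(X, group, random_state=0)

clf = LogisticRegression().fit(X_train, g_train)
print(f"Excluded attribute recoverable from remaining features: "
      f"{clf.score(X_test, g_test):.0%} accuracy")
```

A high score here is a warning sign: removing the column has not removed the bias.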
Consequently, you must take precautions: recognise the risk of bias and either strip out the affected results or collect additional data points to give the algorithm more to work with. It also means you should not simply defer to what your algorithm says – it is there to inform your decision-making, not to control it.
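In practice, "inform, not control" can be as simple as an audit that compares outcomes across groups and flags disparities for a person to investigate before anything is acted on. The grades and group labels below are invented purely to show the shape of such a check.

```python
# Minimal outcome audit: compare average predicted grades by group and
# surface any gap for human review. Values are illustrative only.
import numpy as np

predicted_grades = np.array([7, 5, 4, 8, 6, 4, 3, 7, 5, 4])
group = np.array(["A", "A", "B", "A", "B", "B", "B", "A", "A", "B"])

for g in np.unique(group):
    mean_grade = predicted_grades[group == g].mean()
    print(f"group {g}: mean predicted grade {mean_grade:.1f}")

# A large gap does not prove the model is wrong, but it is a prompt for a
# person to investigate before the results are used.
```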
From what we know of the Government’s approach, it attempted to standardise results nationally to reduce the chance of certain schools being overly generous with predicted grades. This appears to have left the algorithm unable to take individual circumstances into account, instead passing broad judgements that produced a great deal of unfairness. Bluntly, it seems the wrong algorithm was designed and used for the job.
Herein lies the cautionary tale for marketers. Algorithms are not created equal. Just because something is “data-driven” does not inherently mean it is accurate, or even useful. The real power of data science lies in assessing the problem and understanding how an answer could be reached. An algorithm is merely a tool to reach that answer.
Data science is at its most powerful when it provides highly personal and accurate results which look beyond generalisations. Diversity in your team is also critical: the greater the diversity of thought going into the way you design your algorithm, the less likely you are to be caught out by a source of bias you failed to spot.
The problem was not that the Government used an algorithm. The real issue is that they did not use the full potential of data science to make the results highly personal and, therefore, accurate. Whether this was down to a lack of data, time or understanding is up for debate.
For marketers, it is a reminder that every good data-driven marketing campaign starts with the breadth of data sources used, the strategy and the diversity of thought that goes into creating the algorithm.
Natalie Cramp is chief executive of data science company Profusion