Here’s an often-overlooked risk of blindly following AI

Businesses that become heavily reliant on AI risk blindly following biased and potentially destructive decisions, specialists have warned.

Research by IBM suggests there are as many as 180 classified human biases, including those related to race and gender, and they are now finding their way into unsupervised AI systems.

The root cause of bias in AI systems is data. In 2016, Microsoft was forced to shut down its Tay chatbot after it learned to be racist. While people assume the fault lay with the system itself, the data used to train the bot contained racist content.

Much of today’s AI uses supervised learning: it takes masses of data, learns from it, and provides the best response without human intervention. It is how Amazon recommends products we might like, and how Netflix knows what we want to stream. AI spots patterns that humans never would. Even Netflix found its AI to be racially biased.
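To make the “learns patterns from data” idea concrete, here is a minimal sketch of a co-occurrence recommender in the spirit of the product suggestions described above. The purchase baskets are entirely hypothetical; no human writes the recommendation rules by hand, the system simply counts which items appear together.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories (illustration only).
baskets = [
    {"laptop", "mouse"},
    {"laptop", "mouse"},
    {"laptop", "mouse", "keyboard"},
    {"laptop", "keyboard"},
    {"phone", "charger"},
]

# "Learn" from the data: count how often each pair of items co-occurs.
co = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co[(a, b)] += 1
        co[(b, a)] += 1

def recommend(item: str) -> str:
    """Suggest the item most often bought alongside `item`."""
    pairs = {b: n for (a, b), n in co.items() if a == item}
    return max(pairs, key=pairs.get)

print(recommend("laptop"))  # → mouse (bought with a laptop 3 times)
```

The same mechanism is also the weakness the rest of this article describes: whatever regularities the data contains, including unwanted ones, are exactly what the system learns.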

When the historical data is inaccurate, unclean, incomplete, or built on biased information, any AI trained on it is destined to fail.

To mitigate bias, businesses are pairing strong data-quality processes with solutions that keep humans in appropriate control. Instead of opaque algorithms, companies are creating “glass box” solutions with easy-to-understand features, so even non-technical people know how they work and what is driving the results.
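A “glass box” model in this sense might be as simple as a linear scorer whose features and weights are visible and auditable. The feature names and weights below are hypothetical, chosen only to illustrate the idea that every contribution to a decision can be inspected.

```python
# Hypothetical, human-readable weights: anyone can see what matters and by how much.
weights = {
    "years_experience": 0.5,
    "relevant_skills":  0.3,
    "interview_score":  0.2,
}

def score(candidate: dict) -> float:
    """Transparent score: a plain weighted sum, nothing hidden."""
    return sum(weights[f] * candidate.get(f, 0.0) for f in weights)

def explain(candidate: dict) -> dict:
    """Show exactly what drove the result, feature by feature."""
    return {f: weights[f] * candidate.get(f, 0.0) for f in weights}

candidate = {"years_experience": 4, "relevant_skills": 7, "interview_score": 8}
print(round(score(candidate), 2))  # ≈ 5.7
print(explain(candidate))          # per-feature breakdown a non-expert can read
```

The design trade-off is deliberate: a transparent model may be less powerful than an opaque one, but a non-technical reviewer can check each weight for hidden proxies of race, gender, or other protected attributes.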

For more information about combating bias in AI, check out this video from documentary filmmaker Robin Hauser.