What are the risks of emotional AI?

What: AI-based emotion analysis is prone to bias because emotions are inherently subjective, according to Harvard Business Review.

Why: Businesses use emotional AI to capture customers' feelings and reactions in real time, analyzing facial expressions, voice patterns and body language with complex AI algorithms. The aim is to better understand customers and their needs. However, studies, including one that analyzed photos of NBA players, have found signs of bias in emotion-recognition software related to race, culture and gender.

So what: Such bias could have serious ramifications. If an AI system is not refined enough to account for different races and cultures, it cannot draw accurate conclusions. A smile, for example, may signal a request for help in Japan but simple politeness in Germany; a culturally biased emotional AI system used in retail might therefore offer the wrong service to a Japanese tourist in Berlin.

Bias in emotion recognition can lead AI systems to reproduce stereotypes.

What now: Companies need to be aware of the potential for bias in any emotional AI system they deploy. A Nielsen study that tested the accuracy of consumer-neuroscience technology is telling: accuracy reached 77% when facial coding, biometrics and EEG data were combined, compared with 9%, 27% and 62% respectively when each indicator was used alone.
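Those numbers hint at why vendors fuse signals: each modality is noisy on its own, and averaging several independent estimates cancels out some of that noise. The sketch below is a minimal illustration of late fusion, where each modality produces its own probability estimate over emotions and the estimates are averaged; the emotion labels, the per-modality scores and the fuse helper are hypothetical and are not drawn from the Nielsen study.

```python
# Minimal late-fusion sketch (illustrative only): the labels and scores below
# are made-up assumptions, not Nielsen's data or methodology.
import numpy as np

EMOTIONS = ["happy", "neutral", "frustrated"]

# Hypothetical per-modality probability estimates for one customer interaction.
facial_coding = np.array([0.40, 0.35, 0.25])   # weakest signal on its own
biometrics    = np.array([0.30, 0.30, 0.40])
eeg           = np.array([0.20, 0.25, 0.55])   # strongest single signal

def fuse(*modality_probs, weights=None):
    """Weighted average of per-modality probability vectors (late fusion)."""
    probs = np.stack(modality_probs)
    weights = np.ones(len(probs)) if weights is None else np.asarray(weights, float)
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()  # renormalize so the result is a distribution

fused = fuse(facial_coding, biometrics, eeg)
print("fused distribution:", dict(zip(EMOTIONS, fused.round(3))))
print("predicted emotion :", EMOTIONS[int(np.argmax(fused))])
```

In practice the weights would be tuned on validation data, and a biased modality can still drag the fused result down, so fusion is a complement to bias testing, not a substitute for it.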

Another way to counter AI bias is to ensure that the teams building the technology are diverse, and that the data used to train the algorithms is representative and free of bias.
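One concrete way to act on that advice is to audit the system per demographic group: check that every group is represented in the data and that accuracy does not diverge between groups. The sketch below is a minimal, hypothetical example of such a check; the group labels, toy predictions and the 10-point disparity threshold are assumptions for illustration, not an industry standard.

```python
# Minimal bias-audit sketch (illustrative only): groups, labels, predictions
# and the disparity threshold are hypothetical.
from collections import Counter

def per_group_accuracy(y_true, y_pred, groups):
    """Classification accuracy broken down by demographic group."""
    stats = {}
    for group in set(groups):
        pairs = [(t, p) for t, p, g in zip(y_true, y_pred, groups) if g == group]
        stats[group] = sum(t == p for t, p in pairs) / len(pairs)
    return stats

# Toy data: true emotion labels, model predictions and a demographic attribute.
y_true = ["happy", "neutral", "happy", "angry", "happy", "neutral", "angry", "happy"]
y_pred = ["happy", "neutral", "neutral", "angry", "happy", "angry", "angry", "neutral"]
groups = ["A", "A", "B", "B", "A", "B", "A", "B"]

print("group counts     :", Counter(groups))        # is every group represented?
acc = per_group_accuracy(y_true, y_pred, groups)
print("accuracy by group:", acc)

if max(acc.values()) - min(acc.values()) > 0.10:     # hypothetical threshold
    print("warning: accuracy gap across groups exceeds 10 percentage points")
```

A gap flagged by a check like this would be a prompt to collect more representative training data or to re-examine how labels were assigned for the underperforming group.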