What are the unknowns around explainable AI?

What: The desire to get more than just raw output from machine learning algorithms has created a new focus on what experts are calling “explainable AI”, according to Singularity Hub.

Algorithms are starting to surpass human capability in critical functions like medical diagnostics and image recognition. The rapid pace, however, is raising a number of important questions. Most notably, can we justify putting machines in charge of critical decisions if we don’t know exactly how they make those decisions? Machines need to be able to provide insight into the reasons behind their choices.

How: A new approach from Google AI explains how image classifiers make their decisions. In short, the method looks at images at several resolutions and searches for “sub-objects” that support a “main object.” For example, it decides an object is a cat based on recognizing related sub-objects such as whiskers and a tail.
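
To make the multi-resolution sub-object idea concrete, here is a minimal Python sketch of that kind of pipeline, not Google’s actual implementation: segment an image at several resolutions, treat each segment as a candidate sub-object, and rank the candidates by how strongly a classifier responds to them. The SLIC superpixel segmentation is from scikit-image; the `classifier` callable and the scoring scheme are assumptions made for illustration.

```python
import numpy as np
from skimage.segmentation import slic  # superpixel segmentation


def extract_sub_objects(image, resolutions=(15, 50, 80)):
    """Segment the image at several resolutions; each segment becomes a
    candidate 'sub-object' patch (illustrative helper, not Google's code)."""
    patches = []
    for n_segments in resolutions:
        labels = slic(image, n_segments=n_segments, compactness=20)
        for seg_id in np.unique(labels):
            mask = labels == seg_id
            # Keep only the pixels belonging to this segment.
            patches.append(image * mask[..., None])
    return patches


def rank_sub_objects(patches, classifier, target_class):
    """Rank candidate sub-objects by how strongly the classifier responds
    to each patch in isolation (assumed scoring scheme for illustration)."""
    scores = [classifier(patch)[target_class] for patch in patches]
    order = np.argsort(scores)[::-1]
    return [(patches[i], scores[i]) for i in order]
```

In this sketch, the highest-scoring patches play the role of the whiskers and tail in the cat example: the fragments the classifier leans on most when naming the main object.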

In most cases that Google tested, the machine intuitively learned to flag sub-objects the way humans would; for example, it picked out the law enforcement logo on police vans. In one case, however, the algorithm ranked jerseys as more important than the ball when recognizing a basketball player.

What now: One issue lies in relating concepts, as the basketball case shows. For this process to work, an AI would have to know that cats have whiskers and tails, or that basketball players are linked to basketballs. Creating something that recognizes all of these relationships the way a human does would be a labor-intensive process.
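
For illustration, the kind of explicit concept knowledge described above might be encoded as a simple lookup table, as in the hypothetical Python sketch below; the point is that every entry would have to be written and maintained by hand, which is exactly the labor-intensive part.

```python
# Hypothetical hand-curated concept relations (illustrative only);
# the entries mirror the examples mentioned in the article.
CONCEPT_RELATIONS = {
    "cat": ["whiskers", "tail"],
    "basketball player": ["basketball", "jersey"],
    "police van": ["law enforcement logo"],
    # ...one entry per class the model must explain, curated by hand.
}


def expected_sub_objects(main_object: str) -> list[str]:
    """Return the sub-objects a human would expect for a given main object."""
    return CONCEPT_RELATIONS.get(main_object, [])
```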

The challenge, then, is to find a happy medium where AI can be explained without restricting the vast amount of data it needs to consume.