Will AI admit when it doesn't know something?

“Should we trust algorithms?” is one of the questions Cambridge statistics professor Sir David Spiegelhalter asks in his paper in the Harvard Data Science Review.

He goes on to ask whether an algorithm knows "[…] when it is on shaky ground, and can it acknowledge uncertainty?", and whether "[…] people use it appropriately, with the right level of skepticism?"

The most important question, he argues, is whether a machine knows when it doesn't know, and whether it admits it. This is a test many humans fail as well. As an example, the author recalls being misled by "Mrs. Google", whose driving directions once tried to send his car down a flight of steps.
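To make the "knowing when it doesn't know" idea concrete, here is a minimal Python sketch. It is our illustration rather than anything proposed in the paper: a model that abstains, and says so, whenever its confidence falls below an arbitrary threshold (the 0.8 cut-off and the function names are assumptions made for the example).

    # Illustrative sketch only -- not from Spiegelhalter's paper. One simple way
    # a model can "admit it doesn't know": abstain whenever its top predicted
    # probability falls below a confidence threshold (0.8 is an arbitrary choice).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Prediction:
        label: Optional[str]   # None means the model declines to answer
        confidence: float

    def predict_with_abstention(probabilities: dict, threshold: float = 0.8) -> Prediction:
        """Return the most likely label, or abstain if confidence is too low."""
        label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
        if confidence < threshold:
            return Prediction(label=None, confidence=confidence)  # "I don't know"
        return Prediction(label=label, confidence=confidence)

    # Example: a navigation system only 55% sure the path ahead is a drivable road
    print(predict_with_abstention({"drivable road": 0.55, "flight of steps": 0.45}))
    # -> Prediction(label=None, confidence=0.55): flag uncertainty rather than guess

The design choice is the point: refusing to answer, and saying so, is treated as a legitimate output rather than an error.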

As the Guardian recently commented, Spiegelhalter speaks on matters of AI and machine learning more plainly and sensibly than the "geeks" who design the algorithms. His questions are especially pressing given the growing use of AI everywhere from healthcare and the pharmaceutical industry to the courts. Spiegelhalter proposes a safety evaluation framework for AI similar to the one used for new drugs.

His paper comes in the wake of a series of EU policy proposals aimed at reining in technology companies' vast control and giving guidance on the sober use of tools like AI, the Internet of Things, and robotics.

The proposals recognize AI's potential benefits and productivity gains, which could make the EU more competitive and improve the well-being of its citizens. They also go into detail about the issues the EU wants to address as it engages the member states' civil society, industry, and academia in drawing up concrete proposals for the bloc's approach to AI.