A project launched by researchers at Columbia and Lehigh Universities has produced a testing tool intended to expose mistakes and biases in artificial intelligence deep learning networks.

The tool, known as DeepXplore, performs what is called “white-box testing”: rather than treating the network as an opaque box, it examines the system’s internal logic in order to expose errors in AI “thinking.” The software exposes flaws in a neural network’s algorithm by deliberately causing it to make mistakes. The researchers throw a series of problems at the system that AI experts call “corner cases”: situations the algorithm can’t quite handle. A frequently cited corner case for self-driving cars involves incremental increases or decreases in lighting or other environmental conditions, which can bring an AI vehicle to a stop, unable to decide which direction to go.
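As a rough illustration of this kind of corner-case probing (a simplified sketch, not DeepXplore’s actual algorithm), the Python snippet below varies an image’s brightness in small steps and flags the points where a classifier’s decision flips. The `probe_lighting` and `find_flips` functions and the dummy model are hypothetical stand-ins for whatever network is under test.

```python
# Illustrative sketch: probe for lighting-related corner cases by perturbing
# image brightness and watching for changes in the model's prediction.
# This is a simplified stand-in for the idea described above, not DeepXplore.
from typing import Callable, List, Tuple
import numpy as np

Label = str
Classifier = Callable[[np.ndarray], Label]  # any image-in, label-out model


def probe_lighting(image: np.ndarray, model: Classifier,
                   steps: int = 20) -> List[Tuple[float, Label]]:
    """Scale brightness from 50% darker to 50% brighter and record predictions."""
    results = []
    for factor in np.linspace(0.5, 1.5, steps):
        perturbed = np.clip(image * factor, 0.0, 1.0)  # keep pixel values in [0, 1]
        results.append((float(factor), model(perturbed)))
    return results


def find_flips(results: List[Tuple[float, Label]]) -> List[Tuple[float, float]]:
    """Brightness intervals where the prediction changed -- candidate corner cases."""
    return [(a, b) for (a, la), (b, lb) in zip(results, results[1:]) if la != lb]


if __name__ == "__main__":
    # Dummy "model" standing in for a real network: thresholds mean brightness.
    dummy_model = lambda img: "go" if img.mean() > 0.5 else "stop"
    image = np.full((8, 8), 0.45)        # synthetic mid-brightness test image
    scan = probe_lighting(image, dummy_model)
    print(find_flips(scan))              # brightness range where the decision flips
```

In this toy setup, the reported intervals mark the small lighting changes across which the model’s output switches, which is the kind of borderline behavior a tester would then inspect by hand.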

Kexin Pei, one of the developers at Columbia University, says, “We plan to keep improving DeepXplore to open the black box and make machine learning systems more reliable and transparent. As more decision-making is turned over to machines, we need to make sure we can test their logic so that outcomes are accurate and fair.”

Much of the concern about AI has centered on the fact that AI decisions are made in a “black box” – that is, the algorithms that drive the decision-making are not transparent to human users. This has led to mysterious outcomes, including what appears to be inherent bias in AI decision-making systems. How the bias is introduced is not easy to track down, because AI systems arrive at decisions by performing millions of tests in very short amounts of time, well beyond the human capacity to process or understand. An AI system might discard one solution in seconds and move on to evaluating the next with millions of tests of its own, staying perpetually ahead of any human attempt to evaluate and check those solutions.

“It’s very difficult to find out why [a neural net] made a particular decision,” according to robot ethicist Alan Winfield at the University of the West of England in Bristol, England.

Biases in AI systems are thought to reflect either the biases of the programmers or biases present in the data being processed. Any bias in these decisions has the potential to affect human lives to a greater or lesser extent. The benefits of AI, such as speech recognition, smart homes, and personal assistants, are motivating innovators to make large, fundamental changes in how many industries operate, but experts have been cautioning against AI that depends on machine learning – that is, AI that can learn, make decisions, and in effect alter its own behavior as a result, in accordance with its decision-making algorithms.

Because so many calculations and decision trees are involved in a machine decision, only the AI system itself can determine whether or not a solution is optimal. The AI Now Institute recently warned that government agencies should avoid using black box AI systems until such systems can be made less opaque. Their concern is not only that the systems themselves are not transparent, but also that government regulators, not being technical experts, have little understanding of what they would be regulating. Being able to debug and error-check AI would lessen the risks of integrating it into critical fields such as healthcare, transportation, security, and defense.
