
A New Method Aims To Create More Trust In AI

Trust in AI: AI systems can learn on their own, but so-called confounders, the source of the Clever Hans phenomenon, can be an obstacle. That means the AI learns to draw the right conclusions for the wrong reasons. A team of researchers at the Technical University of Darmstadt has worked out how to get around this.

The Clever Hans phenomenon is named after a horse that, at the beginning of the last century, could supposedly do arithmetic. In reality, it read the correct answer from the body language of the questioner. Applied to AI, this means that a system learns to draw the right conclusions, but for the wrong underlying reasons.

An example: You want to train an artificial intelligence so that the system recognizes a horse in photos. Several thousand pictures of horses serve as training data. At some point, the AI can identify the animal even in images it has never seen. How exactly the AI arrives at its result is unclear. It then turned out, for example, that the corner of the photos always contained a copyright notice with a link to a horse website. (The copyright notice is there to protect those who hold the rights to the commercial exploitation of the work.)

Trust In AI: Confounding Factors Limit The Applicability Of AI

With this help, the AI had a very easy time: the copyright notice was effectively the learning content. Scientists call such misleading cues confounders. They have nothing to do with the actual identification task. In the example, the system could no longer recognize horses in images without a copyright notice. The minimal sketch below illustrates this failure mode.
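
To make the failure mode concrete, here is a self-contained toy sketch (not the researchers' code; the dataset, feature layout, and "watermark" pixel are invented for illustration). A single feature perfectly predicts the training labels, so the classifier leans on it and degrades once the watermark is gone:

```python
# Toy demonstration of a confounder (all data synthetic and invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 1000, 64                       # samples, "pixels" per image

# Weak genuine signal: horse images (y = 1) are slightly brighter.
y = rng.integers(0, 2, n)
X_train = rng.normal(0.0, 1.0, (n, d)) + 0.2 * y[:, None]

# Confounder: pixel 0 acts as a copyright watermark on horse photos.
X_train[:, 0] = y

clf = LogisticRegression(max_iter=1000).fit(X_train, y)

# Fresh test images WITHOUT the watermark.
y_test = rng.integers(0, 2, n)
X_test = rng.normal(0.0, 1.0, (n, d)) + 0.2 * y_test[:, None]

print("train accuracy (watermark present):", clf.score(X_train, y))
print("test accuracy (watermark absent): ", round(clf.score(X_test, y_test), 2))
```

On the training set the watermark separates the classes perfectly, so accuracy is essentially 100 percent; on watermark-free test images, accuracy falls back toward whatever the weak genuine signal supports.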

AI Works Best In Cooperation With Experts

Together with his team, Kristian Kersting has developed a solution: interactive learning. The scientists involve domain specialists in the learning process. To make clear what the AI is doing, the system has to actively inform the experts about what it is learning and which content this is based on. At the same time, it must explain how it derives a prediction from this information. Researchers with the appropriate expertise can then check both of these.

The test can produce three results (sketched in code after the list):

  1. The prediction is wrong; from the expert's correction, the AI can learn new rules.
  2. Predictions and explanations are correct. In this case, there is nothing further for the experts to do.
  3. The prediction is accurate, but the explanation is incorrect.
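
As a schematic summary (the function and its return strings are hypothetical, purely to codify the three cases):

```python
# Hypothetical sketch of the expert-review step; names are invented.
def expert_review(prediction_correct: bool, explanation_correct: bool) -> str:
    if not prediction_correct:
        # Case 1: the correction itself is new training signal.
        return "correct the label; the AI learns new rules"
    if explanation_correct:
        # Case 2: nothing further for the expert to do.
        return "accept prediction and explanation"
    # Case 3: right answer, wrong reason -- the Clever Hans case.
    return "flag the explanation; feed counterexamples"
```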

But how can the scientists make clear to the artificial intelligence that its explanation is wrong and that a new approach is needed?

Tests Of AI Systems With Sample Data

There is already an answer to this question in the form of a strategy: so-called Explanatory Interactive Learning (XIL). The principle behind it: an AI expert feeds the system with sample data from which it becomes clear that the distinguishing characteristics assumed so far play no role. In other words, the AI is deliberately shown its confounders.

In the example from the beginning, the system would indicate that the copyright notice is important for its analysis. The experts would then have to supply it with images in which other image content happens to sit exactly at the location of the copyright notice. Through this additional training, the AI would rate the copyright information as less and less relevant.
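
Continuing the toy setup from the first sketch (again invented data, not the authors' implementation), the counterexample strategy can be mimicked by adding training images whose watermark pixel carries unrelated content, so the watermark stops being predictive:

```python
# Counterexample augmentation in the toy confounder setup (synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, d = 1000, 64
y = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, d)) + 0.2 * y[:, None]

X_confounded = X.copy()
X_confounded[:, 0] = y                       # original, watermarked data

X_counter = X.copy()
X_counter[:, 0] = rng.normal(0.0, 1.0, n)    # watermark spot overwritten

X_aug = np.vstack([X_confounded, X_counter])
y_aug = np.concatenate([y, y])
clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

# Clean, watermark-free test set.
y_test = rng.integers(0, 2, n)
X_test = rng.normal(0.0, 1.0, (n, d)) + 0.2 * y_test[:, None]
print("weight on watermark pixel:", round(float(clf.coef_[0, 0]), 3))
print("test accuracy:", round(clf.score(X_test, y_test), 2))
```

Because the watermark pixel now carries no consistent information, its learned weight shrinks toward zero and the classifier is forced onto the genuine signal.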

A New Method Aims To Create More Trust In AI

This is exactly the method the researchers used in their series of tests. The subject of the series was Cercospora leaf spot disease, which damages sugar beets and is already widespread around the world. As plant experts confirmed, the system at first learned to focus on areas of the hyperspectral data that are not decisive for identifying the damage, even though its predictions appeared reliable.

In the second step, the researchers applied the XIL method. The hit rate dropped slightly, but the system now drew the right conclusions for the right reasons. In some cases this means the AI learns more slowly, but according to the scientists it delivers more reliable predictions in the long term: “Interaction and comprehensibility are therefore of central importance for trust in machine learning processes.” The connection between interaction, explanation, and building trust has so far been largely ignored in research.
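
The article does not spell out the training objective. One published mechanism that this line of work builds on is the “right for the right reasons” penalty (Ross et al., 2017): alongside the usual classification loss, input gradients that fall on regions an expert marks as irrelevant are penalized. A minimal PyTorch-style sketch, where the function name and weighting are assumptions for illustration:

```python
# Illustrative sketch (not from the article) of a "right for the
# right reasons"-style penalty that explanatory interactive learning
# builds on: punish explanation mass on expert-marked regions.
import torch

def rrr_loss(model, x, y, irrelevant_mask, lam=10.0):
    """Cross-entropy plus a penalty on saliency in masked regions.

    irrelevant_mask: 1 where the expert says the input must NOT matter
    (e.g. the watermark corner), 0 elsewhere; same shape as x.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = torch.nn.functional.cross_entropy(logits, y)

    # Gradient of the summed log-probabilities w.r.t. the input acts
    # as a simple saliency map ("where the model looks").
    log_probs = torch.log_softmax(logits, dim=1).sum()
    grads, = torch.autograd.grad(log_probs, x, create_graph=True)

    penalty = ((irrelevant_mask * grads) ** 2).sum()
    return ce + lam * penalty
```

Trading the penalty weight `lam` against raw accuracy mirrors the observation above: the hit rate may dip slightly while the model's reasons become the right ones.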
