These artificial intelligence systems designed to reassure

“Artificial intelligence (AI) works, but we don’t know why it works.” The phrase has come up often over the past ten years, with the rise of deep learning techniques, which are not the only ones in AI but remain the most emblematic. Hence a certain unease at seeing such technologies deployed. It is in this context that the Confiance.AI consortium was set up in January 2021: a four-year research program that is part of the national strategy for artificial intelligence launched by the French government in 2018. It brings together manufacturers (Airbus, Thalès, Renault, etc.) and academic research organizations (CEA, Inria, technological research institutes, etc.) around concrete use cases.

A series of projects was presented on October 5 and 6, 2022 at CentraleSupélec on the Paris-Saclay campus: a set of methods, applications, and software intended to add a layer of transparency, security, or explainability to an artificial intelligence function. “We have to get out of the situation where an AI recognizes a husky not for what it is but because there is snow in the image. Put the husky on a beach and the AI no longer knows what it is,” summarizes Bertrand Braunschweig, scientific coordinator of Confiance.AI, citing a famous example of poor image interpretation.
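Explainability tools of this kind typically probe which parts of an image actually drive a model’s decision. As a purely illustrative sketch (not one of the consortium’s tools), an occlusion test can show whether a classifier labelled the husky because of the dog or because of the snow; the `predict_husky` callable below is a hypothetical stand-in for any trained image classifier.

```python
# Minimal occlusion-sensitivity sketch: slide a grey patch over the image and
# record how much the "husky" score drops when each region is hidden. If masking
# the snowy background changes the score more than masking the dog itself, the
# model is relying on spurious context rather than on the animal.
# `predict_husky` is a hypothetical callable (image -> probability).

import numpy as np

def occlusion_map(image: np.ndarray, predict_husky, patch: int = 16) -> np.ndarray:
    """Return a heatmap of score drops when each patch-sized region is masked."""
    h, w = image.shape[:2]
    baseline = predict_husky(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.5  # neutral grey patch
            heatmap[i // patch, j // patch] = baseline - predict_husky(masked)
    return heatmap
```

Regions with the largest score drops are the ones the model actually depends on; a heatmap concentrated on the snow rather than the dog would confirm the failure mode Braunschweig describes.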

A reduced right to error

Not all artificial intelligence applications are concerned, only those where human lives, defense missions, or financial transactions, for example, are at stake. “It’s not about creating AI tools but tools for critical industrial systems, where the right to error must be reduced,” adds David Sadek, chairman of the consortium’s management committee and vice-president in charge of innovation and technology at Thalès.

One of the projects thus concerns the question of “adversarial attacks”, that is, disturbances injected into data, invisible or harmless to a human, but capable of leading an AI astray because it does not base its understanding on the same criteria as we do. This is the classic example of a few pixels added to an image to interfere with its correct interpretation by a computer vision algorithm. To counter this kind of maneuver, one tool scans the files, examines their brightness, and applies various filters to check the integrity of the data.
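To make this kind of integrity check concrete, here is a generic, hedged sketch (not the consortium’s actual tool): adversarial perturbations are often fragile under mild filtering, so comparing a model’s output on the raw input and on filtered copies can flag suspect data. The `predict_probs` callable and the threshold below are assumptions introduced for the example.

```python
# Sketch of an input-integrity check: re-run the classifier on lightly filtered
# copies of the input and flag the data if the prediction shifts too much.
# `predict_probs` is a hypothetical callable (image -> class-probability vector);
# the 0.3 threshold is illustrative only. For simplicity the image is treated
# as a single-channel (grayscale) array.

import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def looks_adversarial(image: np.ndarray, predict_probs, threshold: float = 0.3) -> bool:
    """Flag the input if mild denoising filters change the prediction too much."""
    reference = predict_probs(image)
    for filtered in (gaussian_filter(image, sigma=1.0),
                     median_filter(image, size=3)):
        shifted = predict_probs(filtered)
        # Large divergence between raw and filtered predictions is a red flag.
        if np.abs(reference - shifted).sum() > threshold:
            return True
    return False
```

A clean image usually keeps roughly the same prediction after light blurring, whereas a perturbation crafted pixel by pixel tends not to survive it, which is what such a check exploits.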
