These artificial intelligences designed to reassure

“Artificial intelligence (AI): it works, but we don’t know why it works.” The refrain has come up regularly over the past ten years, with the advent of deep learning techniques, which are not the only ones in AI but remain the most emblematic. Hence a certain uneasiness at seeing such technologies deployed. It is in this context that the Confiance.AI consortium was set up in January 2021, a four-year research program that is part of the national strategy for artificial intelligence launched by the French government in 2018. It brings together manufacturers (Airbus, Thalès, Renault, etc.) and academic research organizations (CEA, Inria, technological research institutes, etc.) around use cases.
A series of projects was presented on October 5 and 6, 2022 at CentraleSupélec on the Paris-Saclay campus: a set of methods, applications and software intended to add a layer of transparency, security or explainability to an artificial intelligence function. “We have to get out of the situation where an AI recognizes a husky not for what it is but because there is snow in the image. Put the husky on a beach and the AI no longer knows what it is,” summarizes Bertrand Braunschweig, scientific coordinator of Confiance.AI, citing a famous example of poor image interpretation.
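To make the husky example concrete, here is a minimal occlusion-sensitivity sketch of how one can check what a classifier actually relies on: mask patches of the image and measure how much the predicted score drops. The model, image and class labels below are random placeholders for illustration, not any tool from the Confiance.AI program.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a real image model (class 0 = husky, class 1 = wolf).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
model.eval()

image = torch.rand(1, 3, 64, 64)                      # placeholder photo
husky_score = model(image).softmax(dim=1)[0, 0].item()

patch = 16
for y in range(0, 64, patch):
    for x in range(0, 64, patch):
        occluded = image.clone()
        occluded[:, :, y:y + patch, x:x + patch] = 0.0    # grey out one region
        drop = husky_score - model(occluded).softmax(dim=1)[0, 0].item()
        # A large drop over the background (e.g. the snow) rather than over the
        # animal signals that the model bases its decision on the wrong criteria.
        print(f"patch ({y:2d},{x:2d}): score drop {drop:+.3f}")
```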
Reduced right to error
Not all artificial intelligence applications are concerned, only those where, for example, human lives, defense missions or financial transactions are at stake. “It’s not about creating AI tools but tools for critical industrial systems, where the right to error must be reduced,” adds David Sadek, chairman of the consortium’s management committee and vice-president in charge of innovation and technology at Thalès.
One of the projects thus concerns the question of “adversarial attacks,” that is to say, disturbances injected into a piece of data, invisible or harmless to a human, but capable of making an AI slip because it does not base its understanding on the same criteria as we do. This is the classic example of a few pixels added to an image to interfere with its correct interpretation by a computer vision algorithm. To counter this kind of maneuver, a tool scans the files and their brightness and applies various filters to check the integrity of the data.
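The “few pixels” attack mentioned above can be sketched with the Fast Gradient Sign Method (FGSM), one standard way of building such perturbations: each pixel is nudged by a tiny amount along the loss gradient, imperceptibly for a human but potentially enough to change the model’s answer. The model and epsilon value here are illustrative placeholders, not the consortium’s actual tooling.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a real computer-vision model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm_perturbation(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return a copy of `image` shifted by at most `epsilon` per pixel along the loss gradient."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # The change is bounded by `epsilon`: invisible to a human, yet it can mislead the model.
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

image = torch.rand(1, 3, 32, 32)   # stand-in for a photo to be classified
label = torch.tensor([3])          # the class the model currently assigns
adversarial = fgsm_perturbation(image, label)

print("max pixel change:", (adversarial - image).abs().max().item())
print("prediction before:", model(image).argmax(dim=1).item(),
      "after:", model(adversarial).argmax(dim=1).item())
```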
Developed for the Renault group, the Companion web application inspects weld image files to assess their quality and, above all, lists the criteria that led to its verdict. “It looks like a supervision panel, displaying various measures, and these correspond to trust properties that will have been studied and specified in the tool,” explains Cyprien de la Chapelle, from the SystemX Institute for Technological Research.
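As a rough sketch of what such a “supervision panel” output could look like, the verdict can be returned together with the trust properties that were measured, rather than as a bare pass/fail. The criteria names and thresholds below are invented for illustration; they are not Companion’s actual specification.

```python
from dataclasses import dataclass

@dataclass
class TrustProperty:
    name: str        # e.g. image sharpness, model confidence, distance to training data
    value: float     # measured score in [0, 1]
    threshold: float # minimum acceptable value specified in the tool

    @property
    def satisfied(self) -> bool:
        return self.value >= self.threshold

@dataclass
class WeldInspectionReport:
    weld_ok: bool
    properties: list

    def explain(self) -> str:
        """Render the verdict together with each trust criterion, like a supervision panel."""
        lines = [f"verdict: {'OK' if self.weld_ok else 'DEFECT'}"]
        for p in self.properties:
            status = "pass" if p.satisfied else "FAIL"
            lines.append(f"  {p.name}: {p.value:.2f} (threshold {p.threshold:.2f}) {status}")
        return "\n".join(lines)

report = WeldInspectionReport(
    weld_ok=True,
    properties=[
        TrustProperty("image sharpness", 0.91, 0.80),
        TrustProperty("model confidence", 0.88, 0.75),
        TrustProperty("distance to training data", 0.72, 0.60),
    ],
)
print(report.explain())
```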
Explaining anomaly detection
In another field, Naval Group has technologies for detecting anomalies in submarine pumps, based on the analysis of vibration signals. But beyond detection, the AI model has learned, from random anomaly sequences, the vibration patterns corresponding to very specific problems, so as to be able to explain them.
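The idea can be sketched as follows: instead of flagging “anomaly / no anomaly,” the model is trained on vibration spectra labelled with specific problems, so each detection comes with a named explanation. The fault names, synthetic signals and classifier choice below are illustrative assumptions, not Naval Group’s system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FAULTS = ["healthy", "bearing wear", "misalignment", "cavitation"]
rng = np.random.default_rng(0)

def spectrum(fault_idx: int, n_samples: int = 1024) -> np.ndarray:
    """Synthesize a vibration signal whose dominant frequency depends on the fault, return its FFT magnitude."""
    t = np.linspace(0.0, 1.0, n_samples)
    signal = np.sin(2 * np.pi * (50 + 40 * fault_idx) * t) + 0.3 * rng.standard_normal(n_samples)
    return np.abs(np.fft.rfft(signal))

# Build a small labelled training set: 30 spectra per fault type.
X = np.array([spectrum(i) for i in range(len(FAULTS)) for _ in range(30)])
y = np.array([i for i in range(len(FAULTS)) for _ in range(30)])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# At inference time the output is a named fault, not just "anomalous".
new_measurement = spectrum(2)
print("diagnosis:", FAULTS[clf.predict([new_measurement])[0]])
```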
At Air Liquide, the concern is quite different. A computer vision system counts gas canisters moving in and out of warehouses, with an overhead camera framing the objects on truck beds. But to simplify its system and avoid changing the processors embedded in the cameras, the manufacturer wanted the computation to be done locally, on the device, and not in the cloud, which is generally infeasible with neural networks. “So we created a software suite that takes a neural network as input and compresses it by a factor of 10, 20, 40 or 100, to minimize its energy consumption and maximize its deployability,” explains Xavier Fischer, co-founder of Datakalab, one of the start-ups involved in the Confiance.AI program. The approach maintains the performance of the neural network.
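One common compression technique, shown here only to illustrate the principle of shrinking a network for on-device inference, is post-training dynamic quantization, which stores weights as 8-bit integers instead of 32-bit floats (roughly a 4x reduction). This is not Datakalab’s software suite, whose factors of 10 to 100 presumably combine several techniques; the model below is a placeholder.

```python
import os
import torch
import torch.nn as nn

# Stand-in for a vision model counting gas canisters on truck beds.
model = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Quantize the linear layers' weights to int8 after training.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_on_disk(m: nn.Module) -> int:
    """Serialize the model and return its size in bytes."""
    torch.save(m.state_dict(), "_tmp.pt")
    size = os.path.getsize("_tmp.pt")
    os.remove("_tmp.pt")
    return size

print("float32 model:", size_on_disk(model), "bytes")
print("int8 model:   ", size_on_disk(quantized), "bytes")
```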
More broadly, the approach has the advantage that nothing circulates on the Internet and nothing is hosted remotely, which solves confidentiality and security problems. “We can eventually imagine putting cameras in people’s homes to detect that elderly people are falling: there will be no images on the Internet, just alerts that will be triggered,” notes Xavier Fischer. Behind this approach looms the thinly veiled ambition of no longer relying on the major cloud players such as Google or Amazon. In some situations, trust in artificial intelligence starts there.