“Confiance.ai is industrializing the trust AI production chain,” says David Sadek, head of this R&D program.

Can you remind us of the objective of Confiance.ai?

Confiance.ai aims to industrialize the production chain of trusted artificial intelligence, i.e. AI that can be used in critical products and services. More specifically, it involves developing the tools, methods and processes needed to design, implement, verify, validate, qualify, certify and maintain AI technologies in operational conditions. We must equip this entire chain; doing so is essential to accelerating the integration of AI.

You emphasize the specificity of this R&D program. What’s so special about it?

Confiance.ai is, first of all, the only program in the national artificial intelligence strategy that is built by industry, for industry. Its roadmap is aligned with those of the 9 industrial partners who contribute to it. Four sectors are represented – automotive, aeronautics, energy and defense – which means the tools being developed are generic and application-agnostic. But they are tested on real use cases provided by the manufacturers. Renault, for example, wants to improve the detection of weld defects; for Thales, it is the recognition of objects of interest in aerial images…

The transfer of the technologies developed will be all the easier…

Yes, another specificity of the program is worth noting: in addition to 15 million euros in funding, the manufacturers make their own staff available. As a result, the transfer of Confiance.ai's results happens directly, through these personnel, on a continuous basis: we pick up the tools as they are developed, test them on our use cases and provide almost immediate feedback.

In addition, we draw on high-level scientific results produced by the 3IA institute Aniti, in Toulouse, which is a partner. Confiance.ai thus plays a role in maturing technologies, taking them from TRL 2-3 to TRL 6. For all these reasons, and given its scale – some 300 engineers and researchers are involved – Confiance.ai is truly a unique program in France, and even internationally.

The main tools already developed will be presented during these 3 days. Can you give some examples?

The program is divided into 7 major projects. Thales, for example, is leading the one on the overall methodology for developing an AI system from scratch. In support of this methodology, a tool called Companion.ai has been developed to guide the system developer through each step. Another example is a toolchain for qualifying AI training data. It relies in particular on the Pixano data-labeling tool that the CEA had already developed, as well as on SystemX's DebiAI tool, which helps counter biases in training data.
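To illustrate the kind of check such a data-qualification toolchain automates, here is a minimal, self-contained Python sketch – purely hypothetical code, not the Pixano or DebiAI API – that flags a class-imbalance bias in labelled training samples:

```python
# Illustrative sketch only: not the Pixano or DebiAI API, just the kind of
# training-data check a qualification toolchain automates – here, spotting
# that one acquisition context never contains any defect example.
from collections import Counter

# Hypothetical labelled samples: (label, acquisition context), e.g. weld images
samples = [
    ("defect", "line_A"), ("ok", "line_A"), ("ok", "line_A"),
    ("ok", "line_B"), ("ok", "line_B"), ("ok", "line_B"), ("ok", "line_B"),
]

def class_balance(samples, context=None):
    """Return label frequencies, optionally restricted to one acquisition context."""
    labels = [lab for lab, ctx in samples if context is None or ctx == context]
    counts = Counter(labels)
    total = sum(counts.values())
    return {lab: round(n / total, 2) for lab, n in counts.items()}

print(class_balance(samples))            # overall label distribution
print(class_balance(samples, "line_B"))  # 'defect' missing entirely: a bias flag
```

A real toolchain would of course run such audits systematically over large datasets and many metadata dimensions; the point here is only the principle of qualifying the data before training.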

We have also worked on validating the safety and robustness properties of algorithms from Aniti and IRT Saint-Exupéry that use machine learning to provide an anti-collision function for drones. To embed this function, it must be possible to demonstrate compliance with the level of safety required in aerospace – typically a probability of occurrence of a feared event of less than 10^-9 per flight hour.
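To give an order of magnitude of what that target implies, here is a back-of-the-envelope illustration – it assumes the function takes N independent decisions per flight hour, and is not the program's actual certification argument:

```latex
% Back-of-the-envelope illustration only: independent decisions are assumed,
% and this is not the actual certification argument.
\[
  \underbrace{N}_{\substack{\text{decisions per}\\ \text{flight hour}}}
  \cdot
  \underbrace{p}_{\substack{\text{failure probability}\\ \text{per decision}}}
  \;\le\; 10^{-9}
  \quad\Longrightarrow\quad
  p \;\le\; \frac{10^{-9}}{N}
\]
```

At one decision per second (N = 3600), the per-decision failure budget falls to roughly 2.8 × 10^-13, which gives a sense of why statistical testing alone is insufficient and dedicated validation tooling matters.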

Scheduled to last four years, Confiance.ai is halfway through. How will future work be staged?

We are organized to deliver results in batches, each year. We started by working on the so-called data-driven AI approach – the AI of machine learning and neural networks – in areas of low criticality. We will continue that work, but we are now starting on so-called symbolic AI, based on knowledge and rules, in areas of medium criticality.

Then the third phase of the project will focus on hybrid AI, which combines machine learning and symbolic AI, as well as on reinforcement learning, for high-criticality applications. That is to say, the tools for validating AI systems will have to take into account the vital nature of these systems.
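As a purely conceptual sketch of what "hybrid AI" means here – illustrative Python with made-up function names and thresholds, not a Confiance.ai deliverable – a learned component proposes an action and a symbolic rule layer checks it against explicit safety knowledge:

```python
# Conceptual sketch of hybrid AI: a learned component proposes an action and a
# symbolic, rule-based layer checks it against explicit safety knowledge before
# it is applied. All names and thresholds are illustrative assumptions.

def learned_avoidance_advice(distance_m: float, closing_speed_ms: float) -> str:
    """Stand-in for the output of a machine-learning model (e.g. a neural network)."""
    return "climb" if closing_speed_ms > 5 and distance_m < 500 else "maintain"

SAFETY_RULES = [
    # (condition, mandatory action): explicit, human-readable domain knowledge
    (lambda d, v: d < 150, "climb"),      # intruder too close: always manoeuvre
    (lambda d, v: v <= 0, "maintain"),    # traffic receding: never manoeuvre
]

def hybrid_decision(distance_m: float, closing_speed_ms: float) -> str:
    """Apply the learned advice only if no symbolic rule overrides it."""
    advice = learned_avoidance_advice(distance_m, closing_speed_ms)
    for condition, mandatory_action in SAFETY_RULES:
        if condition(distance_m, closing_speed_ms):
            return mandatory_action       # rule takes precedence over the model
    return advice

print(hybrid_decision(distance_m=400, closing_speed_ms=10))   # model says "climb"
print(hybrid_decision(distance_m=400, closing_speed_ms=-2))   # rule forces "maintain"
```

The design point such a scheme illustrates is that the symbolic layer is auditable and certifiable by inspection, while the learned layer must be validated statistically and through robustness analysis.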
