A new progress report for the Confiance.ai collective

Last week at CentraleSupélec, on the Paris-Saclay campus, members of the Confiance.ai collective presented the scientific and technological advances of the program. A look back at this milestone.

What progress has the program dedicated to trusted AI in critical systems made? Led by the Île-de-France Institute for Technological Research (IRT) SystemX, the Confiance.ai program is driven by a group of 13 French industrial and academic players (Air Liquide, Airbus, Atos, Naval Group, Renault, Safran, Sopra Steria, Thales and Valeo, as well as CEA, Inria, IRT Saint Exupéry and SystemX). Since its launch at the beginning of 2021, a group of nearly fifty industrial partners (including SMEs and start-ups) and academics has quickly formed around its 13 founders.

Also read on Alliancy: Julien Chiaroni (Grand Défi IA): “We are making trusted AI operational in companies”

With a budget of 45 million euros over four years, this program aims to meet the challenge of industrializing AI in critical products and services, where accidents, breakdowns or errors could have serious consequences for people and property. In total, over its duration, more than 300 people will work on the project, representing 150 full-time equivalents (FTEs), including researchers and engineers seconded by industrial partners.

Over the 2021-2024 period, the objective is to deliver a platform of software tools dedicated to the engineering of innovative industrial products and services integrating AI (software and system engineering, data and knowledge engineering, algorithmic engineering, safety and security engineering, and human-system interaction). This platform targets the automotive sector (detection of obstacles and pedestrians, etc.), aeronautics (anti-collision systems for drones), energy, digital technology, industry 4.0 (detection of production faults, validation of weld quality, etc.), defense and health. These AI components will also have to meet the criteria and requirements of the future “AI Act”, a normative framework currently being defined at European level and expected by 2024.

To date, more than a hundred software components for qualifying data sets, algorithms and more are being designed as part of the program, at different levels of maturity. Progressively evaluated and integrated, they are also made available to partners so that those with their own engineering workbenches can integrate them. Some partners (Renault, Safran, Thales, etc.) have already used them for several business use cases, yielding initial operational feedback with a view, for example, to consolidating a global data management approach (annotation assistance, data visualization, or measuring the operational acceptability of AI at an industrial workstation).
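To give a flavor of what “qualifying a data set” can involve, here is a minimal, generic sketch of two common checks (missing labels and under-represented classes). The function name, threshold and report fields are illustrative assumptions, not components of the Confiance.ai platform.

```python
from collections import Counter

def qualify_labels(labels, min_class_share=0.05):
    """Minimal data-set qualification report: flags missing labels and
    classes too rare to train or evaluate on reliably."""
    missing = sum(1 for label in labels if label is None)
    counts = Counter(label for label in labels if label is not None)
    total = sum(counts.values())
    rare = [c for c, n in counts.items() if n / total < min_class_share]
    return {
        "missing_labels": missing,
        "class_counts": dict(counts),
        "underrepresented_classes": rare,
        "passes": missing == 0 and not rare,
    }

# Toy annotation set for an obstacle-detection style task
report = qualify_labels(["car", "pedestrian", "car", None, "car", "cyclist"])
print(report["passes"])  # False: one annotation is missing
```

In a real engineering workbench such checks would run per dataset version, with thresholds tuned to the use case; the point is simply that “data qualification” produces an auditable pass/fail report rather than an ad hoc judgement.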

“A dozen concrete use cases in the field of ‘data-driven AI’, based in particular on machine learning applied to image data and to time series, have already been validated and qualified; that is to say, they can be shared and replicated within the program, and they raise real questions around trust such as robustness and explainability. Other industrial use cases are being integrated, around natural language processing for example… From 2023, we will tackle the field of hybrid artificial intelligence,” explains Juliette Mattioli, senior AI expert at Thales and member of the Confiance.ai steering committee.

In its current version, this set of tools offers a software platform addressing four main areas, enriched with a methodological framework to guarantee trust throughout the system life cycle, end to end, while respecting the ODD (Operational Design Domain). The first axis, devoted to managing the data life cycle, offers components contributing to the quality of data sets (visualization, labelling, augmentation, etc.). The second is dedicated to explainability, in order to make the choices and decisions made by an AI understandable to a human. The third addresses the embeddability of AI components: identifying, based on the hardware specifics of the target system, the design constraints to be respected, and providing support throughout development, up to the deployment of the component in the system. The last concerns a set of libraries dedicated to the robustness and monitoring of AI-based systems.
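To illustrate the kind of check the robustness axis covers, here is a minimal, generic sketch: it measures how often a classifier's prediction stays unchanged when Gaussian noise is added to its inputs. The toy linear model, function names and scoring recipe are assumptions for illustration only, not Confiance.ai library APIs.

```python
import numpy as np

def predict(weights, x):
    """Toy linear classifier: index of the highest class score."""
    return int(np.argmax(weights @ x))

def noise_robustness(weights, inputs, sigma=0.1, trials=20, seed=0):
    """Fraction of (input, trial) pairs whose prediction is unchanged
    after adding Gaussian noise of scale sigma -- a crude robustness score."""
    rng = np.random.default_rng(seed)
    stable, total = 0, 0
    for x in inputs:
        base = predict(weights, x)
        for _ in range(trials):
            noisy = x + rng.normal(0.0, sigma, size=x.shape)
            stable += int(predict(weights, noisy) == base)
            total += 1
    return stable / total

weights = np.array([[2.0, -1.0], [-1.0, 2.0]])  # two classes, two features
inputs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.6, 0.5])]

print(noise_robustness(weights, inputs, sigma=0.001))  # 1.0: tiny noise never flips these points
```

Real robustness libraries go far beyond this (adversarial perturbations, formal bounds, monitoring in operation), but the principle is the same: quantify the stability of the model's decisions under perturbations of its operational domain.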

A common frame of reference with the Germans

Today, the Confiance.ai collective is seeking closer ties with the German consortium bringing together ten industrial, academic and civil society partners (including Bosch, Siemens, Technische Universität Darmstadt, SAP, ITAS/KIT, iRights.Lab, Ferdinand-Steinbeis-Institut, BASF, TÜV-SÜD and IZEW Universität Tübingen), led by VDE, one of Europe’s leading technology organizations, based in Germany.

This alliance between French and German players aims both to support the future European regulation on artificial intelligence (AI Act) and to create, during 2023, a common Franco-German label on trusted and responsible AI, which will be closely linked to future harmonized standards.

“The Confiance.ai collective has been working with the German ecosystem for many months. The strength of this alliance between leading industrial and academic partners lies in our desire to promote a common vision of trustworthy and responsible AI, and to create a dedicated label, initially at the Franco-German level but intended for use on a European scale,” explains Julien Chiaroni, director of the Grand Défi IA within the General Secretariat for Investment.

In concrete terms, these players will propose a common reference framework on trusted AI (the characteristics necessary for trustworthiness, an evaluation framework and key performance indicators). This framework, built by pooling the previous work carried out by each party, will address issues of ethics, responsibility and safety, and may also extend to environmental issues related to AI. Finally, the alliance will propose a governance structure, a future European industrial alliance on AI, to ensure its promotion and dissemination on a European scale.

“With our AI Trust Standard & Label, we have developed a practical approach from a German perspective and are looking forward not only to developing it further, but also to integrating it into French and, ultimately, European work. The potential for synergies is considerable,” adds Sebastian Hallensleben, Head of Digital Transformation and AI at VDE.

European pioneers on the world stage

True to the philosophy of the Confiance.ai program, Juliette Mattioli considers the French to be pioneers, including at international level. “The Germans and other countries have similar approaches, but they are often driven by application domains, whereas Confiance.ai is generic across the critical-systems domain. In terms of standardization, alliances are being formed so that the European ecosystem becomes strong. Discussions with the Quebec ecosystem are underway for a similar program (Confiance.IA), with themes around social responsibility, ethics and sustainable development, very complementary to the work carried out by Confiance.ai on trust in so-called “safety-critical” domains.”

Faced with the Americans (including the GAFAM), who have not yet approached AI in this way, the Europeans are therefore ahead, and “are even pioneers”. “As opinion leaders on the definition of risks and the embeddability of AI in systems, we must keep this lead in trusted AI, says Juliette Mattioli. It is now up to us to become economic leaders. The risk-based AI Act seeks, in the same spirit as the GDPR in the field of privacy, to impose the consideration of trust in technologies, uses and certification… But trust is multifaceted, hence the Confiance.ai program.”
