The French “Confiance.ai” collective takes action

Supported by funds from France 2030, the Confiance.ai program presented its first achievements: a base of tools and methods for developing trusted AI components.

The question of the safe, explainable and fair operation of artificial intelligence is a critical subject for companies in all sectors. AI will only be able to fulfill its transformative role if we can grant it measurable, even certifiable, confidence. The detection of biases, the explainability of the reasoning that leads to the results presented, the understanding of cognitive mechanisms, and the formal proof of learning systems are critical subjects at the border between computer science, statistics, mathematics, data science and cognitive science.

Launched in early 2021 for a period of four years, the Confiance.ai program today brings together some fifty industrial and academic partners. Funded to the tune of 30 million euros as part of the France 2030 plan, it is the technological pillar of the grand challenge “Securing, making reliable and certifying systems based on artificial intelligence” that the French state has set itself.

More than 300 people are involved in the project today, representing 150 FTEs. The teams are made up of researchers and engineers seconded by industrial partners.

The Confiance.ai collective in a few key figures

Concretely, the objective is to build a toolbox of methods and software intended to become the basis for the development of AI components for critical systems. The contractual framework facilitates the transfer of intellectual property from R&D teams to industrial partners. All these components are intended to meet the strictest requirements to ensure confidence in critical systems, from piloting an Airbus to more or less autonomous vehicles, by way of assembly lines.


A governance that wants to design “trusted” AI

In addition to competitiveness for companies, these AI components will also have to meet the criteria and requirements of the future “AI Act”, scheduled for 2024, requirements that will be based on a normative framework currently being defined. The aim is of course to ensure the robustness of the developments, their “transparency” and their explainability, without forgetting their reliability. AI components will not only need to be qualified, but also maintainable under operational conditions.

Alongside Afnor and other similar European organisations, several members of the Confiance.ai program take part in this normative work. While the project managers hope to offer generic tools, four sectors are targeted initially: aeronautics, defence, energy and automotive. The projects are divided around three themes: systems, data and human-system interaction, the latter mainly centered on explainability.

A first series of concrete tools

Twenty months after its launch, those in charge presented a first series of tools and methods. Unsurprisingly, one of the first projects consisted of providing tools for defining trust, which differs depending on the use case. Among other things, the tool makes it possible to define the objects of the project and its essential trust “properties”.

In its current version, this toolset offers four platforms dedicated to trusted AI issues:
– One devoted to data lifecycle management (acquisition, storage, specifications, selection, augmentation);
– One dedicated to explainability, whose objective is to render, in terms understandable by a human, the choices and decisions made by an AI;
– One devoted to the embeddability of AI components, which must make it possible, on the one hand, to identify the design constraints to be respected based on the hardware specificities of the target system and, on the other hand, to provide support throughout development, up to the deployment of the component in the system;
– And, finally, a set of libraries dedicated to the robustness and monitoring of AI-based systems. In particular, they make it possible to ensure that the system and its AI component do indeed operate within the previously defined context (Operational Design Domain).
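To make the last point concrete, the monitoring idea can be sketched in a few lines of Python. This is a purely illustrative example, not actual Confiance.ai tooling; the feature names and bounds are hypothetical:

```python
# Minimal sketch of runtime monitoring against an Operational Design
# Domain (ODD): before trusting a model's prediction, check that every
# monitored input feature lies inside the range defined at design time.

# Hypothetical ODD: feature name -> (lower bound, upper bound)
ODD = {
    "speed_kmh": (0.0, 130.0),
    "ambient_temp_c": (-20.0, 45.0),
}

def in_odd(sample: dict) -> bool:
    """Return True only if every monitored feature is within its ODD range."""
    return all(lo <= sample[name] <= hi for name, (lo, hi) in ODD.items())

print(in_odd({"speed_kmh": 90.0, "ambient_temp_c": 21.0}))   # True
print(in_odd({"speed_kmh": 160.0, "ambient_temp_c": 21.0}))  # False
```

In a real system such a check would gate the AI component's output, falling back to a safe behaviour whenever the input leaves the defined domain.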



With these tools, which form the “1.0” version of the Confiance.ai framework, eleven use cases have already been worked on and about twenty are about to be launched. They are also beginning to be used by partners. Safran has, for example, installed “on prem” the first versions of the tools dedicated to testing robustness and explainability, while also evaluating their interoperability with its own Machine Learning tools. For his part, Yves Nicolas, Deputy Group CTO of Sopra Steria, believes that “Confiance.ai's 2022 work is already enabling us to fulfill the promise of a trusted AI that can be deployed in production. On several business use cases, we were indeed able to assess several trust parameters such as explainability and robustness within an industrial MLOps chain, ready to comply with upcoming regulations such as the AI Act”.

A last example: Renault. For Antoine Leblanc, AI Industry 4.0 expert of the DSII PESI at Renault Group, “the challenge of adopting and integrating AI solutions into industrial systems is all the more important as it is accompanied, for the Renault Group's Manufacturing teams, by a change in culture and methods. The program provides us with turnkey tools, tested on industrial use cases proposed by our teams, which allow us to consolidate our global approach to industrial data management. Whether helping with annotation quality, data visualization or even measuring the social acceptability of AI at an industrial workstation, the solutions offered by the program's partners reinforce the robustness of our processes and reduce the time needed to exploit the data”.

The collective continues its work and accelerates

A “2.0” version of the framework is already under development, and the collective is focusing its attention on new themes that are just as critical to establishing real confidence in AI. It is particularly interested in the very complex relationships between humans and AI, with experiments on user trust and studies on moral situation mapping, on the interfaces of algorithmic systems, on trust “by design”, etc.

“The sovereignty of digital technologies is at the heart of the ambitions of France 2030. We must both protect our assets and our research, promote our values, and also dedicate our strengths to the development of a sovereign offer. I note that there is already a real dynamic around the program, which brings together 50 partners already able to offer technological solutions on a market estimated at 50 billion euros. This work makes France the leader in trusted AI in Europe, and it is important to emphasize this,” concludes Bruno Bonnell, Secretary General for Investment, in charge of France 2030.

An alliance for a future Franco-German label on trusted and responsible AI

The collective behind the Confiance.ai program, bringing together 13 leading industrial and academic founding partners (Air Liquide, Airbus, Atos, CEA, Inria, Naval Group, Renault, Safran, IRT Saint Exupery, Sopra Steria, IRT SystemX, Thales and Valeo), announced this week the signature of a Memorandum of Cooperation with the German consortium led by the powerful technological organization VDE (including Bosch, Siemens, Technical University of Darmstadt, SAP, ITAS/KIT, iRights.Lab, Ferdinand Steinbeis Institute, BASF, TÜV-SÜD and IZEW University of Tübingen).

This alliance aims to support the future European regulation on artificial intelligence (AI Act) by creating, in 2023, a joint Franco-German label on trusted and responsible AI. The latter will of course be closely linked to future harmonized standards. It aims to provide guidelines and specifications for AI applications and to prepare ecosystems for compliance with the AI Act by offering a common repository.
