To make responsible AI widespread, we need certifications

Caught between transformative innovation and dangerous excesses, AI must shed its black-box reputation and rely on certifications, or even labels, to build the trust without which it cannot establish itself in the long run.

By Gwendal Bihan, President & CTO of Axionable, and Eric Boniface, Director of Labelia Labs

Artificial intelligence (AI) presents a paradox for our society: AI techniques open the prospect of new innovations and concrete progress on major societal and environmental challenges (in health, climate risk, the fight against fake news, etc.), but they also create new risks that can lead to unsustainable excesses. To address these risks, and to enable their beneficial development, AI systems must be responsible, frugal and trustworthy.

Consider two emblematic cases of bias. The COMPAS software, designed to help judges assess the likelihood that a defendant will reoffend, is still used today in some US jurisdictions; academic studies have demonstrated glaring biases against certain populations, consistently assigning them a higher risk score. There is also the Apple Card, accused of sexist bias from its launch after reports of cases where the service granted men a credit limit 20 times higher than that of women sharing the same joint account!

In contrast with these troubling examples, a study published in Nature shows that AI, when guided by a regulatory and ethical framework, can contribute to achieving 79% of the targets of the UN Sustainable Development Agenda.

The development and deployment of AI systems across all sectors, and their potential impact on citizens, sounds the alarm: it is imperative to resolve the growing tension between the potential of these techniques and the legitimate fears they arouse. Organizations must therefore take up the subject without further delay.

From the Montreal Declaration to the 2018 Villani report on AI and, more recently, the draft European "AI Act" regulation expected to come into force by 2025, literature on the subject abounds, and structuring requirements have already emerged: non-discrimination, transparency, robustness, traceability, human oversight, governance, etc.

Faced with these challenges, three reflexes seem essential to us: integrating end-to-end risk management of AI products and systems (not just AI models); addressing social and environmental issues "by design" within AI projects; and relying on reference codes of conduct, certifications and labels issued by trusted third parties in order to keep pace with the state of the art.

The good news is that France is at the forefront in this area. The most telling examples, in our view, are the certification of AI design processes by the LNE, the "Responsible and Trusted AI" label from Labelia Labs, and the standardization effort initiated by AFNOR.

From an innovation-friendly context to a binding framework

If AI certification must become a prerequisite, it is first of all to anticipate regulations that are inevitably on the way. The European Commission intends to make the AI Act a political marker of the Union's values (particularly human rights) as well as an economic one, harmonizing the internal market. Companies, for their part, want to avoid a second "GDPR experience": while that regulation is to be welcomed, many companies regret not having anticipated its entry into force by working earlier on the compliance of their processes and systems and on the training of their teams. This time, they must anticipate this "GDPR of AI".

While two-thirds of French employees feel ill-informed and want more awareness of AI (Impact AI 2020), organizations must anticipate the need for acculturation and training in responsible AI issues and practices. And while 55% of organizations overestimate their maturity in responsible AI and 78% say they are ill-equipped to guarantee the ethics of AI systems (BCG 2021), they will necessarily have to carry out audits and design action plans to progress.

On this emerging path, certification gives companies opposable proof of their commitment. It speaks to their talent, helping to retain existing employees and attract new ones, and it helps win the trust of customers, partners and prospects. It is an immediately actionable response, tailored to the company's size, industry and existing ethical culture.

Committing to an audit is certainly more demanding than signing a charter, but it initiates a virtuous cycle that leads to optimizing the methods used and the associated internal processes. It can even deliver a significant gain in internal efficiency!

Finally, generalizing AI audits by and for French companies will help make France, and with it Europe, a global reference in this field.

Encourage without coercing

At this stage, and pending the European AI Act, the aim is to encourage voluntary approaches and virtuous practices. To persuade companies to submit to a label, the standards must adapt to changes in the regulations in force and, above all, reflect the operational reality of companies.

Several bodies of work in this area converge on a structure of seven pillars of trustworthy AI: human oversight; technical robustness and safety; privacy and data governance; transparency (traceability, explainability); diversity, non-discrimination and fairness; environmental and societal well-being; and, finally, appropriate governance with all stakeholders.

Within a defined framework that adapts to the applicant's context, these pillars form a common foundation enriched with maturity levels.

To encourage buy-in, the certification path will have to be co-constructed with industry players and service providers. It must provide keys to understanding and action, through reference frameworks and concrete tools that companies can easily adopt and implement.

Finally, it must promote collaboration on reference technical tools (for example by encouraging open-source approaches) and openness to suggestions, so as to remain constantly at the state of the art and in step with new tools on the market.

It is also up to companies to strike the right balance between the accessibility and the stringency of the standards on offer.

Several reference frameworks are freely accessible. A company can therefore refer to them and carry out its own "standardization" independently, but without opposable proof. The next step is to pursue a recognized, opposable label that remains easy to access in terms of process, time and effort required (the "Responsible and Trusted AI" label from Labelia Labs, for example). Finally, the last level corresponds to certification, which relies on stricter standards and is more demanding in its process and in the evidence requested, but which guarantees strong opposability (the LNE process certification, for example).

None of these initiatives is perfect or exhaustive, and they do not impose the same level of requirements. They are nevertheless complementary, because they do not assess exactly the same things and are therefore suited to all types of companies.

Beyond this made-in-France offering, which has the merit of existing and growing stronger, the key point is to anticipate! Organizations must take up the subject now, ahead of legislation, before it becomes too costly in time and investment. It is also the best way to avoid reliving the trauma of the GDPR…

To date, three companies are LNE-certified and/or Labelia-labeled (Axionable, MAIF, Artefact), 10 to 20 months after the reference frameworks became available… It is therefore becoming urgent for AI players to pick up the pace, now that the AI Act countdown has begun.
