Who is AI really?

This article is an excerpt from an exchange that took place as part of the “Science yourself!” event, organized by the CEA and CENTQUATRE-PARIS.

What is your definition of artificial intelligence?

François Terrier: According to the OECD, artificial intelligence (AI) is a set of techniques allowing a machine to perform tasks usually reserved for human beings. The term “usually” is interesting because it implies that the perception of AI can evolve over time. This definition introduces the notion of specific AI (or weak AI), which targets a particular problem on which we seek to exceed human capacities in speed or endurance. It stands in contrast to another concept, that of generalist AI (strong AI), based on the myth of a system endowed with human, functional and emotional qualities. To come back to something concrete, one of the specific AI technologies is machine learning (including deep learning), which consists of correlating the data fed into a system with the tasks it will perform. If I said “automatic correlation” rather than “AI”, you would ask yourself fewer questions!
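The “automatic correlation” framing can be made concrete with a deliberately tiny sketch: a 1-nearest-neighbor classifier, one of the simplest forms of learning from annotated examples. The data, labels and function names below are illustrative assumptions, not any system discussed in the interview.

```python
def train(examples):
    # "Learning" here is nothing more than storing annotated examples:
    # each example is a (measurement, label) pair provided by a human.
    return list(examples)

def predict(model, x):
    # Predict by correlation with past data: return the label of the
    # stored example whose measurement is closest to the new input.
    return min(model, key=lambda ex: abs(ex[0] - x))[1]

# Toy annotated data: a single numeric feature per animal.
model = train([(1.0, "cat"), (2.0, "cat"), (8.0, "dog"), (9.0, "dog")])

print(predict(model, 1.5))  # → cat
print(predict(model, 8.5))  # → dog
```

The point of the sketch is that the machine never invents the concept of “cat”: it only reproduces the correlations present in the human-annotated data it was given.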

Raphaël Granier de Cassagnac: In my novels, I make the same distinction, calling weak AI “artificial intelligence” and strong AI “artificial consciousness”. The latter traces human behavior in all its variety and complexity. “My” scientists develop this consciousness by reproducing the functioning of the brain in silicon, while AI is rather software designed for specific tasks that assists humanity. I also really like the temporal relativity François mentioned: in its day, a calculator could have been considered an AI, but no longer!

In your opinion, will AI be able to equal humans, or even seize power, as in many science-fiction scenarios?

Raphaël Granier de Cassagnac: In science fiction, strong AI tends to turn against its designer, which conveys fantasies of fear. In Stanley Kubrick’s 2001: A Space Odyssey or, even earlier, in Jean-Luc Godard’s Alphaville, the machine is afraid of being unplugged, like humans and their fear of death. Today I note an optimistic shift: in Spike Jonze’s film Her, the AI collaborates harmoniously with the human. It is multiple and redundant, and since it lives in the cloud and no longer in a single computer, it no longer fears death! But that remains fiction, because I doubt that within fifteen years we will be able to develop a silicon brain.

François Terrier: I totally agree, because I find it difficult to consider that intelligence is only calculation and rationality. What about the cognitive, emotional and psychological aspects? Admittedly, technology makes it possible to give the illusion of a human machine, provided you are in a videoconference with scrambled sound! Even the best chatbots (conversational robots on the internet) do not hold up for long: if the discussion continues, we realize that the AI has not properly understood the first questions, and that it takes into account neither their semantics nor really their meaning.

Why does AI pique your interest?

François Terrier: The AI developed for the internet does not interest me much. But systems designed for industry or for rare-disease research are much more motivating: AI on complicated problems, which the large groups have not tackled precisely because they are complicated, is an exciting scientific challenge.

Why is the question of bias in AI crucial?

François Terrier: In a learning-based AI, an algorithm is programmed so that it correlates input data with an action by the system. But the data is not introduced in its raw state or by chance. It is formatted and annotated, that is to say that a human describes what is there. The machine does not invent the concept of a cat, to be found in an image, if it is not told that there is a cat. This annotation step carries the risk of introducing bias. For example, the Dutch state had set up a system for detecting social-assistance fraud. After a few months, it had to backtrack following an avalanche of lawsuits, because the AI had created a statistical bias against foreigners, having emphasized one particularity of the data rather than all of the criteria provided. This is why it is imperative to qualify systems before deploying them. That means checking all the annotations, analyzing what has been learned, and detecting unexpected phenomena and major trends in order to decide whether they are appropriate. It is cutting-edge science.
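One of the simplest qualification checks hinted at here is comparing a system’s decision rates across groups to surface a disparity worth investigating. This is only a minimal sketch of such an audit; the group names, data and threshold are illustrative assumptions, not the Dutch system’s actual method or criteria.

```python
from collections import defaultdict

def flag_rates_by_group(records):
    """Fraction of cases flagged as fraud per group.

    records: iterable of (group, flagged) pairs, where `flagged`
    is the boolean decision produced by the system under audit.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in records:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: f / t for g, (f, t) in counts.items()}

# Hypothetical audit data: the system's decisions on past cases.
records = [("group A", True),  ("group A", False), ("group A", False),
           ("group A", False), ("group B", True),  ("group B", True),
           ("group B", True),  ("group B", False)]

rates = flag_rates_by_group(records)
print(rates)  # group A flagged at 0.25, group B at 0.75
```

A large gap between the rates does not prove bias on its own, but it is exactly the kind of “unexpected phenomenon” that should trigger the deeper analysis of annotations and learned behavior described above.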

Raphaël Granier de Cassagnac: These verification loops, constantly ensuring that the observed biases remain ethical, are indeed crucial. Another fundamental point is deciding what resources to grant an AI, and providing a big red button allowing a human to stop the machine at any time. Take the case of the autonomous vehicle and the decision its AI would make in the event of an accident: turn left and die against a plane tree, or turn right and mow down a cyclist? Who will be responsible?

How can AI be framed ethically and legally? What are the risks in the absence of a framework?

François Terrier: We are lucky that Europe has taken up this issue. It began with an ethical reflection and today leads to a regulation, the AI Act, under which responsibility rests with manufacturers and users. And this even though large groups have sometimes campaigned for the AI alone to be responsible (in other words, no one!). The European Parliament considers it necessary to qualify the technology and the algorithm, but also the uses and the potential risks, which is a matter for humans.

For the CEA, trust in AI is a key issue. As early as 2017, we understood that, beyond its use in research, AI would end up in industrial systems and would require safeguards. This is why we launched a major program in this area, and why we also have a digital ethics committee. Admittedly, Europe is lagging behind the United States and China in the volume of data acquired and available. But it is at the cutting edge on these ethical and trust issues. In particular, it prohibits any AI not qualified for high-risk uses from entering the European market, with, as subtext, obvious economic-sovereignty interests.

Raphaël Granier de Cassagnac: What could become worrisome is if the big companies designing AI, with a colossal volume of personal data and enormous financial power, were to seize political power. In one of my novels, I advance the idea that “companies” have their own country, their own militia… Shortly after writing it, I was surprised to read that Larry Page, co-founder of Google, had called for a “territory to experiment with new forms of governance”. But I remain optimistic when I see that citizens are taking up this debate and can influence it, as shown by the European position and even a genuine awakening of consciences!

An article taken from Défis du CEA n°250
