Should human rights be extended to AI? The question is debated along several axes, including whether an artificial intelligence can be named as an inventor on a patent.

Should human rights be extended to artificial intelligence? The question is being debated in a context where some research teams suggest that general artificial intelligence, or human-level AI, will be reached within 5 to 10 years. A related question, already on the table at several international organizations competent in the matter, is resurfacing: can an artificial intelligence be named as an inventor on a patent?

Opinion on whether human rights should be extended to artificial intelligence is divided. A segment of observers holds that the minimum condition for human rights to apply to a subject is that the subject be human. It is on this criterion that a US court relied last year in ruling that only natural persons, and not artificial intelligences, can be recognized as inventors.

If we go by feedback from scientists working in the field, general artificial intelligence could arrive within 5 to 10 years. Machines would then be endowed with common sense. At the stage of general artificial intelligence, they would be capable of causal reasoning, that is, the ability to reason about why things happen. An AI of this kind, transplanted into a gorilla, would then be able to trade, communicate with humans, create devices and claim recognition as an inventor. The question of whether human rights should be extended to artificial intelligence would then return to the table with renewed sharpness.

In general, the idea that artificial intelligence does not deserve to be granted rights remains the most prevalent. It rests on the fact that AIs do not have a body of their own made of organic matter and therefore do not merit the same moral consideration. This position, however, contradicts certain practices already in force in today's society.

Indeed, deceased persons have rights governing aspects such as the treatment of their organs after death or the management of the property that was theirs. Future generations of humans do not yet have physical bodies, but today's society makes decisions that take into account the rights they will have once they do. In both cases, society grants rights to subjects who are not living in the present.

This is an intense debate around the notion of consciousness, for which definitions diverge. In 2015, researchers from Rensselaer Polytechnic Institute in New York ran a test on three programmable NAO robots. Each robot had the ability to speak, but for the test two of them were reprogrammed to remain silent, while the third could still speak. The researchers also told the robots that two of them, without specifying which, had received a mute pill that prevented them from speaking. Thus, none of the robots knew which of them could speak and which could not.

The researchers then asked the robots to determine which of them had received the mute pill. Having no idea, all three robots tried to answer "I don't know", but the two that had been reprogrammed to be silent stayed silent. The third then stood up before replying that it did not know.

As soon as it heard and recognized its own voice, the third robot realized it could not have received the pill, since it had just spoken. It then collected itself. "Sorry, I know now," the robot said, before adding, "I was able to prove that I was not given a mute pill." It was concluded that the robots had shown signs of self-awareness, but the use of the term consciousness generated a contentious debate.
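The inference behind the test can be sketched in a few lines of code. This is a minimal illustration of the reasoning only, not the software actually run on the NAO robots: each robot tries to say "I don't know", and the one that hears its own voice can conclude it did not receive the pill (the class and method names here are hypothetical).

```python
# Minimal sketch of the self-knowledge inference in the mute-pill test.
# Assumption: "hearing your own voice" is modeled as recognizing the
# speaker's name as your own; the actual robot experiment used speech.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted          # hidden state: did this robot get the pill?
        self.knows_answer = False   # can it now answer the researchers' question?

    def try_to_speak(self, phrase):
        """Return the phrase if the robot can actually vocalize it, else None."""
        return None if self.muted else phrase

    def hear(self, speaker_name):
        """On recognizing its own voice, the robot updates its knowledge."""
        if speaker_name == self.name:
            # "I just spoke, therefore I was not given the mute pill."
            self.knows_answer = True

robots = [Robot("R1", muted=True), Robot("R2", muted=True), Robot("R3", muted=False)]

for r in robots:
    if r.try_to_speak("I don't know") is not None:
        # Every robot hears the utterance and checks whose voice it is.
        for listener in robots:
            listener.hear(r.name)

answers = {r.name: r.knows_answer for r in robots}
print(answers)  # {'R1': False, 'R2': False, 'R3': True}
```

Only the unmuted robot ends up knowing the answer: having heard itself speak, it can revise "I don't know" into "I was not given the pill", which is the behavior the researchers described as a sign of self-awareness.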

And you?

Should we consider extending human rights to artificial intelligence?

See also:

Bill Gates thinks the government should regulate big tech companies rather than break them up because it won’t stop anti-competitive behavior

Apple CEO acknowledges that not having regulation around technology has caused significant damage to society

For Elon Musk, AI is much more dangerous than nuclear weapons, while Bill Gates believes that AI is both hopeful and dangerous

UN efforts to regulate the military use of AI and LAWS could fail, particularly because of Russian and American positions
