The pessimism of AI researchers

This finding comes from a survey conducted in May and June of this year among 327 of these researchers, chosen from among those who have co-authored at least one study in recent years on natural language processing, an area that has seen significant progress.

The expression “of the same magnitude as a nuclear war” can be read as a metaphor, but it can also be taken literally: these researchers foresee military applications for AI, and therefore the possibility that, with these recent advances, AI will play a growing role in decision-making. “There are plausible scenarios that could lead us this far,” security researcher Paul Scharre comments in the New Scientist. “But that would require people to do really dangerous things with military uses of AI.”

In the survey, carried out by a team from New York University’s Center for Data Science and pre-published on August 26, the proportion of worried researchers was even higher among women (46%) and members of visible minorities (53%).

Without going as far as nuclear war, some respondents said they would have agreed that AI poses serious risks had the scenario been less extreme: the survey mentions, for example, the risks posed by AI-guided weapons or by mass surveillance.

These researchers obviously did not draw these concerns from their own research alone. The risks of AI abuse, in matters such as discrimination, have been discussed for several years. But the risks that put the very future of humanity at stake have been the subject of more in-depth reflection, for example in a 2018 report by a group of thinkers in the United States, or in a 2014 book by a researcher at Oxford University. Thousands of people signed a 2015 open letter from British computer scientist Stuart Russell calling for “expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial”.

Conversely, those who do not give in to pessimism point to the example of the autonomous car, which is still far from able to make informed decisions in all circumstances, despite the enormous sums that have been invested in it.

Whereas in the 2000s and 2010s these reflections focused on the theoretical capacities of a “super-intelligence”, they have become more grounded in reality with recent advances in machine learning and natural language processing, which have opened new perspectives on what AI can do and on what we still don’t know about its potential. Legal experts are also asking questions: the Council of Europe’s committee on artificial intelligence has notably stressed the importance of regulations that leave decision-making to humans “and not to mathematical models”. Such laws, however, remain to be written…
