The real purpose of AI may no longer be intelligence

British mathematician Alan Turing wrote in 1950: “I propose to consider the question, ‘Can machines think?’” That question has framed decades of research into artificial intelligence. For two generations of scientists who have studied AI, the question of whether “real” or “human” intelligence is achievable has been a central part of the work.

AI may be at a turning point where these questions matter less and less to most people.

The emergence in recent years of so-called industrial AI could mark the end of these lofty concerns. AI has more momentum today than at any time in the 66 years since the term was coined by computer scientist John McCarthy. The industrialization of AI is shifting the emphasis from intelligence to achievements.

From intelligence to practice

These achievements are remarkable. They include AlphaFold, a system from Google’s DeepMind unit capable of predicting how proteins fold, and GPT-3, the text-generation program from the start-up OpenAI. Both programs are extremely promising from an industrial point of view, whether or not they are called intelligent.

Among other things, AlphaFold makes it possible to engineer new protein shapes, a prospect that has electrified biologists. GPT-3 is quickly finding its place as a system capable of automating business tasks, such as responding to written requests from employees or customers without human intervention.

This practical success, spurred on by a prolific semiconductor industry led by chipmaker Nvidia, looks set to eclipse the old preoccupation with intelligence.

In no corner of industrial AI does anyone seem to care whether these programs will achieve intelligence. It is as if, faced with practical achievements whose value is obvious, the old question “But is it smart?” has ceased to matter.

Researchers’ debate

As computer scientist Hector Levesque wrote, when discussing the science of AI versus technology, “Unfortunately, it is AI technology that gets all the attention.”

To be sure, the question of true intelligence remains important to a handful of thinkers. Over the past month, ZDNET has interviewed two prominent researchers who care deeply about it.

Yann LeCun, chief AI scientist at Meta Platforms, owner of Facebook, spoke at length with ZDNET about a paper he published this summer reflecting on the direction AI should take. LeCun is concerned that mainstream deep learning work, if it simply continues on its current course, will not achieve what he calls “true” intelligence, which includes abilities such as a computer system’s capacity to plan a course of action using common sense.

LeCun expresses the concern of an engineer who fears that, without real intelligence, such programs will prove fragile, meaning they could break before doing what we want them to do. “You know, I think it’s entirely possible that we’ll have Level 5 self-driving cars without common sense,” LeCun told ZDNET, referring to efforts by Waymo and others to build advanced driver assistance systems (ADAS) for autonomous driving, “but you’re going to have to engineer the hell out of it.”

New York University professor emeritus Gary Marcus, a frequent critic of deep learning, told ZDNET this month that AI as a field is stuck when it comes to finding something resembling human intelligence. “I don’t want to quibble about whether it’s smart or not,” Marcus said. “But the form of intelligence that we might call general intelligence or adaptive intelligence, I care about adaptive intelligence […] We don’t have machines like that.”

A certain rejection of scientific questions

Increasingly, LeCun’s and Marcus’s concerns seem beside the point. Industrial AI practitioners are less interested in asking hard questions than in making sure everything runs smoothly. As more and more people get their hands on AI, data scientists and self-driving-car engineers far removed from the basic scientific questions of research, the question “Can machines think?” becomes less relevant.

Even scientists who recognize the shortcomings of AI are tempted to set that aside and savor the technology’s practical usefulness.

Demis Hassabis, co-founder of DeepMind, is a younger researcher than Marcus or LeCun, but aware of the dichotomy between the practical and the deep. In a 2019 lecture at Princeton’s Institute for Advanced Study, Hassabis noted the limitations of many AI programs that could only do one thing well, like an idiot savant. DeepMind, Hassabis said, is trying to develop a broader and richer capability. “We’re trying to come up with a meta-solution to solve other problems,” he said. And yet Hassabis is just as enamored of the particular tasks at which DeepMind’s latest inventions excel.

When DeepMind recently unveiled an improved method for performing linear algebra, the math at the heart of deep learning, Hassabis praised the achievement without making any claims about intelligence. “It turns out that everything is matrix multiplication, from computer graphics to training neural networks,” Hassabis wrote on Twitter. That may be true, but it hints at the possibility of abandoning the quest for intelligence in favor of perfecting a tool, as if to say, “If it works, why ask why?”
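Hassabis’s point is easy to see in miniature. A minimal sketch, with illustrative shapes and random values not drawn from any real model: one dense neural-network layer is, at bottom, a matrix multiplication followed by a bias and a nonlinearity.

```python
import numpy as np

# Illustrative only: a single dense layer's forward pass.
# The shapes (batch of 4, 8 features in, 16 features out) are arbitrary.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # a batch of 4 inputs with 8 features each
W = rng.standard_normal((8, 16))  # weight matrix mapping 8 features to 16
b = np.zeros(16)                  # bias vector

# The heart of the layer is the matrix multiplication x @ W;
# the ReLU nonlinearity is applied elementwise afterward.
h = np.maximum(x @ W + b, 0.0)
print(h.shape)                    # (4, 16)
```

Speeding up that one operation, as DeepMind’s work aims to do, speeds up every layer of every such network, which is why the achievement can be celebrated on purely practical grounds.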

The field of AI is experiencing a change in attitude. There was a time when every achievement of an AI program, no matter how good, was met with a skeptical remark: “But that doesn’t mean it’s smart.” It’s a pattern that AI historian Pamela McCorduck has called “moving the goalposts.”

Things seem to be moving the other way these days: people are inclined to casually attribute intelligence to anything labeled AI. If a chatbot such as Google’s LaMDA produces enough natural-sounding sentences, someone will argue that it is sentient.
Alan Turing himself anticipated this change in attitude. He predicted that ways of talking about computers and intelligence would shift in favor of accepting computer behavior as intelligent. “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted,” Turing wrote.

A battle for rhetoric

As the sincere question of intelligence fades, the empty rhetoric of intelligence is allowed to float freely in society to serve other agendas.

In a gloriously confused op-ed recently published in Fast Company, computer industry executive Michael Hochberg and retired Air Force general Robert Spalding make glib assertions about intelligence as a way to add organ music to their grim warning of geopolitical risk: “The stakes couldn’t be higher in training artificial general intelligence systems. AI is the first tool that convincingly reproduces the unique capabilities of the human mind. It has the ability to create a unique and targeted user experience for every citizen. It can potentially be the ultimate propaganda tool, a weapon of deception and persuasion unlike any that has ever existed in history.”

Most specialists agree that “artificial general intelligence,” if the term has any meaning, is far from being achieved by current technology. Hochberg and Spalding’s claims about what programs can do are greatly exaggerated.

These cavalier claims about what AI is actually doing obscure the nuanced remarks of people like LeCun and Marcus. We are witnessing the formation of a rhetorical regime interested in persuasion, not intelligence.

This may be the direction things take for the foreseeable future. If AI does more and more things, in biology, physics, business, logistics, marketing and warfare, and if society grows used to it, fewer and fewer people may care to ask, “But is it smart?”

Source: ZDNet.com