Meta’s Head of AI Publishes Paper on Creating ‘Autonomous’ Artificial Intelligence, Suggests Current Approaches Will Never Lead to True Intelligence

Yann LeCun, machine learning (ML) pioneer and head of AI at Meta, recently published a research paper in which he outlines his vision for AIs that learn about the world the way humans do. The paper implies, without saying so outright, that most current AI projects will never be able to achieve this human-level goal. Furthermore, LeCun points out that fundamental capabilities, such as reasoning (or common sense), still elude many strains of deep learning. His paper offers some ideas for bringing AI closer to human intelligence.

Yann LeCun is a French researcher in artificial intelligence and vice-president and chief AI scientist at Meta, the owner of the social media platforms Facebook, Instagram and WhatsApp. LeCun pioneered convolutional neural networks, a foundational technique in the field which, among other benefits, has been key to making deep learning more efficient. He was awarded the 2018 Turing Award for his work on deep learning, an honor he shares with Yoshua Bengio and Geoffrey Hinton. Yet LeCun seems convinced that the industry is still a long way from “autonomous” AI.

In a research paper published in June on OpenReview.net, LeCun proposes a way to address this problem by training learning algorithms to learn more efficiently, since AI has proven to be not very good at predicting and planning for changes in the real world. Humans and animals, by contrast, are able to acquire enormous amounts of knowledge about how the world works through observation, with very little physical interaction. Because of these shortcomings, LeCun argues, most current approaches to AI will never lead to true intelligence.

Indeed, despite the speed with which humans have come to rely on the power of AI, one question has haunted the field almost since its beginnings: could these intelligent systems ever acquire sufficient sentience to match, or even surpass, humanity? In this debate, a former Google engineer recently claimed that a chatbot had become sentient, though we are still quite far from that reality; he was subsequently dismissed for several reasons. Current AI and machine learning systems lack reasoning, a capacity essential to the development of “autonomous” AI systems.

That is, AI systems that can learn on the fly, directly from real-world observations, rather than through lengthy training sessions for a specific task. In an interview in September, LeCun made it clear that he views current deep learning research with great skepticism. “I think they are necessary, but not sufficient,” said the Turing Award winner.

The most successful AI research today involves large natural language processing (NLP) models such as GPT-3, built on the Transformer architecture and its ilk. As LeCun puts it, adherents of the Transformer approach believe that “we’re tokenizing everything and training gigantic models to make discrete predictions, and somehow AI will emerge from all of that.” “They’re not wrong, in the sense that it can be a component of a future intelligent system, but I think it’s missing some essential pieces,” he pointed out. He suggests starting at the bottom of the ladder.
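To make the idea of “discrete prediction” concrete, here is a deliberately tiny sketch in Python: a bigram counter that, like large language models at an incomparably smaller scale, reduces text to tokens and estimates a probability distribution over the next token. The corpus and whitespace tokenization are invented for illustration; real systems use subword vocabularies and Transformer networks rather than raw counts.

```python
from collections import Counter, defaultdict

# Toy illustration of "discrete prediction": reduce text to tokens,
# then learn a probability distribution over the next token.
corpus = "the cat sat on the mat the cat ate the fish"
tokens = corpus.split()  # crude stand-in for a real tokenizer

# Count how often each token follows each other token.
counts = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(prev):
    """Return P(next | prev) as a dict, estimated from raw counts."""
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

print(next_token_distribution("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```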

“We see a lot of claims about what we should be doing to move towards human-level AI. And some of those ideas are, in my opinion, misguided. We’re not even at the point yet where our intelligent machines have as much common sense as a cat. So why not start there?” said LeCun. To advance AI research over the next decade, his research paper proposes an architecture that would minimize the number of actions a system must take to learn and complete a given task.

Just as different regions of the human brain are responsible for different bodily functions, he proposes a model for creating autonomous intelligence that would consist of five distinct but configurable modules. One of the most complex parts of the architecture LeCun proposes, the “world model module”, would estimate the state of the world and also predict the outcomes of imagined actions and other world sequences. It would act like a simulator, a device that artificially reproduces a real process.
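As a rough, hypothetical sketch of that modular idea (not an implementation from LeCun’s paper), the Python toy below wires a world-model module to an actor that “imagines” candidate action sequences and keeps the one with the lowest predicted cost. The one-dimensional state, dynamics and cost function are invented purely for illustration.

```python
import random

class WorldModel:
    """Simulator-like module: predicts the next state for an imagined action."""
    def predict(self, state, action):
        return state + action  # toy 1-D dynamics, invented for illustration

class Cost:
    """Scores how undesirable a (predicted) state is relative to a goal."""
    def __call__(self, state, goal):
        return abs(goal - state)

class Actor:
    """Plans by rolling imagined action sequences through the world model."""
    def __init__(self, world_model, cost):
        self.world_model, self.cost = world_model, cost

    def plan(self, state, goal, horizon=3, candidates=50):
        best_seq, best_cost = None, float("inf")
        for _ in range(candidates):
            seq = [random.choice([-1, 0, 1]) for _ in range(horizon)]
            s = state
            for a in seq:               # imagine the consequences, don't act
                s = self.world_model.predict(s, a)
            c = self.cost(s, goal)
            if c < best_cost:
                best_seq, best_cost = seq, c
        return best_seq

actor = Actor(WorldModel(), Cost())
print(actor.plan(state=0, goal=2))  # e.g. [1, 1, 0]: reaches the goal in imagination
```

The point of the sketch is only the division of labor: the world model predicts, the cost module evaluates, and the actor never touches the real environment while planning.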

According to him, knowledge about how the world works could thus be easily shared between different tasks; in some ways, it might resemble a memory. That said, there is still a lot of work to do before autonomous systems can learn to deal with uncertain situations. In a world as chaotic and unpredictable as ours, LeCun says, this is a question we will undoubtedly have to address sooner or later. For now, though, managing that chaos is part of what makes us human. In addition to leading AI research at Meta, LeCun is also a professor at New York University.

He has spent his career developing the learning systems on which many modern AI applications are based today. In 2013, he founded Facebook AI Research (FAIR), the company’s first foray into AI research, before becoming its chief AI scientist a few years later. Since then, Meta has had several successes in its bid to dominate this ever-evolving field. In 2018, its researchers trained an AI to realistically open closed eyes in photos, in hopes of making it easier for users to edit their digital photos.

Earlier this year, BlenderBot 3, Meta’s new AI chatbot, proved surprisingly hostile towards Mark Zuckerberg and sparked a debate about AI ethics and biased data. Early testing of BlenderBot 3 revealed that it is far from the high-performing chatbot Meta claimed. It called CEO Mark Zuckerberg “scary and manipulative” and asserted that “Zuckerberg is a good businessman, but his business practices are not always ethical”. It also described Facebook as having privacy issues and spreading fake news.

More recently, Meta’s Make-A-Video tool has shown it can turn text prompts and single or paired images into short videos, which spells even more bad news for the once-promising rise of AI-generated art. Teenagers, meanwhile, can learn to drive with only a few dozen hours of practice. AI systems, especially those in “self-driving” cars, have to be trained on an insane amount of data before they can perform the same task, and they remain susceptible to mistakes that humans would never make.

During a presentation of his work at UC Berkeley, LeCun said: “A car would have to fall off a cliff several times before it realized it was a bad idea, and another few thousand times before it figured out how not to fall off the cliff.” The difference lies in the fact that humans and animals are capable of common sense. If the concept of common sense can be summed up as practical judgment, LeCun describes it in his paper as a collection of models that allow a living being to tell the difference between what is probable, what is possible and what is impossible.

According to him, such a skill allows a person to explore their environment, fill in missing information and imagine new solutions to unknown problems. LeCun suggests that AI researchers have taken common sense for granted, which is why the industry has so far failed to equip AI and machine learning algorithms with any of these capabilities. During the talk, LeCun also pointed out that many modern training processes, such as reinforcement learning techniques, fall short of human reliability in real-world tasks.

Reinforcement learning is an AI training technique based on rewarding favorable behaviors and punishing undesirable ones. “It’s a practical problem, because we really want machines with common sense. We want self-driving cars, household robots, intelligent virtual assistants,” he said. Coming from the 62-year-old academic who perfected the use of convolutional neural networks, LeCun’s talk amounts to a startling critique of the very approaches that seem to work.
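To make that reward-and-punishment loop concrete, here is a minimal tabular Q-learning sketch in Python, one classic reinforcement learning algorithm rather than anything specific to LeCun’s work; the toy corridor environment, rewards and hyperparameters are all invented for illustration.

```python
import random

# Toy 1-D corridor: the agent starts at position 0 and must reach the goal.
# Behavior is shaped purely by reward (+1 at the goal) and punishment (-1 elsewhere).
positions, goal = 5, 4
q = {(s, a): 0.0 for s in range(positions) for a in (-1, 1)}  # Q-table
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

for _ in range(500):  # training episodes
    s = 0
    while s != goal:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), positions - 1)
        r = 1.0 if s_next == goal else -1.0  # reward favorable, punish the rest
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(q[(s_next, -1)], q[(s_next, 1)])
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the greedy policy should move right (+1) toward the goal.
print({s: max((-1, 1), key=lambda act: q[(s, act)]) for s in range(positions)})
```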

Source: Research article by Yann LeCun (PDF)

And you?

What is your opinion on the subject?

What do you think of LeCun’s proposal to achieve autonomous AI?

In your opinion, can artificial intelligence one day be endowed with common sense as LeCun wishes?

Will the AI creation model proposed by LeCun achieve this goal in the years to come?

Do you also think that most current approaches to AI will never lead to true intelligence?

See also

The godfathers of artificial intelligence awarded the 2018 Turing Award, the Nobel Prize of computing

Meta’s new AI chatbot claims CEO Mark Zuckerberg is ‘scary and manipulative’, chatbot also makes racist remarks and spreads conspiracy theories

Engineer fired by Google reportedly claims the AI chatbot is pretty racist, and that Google’s AI ethics are a fig leaf

GPT-3 can run code and look up a value in a lookup table, but the autoregressive language model seems to have problems with large numbers
