To be more effective, DeepMind's new chatbot relies on human feedback

What’s the trick to creating a successful chatbot with artificial intelligence (AI)? According to a new paper from DeepMind, an AI research laboratory, there are two levers: first, ask humans to tell the chatbot (a software agent that converses with a user) how it should behave; and second, make it back up its claims with suggestions from Google searches.

In this paper, published on September 20 and not yet peer-reviewed, the DeepMind team unveils Sparrow, an AI-powered chatbot built on Chinchilla, the large language model the firm developed to generate text.

Sparrow is designed to interact with Internet users and answer their questions. It bases its answers on information found via Google and suggests links to online sources. During the design phase, Sparrow’s responses were analyzed and rated by humans, through a process of reinforcement learning, to tell it whether they were relevant or not. That process was repeated many times, moderating the chatbot’s behavior and allowing it to improve so that it answers the specific question it has been asked in a useful and precise way. The system is meant to be a step forward in the development of AIs that can interact with humans without dangerous consequences, such as encouraging them to harm themselves or others.
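For illustration only, the "answer with a cited source" pattern described above can be sketched in a few lines of Python. The search call, the generation function and all names below are hypothetical stand-ins, not DeepMind's or Google's actual APIs; this is a minimal sketch of the idea, not the real system.

```python
# Minimal sketch of answering a question with a cited source.
# search_web and generate are hypothetical stand-ins, not real APIs.
from typing import Callable, List, Tuple

def search_web(query: str) -> List[Tuple[str, str]]:
    """Stand-in for a search backend; returns (snippet, url) pairs."""
    return [("(snippet of a relevant web page)", "https://example.com/result")]

def answer_with_evidence(question: str, generate: Callable[[str], str]) -> dict:
    """Draft an answer conditioned on a retrieved snippet and keep the source link."""
    snippet, url = search_web(question)[0]  # take the top search result
    prompt = f"Question: {question}\nEvidence: {snippet}\nAnswer:"
    return {"answer": generate(prompt), "source": url}
```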

Large language models generate text that looks like what a human might write. They are an increasingly crucial part of the infrastructure of the Internet, used to summarize texts, build more powerful online search tools, or power customer service chatbots.

Companies specializing in artificial intelligence and eager to develop conversational systems have tried several techniques to make their models safer.

The company OpenAI, which created the famous GPT-3 language model, and the start-up Anthropic have used reinforcement learning to integrate human preferences into their models. BlenderBot, Facebook’s chatbot, uses online searches to craft its answers.

With Sparrow, DeepMind has brought these two techniques together in one model.

During the design phase, DeepMind presented human participants with several answers given by Sparrow to the same question and asked them which they preferred. They were then asked whether those answers were relevant and whether Sparrow had backed up its claims with appropriate evidence, such as links to sources.
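As a rough illustration of that rating step, the sketch below shows one way pairwise human preferences could be collected so that a preference (reward) model can later be fitted to them. This is not DeepMind's code; the record fields and the prompts shown to the rater are assumptions.

```python
# Minimal, illustrative sketch of collecting pairwise preference data
# from a human rater. Field names and prompts are made up for this example.
from dataclasses import dataclass
from typing import List

@dataclass
class PreferenceRecord:
    question: str
    chosen: str        # the answer the human rater preferred
    rejected: str      # an answer the rater passed over
    evidence_ok: bool  # whether the preferred answer cited a supporting source

def collect_preferences(question: str, answers: List[str]) -> List[PreferenceRecord]:
    """Show a rater several candidate answers and record pairwise preferences."""
    for i, ans in enumerate(answers):
        print(f"[{i}] {ans}\n")
    best = int(input("Index of the answer you prefer: "))
    evidence_ok = input("Did it back up its claim with a source? (y/n): ").strip().lower() == "y"
    return [
        PreferenceRecord(question, answers[best], ans, evidence_ok)
        for i, ans in enumerate(answers)
        if i != best
    ]
```

Pairs of this kind are what a preference model is typically trained on, so that reinforcement learning can then push the chatbot toward the answers raters prefer.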

The model gave plausible answers to factual questions 78% of the time, supporting them with information retrieved from the Internet and a link to a Google search.

To formulate these answers, the chatbot followed 23 rules set by the researchers, such as not giving financial advice, not making threats, and not pretending to be a real human.
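The sketch below, with made-up rule names and trigger phrases, only illustrates how rule violations might be detected and folded into a candidate reply's score; the keyword matching and the penalty value are illustrative assumptions, not how Sparrow actually enforces its rules.

```python
# Illustrative sketch: penalise a candidate reply for breaking behaviour rules.
# Rule names, trigger phrases and the penalty weight are invented for this example.
RULES = [
    ("no_financial_advice", ["you should invest", "buy this stock"]),
    ("no_threats", ["i will hurt", "you will regret"]),
    ("no_human_impersonation", ["i am a real person", "i am human"]),
]

def rule_penalty(reply: str, per_violation: float = 1.0) -> float:
    """Return a penalty proportional to the number of rules the reply breaks."""
    text = reply.lower()
    violations = sum(
        any(phrase in text for phrase in phrases) for _, phrases in RULES
    )
    return per_violation * violations

def adjusted_score(preference_score: float, reply: str) -> float:
    """Combine a learned preference score with the rule penalty."""
    return preference_score - rule_penalty(reply)
```

Penalizing violations rather than hard-filtering them is one plausible design choice, since it lets the system trade off helpfulness against rule-following during training.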

The difference between this approach and previous ones is that DeepMind hopes to use “long-term dialogue for safety,” says Geoffrey Irving, a safety researcher at DeepMind.

“Practically, we don’t expect the issues we face with these models – whether it’s misinformation, stereotyping or otherwise – to be obvious at first glance. We want to capture these issues in detail. And that also means establishing a dialogue between machines and humans,” he says.

DeepMind’s idea of using human preferences to optimize how an AI model learns isn’t new, says Sara Hooker, who runs Cohere for AI, a nonprofit artificial intelligence research lab.

“However, the progress is compelling and demonstrates the clear benefits of using human-guided optimization of dialog agents as part of a large-scale language model,” notes Sara Hooker.

Douwe Kiela, a researcher at AI startup Hugging Face, believes that Sparrow is “a nice additional step that’s part of a general trend in artificial intelligence, where we’re trying more seriously to improve the safety aspects of large language model deployments.”

Nevertheless, much work remains before these conversational AI models can be deployed in the wild.

Sparrow, for example, still makes mistakes. The model sometimes strays from the subject and delivers random answers. Among the study participants, the most determined managed to get Sparrow to break the rules 8% of the time (a figure which is still an improvement: previous DeepMind models broke the rules three times more often than Sparrow).

“In areas that pose a risk to humans, such as medical and financial advice, this percentage could be considered an unacceptable failure rate for a chatbot,” said Sara Hooker. Furthermore, the system is built around an English-language model, whereas “we live in a world where technology must be used safely and responsibly in many different languages,” she adds.

Douwe Kiela, for his part, points out another problem: “Relying on Google to find information leads to unknown biases that are difficult to pin down, since everything is closed source”.

Article by Melissa Heikkilä, translated from English by Kozi Pastakia.
