Two of the video game industry’s major players have decided to join forces to launch the “Zero Harm in Comms” research project. They aim to build an AI capable of detecting toxic behavior in online games and acting as a kind of cyberbullying cop.
The goal: give online video games back their friendly, healthy community spirit, in the best sense of the term. For years, the toxic behavior of some players, who harass others online and subject streamers and ordinary players alike to frankly reprehensible abuse, has kept the problem in the spotlight. But the solutions, beyond simple moderation and reporting, only temporarily dress wounds that never close.
To solve a problem that plagues the entire sector, the solution may have to come from the main players themselves. That is the conclusion reached by Riot Games and Ubisoft. The two video game heavyweights announced this Wednesday the launch of the Zero Harm in Comms research project. As its name suggests, its goal is automatic moderation, by artificial intelligence, of all hurtful, outrageous, or outright shocking comments.
An AI capable of detecting inappropriate comments
Drawing on their experience with online games, the publishers of League of Legends and Assassin’s Creed have decided to join forces to design an AI-based solution to clean up in-game chat. These moderation tools will detect and sanction inappropriate behavior.
“Ubisoft approached us for this project because they knew of Riot’s interest in and commitment to working with others in the industry to build safe communities and mitigate disruptive behavior,” Wesley Kerr, head of technology research at Riot Games, told Tech&Co. “It’s a complex and difficult problem to solve,” adds Yves Jacquier, executive director of Ubisoft La Forge. “But we believe that by bringing the industry together through collective action and knowledge sharing, we will be more effective at delivering positive online experiences and a reassuring community environment.”
Today, comment moderation often relies on a dictionary of insults that is “easily circumvented and does not take the online context into account,” notes Yves Jacquier. “We need an AI that can understand the overall meaning of an online game in its context.”
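To see why such a dictionary approach falls short, here is a minimal, purely illustrative sketch (the word list and function are hypothetical, not either company’s actual system): a plain blocklist misses trivial spelling tricks and knows nothing about context.

```python
# Toy illustration of dictionary-based chat moderation and its weaknesses.
# BLOCKLIST and naive_filter are hypothetical examples, not a real product's code.

BLOCKLIST = {"idiot", "loser"}  # tiny insult dictionary for the demo

def naive_filter(message: str) -> bool:
    """Return True if the message contains a blocklisted word verbatim."""
    return any(word in BLOCKLIST for word in message.lower().split())

print(naive_filter("you idiot"))      # True: exact match is caught
print(naive_filter("you 1d1ot"))      # False: leetspeak slips through
print(naive_filter("nice shot, gg"))  # False: but the filter has no notion of context
```

The last case cuts both ways: because the filter only matches words, it also cannot tell when an otherwise innocuous phrase is hostile in context, which is precisely the gap a context-aware AI is meant to fill.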
The project therefore aims to develop an AI trained to “preventively” detect harmful behavior in online chats and make it disappear as quickly as possible. The objective is also to develop tools that can then be shared with other industry players for concerted, complementary action.
Active members of the Fair Play Alliance, Ubisoft and Riot Games explain that they want to build on technologies already deployed in their online gaming tools, as well as on their approach to putting in place a framework that guarantees ethics and confidentiality. To do so, they will draw on Riot Games’ experience with competitive games (Valorant in particular), which can sometimes provoke threatening behavior from players despite the company’s efforts, as well as on the diversity of Ubisoft’s catalog, from Far Cry to Mario + Rabbids, and therefore on a wide range of player profiles.
Ubisoft has often been at the forefront of the fight against toxic behavior in games, constantly strengthening its tools for detecting racist, homophobic, sexist, and hateful remarks in game chats such as Rainbow Six Siege’s. Sanctions take the form of a message displayed to the player explaining the offending behavior, expulsion from a match for a set period, account suspension, or even a permanent ban. For Valorant, Riot has multiplied the penalties, ranging from simply muting a player’s microphone to outright exclusion, enabled in particular by voice analysis and reporting by other players.
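The graduated sanctions described above can be pictured as a simple escalation ladder. The sketch below is a hypothetical model with made-up thresholds, not Ubisoft’s or Riot’s actual policy:

```python
# Hypothetical escalation ladder for repeat offenses.
# Thresholds and sanction names are illustrative assumptions only.

SANCTIONS = [
    (1, "warning message explaining the offending behavior"),
    (2, "temporary expulsion from the match"),
    (3, "account suspension"),
    (4, "permanent ban"),
]

def sanction_for(offense_count: int) -> str:
    """Return the harshest sanction whose threshold the offense count meets."""
    applicable = [name for threshold, name in SANCTIONS if offense_count >= threshold]
    return applicable[-1] if applicable else "no action"

print(sanction_for(0))  # no action
print(sanction_for(3))  # account suspension
```

The point of such a ladder is that first offenses get an explanation rather than a punishment, while persistent offenders face escalating consequences.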
Making it an effective moderation tool for the whole industry
It must be said that the two companies have a common interest: they are increasingly building their business on games with strong community potential. And for those games to remain attractive, their environment must be healthy.
With the Zero Harm in Comms research project, the two players say they are ready to share their knowledge in order to solve the toxicity problem in online comments. By cross-referencing their gaming experience, they hope to build a database covering all types of games, players, and behaviors, with an AI capable of responding to every situation.
“Harmful behavior is not unique to games; it affects all social platforms,” said Wesley Kerr. “To create positive experiences, we all have to come together. This project is an illustration of our commitment and of the work we are doing at Riot to develop inclusive, healthy, and safe exchanges in our games.”
This announcement lays the first stone of an “ambitious and cross-industry” research project. Ubisoft and Riot Games hope to rally other publishers and developers to their cause. The first lessons should be shared with the video game industry next year, they explain, whatever the conclusions. “The only real failure is when it doesn’t work and you can’t explain why,” says Yves Jacquier.