To moderate content, should we trust humans or AI?

Social media users may trust artificial intelligence as much as human moderators when it comes to detecting hate speech and dangerous content, according to Penn State researchers.

According to the researchers, when users are prompted to think about the qualities of machines, such as their precision and objectivity, they trust AI more. But when they are reminded that machines cannot make subjective judgments, their trust falls.

These findings could help developers design AI-driven content curation systems capable of handling the vast amounts of information generated today, while avoiding the perception that content has been censored or misclassified, says Professor S. Shyam Sundar, who participated in the study.

“There is a great need for content moderation on social media and, more generally, on the internet,” says Sundar.

“In traditional media, we have news desks that serve as gatekeepers. But online, the gates are wide open, and humans cannot realistically act as filters, especially given the volume of information being generated. So, as the industry moves more and more toward automated solutions, this study looks at the differences between human and AI moderators, based on how people react to them.”

Both human and AI-powered moderators have their pros and cons. Humans tend to be better at gauging whether content is dangerous, such as racist messages or incitement to self-harm, according to Maria D. Molina, lead author of the study and an assistant professor at Michigan State University.

“Humans, however, cannot process large amounts of content, including the high volume that is being generated right now.”

“When we think about automated content moderation, it raises the question of whether these machines can infringe on freedom of expression,” says Molina. “It creates a tension: we need more content moderation because people are sharing large amounts of problematic content, yet at the same time internet users are wary of AI’s ability to moderate it. So ultimately, we want to know how we can build AI moderators that people trust, without infringing on freedom of expression.”

Transparency and interactive transparency

Molina says that pairing humans and AI for moderation could be one way to build a content management system that inspires trust. She adds that transparency – telling users that a machine is involved in moderation – is one approach that improves trust in AI.

However, allowing people to offer suggestions to that same machine, a process called interactive transparency, appears to be even more effective at building trust.

To investigate these two options, among other variables, the researchers recruited 676 participants to interact with a content classification system. Participants were randomly assigned to one of 18 experimental conditions, designed to test how the source of moderation – AI, human, or a combination of the two – and transparency – regular, interactive, or none – affect participants’ trust in AI moderators.

The researchers tested classification decisions in which content was flagged as “at risk” or “not at risk” of being dangerous or hateful. The “dangerous” content test involved suicidal ideation, while the “hateful” content test involved hate speech.

Among other findings, the researchers found that user trust depended on whether the presence of an AI content moderator invoked positive attributes of machines, such as their reliability and objectivity, or negative attributes, such as their inability to make subjective judgments about nuances in human language.

Giving users a chance to help the AI decide whether online information is harmful could also boost their trust. According to the researchers, participants who added their own terms to an AI-generated list of keywords trusted the AI moderator as much as a human one.

Ethical concerns

Sundar believes that taking the burden of content moderation off humans goes beyond relieving them of pressure and hard work. Human moderators hired for the job are exposed to hours and hours of hateful and violent content, he points out.

“There is an ethical need for automated content moderation,” says Sundar. “Human moderators, who are performing a task with social benefit, need to be protected from constant exposure to dangerous content day in and day out.”

“We need to design something that not only makes people trust the system, but also helps them understand how the AI works,” says Molina. “How can we use the concept of interactive transparency and other methods to help people better understand artificial intelligence? And how can we present AI so that it strikes the right balance between appreciating the capabilities of machines and remaining skeptical of their weaknesses? These questions deserve further study.”
