Humans and AI must work together for effective online content moderation, survey finds

According to an August 11 Pollfish survey of 1,000 Americans conducted for TELUS International, a leader in digital customer experience, the vast majority of consumers surveyed (92%) believe it is important or very important that online content be reviewed by humans, not just AI. Nearly three-quarters (73%) believe AI cannot understand or distinguish context and tone as well as a human can.

TELUS International designs, builds and delivers next-generation digital solutions to enhance the customer experience of global and disruptive brands. Its services support the full lifecycle of its clients’ digital transformation journeys and enable them to more quickly adopt next-generation digital technologies to improve business outcomes. TELUS International’s integrated solutions encompass digital strategy, innovation, consulting and design, information technology lifecycle management (managed solutions, intelligent automation and comprehensive AI-based data solutions such as computer vision), omnichannel customer experience, and trust and safety, including content moderation.

Content moderation

Its content moderation specialists review and moderate user-generated content (text, images, video, audio) to ensure it meets not only community guidelines but also local laws and government regulations. To moderate effectively, the company combines human judgment with technological automation to keep content appropriate and relevant.

Siobhan Hanna, Managing Director, AI Data Solutions, TELUS International, says:

“AI is increasingly used to detect digital content that violates brand standards and community guidelines. While it has proven its worth as a first line of defense against harmful content, it is nearly impossible for it to keep pace with the new types of content that are constantly emerging and the growing use of algospeak. Humans are still needed to make more contextual decisions, as AI is limited in its ability to make the sometimes difficult judgment calls that consider the intent behind a particular phrase or image. By keeping a human in the loop, brands can take advantage of the speed and efficiency of AI while ensuring nuanced content is reviewed properly.”
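In practice, this human-in-the-loop approach is often implemented as a confidence-threshold routing layer: an AI classifier resolves clear-cut cases automatically, while ambiguous items are escalated to human moderators. The sketch below is a minimal illustration of that general pattern, not TELUS International’s actual system; the thresholds, field names and queue labels are assumptions for the example.

```python
from dataclasses import dataclass

# Illustrative thresholds; real systems tune these per policy, market and language.
AUTO_REMOVE_THRESHOLD = 0.95   # model is near-certain the content violates policy
AUTO_APPROVE_THRESHOLD = 0.05  # model is near-certain the content is benign


@dataclass
class ModerationItem:
    content: str
    violation_score: float  # probability of a policy violation, from an AI classifier


def route(item: ModerationItem) -> str:
    """Route content based on the classifier's confidence.

    Clear-cut cases are handled automatically; borderline ones, where
    context, tone and intent matter, go to a human review queue.
    """
    if item.violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if item.violation_score <= AUTO_APPROVE_THRESHOLD:
        return "auto_approve"
    return "human_review"  # nuanced content: sarcasm, algospeak, local norms


# Example: a borderline score lands in the human queue.
print(route(ModerationItem("ambiguous post", violation_score=0.6)))  # human_review
```

The thresholds encode the trade-off the quote describes: AI provides speed on unambiguous content, while anything in the uncertain middle band is deferred to a person.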

Increasingly complex content moderation

More than half of respondents (53%) say it has become more difficult over the past year for brands, social networks and gaming platforms to monitor content on their sites. They attribute the difficulty mainly to the following factors:

  • Each platform and channel has more users (66%);
  • Complaining online has become more commonplace (54%);
  • Younger generations are more digitally minded (50%);
  • Content is published in more languages (29%);
  • 5G connectivity has increased access to digital networks globally (19%).

Siobhan Hanna concludes:

“As more and more people express themselves on different digital platforms in many languages, moderation cannot be done effectively by AI or humans alone. A robust content moderation strategy, built on different types of AI whose algorithms are trained on reliable datasets curated by a team of annotators, ensures that data is accurate, context is well understood and biases are responsibly mitigated. AI-powered content moderation tools will continue to improve, but human moderators will always be a necessary resource for keeping digital spaces safe. For this reason, it’s important for brands to support content moderators with a robust wellness program that enables them to do the best job possible while protecting both their mental and physical health.”
