AI-powered tools for detecting “hate speech” in gaming chats raise concerns about over-policing, censorship
09/07/2023 // Richard Brown

Gaming giant Activision has started listening in on gamer chatter using artificial intelligence (AI) that scans conversations for "toxicity," raising concerns about over-policing and censorship.

The AI-powered ToxMod program developed by tech firm Modulate is specifically designed to monitor and manage player interactions in real time. It can detect and restrict "hate speech, discriminatory language, sexism, harassment and more" in both in-game text and voice chats.

It has been eavesdropping on players of "Call of Duty: Warzone" and "Call of Duty: Modern Warfare II" – both Activision titles – in North America. According to Frontline News, "Call of Duty: Modern Warfare III" players worldwide, excluding Asia, will be monitored by ToxMod starting in November.

ToxMod not only listens for potentially offensive language but also uses AI to judge whether offense has actually been taken. According to PC Gamer, the program can "listen to conversational cues to determine how others in the conversation are reacting to the use of [certain] terms."

Over one million gaming accounts have so far had their chats restricted by Call of Duty's "anti-toxicity team," the company boasted in a blog post. Offenders first receive a warning and then penalties if they re-offend.
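Activision has not published its enforcement logic, but the escalating flow it describes, a warning first and penalties for repeat offenses, might look something like the following minimal Python sketch. The penalty tiers, thresholds and function names here are illustrative assumptions, not the studio's actual system.

```python
# Hypothetical sketch of escalating enforcement: a first confirmed
# offense draws a warning, repeat offenses draw growing restrictions.
# Tier names and ordering are assumptions, not Activision's policy.
from collections import defaultdict

PENALTY_TIERS = ["warning", "temporary_chat_restriction", "permanent_chat_ban"]

offense_counts = defaultdict(int)  # account_id -> confirmed offenses

def enforce(account_id: str) -> str:
    """Record a confirmed offense and return the penalty to apply,
    clamping to the harshest tier once the list is exhausted."""
    offense_counts[account_id] += 1
    tier = min(offense_counts[account_id] - 1, len(PENALTY_TIERS) - 1)
    return PENALTY_TIERS[tier]

print(enforce("player_123"))  # first offense  -> "warning"
print(enforce("player_123"))  # second offense -> "temporary_chat_restriction"
```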

One of the central concerns raised by the deployment of ToxMod is the potential for over-policing and censorship. While it aims to curb offensive language, it also relies on AI algorithms to gauge whether participants in a conversation have taken offense.

This approach raises significant questions about the AI's ability to accurately interpret context and cultural nuances. For example, certain words or phrases may be used humorously or reclaimed within specific communities but could be misconstrued as offensive by the AI, leading to content removal.

Likewise, the same word may register as harmless banter when used within one racial group but be flagged as a slur when used by someone outside it.

"While the n-word is typically considered a vile slur, many players who identify as black or brown have reclaimed it and use it positively within their communities," said Modulate. "If someone says the n-word and clearly offends others in the chat, that will be rated much more severely than what appears to be reclaimed usage that is incorporated naturally into a conversation."

Furthermore, ToxMod has been programmed to identify what Modulate classifies as "White supremacists" and "alt-right extremists," categorizing their speech as a form of "violent radicalization."

False positives, unwarranted censorship feared

The adoption of AI-powered content moderation tools in the gaming industry has sparked a broader debate about the role of artificial intelligence in online communication and its implications for freedom of expression. This trend can be understood within the context of ongoing efforts to address toxicity, harassment and hate speech in online communities.

However, the criteria and methodology behind this moderation process are not fully transparent, potentially resulting in false positives and unwarranted censorship of individuals or groups who do not fit the defined categories. (Related: AI can influence people's decisions in life-or-death situations.)

Beyond the context problem, AI algorithms may carry biases that produce unfair outcomes. For example, an AI-powered tool may flag certain types of behavior as toxic more frequently than others, leading to uneven moderation.

Then there are privacy concerns: to identify toxic behavior, AI-powered tools must collect and analyze personal data such as chat logs and voice recordings, raising questions about data protection.

Transparency is another sticking point: players are given little visibility into how the algorithms work or how moderation decisions are reached.

Visit FutureTech.news for more stories on how artificial intelligence is shaping the world.

Watch this video about the future of AI.

This video is from The Talking Hedge channel on Brighteon.com.

More related stories:

AI is about to change the world for the WORSE: Here are 3 reasons why.

AI is currently the greatest threat to humanity, warns investigative reporter Millie Weaver.

AI surveillance tech can find out who your friends are.

Sources include:

Frontline.news

iMerit.net

Platforms.AEI.org

Brighteon.com


