The social media giant's new ruling is the latest in a wave of stricter policies that tech companies have been rolling out to address the flood of virus-related misinformation that has been popping up on their platforms. Google, which owns YouTube, and Facebook have already put similar systems in place.
The announcement shows that Twitter is taking more seriously its role in the spread of misinformation. However, it raises questions about whether the company can enforce the rule effectively, and whether it is qualified to determine what counts as misinformation in the first place.
Even as it announces the new rules, Twitter's leadership is already tempering expectations. Yoel Roth, the company's head of site integrity, admitted that the company would "not be able to take enforcement action on every tweet with incomplete or disputed information about COVID-19."
On Monday, Roth acknowledged that the platform has historically applied a “lighter touch” when enforcing its policies on misleading tweets. However, he said that the company is working to improve the technology around the new labels.
Twitter initially stated it would add warning labels to doctored or manipulated photos and videos after footage of House Speaker Nancy Pelosi was slowed down to make it appear as if she were slurring her words. Since then, however, the company has used the label only twice, supposedly because of technical glitches.
Additionally, Twitter has not added any warning labels to political tweets that violate its policies but are deemed in the "public interest," under a company policy announced in June 2019.
Under its latest COVID-19 rules, Twitter itself will decide which tweets are misleading, and it will remove posts only if they are deemed harmful.
Despite its supposedly more lenient enforcement of policies, Twitter has also displayed moments of heavy-handedness in the past. The company has gone as far as banning accounts for actions taken outside the platform.
Social media sites, including Twitter, Facebook and Google's YouTube, are under pressure to combat misinformation being spread on their platforms about the ongoing coronavirus pandemic. These moves, however, have raised questions about whether these platforms are qualified to vet posts and determine who is telling the truth in the first place.
Rival social media site Facebook currently relies on third-party partners to do its fact-checking. Meanwhile, Google stated that YouTube would start showing information panels with third-party, fact-checked articles for U.S. video searches.
For its new rules, however, Twitter seems to be returning to the "lighter touch" it has applied in the past. Nick Pickles, global senior strategist for public policy at Twitter, confirmed that the company will not actually fact-check tweets or call them false on its platform. Rather, the warning labels will send users to public health websites, curated tweets or news articles.
“One of the differences in our approach here is that we’re not waiting for a third-party to have made a cast-iron decision one way or another,” said Pickles.
However, this move may raise even more questions. Without third-party fact-checking, it falls entirely to Twitter to identify whether a tweet needs a label.
For its part, the company stated that it would not take action on tweets with information that was unconfirmed at the time of sharing. However, this offers little comfort given its history of spotty enforcement.