Updated: Jun 7
Researchers at the University of Exeter and the MIT Sloan School of Management in Massachusetts conducted an experiment on Twitter using specially created accounts. The corrections had negative consequences: after being corrected, users went on to retweet news that was less accurate and more toxic than the content being corrected.
Misinformation has been an ongoing problem for social media giants including Twitter and Facebook, especially over the past year with regard to Covid-19 and vaccines. Twitter revealed in March that it had removed more than 8,400 tweets and alerted 11.5 million accounts worldwide over Covid-19 misinformation.
But according to the lead researcher of the new study, Dr. Mohsen Mosleh of the University of Exeter Business School, the results are not encouraging, suggesting that one of the tools used to fight disinformation is not actually working.
The researchers suggest that people should think twice before correcting one another online. After being corrected, users retweeted stories that were significantly lower in quality and higher in partisanship, and their tweets contained more toxic language.
The researchers identified 2,000 Twitter users, spanning a range of political convictions, who had each tweeted one of 11 frequently repeated false news articles, all of which had been debunked by Snopes, a website that describes itself as the Internet's definitive fact-checking resource.
The research team then created a series of Twitter bot accounts, all of which had existed for at least three months, had gained at least 1,000 followers, and appeared to other Twitter users to be genuine human accounts.
When a user was found to have tweeted any of the 11 false claims, one of the bots would send a reply along the lines of: "I'm not sure about this article, it might not be true. I found a link on Snopes that says this headline is false," linking in the reply to the correct information.
But the result was not what the researchers expected. Instead of curbing the misinformation, the corrections were followed by an increase in replies to and retweets of the false content, as if being corrected provoked a kind of defiance in favor of the false news.