A new study suggests that bots are the main driver of the spread of COVID-19 misinformation online.
The researchers examined the sharing of various pandemic-related links in more than 300,000 posts, mainly about mask use, made in Facebook groups.
They used the timing of link shares in these groups to measure bot activity: the same link posted repeatedly, or shared across multiple groups within just a few seconds, was treated as a sign of bot activity.
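The timing heuristic described above can be sketched in code. This is a minimal illustration, not the study's actual method: the 10-second window, the data shape, and the link names are all illustrative assumptions.

```python
from collections import defaultdict

def flag_coordinated_links(shares, window=10, min_groups=2):
    """Flag links shared in several distinct groups within a short time span.

    `shares` is a list of (link, group, timestamp) tuples, timestamps in
    seconds. A link is flagged when at least `min_groups` distinct groups
    share it inside one `window`-second span -- a crude coordination signal.
    """
    by_link = defaultdict(list)
    for link, group, ts in shares:
        by_link[link].append((ts, group))

    flagged = set()
    for link, events in by_link.items():
        events.sort()  # order shares of this link by time
        for ts, _ in events:
            # groups that shared this link within `window` seconds of `ts`
            groups = {g for t, g in events if ts <= t <= ts + window}
            if len(groups) >= min_groups:
                flagged.add(link)
                break
    return flagged

# Hypothetical share log: one link blasted across three groups in 5 seconds,
# one link shared organically an hour apart.
shares = [
    ("danish-mask-study.example", "GroupA", 0),
    ("danish-mask-study.example", "GroupB", 3),
    ("danish-mask-study.example", "GroupC", 5),
    ("local-news.example", "GroupA", 0),
    ("local-news.example", "GroupB", 3600),
]
print(flag_coordinated_links(shares))  # → {'danish-mask-study.example'}
```

Only the rapidly cross-posted link is flagged; the link shared an hour apart is not, which mirrors the intuition that humans rarely post identical content to many groups within seconds.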
The team, led by the University of California, San Diego, in collaboration with researchers at George Washington University and Johns Hopkins University, has asked Facebook and other social media giants to tighten restrictions, but not all social media experts agree the platforms should “censor misinformation.”
This is because many early speculations about COVID-19, once classified as "misinformation," are now being reconsidered, such as questions about the origin of the virus and the theory that it may have come from a laboratory in Wuhan.
The researchers found that much of the misinformation about Covid-19, masks and vaccines is spread through bot accounts on social media.
"The coronavirus pandemic has caused what the World Health Organization has called an 'infodemic' of misinformation," said lead author Dr John Ayers, a public health scientist at the University of California, San Diego.
“But, bots … have been ignored as a source of COVID-19 misinformation.”
One of the links the researchers studied pointed to a study from Denmark that found inconclusive evidence on whether wearing a mask reduces the transmission of COVID-19.
That study was misinterpreted and used as a source of misinformation by many on social media, especially on Facebook.
The researchers found that the link was often shared by multiple accounts across multiple groups within seconds of each other, a sign that the accounts sharing it were bots running on the same network.
Nearly 40 percent of the times the post was shared on Facebook, it was done in groups that researchers reported had severe bot activity.
One-fifth of these posts misrepresented the study's results, claiming the researchers had found that masks are harmful to their wearers, a conclusion that appears nowhere in the study.
Posts linking to the study in Facebook groups with overt bot activity were 2.3 times more likely to carry the false claim that masks harm their wearers.
"Bots seem to be undermining critical public health institutions," said study co-author Brian Chu, a medical student.
"In our case, bots mischaracterized a publication from a prestigious medical journal in order to spread misinformation."
"This implies that no content is safe from the dangers of weaponized disinformation."
Researchers are asking Facebook and other social media giants to tighten restrictions on the spread of misinformation.
They believe that companies like Facebook could readily detect and remove false information produced by bots, since the researchers themselves managed to identify much of the disinformation and many of the bot-heavy groups.
The researchers also fear that bots may manipulate the recommendation algorithms these companies use: mass sharing by bots can make a story appear more popular than it really is, prompting the algorithm to promote it to more users.
"Our work shows that social media platforms have the ability to detect, and therefore eliminate, these coordinated bot campaigns," said Dr David Broniatowski, associate director of the George Washington University Institute for Data, Democracy and Politics and co-author of the study.
"Efforts to clear fraudulent bots from social media platforms must become a priority for legislators, regulators and social media companies, which have instead focused on policing individual pieces of misinformation from ordinary users."
However, not all researchers agree.
Kamran Abbasi, executive editor of The BMJ, one of the oldest medical journals in England, wrote in an article that social media platforms censoring these stories could be dangerous.
"2020 seems to be Orwell's 1984, where the boundaries of public discourse are set by corporations worth billions of dollars (in place of totalitarian regimes) and by secret algorithms encoded by unidentified employees," Abbasi wrote of Facebook potentially censoring or labeling stories about the Danish study as "misinformation."
“Where is Facebook’s responsibility for the lies and harmful misinformation it has spread on controversial topics such as mental health and suicide, minorities and vaccines?”
"Facebook in particular claims to allow freedom of speech on its platform, but acts selectively, seemingly without logic, consistency or transparency.
"This is how the policing of facts and opinions serves covert agendas and manipulates the public."
Disinformation about Covid-19 has spread around the world as fast as the virus.
Last week, the prominent feminist writer Naomi Wolf was suspended from Twitter after a series of posts spreading misinformation about the COVID-19 vaccines.
Recent claims she made on Twitter include that the vaccines are software platforms capable of receiving uploads, and that wastewater from vaccinated individuals could contaminate drinking water supplies; neither claim has any scientific support.
Facebook, Instagram and other platforms have also added features to combat vaccine misinformation, automatically attaching links to authoritative vaccine information to posts on the topic.
Facebook has even said it will outright remove some posts that make baseless claims about the vaccines.
The study will be available in JAMA Internal Medicine on Monday.