Scientists have developed a new machine learning tool that can identify Covid-19-related conspiracy theories on social media and predict how they evolve over time, an advance that may lead to better ways for public health officials to fight misinformation online.

The study, published in the Journal of Medical Internet Research, analysed anonymised Twitter data to characterise four Covid-19 conspiracy theory themes, such as the false claim that the Bill and Melinda Gates Foundation engineered the pandemic or has malicious intent related to it.

Using the AI tool's analysis of more than 1.8 million tweets containing Covid-19 keywords, the scientists from the Los Alamos National Laboratory in the US categorised the posts as misinformation or not, and provided context for each of these conspiracy theories through the first five months of the pandemic.

"From this body of data, we identified subsets that matched the four conspiracy theories using pattern filtering, and hand labeled several hundred tweets in each conspiracy theory category to construct training sets," explained Dax Gerts, a computer scientist and co-author of the study from the Los Alamos National Laboratory.

The four major themes examined in the study were that 5G cell towers spread the virus; that the Bill and Melinda Gates Foundation engineered Covid-19 or has "malicious intent" related to it; that the novel coronavirus was bioengineered or was developed in a laboratory; and that vaccines for Covid-19, which were still in development during the study period, would be dangerous.
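The pattern-filtering step Gerts describes can be sketched in a few lines. The regex patterns and theme names below are purely illustrative assumptions — the study's actual filter expressions are not given in this excerpt:

```python
import re

# Hypothetical keyword patterns for two of the four conspiracy-theory themes;
# the study's real regular expressions are not published in this excerpt.
THEME_PATTERNS = {
    "5g": re.compile(r"\b5\s?g\b.*\b(tower|virus|covid|corona)", re.IGNORECASE),
    "gates": re.compile(r"\bgates\b.*\b(vaccine|virus|covid|plandemic)", re.IGNORECASE),
}

def match_themes(tweet_text):
    """Return the theme labels whose pattern matches the tweet text."""
    return [name for name, pat in THEME_PATTERNS.items() if pat.search(tweet_text)]

# Toy examples standing in for the 1.8 million keyword-matched tweets.
tweets = [
    "New 5G towers are spreading the virus, wake up!",
    "Bill Gates planned this vaccine scheme, covid is a hoax",
    "Stay home and wash your hands.",
]
labels = [match_themes(t) for t in tweets]
```

In the study's workflow, the tweets surfaced by such filters were then hand-labeled to build training sets for supervised classifiers.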
Background: Misinformation spread through social media is a growing problem, and the emergence of COVID-19 has caused an explosion in new activity and renewed focus on the resulting threat to public health. Given this increased visibility, in-depth analysis of COVID-19 misinformation spread is critical to understanding the evolution of ideas with potential negative public health impact.

Methods: Using a curated data set of COVID-19 tweets (N ~120 million tweets) spanning late January to early May 2020, we applied methods including regular expression filtering, supervised machine learning, sentiment analysis, geospatial analysis, and dynamic topic modeling to trace the spread of misinformation and to characterize novel features of COVID-19 conspiracy theories.

Results: Random forest models for four major misinformation topics provided mixed results: narrowly defined conspiracy theories achieved F1 scores of 0.804 and 0.857, while broader theories performed measurably worse, with scores of 0.654 and 0.347. Despite this, analysis using model-labeled data was beneficial for increasing the proportion of data matching misinformation indicators. We were able to identify distinct increases in negative sentiment, theory-specific trends in geospatial spread, and the evolution of conspiracy theory topics and subtopics over time.

Conclusions: COVID-19-related conspiracy theories show that history frequently repeats itself, with the same conspiracy theories being recycled for new situations. We use a combination of supervised learning, unsupervised learning, and natural language processing techniques to trace the evolution of theories over the first four months of the COVID-19 outbreak, to examine how these theories intertwine, and to hypothesize on more effective public health messaging to combat misinformation in online spaces.
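The F1 scores reported in the Results combine precision and recall into a single number. A minimal sketch of the computation, using illustrative confusion counts (not the study's actual data):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from confusion counts:
    tp = true positives, fp = false positives, fn = false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one topic classifier, chosen so that
# precision = recall = 82/102, giving an F1 near the paper's 0.804.
score = f1_score(tp=82, fp=20, fn=20)
```

Because F1 penalizes imbalance between precision and recall, the low 0.347 score for one broad theory indicates the classifier missed many true instances, flagged many false ones, or both.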
Online conspiracy theories and misinformation relating to Covid-19 have led to at least 800 coronavirus deaths, new research has revealed. The so-called "infodemic" also saw around 5,800 people admitted to hospital after following false information on social media in the first three months of this year. A study published in the American Journal of Tropical Medicine and Hygiene detailed examples of misleading rumours, conspiracy theories and stigma surrounding the pandemic. Rumours include claims that drinking cleaning products, hand sanitiser or cow urine can cure coronavirus. False conspiracy theories range from fears that Covid-19 is a bio-weapon funded by Bill Gates, to accusations that Covid-19 has been engineered to damage US President Donald Trump's chance of re-election.
Is President Trump the nation's chief disinformation officer? Controversial posts concerning COVID-19 on Monday in which the president tells the public "Don't let it dominate you" and "Don't be afraid of it" and claims he may have immunity to the deadly virus have heightened public criticism of Trump for spreading dangerous falsehoods. "There is no doubt that Donald Trump is the largest spreader of specific and important types of misinformation today," said Graham Brookie, director of the Atlantic Council's Digital Forensic Research Lab. In the critical last weeks of the election, social media companies are facing a tsunami of conspiracy theories, hoaxes and fake claims on everything from COVID-19 to voting. And whether during a presidential debate, in press briefings or in posts on Facebook and Twitter, much of that misinformation is being generated and amplified by Trump, two recent studies show.
Reopening America has been a hot topic on Twitter, as millions are calling for the US government to end the lockdown - but a new study suggests the trend is being fueled by bots. Using a 'bot-hunter' tool, researchers discovered that twice as many bots as human users are starting conversations about the coronavirus pandemic and stay-at-home orders. After analyzing more than 200 million tweets, experts determined that 82 percent of the top 50 influential retweeters are bots, as are 62 percent of the top 1,000. The team also found that 66 percent of the activity is orchestrated by human hands. The findings were made by a team at Carnegie Mellon University, which scanned Twitter for coronavirus-related tweets starting in January.