A new study by the University of Liverpool has shown that the prevalence of misinformation on Twitter fell in the 48-hour period after Boris Johnson’s announcement of a national lockdown in March 2020.
Published in the journal Big Data & Society, the study finds that misinformation accounted for less than 1% of COVID-19-related tweets, with less misinformation tweeted in the 48 hours after the announcement than in the 48 hours before. The study also found a slight increase in COVID-19-related tweets by bots in the period just after the announcement, which could have been harmful, but these tweets were less likely to be shared by others.
Researchers used data science techniques to analyse all UK Twitter posts in the 48 hours before and after Boris Johnson’s national lockdown announcement, which began at 8pm on 23 March 2020.
They identified just over 2.5 million COVID-19-related tweets in the UK during the period and found that less than 1% of these tweets (20,172 tweets) contained misinformation, with fewer misinformation tweets posted in the period after the announcement.
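As a rough illustration of this kind of before-and-after comparison (the study’s actual pipeline is not described in this article), the hypothetical Python sketch below assumes a dataset of COVID-19-related tweets with a timestamp column and a misinformation flag, and compares the misinformation share in the two 48-hour windows around the announcement.

```python
import pandas as pd

# Hypothetical input: one row per COVID-19-related tweet, with a timestamp
# ("created_at") and a misinformation flag ("is_misinformation") produced by
# an earlier classification step. File and column names are assumptions.
tweets = pd.read_csv("uk_covid_tweets.csv", parse_dates=["created_at"])

announcement = pd.Timestamp("2020-03-23 20:00")  # 8pm, 23 March 2020
window = pd.Timedelta(hours=48)

pre = tweets[(tweets["created_at"] >= announcement - window) &
             (tweets["created_at"] < announcement)]
post = tweets[(tweets["created_at"] >= announcement) &
              (tweets["created_at"] < announcement + window)]

# Compare tweet volumes and the share flagged as misinformation in each window.
for label, frame in [("48h before", pre), ("48h after", post)]:
    share = 100 * frame["is_misinformation"].mean()
    print(f"{label}: {len(frame):,} tweets, {share:.2f}% flagged as misinformation")
```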
The researchers estimate that 858,409 tweets over the period were potentially made by bots, with slightly more COVID-19-related tweets from bots occurring in the initial 24 hours post-announcement.
While overall misinformation fell after the national lockdown announcement, when the team looked at different ‘types’ of misinformation they found an increase in misinformation about ‘cures and treatments’ in the 48 hours following the announcement. This is likely in response to the national lockdown, with people seeking solutions or alternatives.
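The category-level finding could be checked with a simple breakdown of misinformation tweets by type in each window. The sketch below is again only a hypothetical illustration: the column names ("created_at", "misinfo_type") and the input file are assumptions, not the paper’s method.

```python
import pandas as pd

# Hypothetical input: one row per tweet already flagged as misinformation,
# with a timestamp and an assigned misinformation type (e.g. "cures and treatments").
tweets = pd.read_csv("uk_covid_misinformation_tweets.csv", parse_dates=["created_at"])

announcement = pd.Timestamp("2020-03-23 20:00")
tweets["window"] = (tweets["created_at"] >= announcement).map(
    {False: "48h before", True: "48h after"}
)

# Count misinformation tweets per type in each 48-hour window, e.g. to see
# whether 'cures and treatments' rose after the announcement.
counts = tweets.pivot_table(index="misinfo_type", columns="window",
                            values="created_at", aggfunc="count", fill_value=0)
print(counts)
```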
Whilst misinformation during public health crises is not new, the COVID-19 pandemic is the first global pandemic to occur at the height of social media usage and ‘fake news’. Vast volumes of information and data relating to COVID-19 are available on the internet, most of which are unregulated and potentially harmful when misleading. In addition, social media platforms such as Twitter facilitate the fast sharing of COVID-19 information and misinformation across large populations.
Dr Mark Green, from the University’s Department of Geography & Planning and lead author of the paper, said: “Misinformation throughout the COVID-19 pandemic has disrupted and harmed public health communication.
“Our study shows that clear and consistent messaging by governments can be helpful in containing the spread of misinformation.
“Our findings do not mean the potential impact of misinformation circulating on social media platforms should be ignored. The increase in sharing of misinformation about ‘cures and treatments’ is worrying, as such misinformation can be very dangerous if it encourages behaviours that may harm people.”
While the project examined data from Twitter, the researchers think it is important to co-ordinate responses across all social media platforms. Apps such as WhatsApp and Parler have played important roles in sharing misinformation, and it may be that these platforms facilitate its spread more than Twitter or help it reach a different demographic of users.