Facebook announced today that it will start issuing warnings to users who share false information about COVID-19.
Here’s what the company said in a blog post today:
Ever since COVID-19 was declared a global public health emergency in January, we’ve been working to connect people to accurate information from health experts and keep harmful misinformation about COVID-19 from spreading on our apps.
We’ve now directed over 2 billion people to resources from the WHO and other health authorities through our COVID-19 Information Center and pop-ups on Facebook and Instagram with over 350 million people clicking through to learn more.
But connecting people to credible information is only half the challenge. Stopping the spread of misinformation and harmful content about COVID-19 on our apps is also critically important. That’s why we work with over 60 fact-checking organizations that review and rate content in more than 50 languages around the world. In the past month, we’ve continued to grow our program to add more partners and languages. Since the beginning of March, we’ve added eight new partners and expanded our coverage to more than a dozen new countries. For example, we added MyGoPen in Taiwan, the AFP and dpa in the Netherlands, Reuters in the UK, and others.
To further support the work of our fact-checking partners during this time, we recently announced the first round of recipients of our $1 million grant program in partnership with the International Fact-Checking Network. We’ve given grants to 13 fact-checking organizations around the world to support projects in Italy, Spain, Colombia, India, the Republic of Congo, and other nations. We will announce additional recipients in the coming weeks.
Once a piece of content is rated false by fact-checkers, we reduce its distribution and show warning labels with more context. Based on one fact-check, we’re able to kick off similarity detection methods that identify duplicates of debunked stories. For example, during the month of March, we displayed warnings on about 40 million posts related to COVID-19 on Facebook, based on around 4,000 articles by our independent fact-checking partners. When people saw those warning labels, 95% of the time they did not go on to view the original content. To date, we’ve also removed hundreds of thousands of pieces of misinformation that could lead to imminent physical harm. Examples include harmful claims like “drinking bleach cures the virus” and theories like “physical distancing is ineffective in preventing the disease from spreading.”
Today we’re sharing some additional steps we’re taking to combat COVID-19 related misinformation and make sure people have the accurate information they need to stay safe.
Informing People Who Interacted With Harmful COVID-19 Claims
We’re going to start showing messages in News Feed to people who have liked, reacted to, or commented on harmful misinformation about COVID-19 that we have since removed. These messages will connect people to COVID-19 myths debunked by the WHO, including ones we’ve removed from our platform for leading to imminent physical harm. We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources, in case they see or hear these claims again off of Facebook. People will start seeing these messages in the coming weeks.