Facebook, Google and others have begun using algorithms, new rules and factual warnings to knock down harmful coronavirus conspiracy theories, questionable ads and unproven remedies that regularly crop up on their services — and which could jeopardize lives.
Pattison said he and his team now directly flag misleading coronavirus information and, at times, lobby for it to be removed from Facebook, Google and Google's YouTube service. Twitter deleted a post by Trump's personal attorney Rudy Giuliani that described hydroxychloroquine, a cousin to chloroquine, as "100% effective" against coronavirus. The company also removed a tweet from Fox News personality Laura Ingraham touting what she called the drug's "promising results."
Facebook has long resisted calls to fact-check or remove false claims made directly by politicians, arguing that the public should be able to see what their elected officials say. In this pandemic, however, the platforms have no choice but to rethink their rules around misinformation, said Dipayan Ghosh, co-director of the Platform Accountability Project at Harvard Kennedy School.
The pandemic has also created new challenges for content moderation. Early on, health considerations forced the contractors that employ human moderators to send most of them home, where privacy restrictions prevented them from doing their jobs. Facebook eventually shifted some of that work to in-house employees and leaned more heavily on artificial-intelligence programs. More recently, it has made new arrangements allowing contract moderators to work remotely.