This article is part of After Corona, a series exploring how the pandemic has changed the world.
The COVID-19 pandemic changed everything — even for social media giants like Facebook and Twitter.
Over the last 18 months, as the global death toll surged past 4 million people, these tech companies, which once considered themselves neutral platforms for free speech, took an increasingly hands-on role in policing what users said about public health. They removed millions of posts that spread online falsehoods. They censored global leaders who championed COVID-19 misinformation. They promoted official health advice about vaccines to billions worldwide.
In short, pressed by the global public health emergency, social media platforms became arbiters of information. Now, as the world stumbles toward a new post-COVID reality, they’re quickly realizing they’ve bitten off far more than they can chew.
“Dealing with COVID really showed the world that they can act decisively when they need to,” said Philip Howard, director of Oxford University’s program on democracy and technology.
Having witnessed how tech companies have the expertise, and willingness, to monitor, track and remove potentially harmful content, policymakers in Brussels, Washington and elsewhere are heaping pressure on platforms to do even more. That includes everything from removing reams of COVID-19 social media posts to opening up companies’ content algorithms to greater public scrutiny.
With public anger over social media content growing, governments are also demanding these firms apply similar restrictions to other hot-button topics where divisive and often false posts can also cause wide-ranging damage, such as elections, far-right extremism and climate change. “For me, the next big crisis question is over climate change where the scientific consensus is just as strong as the public health consensus around COVID,” Howard added.
This has put some of Silicon Valley’s biggest names in a bind — one, in many ways, of their own making — over an ever-higher bar for the type of dubious content they have to police across the web. By taking an aggressive, yet incomplete, stance on COVID-19 misinformation, social media giants are discovering they have opened a Pandora’s box when it comes to how posts are tracked online, and one that will be almost impossible to close once the current pandemic eventually ebbs into memory.
Chine Labbe had a front-row seat to the so-called COVID-19 “infodemic.”
As the European managing editor for NewsGuard, an analytics firm that tracks digital falsehoods, the former reporter and her team work with the World Health Organization (WHO) to track which coronavirus hoaxes and conspiracy theories are trending. A report last month found that typing "Covid" into the search bar of TikTok, the Chinese-owned video-sharing social media service, brought up autocomplete suggestions like "Covid vaccine side effects" and "Covid vaccine magnet." Another showed that well-entrenched anti-vaccine influencers were broadcasting to tens of thousands of followers on Facebook and Instagram.
“I feel like I’m saying the same thing over and over again,” Labbe said. “Misinformation is still alive on a lot of these platforms.”
Labbe’s work shows how, a year and a half into the global crisis, social media giants are still struggling to sort through what’s true and what’s false — especially when purveyors of junk science wrap their wares in technical-sounding jargon. The fact that leading scientists often disagree doesn’t make it easier.
The flood of falsehoods has piled on the pressure for platforms to act, as there’s growing, albeit fledgling, evidence linking viral misinformation to harmful health outcomes. A recent peer-reviewed paper, for example, linked false rumors that drinking concentrated alcohol could kill the coronavirus to around 800 deaths from alcohol poisoning.
“The spread of health misinformation matters because it can be dangerous for people’s health, as we’ve seen in this pandemic,” said Aleksandra Kuzmanovic, a social media manager at the WHO. “The wrong advice on how to prevent or treat the virus infection can have harmful effects on people’s health and even cause death.”
Social media companies say they have deleted scores of COVID-19 misinformation posts; removed countless extremist accounts and movements like the QAnon conspiracy theory; and worked with both independent fact-checkers and public health authorities to pepper billions of people with up-to-date information about the global pandemic.
And yet, telling fact from harmful fiction is often easier said than done. Fast-evolving COVID-19 science, according to public health experts, makes labeling misleading coronavirus posts more complicated than filtering out terrorist content — even as global leaders like U.S. President Joe Biden call on tech giants to do more to combat the misinformation threat.
Last month, for instance, an analysis claiming that coronavirus vaccines caused two deaths for every three lives they saved was published in a legitimate, peer-reviewed journal. Days later, it was retracted amid an outcry, though not before the paper had been shared widely online.
Facebook also recently reversed its policy of barring posts alleging the pandemic was caused by a leak from a Chinese lab — an idea once dismissed as a conspiracy theory — after mainstream experts began to discuss that possibility.
The rise of extremist content
When it comes to other types of dangerous speech, like removing extremist material online, the effort is still very much a work in progress.
Facebook, Google and Twitter have aggressively deleted jihadi propaganda — and signed up to industrywide efforts to coordinate their responses. But the platforms have been much slower at combating far-right content that often falls in the gray zone between legitimate political discourse and outright hate speech.
Executives say countering disinformation during the ongoing pandemic, which can have an immediate impact on people’s health, is different from policing content whose effects on a country’s democratic process play out over the long term. And yet, companies have come under mounting pressure to act in the wake of the January 6 riots on Capitol Hill in Washington, with policymakers repeatedly using the removal of COVID-19 falsehoods as an example of what can be done if companies put their minds to it. The push against far-right extremism has raised awkward questions about these private companies’ role in policing political speech, including the removal of the social media accounts of former U.S. President Donald Trump.
“There are probably 10 or so clear signals of content being COVID misinformation,” said Adam Hadley, director of Tech Against Terrorism, a non-profit organization that works with social media companies to remove harmful material. “But racist content is so broad. We need to look at that in more detail, about what we mean by racist content and what we mean by terrorist content.”
Still, it does not take long to find white supremacist and far-right groups on social media — evidence the platforms are still failing to clamp down on such material.
The Global Project Against Hate and Extremism (GPAHE), a left-leaning nonprofit organization that tracks such material, found more than 50 YouTube channels — with a combined audience of over 100,000 viewers — tied to the so-called Generation Identity movement, whose anti-immigrant and white supremacist stance has seen it banned in France and Austria.
Facebook removed the transnational group from its network last year. But on Google’s video-streaming service, the extremists were able to promote disinformation targeting minority groups, as well as profit from advertising displayed alongside their online videos, based on POLITICO’s review of GPAHE’s analysis. YouTube’s search algorithm also suggested other far-right channels with ties to the Generation Identity movement. Google declined to comment.
“There is so much of this content. It is a cesspool,” said Wendy Via, GPAHE’s president. “When they began addressing the Islamic extremism content across all platforms, there was a global consensus that that had to be addressed. But when it comes to far-right extremism, it is very much a struggle because it’s bound up in wider societal constructs.”
Policing online elections
A recent state election in Germany illustrates how difficult policing political content can be.
A photo posted on Twitter claimed to show ballots for Alternative for Germany (AfD), the German far-right political party, being spoiled to hinder the group’s electoral success. Uploaded to the platform by a formerly prominent AfD politician, the image was shared hundreds of times.
There was just one problem: It was fake. It was a repost of a photo taken from a polling station during November’s U.S. presidential election, where the original social media user had alleged (without evidence) that workers had destroyed American ballots.
Social media companies can expect to have to deal with many similar incidents as Germany heads toward its national election in September. False claims about election fraud, often promoted by leading far-right politicians, are starting to pick up pace despite these firms’ efforts to clamp down on such online falsehoods. Russian-based media groups like RT also have fostered a favorable image of AfD via their social media accounts, based on analysis from the Institute for Strategic Dialogue, a London-based think tank that tracks online extremism across the European Union and United States.
“The far right is trying to undermine trust in the elections,” said Julia Smirnova, one of the researchers who conducted the review of social media activity around the Saxony-Anhalt election. “Russian state media has given more positive coverage to the AfD and given more space to its politicians than to other political parties.”
So far, the tech giants have been slow to respond, although company executives stress they are working with German authorities to weed out the worst offenders, while providing an open online space to discuss the upcoming election. They say they have taken lessons from the recent U.S. presidential election, including efforts to demote politically divisive content and fact check false claims made by leading politicians.
But there’s much more they could be doing. Waves of anti-immigrant and increasingly misogynist content are still slipping through the companies’ content nets, according to Felix Kartte, a former European Union official who’s coordinating several German nonprofit organizations’ misinformation work around September’s vote via Reset, a tech advocacy group.
Local authorities, he said, had been slow to act because of concerns over hampering political debate, while platforms — despite growing calls during the COVID-19 pandemic — have been slow to open up their networks to outside scrutiny.
“Germany has a very important stabilizing function for Western democracy,” Kartte added. “If election fraud or politically-motivated misogynist content is able to get through here, what does that say for how the platforms will handle this content in other countries?”
It’s a question social media companies will have to answer. By showing they’re willing and (mostly) able to police pandemic content, they’ve opened themselves up to demands to do the same everywhere else.