
ADL Calls for Platforms to Take Action to Address Hate Online During Pandemic

May 08, 2020

At ADL, we are watching in real time how the spread of the coronavirus is opening the door to a new surge of online hate and bigotry targeting Jews, Asians and Asian-Americans, Muslims and immigrants, among others. We are tracking antisemitic, anti-Asian, and anti-Muslim content on Facebook and other social media platforms, and we believe the scale and speed at which these platforms operate could turn this tinderbox of hate into real-world violence against marginalized communities.

Times of turmoil and great anxiety have always offered fertile ground for hate and extremism targeting those considered the dangerous “other.” This has been the case for centuries. Jews were widely blamed for the Black Death in the 14th century, accused of filthy habits and of spreading disease with intentional malice -- accusations that led to the murder of Jews. In the current pandemic, Asian-Americans and Jews, among others, are similarly being blamed for creating and spreading the coronavirus, or for seeking to profit from it, and we have seen an uptick in physical assaults on Asian-Americans. The difference today is that large internet platforms spread this toxic hate virally. This is not merely an unfortunate but inevitable byproduct of the fact that billions of us are now globally connected; it is the unintended result of structures built into the platforms and business models of Facebook and other large social media companies. These companies depend upon -- and therefore do everything possible to promote, or at least not curb -- exponential growth and engagement.

ADL has engaged directly with private technology companies, including Facebook, about their content moderation practices during the pandemic. Many tech companies sent their human content moderation teams home, leaving their platforms to be moderated predominantly by automated systems -- and automated systems are often tuned toward leniency. The human moderators who remain on the job -- from home or elsewhere -- have been instructed to focus on reviewing content that is clearly linked to threats of imminent harm as defined by the companies, such as terrorism, child exploitation and self-harm. (In the coronavirus context, Facebook and other platforms are also removing misinformation that they believe could cause imminent harm, even though the content is not categorized as hate speech, because it peddles medically unproven diagnostic tests, cures or treatments or, as in a recent example, suggests that wearing masks could make people sick.) While such work is clearly vital, this decision leaves other forms of harm on the platform -- such as hate directed at vulnerable populations -- largely unchecked.
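To make that tradeoff concrete, here is a minimal, hypothetical sketch of what “tuned toward leniency” can mean in practice: an automated system acts only on near-certain violations, routes imminent-harm categories to a small human review queue, and leaves everything else up. The names, categories, and thresholds below are illustrative assumptions, not any platform’s actual system.

```python
# Hypothetical sketch: a "lenient" automated moderation router during
# reduced human review. Names and thresholds are illustrative only.
from dataclasses import dataclass

# Categories the remaining human reviewers are told to prioritize.
IMMINENT_HARM_CATEGORIES = {"terrorism", "child_exploitation", "self_harm"}

@dataclass
class Post:
    post_id: str
    category: str   # label assigned by an upstream classifier
    score: float    # classifier confidence that the post violates policy

def route(post: Post, auto_remove_threshold: float = 0.97) -> str:
    """Decide what happens to a flagged post.

    A high auto_remove_threshold means the automated system acts only on
    near-certain violations ("leniency"); borderline imminent-harm content
    goes to the small human review queue, and everything else stays up.
    """
    if post.score >= auto_remove_threshold:
        return "auto_remove"
    if post.category in IMMINENT_HARM_CATEGORIES:
        return "human_review"   # limited work-from-home reviewers
    return "leave_up"           # hate speech and similar harms go unchecked

# A borderline hate-speech post (score 0.85) is simply left up,
# while an equally borderline terrorism post is queued for a human.
print(route(Post("p1", "hate_speech", 0.85)))  # -> leave_up
print(route(Post("p2", "terrorism", 0.85)))    # -> human_review
```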

In its reporting interface, Facebook notes this reduced capacity for human review and its current prioritization of certain forms of harm:


Screenshot of Facebook’s reporting interface noting its reduced capacity for human review and its prioritization of certain forms of harm

While this transparency with users is admirable, notifications like these send clear signals to networks of hate about the platform’s diminished capacity to enforce its policies, effectively permitting campaigns of harassment and a surge of misinformation. We have also observed that this reduced capacity and increased reliance on automated systems has not proven effective at moderating content that requires context, including a significant portion of antisemitic content. For example, obvious examples of antisemitism were tweeted in response to Mayor Bill de Blasio’s tweet about the Jewish community on April 28th and, despite having been reported, remain active on Twitter as of this writing.


Example of antisemitic content on Twitter following Mayor Bill de Blasio’s tweet on April 28th


In mid-March, Facebook revised its Misinformation, Hate Speech, and Coordinated Harms policies to allow for the removal of content that could lead to real-world harm, such as false information about preventative and curative treatments for the novel coronavirus. However, conspiracy theories blaming groups for creating the virus based on their national origin -- including Israelis, Chinese, or even Chinese-Americans -- would not be considered hate speech. (This national-origin “carveout” reflects the company’s attempt to allow legitimate debate over the genesis of the virus.) Nor would such content be considered misinformation that could cause imminent harm. And at that point in time, Facebook also did not prohibit content accusing Jews or Asians -- groups defined by religion and by race or ethnicity -- of deliberately creating the virus.

In early April, due in part to ADL’s direct advocacy, the company revised its policy: content blaming Jews and Asians (and, by extension, Asian-Americans) for the virus is now treated as hate speech. The rationale: these are attacks on a group based on a protected characteristic -- religion in the case of Jews, and ethnicity or race in the case of Asian-Americans.

Yet even though the company has stated its new policy publicly, we continue to see many such posts on the platform. We believe this may be due to overly lenient tuning of content moderation algorithms for hate speech, which have not proven effective at moderating content that requires context. As it happens, that encompasses a significant amount of antisemitic content.


Example of antisemitic content on Facebook alleging the (((ENEMY))) is responsible for the virus, using thinly veiled “code” terms favored by white supremacists and antisemites. The triple parentheses are a common antisemitic symbol used to mark the names of individuals of Jewish background.


Social media companies such as Facebook and Twitter can and must do better. They certainly have the resources to raise their game, even during the pandemic, and their disappointing efforts in this regard to date can only be described as a serious failure.

There are a number of actions Facebook and other social media companies should take immediately to address this less-than-optimal enforcement on their platforms:

  1. Augment “work-from-home” content review teams so that they can focus on hate speech content in addition to imminent harm content. As we have seen time and time again, hate speech and misinformation have the potential to lead to real-world violence, even if the speech does not directly incite violence. Facebook made $18B in profit last year. Surely it can mobilize more quickly to increase its content moderation teams’ capabilities, even when those teams are working remotely.
  2. Follow YouTube’s example of making AI content moderation systems more stringent and increase the number of human moderators assigned to appeals. As Jonathan Swift wrote, “Falsehood flies, and the Truth comes limping after it.” To help attenuate the deliberate spread of lies and misinformation, Facebook and other social media companies should turn up the sensitivity of their automated flagging systems -- and focus their limited human resources on handling appeals. Since AI models are still less accurate than well-trained content moderators, this approach accepts the communication slowdown of an appeals process in exchange for reducing the risk that disinformation and hateful speech go unmoderated (see the sketch after this list).
  3. Increase transparency and publish a roadmap for bolstering their operational capacity to effectively enforce their policies. Platforms should be transparent about how many human content reviewers are working from home, what types of content rise to the level of human review, and why users cannot appeal decisions at this time. They should also publish the size of the review queue and the wait time for content review in each policy category, provide details on their plans to expand the work-from-home human content review teams, and say when they anticipate content review processes will return to a pre-COVID-19 baseline.
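
To illustrate the tradeoff in point 2 above, here is a minimal, hypothetical sketch of a more stringent flagging threshold paired with human-handled appeals. The threshold value and function names are assumptions for illustration, not any platform’s actual implementation.

```python
# Minimal sketch of point 2: a more stringent automated flagging threshold,
# with human moderators reserved for appeals. Illustrative assumptions only.

def route_stringent(score: float, auto_remove_threshold: float = 0.75) -> str:
    """With a lower removal threshold, more borderline content is taken down up front."""
    return "auto_remove" if score >= auto_remove_threshold else "leave_up"

def handle_appeal(post_id: str, human_finds_violation: bool) -> str:
    """Human moderators, focused on appeals, correct the AI's false positives."""
    return "stays_removed" if human_finds_violation else "reinstated"

# The same borderline post (score 0.85) that a lenient system would leave up
# is now removed automatically; the poster can appeal to a human reviewer.
print(route_stringent(0.85))                             # -> auto_remove
print(handle_appeal("p1", human_finds_violation=False))  # -> reinstated
```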

We are also seeing surges of extremist hate surrounding the polarized cultural and political debates now underway over whether and how to reopen the economy and over the origin of the virus. In a world characterized by so many divides and so much suffering -- both medical and economic -- as well as so much fear and anxiety, the enormous and unprecedented influence and power exercised by Big Tech in particular come with profound responsibilities that they are not meeting.