Perils of networking: Addressing social media’s greatest threat

Fear is weaponised even more than hate by leaders who seek to spark violence. Understanding the distinction between fear-inducing and hateful speech is crucial as we collectively grapple with how to govern global internet platforms

Update: 2023-05-08 05:33 GMT

This year, Facebook and Twitter allowed a video of a talk to be distributed on their platforms in which Michael J. Knowles, a right-wing pundit, called for “transgenderism” to be “eradicated.” The Conservative Political Action Conference, which hosted the talk, said in its social media posts promoting the video that it was “all about the left’s attempt to erase biological women from modern society.” None of this was censored by the tech platforms, because neither Knowles nor CPAC violated the platforms’ hate speech rules, which prohibit direct attacks against people based on who they are. But by allowing such speech to be disseminated on their platforms, the social media companies were doing something that should perhaps concern us even more: They were stoking fear of a marginalized group.

Nearly all of the tech platforms developed extensive and detailed rules banning hate speech after finding that the first thing that happens on a new social network is that users start tossing slurs at one another. The European Union even monitors how quickly tech platforms remove hate speech. But fear is weaponised even more than hate by leaders who seek to spark violence. Hate is often part of the equation, of course, but fear is almost always the key ingredient when people feel they must lash out to defend themselves.

Understanding the distinction between fear-inducing and hateful speech is crucial as we collectively grapple with how to govern global internet platforms. Most tech platforms do not shut down false fear-inciting claims such as “Antifa is coming to invade your town” and “Your political enemies are paedophiles coming for your children.” But by allowing lies like these to spread, the platforms are allowing the most perilous types of speech to permeate our society.

Susan Benesch, the executive director of the Dangerous Speech Project, said that genocidal leaders often use fear of a looming threat to prod groups into pre-emptive violence. Those who commit the violence do not need to hate the people they are attacking. They just need to be afraid of the consequences of not attacking.

For instance, before the Rwandan genocide in 1994, Hutu politicians told the Hutus that they were about to be exterminated by Tutsis. During the Holocaust, Nazi propagandists declared that Jews were planning to annihilate the German people. Before the Bosnian genocide, Serbs were warned to protect themselves from a fundamentalist Muslim threat that was planning a genocide against them.

“I was stunned at how similar this rhetoric is from case to case,” Ms. Benesch told me in an interview for The Markup. “It’s as if there’s some horrible school that they all attend.” The key feature of dangerous speech, she argued, is that it persuades “people to perceive other members of a group as a terrible threat. That makes violence seem acceptable, necessary or even virtuous.”

Fear speech is much less studied than hate speech. In 2021, a team of researchers at the Indian Institute of Technology and M.I.T. published the first large-scale quantitative study of fear speech.

In an analysis of two million WhatsApp messages in public Indian chat groups, they found that fear speech was remarkably difficult for automated systems to detect because it does not always contain the derogatory words that can characterize hate speech. “We observed that many of them are based on factually inaccurate information meant to mislead the reader,” wrote the paper’s authors, Punyajoy Saha, Binny Mathew, Kiran Garimella and Animesh Mukherjee. (Garimella moved from M.I.T. to Rutgers after the paper’s publication.) Human judgment is often needed to distinguish real fears from false ones, but the tech platforms frequently do not invest the time or develop the local knowledge needed to vet the fears being expressed.
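The researchers’ point about automated detection can be made concrete with a toy illustration. The snippet below is a minimal sketch, not any platform’s actual moderation system: the keyword list and example messages are invented for this article, and it simply shows how a filter keyed to derogatory terms flags a slur-laden post while passing a fear-inducing one that contains no slurs at all.

```python
# Minimal sketch: why a keyword-based hate-speech filter can miss fear speech.
# The keyword list and example messages are invented for illustration only;
# real moderation systems are far more sophisticated than this.

SLUR_KEYWORDS = {"vermin", "subhuman"}  # stand-ins for actual derogatory terms

def flags_as_hate(message: str) -> bool:
    """Flag a message only if it contains a known derogatory term."""
    words = set(message.lower().split())
    return bool(words & SLUR_KEYWORDS)

hate_example = "Those subhuman outsiders are ruining everything"
fear_example = "They are coming for your children, act before it is too late"

print(flags_as_hate(hate_example))  # True: a derogatory term is present
print(flags_as_hate(fear_example))  # False: no slur, yet the message incites fear
```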

Three of the authors and one new collaborator, Narla Komal Kalyan, followed up this year with a comparison of fear and hate speech on the right-wing social network Gab. The “non-toxic and argumentative nature” of fear speech prompts more engagement than hate speech, they found.

So how do we vaccinate ourselves against fear-based speech on social media that may incite violence? The first steps are to identify it and to recognize it as the cynical tactic that it is. Tech platforms should invest in more humans who can fact-check and add context and counterpoints to false fear-inducing posts.

Another approach is to wean social-media platforms off their dependence on the so-called engagement algorithms that are designed to keep people on the site for as long as possible — which also end up amplifying outrageous and divisive content. The European Union is already taking a step in this direction by requiring large online platforms to offer users at least one alternative algorithm.

Ravi Iyer, who worked on the same team as the Facebook whistle-blower Frances Haugen, is now the managing director of the Psychology of Technology Institute, a joint project between the University of Southern California Marshall School of Business’s Neely Center and the University of California, Berkeley’s Haas School of Business that studies how technology can be used to benefit mental health. In a draft paper published last month, he, Jonathan Stray and Helena Puig Larrauri suggested that platforms can reduce destructive conflict by dialling down the importance of some metrics that promote engagement.

That could mean that for certain hot-button topics, platforms would not automatically boost posts that performed well on typical engagement metrics like the number of comments, shares and time spent per post. After all, those metrics don’t necessarily mean that the user liked the post. Users could be engaging with the content because they found it objectionable or just attention getting.

Instead, the researchers propose that social media algorithms could boost posts that users explicitly indicated they found valuable. Facebook has been quietly pursuing this approach toward political content. In April, the company said it was “continuing to move away from ranking based on engagement” and was instead giving more weight to “learning what is informative, worth their time or meaningful” for users. Facebook’s change might be one reason that Knowles’s talk got far fewer views on Facebook than on Twitter, which has not adopted this algorithmic change.
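To make the re-weighting idea concrete, here is a minimal sketch of how a ranking function might dial down engagement signals and dial up explicit “this was valuable” feedback for hot-button topics. The weights, field names, topic list and numbers are all hypothetical assumptions for illustration; they are not Facebook’s or any platform’s actual ranking formula.

```python
# Hypothetical sketch of re-weighted ranking: for sensitive topics, rely less on
# raw engagement (comments, shares, dwell time) and more on explicit signals
# that users found the post valuable. All weights and fields are invented.

from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    comments: int
    shares: int
    dwell_seconds: float
    marked_valuable: int  # explicit "this was worth my time" feedback

SENSITIVE_TOPICS = {"elections", "immigration", "public health"}

def ranking_score(post: Post) -> float:
    engagement = post.comments + 2 * post.shares + post.dwell_seconds / 30
    value_signal = post.marked_valuable
    if post.topic in SENSITIVE_TOPICS:
        # Dial down engagement, dial up explicit value for hot-button topics.
        return 0.2 * engagement + 2.0 * value_signal
    return engagement + value_signal

feed = [
    Post("elections", comments=400, shares=250, dwell_seconds=900, marked_valuable=3),
    Post("elections", comments=40, shares=10, dwell_seconds=120, marked_valuable=120),
]
for post in sorted(feed, key=ranking_score, reverse=True):
    print(post.topic, round(ranking_score(post), 1))
```

Under these invented weights, the outrage-bait post with heavy comments and shares ranks below the post that many users explicitly marked as valuable, which is the behaviour the researchers describe.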

But in the end, algorithms aren’t going to save us. They can demote fear speech but not erase it. We, the users of the platforms, also have a role to play in challenging fear-mongering through counter-speech, in which leaders and bystanders respond negatively to fear-based incitement. The goal of counter-speech is not necessarily to change the views of true believers but rather to provide a counter-narrative for people watching on the sidelines.
