The Frances Haugen documents, which exposed how Facebook prioritised growth over tackling hate speech, misinformation, its negative impact on teens, and more, have their echoes in India, too.
The former Facebook employee and whistleblower testified in the US and to the European Commission, backed by leaked documents that were filed with the US Securities and Exchange Commission and first reported by The Wall Street Journal.
In India, the company has been selective when it comes to curbing hate speech and misinformation, particularly anti-Muslim content, the documents revealed.
The Delhi riots of 2020 are an example.
Can’t Facebook remove content?
“Why would you (Facebook) want to reduce the virality of the content and not remove it?” Raghav Chadha, MLA and head of the Peace and Harmony Committee, asked Facebook India public policy director Shivnath Thukral on November 18.
The committee, formed in March 2020 after riots in north-east Delhi killed over 50 people and injured 200, summoned the Facebook India executive to understand the platform’s role in preventing the spread of malicious and false speech that might affect peace.
Thukral’s answer was that Facebook cannot be the judge of content on the platform, and that it works with fact-checking partners and civil society members in the country to help it make such decisions.
Chadha and other committee members were not satisfied with this reply. They alleged that the company had not done enough to stop the spread of misinformation on the platform. One of the members pointed out, “Facebook is reluctant to take responsibility.”
This comes after Haugen’s testimony in the US and to the European Commission; last month, US senators called for a probe into the whistleblower’s allegations and grilled Facebook executives on them.
Hate speech in India and Facebook
With about 300 million users, India is one of Facebook’s largest markets. The company has 300 employees in India, 20 of them in the public policy team. It works with 10 fact-checking partners in the country, up from two in 2019, covering 11 languages.
The company has spent $13 billion on safety and integrity so far -- $5 billion this year alone -- including on content moderation. Its content moderation team has 15,000 members globally, covering 70 languages, 20 of them Indian. It is not clear how much of this is being invested in India.
According to the leaked documents, first reported by the Journal, the company had flagged that India has high-risk misinformation and hate speech shared by politicians that could lead to violence. “Our lack of Hindi and Bengali qualifiers means that much of the content is never flagged,” one document read.
A case in point is an article in Time, which reported that the company failed to take down a video propagating alleged hate speech that received 1.4 million views. In 2020, Ankhi Das, a former Facebook public policy executive, resigned after the Journal reported that she had interfered with moderation.
According to media reports, Facebook invested in Hindi content qualifiers only in 2018, and in Bengali only in 2020. Facebook’s moderation of hate speech and problematic content in India remains a cause for concern.
The Chadha committee’s questions
On the Delhi riots, the committee’s central question was: did Facebook have the ability to stop the misinformation and malicious content that spread on the platform during the riots?
The committee raised three other key points.
One, for a country with millions of users and diverse languages, Facebook has no definition of hate speech specific to India.
Two, the platform supports 20 Indian languages, but the fact-checking partnerships Facebook has with third-party companies cover only 11.
Three, why does Facebook merely reduce the virality of hateful content instead of taking it down, at least as an interim measure?
What Facebook says
According to Thukral, the platform has its own process for tackling problematic content, and this is a complex issue.
“One person's speech could have multiple implications for different kinds of people. That is why we don't take a call, when we feel some piece of content is disputed, even on mere facts. We have seen that people who speak in public, when they put out certain commentary which they believe is true, is disputed by another person saying it is not true. Facebook cannot be standing in judgment of that. So this is exactly the time when we bring in our third-party fact checkers, who helped us determine whether it is true or false,” Thukral said.
In the meantime, the algorithm that reduces the content’s virality kicks in.
Will the company take down content if it is proved to be false?
No. In such cases, the company works with civil society organisations to help it judge whether content amounts to hate speech. The executive did not disclose the names of these organisations.
Does the company remove content when it knows it can be termed hate speech? No.
According to Thukral, “There are certain nuances sometimes, which we may not be able to pick up because of the language, or certain value issues which really do not have the expertise.”
Chadha, cited earlier, said that since the committee had failed to extract a "definition of hate speech in India" from the executive, it has asked the firm to submit records of complaints about posts violating the platform’s community standards, and of the posts that were subsequently removed, from one month before to two months after the 2020 northeast Delhi riots.
The committee, he said, will review the submission and could summon the executives for clarity.
This is consistent with the issues brought out by the whistleblower’s complaints, and it clearly has not helped address the issue at hand: tackling hate speech and misinformation.
Facebook’s handling of hate speech globally
A report by Time pointed out that the company deprioritised tackling misinformation, choosing instead to reduce the virality of content rather than remove it, even when it knew the content was false.
A case in point was the doctored video of US politician Nancy Pelosi that went viral. The video, slowed to 75 percent of its original speed, made Pelosi appear inebriated while speaking at a public function. The company refused to take it down, saying it did not violate community standards.
Facebook is also being investigated for its role in the Capitol riots of January 6, 2021, before Joe Biden was sworn in as President.
This is in a market where the company invests significantly. As the whistleblower’s reports show, the investments the company makes in other regions, including India, are significantly lower, and that is a concern.