The year 2020 was crucial for Facebook and Twitter, testing the social networks’ abilities to curb hate speech and misinformation. The companies rolled out several measures including labeling misleading tweets and introducing new tools to stop fake news. But have they been successful?
The tech giants were criticised globally for failing to stop misinformation on their platforms, especially with respect to the U.S. election and the COVID-19 pandemic. In India, too, the companies faced criticism on several fronts.
In October, Ankhi Das, Facebook’s public policy director for India, South and Central Asia, said she was leaving the company to pursue her interest in public service.
Das had been criticised by India’s opposition lawmakers over the social network’s approach to regulating hate speech in the country. She had opposed applying hate-speech restrictions to some Hindu nationalist individuals and groups, fearing damage to Facebook’s business prospects, The Wall Street Journal reported in August.
Actions in India
Facebook was summoned by the joint parliamentary committee (JPC) in September to discuss how the company allegedly failed to regulate content favouring the country’s ruling Hindu nationalist party. It was also questioned on how Facebook, WhatsApp and Instagram stored and used their users’ data.
Ajit Mohan, chief of the company’s India operations, said the platform had remained true to its design of being “neutral” and “non-partisan”.
Twitter, on the other hand, started labelling tweets with misleading content in April. In November, a tweet by BJP IT cell head Amit Malviya on the farmers’ protests was tagged as ‘manipulated media’ by Twitter, sparking public debate. Some saw it as Twitter moderating content on its own initiative and thus overstepping its role as a mere intermediary.
A similar instance occurred in December, when senior journalist and writer Salil Tripathi had his Twitter account suspended after he posted a tweet containing a poem on the demolition of the Babri Masjid. Several users, including author Salman Rushdie and Congress leader Shashi Tharoor, took to social media to defend the journalist, calling the suspension “an outrageous act of censorship”. Tripathi’s account was later reinstated.
In another episode of social media regulation, Facebook unpublished the page of Kisan Ekta Morcha, which had been sharing updates on the farmers’ protests. The California-based company said the page went against its ‘community standards on spam’. The move raised further questions about the platform’s alleged bias toward the ruling party. The page was reinstated later in the day after the news sparked widespread public outrage.
Facebook is also facing a flood of antitrust cases globally, with several countries questioning the tech giant’s policies and conduct. The first U.S. Congressional hearing in July this year focused on Facebook’s acquisition of image-sharing app Instagram and how the company stifled competition. The company was also questioned over allegations that it filtered political viewpoints, as seen in the way it moderated content related to the coronavirus.
Social media and Section 230 in U.S.
Facebook, like most of today’s social media platforms, is a child of Section 230 of the U.S. Communications Decency Act, 1996. The law has been a major talking point this year, as it generally exempts internet companies from liability for the material users post on their networks. U.S. lawmakers came together to discuss the implications of this law, aiming to revise legal protections for online speech.
The path ahead
Discussion around Section 230 is likely to deepen as lawmakers on both sides of the aisle look to rein in social media platforms. This may lead to an increase in instances of censorship on the Internet. But while a platform can proactively clean up content, doing so will also increase arbitrariness and kill conversation without tangible benefit, Advocate Apar Gupta, co-founder of the Internet Freedom Foundation, wrote in a separate article for The Hindu.