As nearly 900 million Indians engage in voting over seven weeks to select the country’s next government, social media companies are under pressure to try to control fake news and misinformation.
But measures taken by the authorities and internet companies have proved woefully ineffective. Fake news, misinformation, propaganda, and hate speech, some of it posted by the political parties and their supporters, are being indiscriminately shared on social media platforms such as Facebook, Twitter, and WhatsApp. A recent survey showed that one in two respondents received fake news in the run-up to the elections, mostly through Facebook and WhatsApp.
A study by the British Broadcasting Corporation last November found that a rise in Hindu nationalism is driving Indians to spread unverified news on social media.
Even before the election campaign, WhatsApp, which is owned by Facebook, had been under fire in India. Between January 2017 and July 2018, 33 people were killed and at least 99 injured in 69 reported attacks on suspected child abductors, apparently based on WhatsApp messages. The BBC study found that the defining feature of WhatsApp groups in India was “the drawing together of people in tight networks of like-mindedness” that also “enabled mobilization in the cause of violence.”
The Indian government has largely blamed WhatsApp for the violence and demanded changes in the messaging application. WhatsApp is already changing in response to the pressure from the Indian government, and this is shaping its reaction to fake news globally since India is WhatsApp’s biggest market, with over 200 million monthly active users.
While it is understandable that the Indian government wants to tackle the scourge of fake news, rumors, and hate speech online, its approach is misguided and dangerous.
In December, Prime Minister Narendra Modi’s government proposed new rules that, if passed, would require companies to proactively identify and remove “unlawful information or content.” Such regulation would greatly undermine users’ rights to freedom of expression and privacy.
The rules would also require all companies to be able to trace the origin of information on their platforms. This would effectively require platforms such as WhatsApp, Signal and Telegram that use end-to-end encryption – under which only the person sending the message and the person receiving it are able to read it – to either withdraw from India or alter their architecture in ways that would undermine cybersecurity and privacy.
These proposed rules come in the absence of a data protection law and at a time when dissent is under attack in India. The government also wants to expand its powers of mass surveillance. But encryption protects ordinary citizens from a range of cybersecurity threats, and platforms with end-to-end encryption have given beleaguered human rights defenders a means to communicate safely.
So far, the government has largely relied on frequent blanket internet shutdowns – 134 in 2018 – as a tool it claims is needed to prevent violence fueled by rumors circulated online. These blanket restrictions disproportionately curtail freedom of expression and interfere with other fundamental rights. They also hurt the economy, running contrary to the government’s emphasis on internet and digital technology in general for development.
Internet shutdowns are not confined to India. This week, after over 350 people in Sri Lanka were killed in a series of bombing attacks, the authorities, concerned over rumors and false information, temporarily shut down access to major social media sites. While there is little evidence that such shutdowns are effective, they hamper the ability of authorities to counter misinformation and discourage violence, and of people to communicate properly.
The authorities’ preference for a technological solution raises concerns over how internet companies are responding to government requests for user data and content takedowns. The companies need to be more transparent about the number and scope of such requests and the process through which they are made.
WhatsApp, Facebook, Twitter and other social media and messaging platforms may have accelerated the ways in which people share rumors and mobilize mob violence, but they did not invent the problem. They cannot solve it alone either.
India has a long history of communal and caste-based violence. Successive central and state governments have failed to prosecute those most responsible, including public officials accused of complicity or dereliction of duty in high-profile cases. For instance, vigilante “cow-protection” groups have killed at least 44 people since May 2015 on suspicions that the victims were trading or killing cows for beef in violation of Hindu religious beliefs.
My research on the issue found that in most cases the police initially stalled investigations, ignored procedures, or were even complicit in the killings and cover-up of crimes. In several cases, political leaders of Hindu nationalist groups, including elected officials of the ruling Bharatiya Janata Party, defended the assaults.
The victims in most of the killings by the so-called cow protection groups – often with links to Hindu extremist groups with BJP ties – are from minority communities. Chinmayi Arun, a fellow at Harvard University’s Berkman Klein Center, notes that if the killings are occurring even without the use of WhatsApp, it is clear that the violence requires a more complex solution than WhatsApp can offer. She cited “the mobilization of hatred, and the tacit approval of local actors, including law enforcement and local leaders.”
It is important for WhatsApp and other social media and messaging companies to become more transparent, devote more resources to respond to legitimate threats, and work with fact-checkers, nongovernmental groups, activists and journalists on the ground. But ending violence in society will need harder work by whichever political party wins the 2019 elections and forms the next government. As a first step, the incoming government should enforce the July 2018 Supreme Court directives to address mob violence, including proper investigations and a public campaign to end attacks on Muslims, Dalits and other minorities.
At the same time, the next government should withdraw the proposed internet rules and instead publish detailed reports on all content-related requests issued to internet companies to improve transparency, and involve public input on all regulatory questions. Ultimately, as David Kaye, United Nations special rapporteur on the promotion of the right to freedom of expression and opinion, recently noted, any framework governing user-generated online content should put “human rights at the very center.”