Can Social Media Platforms Stop Electoral Disinformation and Respect Free Speech?

US Elections Underscore Challenges of Online Content Moderation

A combination of images shows the logos of, from left, Twitter, YouTube, and Facebook. © AP Photos/File

Social media platforms, under fire for not doing enough to address United States election-related misinformation in 2016, have released a flurry of new policies in recent weeks and months to protect the integrity of the November 3, 2020, US election. But even well-intentioned efforts by private companies to rein in electoral misinformation can end up silencing political expression and dissent. To meet their responsibility to identify and mitigate harms on their platforms, companies should ensure that any restrictions on content are necessary and proportionate, carry them out transparently, and give users access to meaningful remedy.

Facebook, Twitter, and YouTube, among others, have expanded and refined their policies to fight foreign interference and stem the spread of misinformation and disinformation intended to suppress the vote and delegitimize election results. Beyond taking down content and accounts that violate their policies, some platforms have begun labeling misleading or borderline content, directing users to third-party “authoritative” sources, and engaging fact-checkers to provide corrective information. They also attempt to reduce the reach of misleading posts by downranking them in their ranking algorithms or otherwise limiting their spread. Some platforms have also created voter information centers with details on when, where, and how to vote, and have added a degree of transparency to their political advertising practices through ad libraries.

But the platforms’ policies on electoral misinformation have been released piecemeal and vary from company to company. Some are also written in a way that gives platforms considerable leeway in interpretation, which can lead both to harmful misinformation staying up and to political expression being taken down.

One key challenge is how platforms deal with influential posters, such as political figures. As Facebook’s own civil rights audit noted, politicians have historically been the greatest perpetrators of voter suppression in the US, a practice that has disproportionately targeted voters of color. Speech by political and other leaders can also be more likely to incite violence than that of an ordinary user. Yet platforms like Facebook and Twitter do not consistently take down or label politicians’ posts that violate content guidelines, because they treat posts from political figures as inherently newsworthy or in the public interest. The public has a right to know what elected officials and candidates are saying on matters of public interest, especially in the context of elections. Placing a clear label on posts that violate a platform’s policy, and then taking measures to limit the posts’ reach, can be preferable to taking them down entirely. But an overall deference to politicians, combined with a narrow interpretation of election integrity policies, can allow politicians to get away with misrepresenting electoral information and suppressing the vote.

In recent weeks, Facebook and Twitter have stepped up their measures against content from politicians when they determined that the harm of spreading voter-suppressive content outweighed the public interest in keeping it up. However, content that has prompted Twitter to place a warning label on a tweet and limit its spread has not always drawn similar action from Facebook. Furthermore, labels and other efforts to slow the distribution of misinformation and disinformation often come too late, after a post has gone viral. Civil society organizations constantly flag violating content that has slipped under platforms’ radar, including content that fact-checkers have already identified.

In the coming days and weeks especially, it is critical that platforms apply their standards in a manner consistent with human rights principles, to reduce the spread of election-related misinformation and disinformation and to direct users to corrective information. Platforms are bound to make mistakes, which can end up silencing people’s political expression, so it is important that they be accountable and give users fair process. That means giving users notice and access to appeal, to better ensure that policies are enforced in a fair, unbiased, and proportionate manner.

To understand the efficacy of these policy changes and their impact on human rights and democratic processes, platforms need to be far more transparent, both by giving researchers access to data and by publishing more comprehensive transparency reports. For example: What are the error rates in enforcing election integrity policies, both in terms of infringing content staying up and acceptable content being taken down? Do labels on posts effectively provide users with corrective information, and if so, which kinds? How are platforms measuring the impact of efforts to slow the spread of disinformation by reducing algorithmic amplification?

One reason electoral misinformation and disinformation are particularly difficult for these platforms to combat is that they were designed to maximize clicks, likes, and shares of the most engaging content, not to deliver reliable and accurate election information. It is essential that platforms be more transparent about the algorithms that shape their curation and recommendation systems and address the role those systems play in steering users toward misinformation. Platforms should also give users the ability to opt out of recommendation systems and to change the variables that influence the content they see.

Finally, US voters aren’t the only social media users going to the polls in 2020; by the end of the year, there will have been 69 national elections. Social media companies need to devote sufficient resources and attention to protecting the integrity of elections in every country where people use their platforms to engage in political discourse. They should take a principled approach that accounts for local context. Their human rights responsibilities don’t stop at US borders; they apply in every country where people use their services.
