YouTube homepage on a cell phone

“Video Unavailable”

Social Media Platforms Remove Evidence of War Crimes

YouTube, which is owned by Google, says it is implementing “cutting-edge machine-learning technology” designed to identify and remove millions of pieces of uploaded content, including content identified as “violent or graphic,” “hateful and abusive,” a “promotion of violence and violent extremism,” and “spam, misleading, or scams.” © 2017 Jaap Arriens/NurPhoto via Getty Images

Summary

In recent years, social media platforms have been taking down online content more often and more quickly, often in response to the demands of governments, but in a way that prevents the use of that content to investigate people suspected of involvement in serious crimes, including war crimes. While it is understandable that these platforms remove content that incites or promotes violence, they are not currently archiving this material in a manner that is accessible for investigators and researchers to help hold perpetrators to account.

Social media content, particularly photographs and videos, posted by perpetrators, victims, and witnesses to abuses, as well as others, has become increasingly central to some prosecutions of war crimes and other international crimes, including at the International Criminal Court (ICC) and in national proceedings in Europe. This content also helps media and civil society document atrocities and other abuses, such as chemical weapons attacks in Syria, a security force crackdown in Sudan, and police abuse in the United States.

Yet social media companies have ramped up efforts to permanently remove posts from their platforms that they consider to violate their rules, community guidelines, or community standards under their terms of service, including content they consider to be “terrorist and violent extremist content” (TVEC), hate speech, organized hate, hateful conduct, and violent threats. According to the companies, they not only take down material that content moderators classify for removal. Increasingly, they also use algorithms to identify and remove content so quickly that no user sees it before it is taken down. In addition, some platforms have filters to prevent content identified as TVEC and other relevant content from being uploaded in the first place. Governments globally have encouraged this trend, calling on companies to take down content as quickly as possible, particularly since March 2019, when a gunman livestreamed his attack on two mosques in Christchurch, New Zealand, that killed 51 people and injured 49 others.

Companies are right to promptly remove content that could incite violence, otherwise harm individuals, or jeopardize national security or public order. But the social media companies have failed to set up mechanisms to ensure that the content they take down is preserved, archived, and made available to international criminal investigators. In most countries, national law enforcement officials can compel the companies to hand over the content through the use of warrants, subpoenas, and court orders, but international investigators have limited ability to access the content because they lack law enforcement powers and standing.

Law enforcement officers and others are also likely to be missing important information and evidence that would have traditionally been in the public domain because increasingly sophisticated artificial intelligence systems are taking down content before any of them have a chance to see it or even know that it exists. There is no way of knowing how much potential evidence of serious crimes is disappearing without anyone’s knowledge.

Independent civil society organizations and journalists have played a vital role in documenting atrocities in Iraq, Myanmar, Syria, Yemen, Sudan, the United States, and elsewhere – often when there were no judicial actors conducting investigations. In some cases, the documentation of organizations and the media has later triggered judicial proceedings. However, they also have no ability to access removed content. Access to this content by members of the public should be subject to careful consideration, and removal may be appropriate in some cases. But when the content is permanently removed and investigators have no way of accessing it, this could hamper important accountability efforts.

Companies have responded to some civil society requests for access to content either by reconsidering its takedown and reposting it, or by saying that it is illegal for them to share the content with anyone. Human Rights Watch is not aware of any instances where companies have agreed to provide independent civil society and journalists access to such content if it was not reposted.

It is unclear how long the social media companies save content that they remove from their platforms before deleting it from their servers or even whether the content is, in fact, ever deleted from their servers. Facebook states that, upon receipt of a valid request, it will preserve the content for 90 days following its removal, “pending our receipt of [a] formal legal process.” Human Rights Watch knows, however, of instances in which Facebook has retained removed content on its servers for much longer than 90 days. In an email to Human Rights Watch on August 13, a Facebook representative said, “Due to legislative restrictions on data retention we are only permitted to hold content for a certain amount of time before we delete it from our servers. This time limit varies depending on the abuse type… retention of this data for any additional period can be requested via a law enforcement preservation request.”

In an email to Human Rights Watch on August 4, Twitter said it, “retains different types of information for different lengths of time, and in accordance with our Terms of Service and Privacy Policy.” In at least one instance that Human Rights Watch is aware of, YouTube restored content two years after it had taken it down.

Holding individuals accountable for serious crimes may help deter future violations and promote respect for the rule of law. Criminal justice also assists in restoring dignity to victims by acknowledging their suffering and helping to create a historical record that protects against revisionism by those who will seek to deny that atrocities occurred.

However, both nationally and internationally, victims of serious crimes often face an uphill battle when seeking accountability, especially during situations of ongoing conflict. Criminal investigations sometimes begin years after the alleged abuses were committed. It is likely that by the time these investigations occur, social media content with evidentiary value will have been taken down long before, making the proper preservation of this content, in line with standards that would be accepted in court, all the more important.

International law obligates countries to prosecute genocide, crimes against humanity, and war crimes. In line with a group of civil society organizations who have been engaging with social media companies on improving transparency and accountability around content takedowns since 2017, Human Rights Watch urges all stakeholders, including social media platforms, to engage in a consultation to develop a mechanism to preserve potential evidence of serious crimes and ensure it is available to support national and international investigations, as well as documentation efforts by civil society organizations, journalists, and academics.

The mechanism in the US to preserve potential evidence of child sexual exploitation posted online provides important lessons for how such a mechanism could work. US-registered companies operating social media platforms are required to take down content that shows child sexual exploitation, but also preserve it on their platforms for 90 days and share a copy of the content, as well as all relevant metadata—for example, the name of the content’s author, the date it was created, and the location—and user data, with the National Center for Missing and Exploited Children (NCMEC). The NCMEC, a private nonprofit organization, has a federally designated legal right to possess such material indefinitely, and, in turn, notifies law enforcement locally and internationally about relevant content that could support prosecutions.

A mechanism to preserve publicly posted content that is potential evidence of serious crimes could be established through collaboration with an independent organization that would be responsible for storing the material and sharing it with relevant actors. An upcoming report from the Human Rights Center at the University of California, Berkeley, “Digital Lockers: Options for Archiving Social Media Evidence of Atrocity Crimes,” examines possible archiving models, creating a typology of five archive models and assessing the strengths and weaknesses of each.

In parallel with these efforts, social media platforms should be more transparent about their existing takedown mechanisms, including through the increased use of algorithms, and work to ensure that they are not overly broad or biased and provide meaningful opportunities to appeal content takedowns.

Methodology

For this report, Human Rights Watch interviewed seven people who work at civil society organizations that are primarily focused on open source material and privacy rights, two child protection workers, three lawyers who work on new methodologies for using audiovisual material and publicly available data in legal cases, two archivists, one statistician focused on human rights data, two journalists who use open source material, one former prosecutor with experience in international tribunals, five individuals within internationally mandated investigations, three national law enforcement officers, one European Union official, and one European Member of Parliament.

Human Rights Watch also reviewed Facebook, Twitter, and YouTube content that the organization has cited in its reports to support allegations of abuse since 2007. From 5,396 total pieces of content referenced in 4,739 reports (the vast majority of which were published in the last five years), it found that 619 (or 11 percent) had been removed.

In letters to Facebook, Twitter, and Google sent in May 2020, Human Rights Watch shared the links to this content that had been taken down and asked the companies if Human Rights Watch could regain access for archival purposes. Human Rights Watch also asked a series of other questions related to how the companies remove content. The full response from Twitter is included as an Annex in this report. At the time of writing, Human Rights Watch had received no response from Google, and only a brief email from Facebook that did not address most of the questions raised in the letter.

I. How Takedowns Work

For many years, but particularly since 2014 and the ballooning of online content from the Islamic State (also known as ISIS) and other violent extremist groups, social media companies have been concerned about material they consider to be “terrorist and violent extremist content” (TVEC) on their platforms.[1] Traditionally, social media platforms have relied on users and in some cases subject-matter experts who flag inappropriate content, which content moderators working for, or on behalf of, the platforms then review and either take offline or leave up.[2] Companies have been taking down large amounts of other content as well, beyond what they consider to be TVEC. They have also recently started affixing labels and warning notices that users have to click through in order to access content that may be graphic, but that the companies have chosen not to remove.

However, given the quantity of content that could be flagged, including potentially hundreds of thousands of reposts, platforms announced a new approach involving the use of machine-learning systems.[3] YouTube, which is owned by Google, said in August 2017 that it was implementing “cutting-edge machine-learning technology” designed to identify and remove content it identified as TVEC.[4] The new system has yielded results: YouTube took 6,111,008 videos offline between January and March 2020 for violating its Community Guidelines, according to the most recent transparency report available at time of writing. The company removed 11.4 percent of the content because it was “violent or graphic,” 1.8 percent of the content because it was “hateful and abusive,” 4.2 percent of the content because it was a “promotion of violence and violent extremism,” and 37 percent of the content because it was “spam, misleading, or scams.”[5] Automated systems flagged 93.4 percent of all the content that the platform took down. Of this, 49.9 percent was taken down before any user saw it, the report said.

Until recently, YouTube has said that it only removes content that automated systems flagged as TVEC after humans reviewed whether it fit the company’s definitions of “terrorist” or “violent extremist” material.[6] However, YouTube announced on March 16, 2020, that in response to the Covid-19 pandemic, it “will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review.”[7]

During that same time period, between January and March 2020, Facebook took down 6.3 million pieces of “terrorist propaganda,” 25.5 million pieces of “graphic violence,” 9.6 million pieces of “hate speech,” 4.7 million pieces of “organized hate” content, and disabled 1.7 billion “fake accounts.” 99.3 percent, 99 percent, 88.8 percent, 96.7 percent, and 99.7 percent of this content, respectively, was automatically flagged before users reported it.[8] During that period, the company said it was able to remove most content it considered to be terrorist before users saw it. Users appealed takedowns for 180,100 pieces of “terrorist propaganda” content, 479,700 pieces of “graphic violence” content, 1.3 million pieces of “hate speech” content, and 232,900 pieces of “organized hate” content. Upon appeal, Facebook restored access to 22,900 pieces of “terrorist propaganda” content, 119,500 pieces of “graphic violence” content, 63,600 pieces of “hate speech” content, and 57,300 pieces of “organized hate” content. Facebook reported that it restored content that had been taken down without any appeal in 199,200 cases involving “terrorist propaganda,” 1,500 cases involving “graphic violence,” 1,100 cases involving “hate speech,” and 11,300 cases involving “organized hate” content.

Between January and June 2019, 5,202,281 Twitter accounts were reported for “hateful conduct” and 2,004,344 Twitter accounts were reported for “violent threats.” Upon receiving these reports, Twitter “actioned” 584,429 accounts for “hateful conduct” and 56,219 accounts for “violent threats.”[9] The email from Twitter’s Public Policy Strategy and Development Director outlined a slightly different approach to identifying content to take down, based on the behavior of the account putting out the content rather than a review of the substance of the content:

Twitter’s philosophy is to take a behavior-led approach, utilizing a combination of machine learning and human review to prioritize reports and improve the health of the public conversation. That is to say, we increasingly look at how accounts behave before we look at the content they are posting. Twitter also employs content detection technology to identify potentially abusive content on the service, along with allowing users to report content. This is how we seek to scale our efforts globally and leverage technology even where the language used is highly context specific.
In certain situations, behaviour identification may allow us to take automated action - for example, accounts clearly tied to those that have been previously suspended, often identified through technical data. However, we recognise the risks of false positives in this work and humans are in the loop for decisions made using content and where signals are not strong enough to automate. Signals based on content analysis are part of our toolkit, but not used in isolation to remove accounts and we agree with concerns raised by civil society and academics that current technology is not accurate enough to fully automate these processes. We would not use these systems to block content at upload, but do use them to prioritise human review.

Similar to the approach adopted to address child sexual exploitation content, in December 2016, the founding member companies of the Global Internet Forum to Counter Terrorism (GIFCT) – Facebook, Microsoft, Twitter, and YouTube – committed to creating a shared industry database of hashes, later called the “Hash Sharing Consortium,” for its members.[10] In addition to the four founders, GIFCT’s current members include Pinterest, Dropbox, Amazon, LinkedIn, Mega.nz, Instagram, and WhatsApp.[11] If members identify a piece of content on their platform as terrorist content according to their respective policies, they can assign it a hash or unique digital “fingerprint,” which is entered into the shared database. A “hash sharing consortium” that includes GIFCT members and some other tech companies can then use filtering technology to identify hashed content and block it from being uploaded in the first place.[12]

If the content has been edited in any way (sped up, slowed down, or shortened, for example), or if more contextual information has been added, the content would bypass hash filtering and not be automatically blocked from being uploaded. According to the GIFCT, as of July 2020 its database contained over 300,000 unique hashes of “terrorist” content.[13]
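To make the limitation described above concrete, the sketch below shows how exact-match hash filtering works. It is an illustrative example only, assuming a simple cryptographic hash (SHA-256) over a file’s raw bytes and a hypothetical in-memory hash list; the consortium’s actual systems are not public and may also rely on perceptual hashing. The same basic logic nonetheless shows why even a lightly edited copy produces a different fingerprint and slips past the filter.

```python
import hashlib

# Hypothetical shared database mapping known hashes to a label
# (illustrative values only, not real GIFCT data).
shared_hash_database = {}

def fingerprint(file_bytes: bytes) -> str:
    """Return a SHA-256 hex digest of the uploaded file's raw bytes."""
    return hashlib.sha256(file_bytes).hexdigest()

def upload_allowed(file_bytes: bytes) -> bool:
    """Allow the upload only if its fingerprint is not in the shared database.

    Because the digest is computed over the exact bytes, re-encoding,
    trimming, or speeding up a video produces a different fingerprint,
    so an edited copy is not blocked automatically.
    """
    return fingerprint(file_bytes) not in shared_hash_database

# Example: the original file is blocked, but a one-byte edit passes.
original = b"<video bytes>"
shared_hash_database[fingerprint(original)] = "Glorification of Terrorist Acts"
print(upload_allowed(original))          # False: upload blocked
print(upload_allowed(original + b"x"))   # True: edited copy passes the filter
```

Perceptual hashing, which fingerprints what content looks or sounds like rather than its exact bytes, narrows this gap but does not close it entirely.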

For years, civil society organizations have engaged with social media companies, independently and as part of the GIFCT, on the need for increased transparency in how and why “terrorist” content is taken down and to warn against the human rights harms of opaque, cross-platform coordination. Most recently, in July 2020, 16 civil society organizations wrote to Nicholas Rasmussen, the executive director of the GIFCT, reiterating concerns groups raised in February 2020 with Facebook, Google, Microsoft, and Twitter about the growing role of the GIFCT in regulating online content more broadly.[14] The letter raises concerns around a serious risk of unlawful censorship from government involvement in GIFCT; lack of genuine and balanced engagement with civil society; lack of clarity over the terms “terrorism,” “violent extremism,” “extremism,” and support for or incitement to them; increasing scope and use of a shared hash database without either transparency or remedy for improper removals; and persistent lack of transparency around GIFCT activity.

Despite these and other efforts, there is still little public information on what criteria social media platforms use when assessing whether content is TVEC or hate speech and should be taken down.[15] Additionally, there is little visibility to anyone outside of the GIFCT member companies as to what content is represented in the hash database as TVEC, and whether it meets the platforms’ own definitions of terrorist content. According to the GIFCT, its member companies “often have slightly different definitions of ‘terrorism’ and ‘terrorist content.’”[16] But for the purposes of the hash-sharing database, the companies decided to define terrorist content “based on content related to organizations on the United Nations Security Council’s consolidated sanctions list.”[17]

Dia Kayyali, a program manager and advocacy lead at WITNESS, warns that GIFCT may ultimately become a “multi-stakeholder forum where human rights experts are brought in as window-dressing while government and companies work closely together… in the mad rush to ‘eliminate’ poorly defined ‘terrorist and violent extremist content.’”[18]

Compounding concerns over the disappearance of potentially valuable online evidence, a growing number of governments, as well as Europol, have created law enforcement teams known as internet referral units (IRUs) that flag content for social media companies to remove, with scant opportunity to appeal or transparency over the criteria they use and how much of the removed material they archive, if any.[19]

Each company retains the right to make individualized decisions about whether to remove any particular post from its service on the basis of its own community guidelines and terms of service, but little is known about how companies choose to act when using the shared hash database.[20] In practice, it is likely that small companies will use the database to automatically remove matching content because they do not have the resources to carry out individualized reviews. The GIFCT has categorized the hashes it has entered into its database as falling into at least one of the following categories: “Imminent Credible Threat,” “Graphic Violence Against Defenseless People,” “Glorification of Terrorist Acts,” “Radicalization, Recruitment, Instruction,” and “Christchurch, New Zealand, attack and Content Incident Protocols.”[21]

The European Parliament, the European Commission, and the Council of the European Union are currently negotiating the text of a proposed regulation on how social media companies should handle content classified as TVEC.[22] The draft regulation contains deeply troubling provisions, such as a proposed requirement for platforms to remove content within one hour of authorities issuing them a removal order—potentially without a judicial warrant—and requiring that platforms use machine-learning algorithms to detect and remove suspected terrorist content.[23] This would likely incentivize overcompliance. It also would run the risk of forcing some smaller social media companies to close if their limited staffing meant they were unable to meet these requirements.

Importantly, though, the regulation is likely to require platforms to preserve content they remove for six months, in order to allow access by law enforcement.[24] This could help put pressure on social media companies to create an independent mechanism to preserve and archive content.

Recognizing that companies engaged in content moderation are not operating with sufficient transparency and accountability, a group of organizations, academics, and advocates developed in February 2018 the Santa Clara Principles on Transparency and Accountability in Content Moderation.[25] The Principles provide a set of baseline standards or initial steps that companies should take to provide meaningful due process to impacted individuals and better ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of users’ rights.

II. Value of Social Media Content in Criminal Investigations into Serious International Crimes and other Documentation Efforts

Photos, videos, and other content posted on social media have increasingly supported accountability processes, including judicial proceedings, for serious international crimes, both at the national and international level. Human rights workers and journalists have also analyzed such content when investigating serious crimes. This material can be used to corroborate witness testimony and to confirm specific details about an incident, including the exact time and location, identities of the perpetrators, and how the crimes were carried out or their aftermaths. Such content can be especially valuable when researchers and investigators do not have access to the location where alleged crimes were committed due to security concerns or restrictions imposed by local authorities.

National Proceedings

Human Rights Watch is aware of at least 10 cases where prosecutors in Germany, Finland, the Netherlands, and Sweden secured convictions against individuals linked to war crimes in Iraq and Syria in cases that involved videos and photos shared over social media.[26]

In one example, in 2016, Swedish authorities investigated Haisam Omar Sakhanh, a Syrian man who was seeking asylum there, for lying on his asylum application about a previous arrest in Italy.

Through their investigation, they discovered a video that the New York Times had published in September 2013 that showed a Syrian non-state armed group opposed to the government extrajudicially executing seven captured Syrian government soldiers in Idlib governorate on May 6, 2012.[27] Sakhanh was seen in the video participating in the executions. A Swedish court convicted him of war crimes and sentenced him in 2017 to life in prison.[28] In a 2019 Dutch case, a man was convicted of a war crime for posing next to a corpse on a crucifix in Syria and then posting the photograph on Facebook. In this case, the content was germane not only to the charge of association with a terrorist organization, but also to the charge of participating in an outrage upon personal dignity, adding to the determination that a war crime had been committed.[29]

As one European law enforcement officer working on war crimes investigations put it: “Social media content is absolutely crucial to our work, especially to preliminary investigations and in ongoing conflicts and countries where we can’t go.”[30]

International Courts and Internationally Mandated Investigations

For International Criminal Court (ICC) investigators and United Nations-mandated investigations or inquiries, open source information is particularly helpful given that these bodies do not have national law enforcement powers.[31] As such, investigators cannot rely on subpoenas and search warrants to access privately held information. ICC investigations and other internationally mandated inquiries take place when national authorities have been unable or unwilling to address serious crimes, which sometimes means there can be years between the alleged crime and the gathering of evidence, making it even more important for the court to have access to material that captured incidents as they took place.

On August 15, 2017, the ICC issued an arrest warrant for Mahmoud al-Werfalli, linked to the armed group known as the Libyan Arab Armed Forces (LAAF) under the command of Khalifa Hiftar, for the war crime of murder.[32] He was wanted by the court for his alleged role in the killing of 33 people in seven incidents that took place in and around Benghazi between June 2016 and July 2017.[33] On July 4, 2018, the ICC issued a second arrest warrant for al-Werfalli for crimes committed in another incident.[34] The ICC issued the arrest warrants largely on the basis of seven videos of the killings posted on social media, some of which were posted by the unit that al-Werfalli commanded. Al-Werfalli remains a fugitive of the court.

This is the first instance at the ICC where such videos played a key role in the triggering of an investigation into a particular series of alleged crimes.[35] The case is also unique in that investigators were documenting the alleged crimes as they occurred, thus enabling them to identify and preserve the relevant content. However, because of the ICC’s role as a court of last resort, investigators usually undertake investigations into alleged crimes long after they have taken place.[36]

Facebook in Myanmar

Myanmar’s military committed extensive atrocities against its Rohingya population, including murder, rape, and arson, during its late 2017 campaign of ethnic cleansing, forcing more than 740,000 Rohingya to flee to Bangladesh.[37] As security forces perpetrated crimes against humanity starting in August 2017, they used Facebook as an echo chamber to foster the spread of incendiary commentary that served to dehumanize the Rohingya and incite violence.[38] The Independent International Fact-Finding Mission on Myanmar, established by the UN Human Rights Council in March 2017, reported on Facebook’s role in enabling the spread of discrimination and violence against Rohingya and called for Myanmar's military generals to be investigated for genocide, crimes against humanity, and war crimes.

Facebook has since admitted that it failed to prevent its platform from being used to “foment division and incite offline violence.”[39] In an effort to address this, from August to December 2018, Facebook took down 490 pages, 163 accounts, 17 groups, and 16 Instagram accounts for “engaging in coordinated inauthentic behavior,” and banned 20 individuals and organizations tied to the military “to prevent them from using our service to further inflame ethnic and religious tensions.”[40] In these cases, it stated on its website that it took the content down but has preserved it. On May 8, 2020, Facebook said it took down another three pages, 18 accounts, and one group, all linked to the Myanmar police.[41] Facebook defines coordinated inauthentic behavior as “when groups of pages or people work together to mislead others about who they are or what they are doing.”[42]

A Myanmar expert told Human Rights Watch that it likely took Facebook so long to identify these posts because it did not have enough Burmese-speaking content moderators, and its algorithm was unable to detect the Burmese language font Zawgyi because it is not machine-readable.[43] For this reason, takedowns of content in response to the August 2017 violence were primarily manual as opposed to automatic. A Unicode conversion by Facebook has now enabled computer-initiated takedowns of content written in Zawgyi.[44]

The International Fact-Finding Mission recommended that all social media platforms “retain indefinitely copies of material removed for use by judicial bodies and other credible accountability mechanisms addressing serious human rights violations committed in Myanmar in line with international human rights norms and standards, including where such violations amounted to crimes under international law.”[45]

An individual with knowledge of the UN’s Independent Investigative Mechanism for Myanmar (IIMM) – which is mandated to collect evidence of serious crimes and prepare files for criminal prosecution, making use of the information handed over to it by the International Fact-Finding Mission – said that this content posted on Facebook is crucial to its own investigations and that losing access to this content could completely halt some of its investigations.[46] ICC prosecutor Fatou Bensouda, in her request to open an investigation into the deportation of Rohingya into Bangladesh and related crimes, also cited Facebook posts by military officials as evidence of discriminatory intent.[47]

Some investigators and researchers identified and saved Myanmar-related content related to state-sponsored hate speech, incitement to violence, and misinformation before Facebook took it down, and this has also provided important documentation related to the commission of grave crimes. Human Rights Watch, the UN, and others have gathered a wide range of information from Facebook pages belonging to the Myanmar military, including, for example, evidence that demonstrates command responsibility and identifies units involved in attacks; evidence of the role of authorities in promoting threats, discrimination, and incitement; the military’s intent to force Rohingya from the country, and possibly its intent to destroy; and the pre-planning of grave crimes.[48] Relevant documentation included frequent updates on the military’s “clearance operations” posted on the Facebook pages of the commander-in-chief and other military officials and organizations that were later removed.

In November 2019, Gambia brought a case to the International Court of Justice (ICJ) alleging Myanmar’s violation of various provisions of the Genocide Convention. In a major ruling on January 23, 2020, the ICJ unanimously adopted “provisional measures” ordering Myanmar not to commit and to prevent genocide, and to take steps to preserve evidence while the case proceeds on the merits. The legal team representing Gambia relied on content that Myanmar state officials had posted on Facebook, as identified by the International Fact-Finding Mission, as important evidence of genocidal intent.[49] On June 8, 2020, Gambia filed a suit against Facebook in the United States to compel the social media company to provide documents and communications from Myanmar officials’ profiles and posts that the platform had taken down, as well as materials from internal investigations that led to the takedowns.[50]

On August 4, 2020, Facebook filed an objection to Gambia’s request to compel the company to provide these materials.[51] Facebook said that the proposed discovery order would violate the Stored Communications Act, a US law that prohibits providers of an “electronic communications service” from disclosing the content of user communications, and urged the US District Court for the District of Columbia to reject the request.[52]

Gambia is seeking this information under 28 U.S.C. Section 1782, which enables parties to litigation outside of the US to seek evidence in the US for their case. While foreign litigants do not have the power to compel potentially relevant evidence in all circumstances, there is a question as to whether the Stored Communications Act poses too high a barrier, particularly in relation to public communications that the company removed.[53]

On August 26, Facebook announced that it had lawfully provided the IIMM with data it had preserved in 2018.[54]

The situation in Myanmar shows how, in some cases, it is important for social media companies like Facebook to act quickly to remove content from their public platforms that may incite violence, while at the same time preserving that content so it can be used for accountability purposes.

Civil Society and Media Documentation

The value of social media content extends beyond judicial mechanisms and internationally mandated investigations to the work of civil society organizations and investigative journalists.

Between January 1, 2007 and February 11, 2020, Human Rights Watch in its public reports linked to at least 5,396 pieces of content on Facebook, Twitter, and YouTube that supported allegations of abuse in 4,739 reports, the vast majority of which were published in the last five years. When reviewing these links in April 2020, Human Rights Watch found that the content in at least 619 links (or 11 percent) was no longer available to the organization online, meaning it had presumably either been removed by the social media platforms, or the users who posted the material had removed it or made it private. It was generally not clear from the error messages why the content was unavailable; some were extremely vague, such as “Please try your request again later” (a request Human Rights Watch repeated numerous times over an extended period, without success) and “Video unavailable.” Human Rights Watch is now in the process of developing a comprehensive archiving system to preserve all content that it links to in its reports going forward.
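The kind of link review and archiving workflow described above can be approximated with a short script. The sketch below is a hypothetical illustration, not Human Rights Watch’s actual tooling: it checks whether each cited URL still resolves and, if so, saves a timestamped local copy. The placeholder URL and file layout are assumptions made for the example.

```python
import datetime
import hashlib
import pathlib
import requests  # third-party HTTP library

ARCHIVE_DIR = pathlib.Path("archive")

def check_and_archive(url: str) -> str:
    """Fetch a cited URL; archive it if reachable, otherwise record the failure."""
    try:
        response = requests.get(url, timeout=30)
    except requests.RequestException as error:
        return f"UNREACHABLE: {url} ({error})"
    if response.status_code != 200:
        return f"REMOVED OR RESTRICTED ({response.status_code}): {url}"
    # Save the raw response with a timestamp so the copy can be dated later.
    ARCHIVE_DIR.mkdir(exist_ok=True)
    name = hashlib.sha256(url.encode()).hexdigest()[:16]
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    (ARCHIVE_DIR / f"{name}.html").write_bytes(response.content)
    (ARCHIVE_DIR / f"{name}.meta.txt").write_text(f"{url}\narchived {stamp}\n")
    return f"ARCHIVED: {url}"

for cited_url in ["https://example.org/post/123"]:  # placeholder list of cited links
    print(check_and_archive(cited_url))
```

In practice, a successful HTTP response is not enough to confirm availability: platforms often return a normal page that simply says “Video unavailable,” so real tooling would also need platform-specific checks and full media downloads.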

In some cases, the content posted on social media alerted Human Rights Watch researchers to an alleged violation that they were not previously aware of and prompted deeper inquiry. In other cases, researchers discovered relevant social media content in the course of their research and used it to corroborate important details from other sources. These include videos in 2017 showing Iraqi forces executing ISIS suspects in Mosul and a 2020 video from Niger showing soldiers in a military vehicle running over and killing two alleged Boko Haram fighters.[55] In China, Human Rights Watch researchers found content posted on public WeChat accounts that provided evidence of mass police surveillance and abuse of ethnic Uyghurs and other Turkic Muslims in Xinjiang, as well as of gender discrimination in employment.[56]

Investigative journalists have also relied on social media content in their reporting on apparent war crimes and laws-of-war violations. On September 21, 2015, for example, the US-led coalition against ISIS in Iraq posted a video on its YouTube channel titled “Coalition Airstrike Destroys Daesh VBIED Facility Near Mosul, Iraq 20 Sept 2015.” The video, filmed from an aircraft, showed the bombing of two compounds that its caption identified as a car-bomb factory. However, an Iraqi man who saw the video, Bassim Razzo, recognized that the sites being bombed were actually his home and the home of his brother. The attack on September 20, now known to have been a joint US-Dutch airstrike, killed four members of Bassim Razzo's family.[57]

The video was a key part of an extensive New York Times Magazine investigation into over 100 coalition airstrikes, showing that the US-led coalition was killing civilians at much higher rates than it claimed.[58] The journalists preserved a copy of the video of the bombing, which was fortunate because the coalition took it down in November 2016 after one of the journalists, Azmat Khan, contacted the coalition about the strike in the video.[59] Based on the evidence gathered, the coalition eventually offered Bassim Razzo compensation for the bombing. Razzo refused it because of the paltry sum being offered, a mere fraction of his medical and property damage costs, without even factoring in the loss of life.[60]

Khan said that starting on January 8, 2017, the coalition began removing all of its airstrike videos from YouTube. Chris Woods, the founder and director of Airwars, an independent civilian casualties monitor, said that as of 2016, amid mounting allegations of mass civilian casualties during US-led coalition airstrikes, the coalition also began aggressively removing any social media content seemingly linked to ISIS within minutes.[61] Platforms like YouTube and Facebook followed suit, taking down profiles, groups, or pieces of content they identified as linked to ISIS or other extremist armed groups. These pages sometimes included videos and photos from the sites of coalition actions that, when preserved quickly enough, were vital resources for Airwars to gain insight into the impact of airstrikes on civilians in areas under ISIS control. Airwars developed a system to download and archive content where it could. “On at least one occasion, the coalition actually contacted us to get its hands on a deleted ISIS video showing the civilian harm from one of its strikes, which we had luckily saved,” Woods said.[62]

Between 2013 and 2018, Human Rights Watch and seven other independent, international organizations researched and confirmed at least 85 chemical weapons attacks in Syria, the majority perpetrated by Syrian government forces.[63] The actual number of chemical attacks is likely much higher. The Syrian Archive is the open source project of a nonprofit organization called Mnemonic that collects, verifies, and analyzes visual documentation of human rights violations in Syria.[64] It has been documenting suspected attacks by collecting data from the media and from civil society and other organizations, totaling over 3,500 sources. Its “Chemical Weapons Database” contains 28 GB of documentation from 193 sources of what it found to amount to 212 chemical weapons attacks in Syria between 2012 and 2018, it says.[65] This content includes 861 videos, most of which were posted on YouTube by citizen and professional journalists, medical groups, humanitarian organizations, and first responders. The organization said that out of 1,748,358 YouTube videos in its entire archive that it had preserved up until June 2020, 361,061 or 21 percent were no longer available online. Out of 1,039,566 Tweets that it had preserved, 121,178 or 11.66 percent were no longer available online.[66]

Mnemonic’s Yemeni Archive project has similar findings.[67] Of the 444,199 videos from YouTube that the Yemeni Archive has preserved as of June 2020, 61,236 videos or 13.79 percent are no longer publicly available online. Of the 192,998 Tweets that the Yemeni Archive has preserved, 15,860 Tweets or 8.22 percent are no longer available online.[68]

Because Mnemonic’s Syrian Archive and Yemeni Archive projects have their own systems of saving and archiving material, the group has retained copies of the content subsequently taken down either by platforms or by users themselves. If it had not, these takedowns could have had a real impact on potential accountability in the future.[69]

For years, Russia has used its veto as a permanent member of the UN Security Council to quash efforts brought by other member states to hold to account those responsible for chemical weapons and other attacks on civilians in Syria. This has made investigations by independent organizations all the more important. In 2018, Russia vetoed the renewal of the main investigation mechanism for chemical weapons attacks in Syria, the Joint Investigative Mechanism (JIM). Shortly after, in late 2018, the Organization for the Prohibition of Chemical Weapons (OPCW) created a new team, the Investigation and Identification Team (IIT), responsible for identifying the perpetrators of the use of chemical weapons in Syria. The creation of this team significantly expanded the OPCW’s remit to identifying perpetrators, whereas before it was limited to determining, through the Fact-Finding Mechanism (FFM), whether attacks had happened. In April 2020, the IIT published its first report, in which it found that a chemical weapons attack in March 2017 occurred following orders at the highest level of the Syrian Armed Forces.[70] Videos posted online of the incidents were part of the evidence used in the investigation.

Bellingcat, an investigative journalism outlet that specializes in fact-checking and open-source intelligence, was the first to uncover the link between a Russian Buk missile launcher from Russia's 53rd air defense brigade and the downing of Malaysia Airlines Flight MH17. Much of its investigation was based on materials that had been posted online. According to Eliot Higgins, the founder of Bellingcat, on more than one occasion lawyers working on cases related to Flight MH17 asked the group to provide the results of its work.[71] When trying to compile the material, Higgins realized that much of the content Bellingcat had relied on had been taken offline. The content included videos and photographs, hosted on sites such as Facebook, Twitter, YouTube, and the Russian social media platform VKontakte. As a result, Bellingcat had to spend a significant amount of time finding alternative copies of links and online archived copies of images and pages to substantiate its conclusions. Ultimately, the Dutch-led Joint Investigation Team, with which Bellingcat shared its material, issued arrest warrants for three Russians and one Ukrainian, who were put on trial in absentia in the Netherlands in March 2020.[72]

Nick Waters, an open source investigator at Bellingcat, investigated an airstrike on June 18, 2015, in Sabr Valley in northern Yemen, which Amnesty International concluded had killed at least 55 civilians.[73] He told Human Rights Watch that on July 28, 2019, he discovered a video of the airstrike on YouTube that was much clearer than any he had previously seen. It was in high definition, and showed the munition falling from the sky, children on the ground who were killed in the strike, and the topography of the area, which allowed for geolocation. He had seen photos of the children elsewhere, but not at the site itself. Waters said the video had been online for several years. He shared the link with a colleague who watched it, but when Waters tried to watch it again the following day, it had been taken down. “Maybe us watching it triggered the algorithm that took it down?” he wondered.

Waters said an acquaintance inquired with YouTube on his behalf to try to understand why the video had suddenly been taken down. His acquaintance later told him that he could not “disclose any details about the mechanism,” but added that “it’s certainly not completely random—that would fly in the face of logic.”[74] Waters was unable to get a copy of the content from YouTube; however, in 2020, he learned that, fortunately, Mnemonic had preserved a copy through its Yemeni Archive initiative.

In another example from Yemen, in April 2015, the September 21 YouTube channel, a media outlet linked to the Houthis, an armed group in control of northern Yemen, uploaded a video with no audio of an apparent cluster bomb attack. The video (which has since been taken down) showed numerous objects attached to parachutes slowly descending from the sky, and then zoomed out to show mid-air detonation and several black smoke clouds from other detonations.[75] By matching visible landmarks in the video to satellite imagery and topographic (three-dimensional) models of the area, Human Rights Watch determined that the video was recorded in the village of al-Shaaf in Saqeen, in the western part of Saada governorate. Human Rights Watch also determined the specific weapon likely used in the attack by matching the distinctive parachute design and detonation signatures visible in the video to technical videos of the CBU-105 Sensor-Fuzed munitions manufactured by Textron Systems Corporation, and supplied to Saudi Arabia and the United Arab Emirates by the US.

The video raised concerns regarding how US cluster munitions came into the hands of the Houthis. The US, Saudi Arabia, and the United Arab Emirates have not signed the 2008 Convention on Cluster Munitions, which bans their use. However, US policy on cluster munitions at the time was detailed in a June 2008 memorandum issued by then-Secretary of Defense Robert Gates.[76] Under the Gates policy, the US could only use or export cluster munitions that “after arming do not result in more than 1 percent unexploded ordnance across the range of intended operational environments,” and the receiving country had to agree that cluster munitions “will only be used against clearly defined military targets and will not be used where civilians are known to be present or in areas normally inhabited by civilians.” By verifying the location of the attack in the video, Human Rights Watch was able to conclusively demonstrate that the munitions had been used in an area inhabited by civilians.

As a result of this and other evidence of attacks using this weapon, in June 2016, the US Department of State suspended new deliveries of CBU-105 Sensor Fuzed Weapons to Saudi Arabia.[77] In August 2016, Textron announced that it would discontinue production of the CBU-105s, which were the last cluster munitions to be manufactured in the US. This represented an important step in minimizing cluster munitions attacks in Yemen by the Saudi-led coalition.

In another example, in April 2018 Human Rights Watch published a report that included a video with footage from a journalist broadcasting on Facebook Live showing Nicaragua’s government brutally cracking down on demonstrators at ongoing protests.[78] The organization used this evidence to call for those responsible for the abuses to be held to account, including Francisco Diaz, the deputy chief of the national police. Diaz was among a group of officials subsequently subjected to targeted sanctions by the European Union, the United Kingdom, and Canada.[79]

As shown in these cases, documentation by civil society organizations and the media, which often rely at least in part on content posted on social media, can play a crucial role in spurring national and international prosecutions or other forms of accountability and redress.

III. Obstacles for Those Seeking Removed Content

National Law Enforcement

Currently, each social media company sets its own procedures for law enforcement to request content that is no longer available online, as well as user information. In most cases, companies require the law enforcement body to present a valid subpoena, court order, or search warrant.[80] Facebook states that, upon receipt of a valid request, it will preserve the content for 90 days, “pending our receipt of [a] formal legal process.”[81] However, Human Rights Watch knows of instances in which it has retained removed content for much longer.[82] Facebook states that it will not process “overly broad or vague requests.”

In January 2020, Google announced that it would begin charging US law enforcement fees for responding to search warrants and subpoenas.[83] In at least one instance that Human Rights Watch is aware of, YouTube restored content two years after it had taken it down.[84]

The email from Twitter’s Public Policy Strategy and Development Director said the company “retains different types of information for different lengths of time, and in accordance with our Terms of Service and Privacy Policy.” Twitter’s Privacy Policy states that it keeps Log Data for a maximum of 18 months, but Human Rights Watch was unable to determine from the policy how long other types of information might be retained.[85]

A European law enforcement officer investigating war crimes told Human Rights Watch that “content being taken down has become a daily part of my work experience” and that he is “constantly being confronted with possible crucial evidence that is not accessible to me anymore.”[86] He and another European law enforcement officer said that as a general principle they secure copies of all content they come across during their investigations in a forensically sound manner, using a system that downloads the webpage of interest, timestamps this download, and automatically applies a cryptographic hashing algorithm and a cryptographic digital signature to these files, all in order to authenticate when, where, and by whom this material was archived. Where it seems to constitute an important piece of evidence, they contact the platform and ask it to preserve the content, and all the relevant attached data, even if it comes down.[87]
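The forensic workflow the officers describe (download, timestamp, hash, and digitally sign) can be illustrated with a short sketch. The example below is a simplified illustration rather than the officers’ actual system, and it assumes a locally generated RSA key held by the examiner: it downloads a page, records the retrieval time, computes a SHA-256 digest of the content, and signs the digest together with the metadata so that later tampering with the saved copy can be detected.

```python
import datetime
import hashlib
import requests  # third-party HTTP library
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Illustrative examiner key; a real system would use a managed, audited key.
examiner_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def preserve(url: str) -> dict:
    """Download a page and produce a timestamped, hashed, signed record."""
    content = requests.get(url, timeout=30).content
    retrieved_at = datetime.datetime.now(datetime.timezone.utc).isoformat()
    digest = hashlib.sha256(content).hexdigest()
    # Sign the digest plus the retrieval metadata so neither can be altered
    # after the fact without invalidating the signature.
    record = f"{url}|{retrieved_at}|{digest}".encode()
    signature = examiner_key.sign(
        record,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    return {
        "url": url,
        "retrieved_at": retrieved_at,
        "sha256": digest,
        "signature": signature.hex(),
        "content": content,
    }
```

Signing the digest together with the URL and retrieval time is what allows the record to show when, where, and by whom the material was archived, which are the properties the officers describe.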

The procedures developed to facilitate access to content for law enforcement are premised on authorities knowing about the content that was taken offline, in order to make a request. They do not address how law enforcement could access content taken down so rapidly that no authority knows of its existence. One of the law enforcement officers said that, with content that was posted, viewed, and then taken down, even if he did not see the content while it was online, it almost always leaves a trace, with people online referencing it.[88] This allows him to know what to request access to from the companies. When content is blocked from being uploaded or comes down so quickly it does not leave a trace, this potentially ends his ability to pursue a case, he said.

International Courts and Internationally Mandated Investigations

The International Criminal Court (ICC) and internationally mandated investigations, such as the Independent Investigative Mechanism for Myanmar (IIMM), do not have the power to compel evidence from private companies outside their jurisdiction, and this has been a significant obstacle in their ability to obtain content from social media companies.[89] According to one UN investigator, each social media platform has a law enforcement focal point and when UN investigators have contacted them, the focal points have had to decline their requests for the simple reason that these types of requests are not backed by court orders or subpoenas.

Some of these investigative teams have developed a work-around, with the ICC, for example, requesting that law enforcement officials in a country that is a party to its statute obtain a court order or subpoena and make a company request on its behalf.[90] Two investigators told Human Rights Watch that the data they wanted from companies like Facebook included the content itself as well as the data on users who had posted the content, which was vital to their investigations.[91] As such, simply saving the content they identified was insufficient for the purposes of their investigations.

Civil Society and Media

Facebook and Google, two of the three companies contacted for this report, did not respond to Human Rights Watch queries as to whether they had created any mechanism to allow the media or civil society organizations to request content that has been removed.

In an emailed response to Human Rights Watch’s letter to Twitter requesting access to taken down content for archival purposes, the company’s Public Policy Strategy and Development Director said it could not provide content data without an appropriate legal process:

Pursuant to the U.S. Stored Communications Act (18 U.S.C. 2701 et seq.), Twitter is prohibited from disclosing users’ content absent an applicable exception to the general bar on disclosure. This law allows U.S. law enforcement to compel disclosure of content with a valid and properly scoped search warrant, but there is no such mechanism for disclosure to entities who are unable to obtain a warrant (whether governmental or non-governmental).
Unfortunately, this means we cannot provide copies of the content you have identified for archival purposes. However, Twitter is supportive of efforts through the Global Internet Forum to Counter Terrorism (GIFCT)’s working group on legal frameworks to consider potential avenues to allow greater access to content for appropriate uses…

As far as Human Rights Watch is aware, organizations and the media can only dispute whether a piece of content should have been taken down or not, and appeal for it to be put back online. Mnemonic has successfully supported requests to reinstate 650,357 pieces of content that social media companies had taken offline and that captured alleged human rights violations, including the targeting of hospitals and medical facilities, and the use of chemical weapons.[92] Social media companies also enter into relationships with some individuals and groups as “trusted partners” to help flag content, which has assisted these individuals and groups in requesting content to be reinstated in some cases.[93]

In 2013, Access Now established the Digital Security Helpline, which works with individuals and organizations to help them engage with social media companies to, among other things, get content they deem hate speech taken offline or, conversely, get content they posted put back online if they believe a platform's takedown was incorrect.[94]

IV. The Child Sexual Exploitation Content Model for Takedowns and Preservation

Companies, UN entities, and governmental authorities can draw useful lessons about managing content classified as TVEC from efforts to address child sexual exploitation content, often referred to as child sexual abuse material (CSAM). There, a similar imperative exists for companies both to take down content and to preserve it for law enforcement and investigative purposes.

There is no universal legal definition of either child sexual exploitation material or terrorism content. However, in a key distinction between the two categories of content, child sexual abuse material is better suited to automated takedowns based on hashes than content that social media companies classify as TVEC. Because most countries criminalize the simple possession of CSAM, regardless of the intent to distribute, CSAM is not considered protected speech under the law. This means that once such material is identified, it can be taken down without the need to examine its context or intent.[95] When identifying terrorist or violent extremist content, contextual factors are extremely important in determining alleged support for or glorification of terrorism, the nuances of which automated systems are notoriously bad at catching.[96] Using hashes as the basis for taking down content strips away this context.

Most platforms identify child sexual exploitation content based on hashes in various child sexual exploitation hash databases that different governments and organizations have developed. Some of the organizations running these databases, such as the Internet Watch Foundation in the United Kingdom, require a person to vet each piece of content before it is hashed and the hash is added to the database.[97] New content is identified by users who flag it or by algorithms.[98]

In the United States, the National Center for Missing and Exploited Children (NCMEC), a private nonprofit organization with a federally designated legal right to possess such material indefinitely, also uses the copy of the content to produce a hash, which it enters into its database to share with platforms.[99] NCMEC’s authorizing statutes in some ways make it a hybrid entity, exercising special law enforcement powers, and mandate its collaboration with law enforcement authorities.[100]

One child protection worker raised concerns that the NCMEC has sometimes added hashes to its database for content that does not meet the definition of child sexual exploitation, and has even at times wrongfully notified law enforcement.[101] When this happens, a user can come under surveillance from state authorities for content that is not illegal. This reinforces the importance of ensuring that such a mechanism is appropriately resourced and regulated, the expert said.[102]

In the US, once the content is identified, companies have a statutory obligation to take it down but preserve it on their servers for 180 days.[103] After that period, the US government requires companies to delete the content.[104] The US government also requires companies to share a copy of each piece of content, as well as all relevant metadata and user data, with the NCMEC. The NCMEC, in turn, notifies law enforcement nationally and internationally of the content. Some other jurisdictions have similar independent arrangements, including with the Internet Watch Foundation in the United Kingdom.[105]

Both CSAM and TVEC are sometimes processed on servers located outside the country from which the content originated. While the definitions of child sexual exploitation material and terrorism vary from country to country, and Facebook, Twitter, and YouTube are obligated to comply with national laws, the companies have also developed internal standards for content they consider TVEC or CSAM that they apply globally.[106]

In 1999, the NCMEC developed the Safeguard Program, designed to address vicarious trauma, secondary trauma, and compassion fatigue among staff and to help them develop the healthy coping skills necessary to maintain a positive work-life balance. Similar support would be essential in any system created to preserve content classified as TVEC or otherwise relevant as evidence of serious international crimes.[107]

 

Recommendations

In line with recommendations made by a coalition of civil society organizations aimed at increasing transparency and accountability around content takedowns, Human Rights Watch believes it is vital that all relevant stakeholders jointly develop a plan to establish an independent mechanism to take on the role of liaising with social media platforms and preserving publicly posted content they classify as TVEC, as well as other removed material that could be evidence of serious international crimes, including content taken down because it was associated with accounts showing “coordinated inauthentic behavior.”[108] The independent mechanism should then be responsible for sorting and granting access to the content for archival and investigative purposes in a manner that respects privacy and security concerns.

This mechanism could serve a role similar to that of existing archives that are legally privileged to hold child sexual exploitation content, but it should be nongovernmental and allow more stakeholders to access the content, including international, regional, and local civil society organizations, journalists, and academics, in addition to national law enforcement officials and investigators with internationally mandated investigations. The system should require legal authorization to retain such content, but in contrast to groups such as the National Center for Missing and Exploited Children (NCMEC), it would not be statutorily linked to any particular government. It also would not have a duty to automatically notify particular law enforcement agencies of the removed content.

This body should function akin to a restricted-access research library. Some international tribunals, including the International Criminal Tribunal for Rwanda, have established archives holding physical and digital records, including audio and video recordings of the tribunal’s work.[109] Because these records are often sensitive yet vitally important repositories documenting historical narratives, the United Nations has developed policies to ensure that people can request to access them based on a classification system that prioritizes security and privacy.[110] One distinction is that these materials have already been used in criminal investigations, and so have demonstrated evidentiary value.

Decisions around the publication of national archives that reveal abuses by prior governments in different countries could help inform methods for protecting privacy rights while upholding the collective right to information about human rights abuses.[111] For example, in 2005, after broad national consultations, the Guatemalan government decided to make public the Historical Archive of the National Police, which consists of nearly five linear miles of documents, photographs, videotapes, and computer disks.[112] Most of the documents have been digitized, and the public has access to records that include the names, photographs, and details of individuals arrested by the police from 1881 to 1997.[113] Kate Doyle, senior analyst at the National Security Archive, said,

"When it comes to uncovering archives of repression, privacy rights have to be seriously considered, but they're not absolute. The right of an individual to privacy may be overcome by the right of an entire societyto know its own terrible history. You are talking about the right of future generations to read and fully comprehend once-secret records documenting State violence - how it functioned, why it was used, and specifically who it targeted. In that sense the identities of victims become part of the puzzle of a repressive past."[114]

Broad consultations would be key to ensuring the mechanism correctly identifies and preserves material that could be relevant for investigations into serious crimes.

Efforts are already underway to create a limited archive of content that social media companies remove as “terrorist.” One such example, intended for different purposes than Human Rights Watch’s proposed mechanism, is the Terrorist Content Analytics Platform (TCAP), led by Tech Against Terrorism (TAT), a project launched by the UN Security Council and supported by the Council’s Counter Terrorism Committee Executive Directorate.[115]

According to TAT’s executive director, Adam Hadley, the TCAP:

[W]ill be a secure online platform that hosts terrorist material including verified terrorist content (imagery, video, PDFs, URLs, audio) collected from open-sources and existing datasets. Content on the TCAP will be verified by terrorist content specialists. The purpose of the TCAP is to facilitate secure information sharing between platforms, academia, and data scientists. As well as archiving historical content to support academic analysis and the development of improved content classifiers, the TCAP will provide a real-time alert service to inform smaller internet platforms of public content [that TAT identifies as terrorist] discovered on their services. Furthermore, the TCAP dataset will support third party data scientists in developing more accurate and transparent algorithmic / analytical efforts that can be deployed to support smaller internet platforms.[116]

Hadley added that to address any privacy concerns, “Only tech companies, researchers, and civil society will be allowed access to the platform. We will also ensure that personal identifiable information (PII) of users is not traceable on the platform.”[117]

The mechanism that Human Rights Watch proposes would serve a different function than TCAP and would not be overseen by an entity launched by the UN Security Council or any UN counterterrorism body. Nevertheless, the questions TCAP has grappled with regarding how to securely and legally store and provide access to archived content may help inform discussions on the mechanism that Human Rights Watch is calling for.

In April 2020, the GIFCT launched six working groups made up of representatives from social media companies, civil society organizations, academia, and governments.[118] One of the working groups is focused on “understanding the challenges and constraints of existing legal frameworks; incorporating the risks and opportunities of greater data-sharing; and identifying opportunities for clarification and reform.”[119] Two members of the working group told Human Rights Watch that the group will tackle, among other topics, content takedowns and the legal framework needed to preserve taken-down content for evidentiary purposes.[120]

Human Rights Watch urges social media companies and other relevant stakeholders to launch a consultation process to determine the contours of an independent mechanism to preserve content and its metadata that may serve as evidence of serious international crimes.

These consultations should prioritize inclusion of internationally mandated investigators, human rights researchers, civil society organizations, journalists, academics, and national law enforcement representatives. Such consultations should address the following issues:

Nature of the content to be archived, and the manner in which it would be stored:

  • What content would need to be preserved for evidentiary and research purposes in this mechanism, ensuring that the selection of content would be based on narrow criteria, in a manner that would meet international standards of free expression and privacy and data protection rights;
  • How content would be archived so that the process is not overly onerous but content can still be found relatively easily, without jeopardizing privacy.[121]

Determining who would have access to the content:

  • Developing clear criteria and rights-based principles to guide who could access archived material, and measures to avoid either unrestricted access and dissemination or unreasonable restrictions on nongovernmental access for research;
  • Developing clear guidelines and conditions for accessing the content in accordance with privacy and data protection rights standards;
  • Developing accreditation standards that would govern access by individuals, governments, and organizations to restricted content for approved purposes.

Determining what content would be accessible and for what purposes:

  • Establishing under what circumstances various types of applicants would be able to access not just content but also data related to the content, such as metadata and user data, and for which purposes and under what conditions;[122]
  • Ensuring that there are appropriate protections when access is sought to private information, including security safeguards to ensure sensitive content is not leaked and prevent unlawful sharing of the archived material;
  • Ensuring that there are appropriate notification and appeals processes;
  • Establishing the terms of use of the content, once accessed, including ways to ensure the privacy and security of the individuals featured in preserved content, as well as those who posted or captured the content;
  • Securing ways to ensure those granted access to archived material do not violate the terms of use.

Requests for accessing the content:

  • Determining how specific requests would need to be, in light of the need to ensure access to content that has been taken down before any human has seen it;
  • Finding ways to legally share content for approved purposes with those seeking access from jurisdictions where viewing it is a criminal offense or is otherwise prohibited by law.

Safeguarding the mechanism, including to prevent abuse and misuse by:

  • Implementing measures to ensure maximum transparency around the functioning of the mechanism, while respecting privacy, data protection, and due process rights in compliance with international law;
  • Ensuring that the archived material is stored in a secure manner in accordance with privacy and data protections standards under international law and that sufficient information security controls and auditing are put in place to securely protect all data contained in the mechanism;
  • Finding ways to adequately fund the mechanism and preserve its independence, both to sort and securely preserve content and review access applications, and to carry out outreach and educational activities to help ensure that all those wishing to apply to access archived content know how to do so;
  • Developing and funding a program to address vicarious trauma, secondary trauma, and compassion fatigue experienced by those analyzing distressing material and, to the extent possible, by others accessing the material;
  • Basing the mechanism in a jurisdiction where it can operate without government interference;
  • Implementing a regular audit of the mechanism to ensure fairness and accuracy.

In advance of the creation of an independent mechanism to liaise with social media platforms and preserve online material classified as TVEC and other relevant content, Human Rights Watch urges social media companies to take the following steps:

  • Put in place a process whereby internationally mandated investigators, including those from the International Criminal Court (ICC) and UN-mandated investigations, can request access to removed content and its metadata, without having to go through national law enforcement agencies;
  • Make public the full process by which the company identifies and removes content, including the roles of human moderation and artificial intelligence, how a hash is added to the Global Internet Forum to Counter Terrorism (GIFCT) hash database, how the company uses hashes from other companies in its own content moderation processes, how long the company stores content it has taken down, what measures have been put in place to decide when to delete it, and how quickly the deletion occurs;
  • Improve transparency and accountability in content moderation to ensure takedowns are not overly broad or biased.[123] This includes implementing the standards in the Santa Clara Principles on Transparency and Accountability in Content Moderation, namely: clearly explain to users why their content or account has been taken down, including the specific clause from the Community Standards that the content was found to violate and how the content was detected, evaluated, and removed (for example, by users, automation, or human content moderators); and provide a meaningful opportunity for timely appeal of any content removal or account suspension;[124]
  • Review and modify overly broad definitions of “terrorist and violent extremist content” to ensure they comport with international human rights norms including the right to free expression.

Until a mechanism is created, it will be important for human rights researchers to improve their own ability to preserve and archive material that they rely on in their documentation efforts. Donors and human rights organizations should invest in supporting the development and maintenance of the necessary technical infrastructure and in developing the skills of those who do not currently have the capacity to preserve and archive material.

Acknowledgements

This report was researched and written by Belkis Wille, crisis and conflict division senior researcher. Ida Sawyer, acting crisis and conflict director, edited the report.

Julie Ciccolini, research technologist, provided research support. Shayna Bauchner, Asia division assistant researcher, Deborah Brown, senior researcher and advocate on digital rights, Hye Jung Han, children’s rights division researcher and advocate, Gabriela Ivens, head of open source research, Balkees Jarrah, international justice program associate director, Sara Kayyali, Syria researcher, Linda Lakhdhir, Asia division legal advisor, Nicole Martin, senior manager of archives and digital systems, Manny Maung, Asia researcher, Hanan Salah, senior Libya researcher, Param-Preet Singh, associate director of the international justice program, Joe Stork, deputy Middle East and North Africa director, and Letta Tayler, crisis and conflict division senior researcher, provided specialist review. Dinah Pokempner, general counsel, provided legal review, and Tom Porteous, associate program director, provided programmatic review. Crisis and conflict associate Madeline de Figueiredo, photography and publications coordinator Travis Carr, and administrative manager Fitzroy Hepkins prepared the report for publication.

Human Rights Watch would like to thank the many experts, particularly civil society representatives who have been engaging with social media companies and victims’ communities for years on this issue, who were generous with their time and insights when speaking about this topic with researchers.

 

[1] Kent Walker (General Counsel, Google), “Four steps we’re taking today to fight terrorism online,” Google blog, June 18, 2017, https://www.blog.google/topics/google-europe/four-steps-were-taking-today-fight-online-terror/ (accessed July 27, 2020); Angel Diaz and Faiza Patel, “Scramble to Erase New Zealand Attack Videos Exposes Pitfalls Too,” Just Security, April 1, 2019, https://www.justsecurity.org/63451/scramble-to-erase-new-zealand-attack-videos-exposes-pitfalls-too/ (accessed July 27, 2020).

[2] Ibid.; “An update on our commitment to fight terror content online,” YouTube Official blog, August 1, 2017, https://youtube.googleblog.com/2017/08/an-update-on-our-commitment-to-fight.html (accessed July 27, 2020); Monika Bickert and Brian Fishman, “Hard Questions: How We Counter Terrorism,” Facebook news release, June 15, 2017, https://about.fb.com/news/2017/06/how-we-counter-terrorism/ (accessed July 27, 2020).

[3] Malachy Browne, “YouTube Removes Videos Showing Atrocities in Syria,” New York Times, August 22, 2017, https://www.nytimes.com/2017/08/22/world/middleeast/syria-youtube-videos-isis.html?rref=collection%2Fbyline%2Fmalachy-browne&action=click&contentCollection=undefined&region=stream&module=stream_unit&version=latest&contentPlacement=1&pgtype=collection&_r=0 (accessed July 27, 2020).

[4] “An update on our commitment to fight terror content online,” YouTube Official blog, August 1, 2017, https://youtube.googleblog.com/2017/08/an-update-on-our-commitment-to-fight.html (accessed July 27, 2020).

[5] Google Transparency Report, “YouTube Community Guidelines enforcement,” January 2020 through March 2020, https://transparencyreport.google.com/youtube-policy/removals (accessed July 27, 2020).

[6] Google Transparency Report, Featured Policies: Violent Extremism chapter, January 2020 through March 2020, https://transparencyreport.google.com/youtube-policy/featured-policies/violent-extremism?hl=en (accessed August 12, 2020).

[7] “Protecting our extended workforce and the community,” YouTube Creator Blog, March 16, 2020, https://youtube-creators.googleblog.com/2020/03/protecting-our-extended-workforce-and.html?m=1 (accessed July 27, 2020).

[8] Facebook Transparency Report, Community Standards Enforcement Report, January 2020 through March 2020, https://transparency.facebook.com/community-standards-enforcement (accessed July 27, 2020). See the chapters: “Dangerous organizations: Terrorism and Organized Hate,” “Hate Speech,” “Fake Accounts,” and “Violent and Graphic Content.”

[9] Twitter Transparency Report, “Twitter rules enforcement,” January 2019 through June 2019, https://transparency.twitter.com/en/twitter-rules-enforcement.html (accessed July 28, 2020).

[10] “Partnering to Help Curb Spread of Online Terrorist Content,” Facebook news release, December 5, 2016, https://about.fb.com/news/2016/12/partnering-to-help-curb-spread-of-online-terrorist-content/ (accessed July 27, 2020).

[11] Global Internet Forum to Counter Terrorism (GIFCT) webpage, https://www.gifct.org/members/ (accessed July 29, 2020).

[12] “An Update on Combating Hate and Dangerous Organizations,” Facebook news release, May 12, 2020, https://about.fb.com/news/2020/05/combating-hate-and-dangerous-organizations/ (accessed July 27, 2020); “Social media companies launch upload filter to combat ‘terrorism and extremism,’” European Digital Rights, April 5, 2017, https://edri.org/social-media-companies-launch-upload-filter-to-combat-terrorism-and-extremism/ (accessed July 27, 2020); Jon Porter, “Upload Filters and One-Hour Takedowns: The EU’s Latest Fight Against Terrorism Online, Explained,” The Verge, March 21, 2019, https://www.theverge.com/2019/3/21/18274201/european-terrorist-content-regulation-extremist-terreg-upload-filter-one-hour-takedown-eu (accessed July 27, 2020).

[13] Global Internet Forum to Counter Terrorism (GIFCT), “GIFCT Transparency Report – July 2020,” July 2020, https://gifct.org/transparency/ (accessed July 27, 2020).

[14] Joint letter to Nicholas Rasmussen, Executive Director, Global Internet Forum to Counter Terrorism, July 30, 2020, https://www.hrw.org/news/2020/07/30/joint-letter-new-executive-director-global-internet-forum-counter-terrorism; Dia Kayyali, “WITNESS joins 14 organizations to urge GIFCT to respect human rights,” Witness blog post, July 30, 2020, https://blog.witness.org/2020/07/witness-joins-14-organizations-to-urge-gifct-to-respect-human-rights/ (accessed August 18, 2020).

[15] Syrian Archive, Caught in the Net: The Impact of Extremist Speech Regulations on Human Rights Content, 2019, https://syrianarchive.org/en/lost-found/impact-extremist-human-rights (accessed August 12, 2020); Emma Llanso, “Takedown Collaboration by Private Companies Creates Troubling Precedent,” Center for Democracy and Technology post, December 6, 2016, https://cdt.org/insights/takedown-collaboration-by-private-companies-creates-troubling-precedent/ (accessed August 27, 2020); Letter to Members of the European Parliament from various organizations, February 4, 2019, https://cdt.org/wp-content/uploads/2019/02/Civil-Society-Letter-to-European-Parliament-on-Terrorism-Database.pdf (accessed August 27, 2020). See also Dia Kayyali, “WITNESS tells world leaders – don’t delete opportunities for justice,” Witness blog post, September 24, 2019, https://blog.witness.org/2019/09/witness-tells-world-leaders-dont-delete-opportunities-justice/ (accessed August 26, 2020); Hadi Al Khatib and Dia Kayyali, “YouTube Is Erasing History,” New York Times, October 23, 2019, https://www.nytimes.com/2019/10/23/opinion/syria-youtube-content-moderation.html (accessed August 27, 2020).

[16] Ibid.

[17] There is concern that without a common definition, efforts to coordinate around takedowns will result in a race to the bottom and disproportionately impact Arabic language content, because of a tendency within many social media companies to focus on content that is Islamist extremist rather than, for example, Islamophobic, xenophobic or white-supremacist. See Letter from Human Rights Watch to Nicholas Rasmussen, Executive Director, Global Internet Forum to Counter Terrorism, July 30, 2020, https://www.hrw.org/news/2020/07/30/joint-letter-new-executive-director-global-internet-forum-counter-terrorism.

[18] Dia Kayyali, “WITNESS joins 14 organizations to urge GIFCT to respect human rights,” Witness blog post, July 30, 2020, https://blog.witness.org/2020/07/witness-joins-14-organizations-to-urge-gifct-to-respect-human-rights/ (accessed August 18, 2020).

[19] See e.g. Isabelle van der Vegt, Paul Gill, Stuart Macdonald, and Bennett Kleinberg, “Shedding Light on Terrorist and Extremist Content Removal,” Global Research Network on Terrorism and Technology, paper no. 3, https://rusi.org/sites/default/files/20190703_grntt_paper_3.pdf (accessed August 14, 2020).

[20] Global Internet Forum to Counter Terrorism (GIFCT), “GIFCT Transparency Report – July 2020,” July 2020, https://gifct.org/transparency/ (accessed July 27, 2020).

[21] Ibid.; “Christchurch shootings: 49 dead in New Zealand mosque attacks,” BBC, March 15, 2019, https://www.bbc.com/news/world-asia-47578798 (accessed July 27, 2020).

[22] Daniel Boffey, “Remove terror content or be fined millions, EU tells social media firms,” Guardian, September 13, 2018, https://www.theguardian.com/media/2018/sep/13/social-media-firms-could-face-huge-fines-over-terrorist-content (accessed July 28, 2020).

[23] Patrick Breyer, “EU Terror Filter Negotiation: Here’s Where We Stand [Updated 11 March 2020],” Patrick Breyer’s website, September 24, 2019, https://www.patrick-breyer.de/?p=589500&lang=en (accessed July 28, 2020).

[24] Tackling the dissemination of terrorist content online, European Parliament legislative resolution, April 17, 2019.

[25] The Santa Clara Principles on Transparency and Accountability in Content Moderation, https://santaclaraprinciples.org.

[26] Network for investigation and prosecution of genocide, crimes against humanity and war crimes, “Prosecuting war crimes of outrage upon personal dignity based on evidence from open sources – Legal framework and recent development in the Members States of the European Union,” February 2018, http://www.eurojust.europa.eu/doclibrary/genocide-network/KnowledgeSharing/Prosecuting%20war%20crimes%20of%20outrage%20upon%20personal%20dignity%20based%20on%20evidence%20from%20open%20sources%20(February%202018)/2018-02_Prosecuting-war-crimes-based-on-evidence-from-open-sources_EN.pdf (accessed July 27, 2020). See also Court of the Hague judgment, July 23, 2019, case no. 09/748003-18v, https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2019:10647 (accessed July 27, 2020); “Sweden jails Iraqi for war crimes after Facebook post,” Al Araby, December 6, 2016, https://english.alaraby.co.uk/english/fullimage/eff18e9b-7920-4da1-9f9f-e242fc02290f/e59adf51-6a6d-408f-b10b-8573ab87d8e0.

[27] Human Rights Watch, “These are the Crimes we are Fleeing”: Justice for Syria in Swedish and German Courts (New York: Human Rights Watch, 2017), https://www.hrw.org/report/2017/10/04/these-are-crimes-we-are-fleeing/justice-syria-swedish-and-german-courts; “Syrian Rebels Execute 7 Soldiers,” New York Times video, September 5, 2013, https://www.nytimes.com/video/multimedia/100000002421671/syrian-rebels-execute-7-soldiers.html (accessed July 27, 2020). See also Morten Bergsmo and Carsten Stahn, “Quality Control in Preliminary Examination: Volume 2,” published by Torkel Opsahl Academic EPublisher, 2018, https://www.law.berkeley.edu/wp-content/uploads/2018/09/Preliminary-Examinations.-Chapter-34-Koenig-McMahon-Mehandru-and-Bhattacharjee.pdf (accessed August 12, 2020).

[28] Trial International, “Haisam Omar Sakhanh,” last modified July 27, 2020, https://trialinternational.org/latest-post/haisam-omar-sakhanh/ (accessed July 27, 2020); Human Rights Watch, “These are the Crimes we are Fleeing”: Justice for Syria in Swedish and German Courts.

[29] Court of the Hague judgment, July 23, 2019, case no. 09/748003-18v, https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2019:10647 (accessed July 27, 2020).

[30] Human Rights Watch telephone interview with two national law enforcement officers, May 20, 2020.

[31] Human Rights Watch interview with a lawyer working on new methodologies for using audiovisual material and publicly available data in legal cases, February 3, 2020. See also Digital Witness: Open Source Information for Human Rights Investigation, Documentation, and Accountability, ed. Alexa Koenig, Sam Dubberley, Daragh Murray, (Oxford, UK: Oxford University Press, 2020).

[32] “Situation in Libya: ICC Pre-Trial Chamber I issues a warrant of arrest for Mahmoud Mustafa Busayf Al-Werfalli for war crimes,” International Criminal Court (ICC) press release, August 15, 2017, https://www.icc-cpi.int/Pages/item.aspx?name=pr1328 (accessed July 28, 2020).

[33] The Prosecutor v. Mahmoud Mustafa Busayf Al-Werfalli, International Criminal Court (ICC), No. ICC-01/11-01/17, July 4, 2018, second warrant of arrest, https://www.icc-cpi.int/CourtRecords/CR2018_03552.PDF; Emma Irving, “And So It Begins…Social Media Evidence in an ICC Arrest Warrant,” August 17, 2017, http://opiniojuris.org/2017/08/17/and-so-it-begins-social-media-evidence-in-an-icc-arrest-warrant/ (accessed July 27, 2020).

[34] Alexa Koenig, “ ‘Half the Truth is Often a Great Lie’: Deep Fakes, Open Source Information, and International Criminal Law,” The American Society of International Law, 2019 (presented at the Symposium on Non-State Actors and New Technologies in Atrocity Prevention), https://www.cambridge.org/core/services/aop-cambridge-core/content/view/FB05229E78A65BEE8D7126766DA8F2D4/S2398772319000473a.pdf/half_the_truth_is_often_a_great_lie_deep_fakes_open_source_information_and_international_criminal_law.pdf (accessed August 12, 2020).

[35] Human Rights Watch telephone interview with an individual within an internationally mandated investigation, February 14, 2020.

[36] Human Rights Watch telephone interview with an individual within an internationally mandated investigation, February 14, 2020.

[37] “Burma: Military Commits Crimes Against Humanity,” Human Rights Watch news release, September 25, 2017, https://www.hrw.org/news/2017/09/25/burma-military-commits-crimes-against-humanity (accessed July 27, 2020).

[38] Avi Asher-Schapiro, “YouTube and Facebook are Removing Evidence of Atrocities, Jeopardizing Cases Against War Criminals,” The Intercept, November 2, 2017, https://theintercept.com/2017/11/02/war-crimes-youtube-facebook-syria-rohingya/ (accessed July 27, 2020); Human Rights Watch, World Report 2018 (New York: Human Rights Watch, 2018), Myanmar chapter, https://www.hrw.org/world-report/2020/country-chapters/myanmar-burma; Paul Mozur, “A Genocide Incited on Facebook, With Posts from Myanmar’s Military,” New York Times, October 15, 2018, https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html (accessed July 27, 2020).

[39] Alex Warofka, “An Independent Assessment of the Human Rights Impact of Facebook in Myanmar,” Facebook news release, November 5, 2018, https://about.fb.com/news/2018/11/myanmar-hria/ (accessed July 27, 2020); “Removing Myanmar Military Officials From Facebook,” Facebook news release, August 28, 2018, https://about.fb.com/news/2018/08/removing-myanmar-officials/ (accessed July 27, 2020); Nathaniel Gleicher, “Coordinated Inauthentic Behavior Explained,” Facebook news release, December 6, 2018, https://about.fb.com/news/2018/12/inside-feed-coordinated-inauthentic-behavior/ (accessed July 27, 2020); Steve Stecklow, “Hatebook: Inside Facebook’s Myanmar operation,” Reuters special report, August 15, 2018, https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/ (accessed August 12, 2020).

[40] “Removing Myanmar Military Officials From Facebook,” Facebook news release, August 28, 2018, https://about.fb.com/news/2018/08/removing-myanmar-officials/; Instagram is owned by Facebook.

[41] “April 2020 Coordinated Inauthentic Behavior Report,” Facebook news release, May 5, 2020, https://about.fb.com/news/2020/05/april-cib-report/ (accessed July 27, 2020).

[42] Facebook defines Coordinated Inauthentic Behavior (CIB) as campaigns by domestic non-government actors or on behalf of a government entity or by a foreign actor that include groups of accounts and Pages seeking to mislead people about who they are and what they are doing while relying on fake accounts. See: Nathaniel Gleicher (Facebook head of cybersecurity policy), “Coordinated Inauthentic Behavior Explained,” Facebook news release, December 6, 2018, https://about.fb.com/news/2018/12/inside-feed-coordinated-inauthentic-behavior/ (accessed August 12, 2020).

[43] Human Rights Watch telephone interview with an individual within an internationally mandated investigation, January 27, 2020; Nick LaGrow and Miri Pruzan, “Integrating autoconversion: Facebook’s path from Zawgyi to Unicode,” Facebook engineering news release, September 26, 2019, https://engineering.fb.com/android/unicode-font-converter/ (accessed July 27, 2020).

[44] Nick LaGrow and Miri Pruzan, “Integrating autoconversion: Facebook’s path from Zawgyi to Unicode,” Facebook engineering press release, September 26, 2019, https://engineering.fb.com/android/unicode-font-converter/ (accessed August 14, 2020).

[45] UN Human Rights Council, Compilation of all recommendations made by the Independent International Fact-Finding Mission on Myanmar, to the Government of Myanmar, armed organizations, the UN Security Council, Member States, UN agencies, the business community and others, September 16, 2019, A/HRC/42/CRP.6.

[46] Human Rights Watch telephone interview with an individual within an internationally mandated investigation, January 27, 2020. See also the Report of the detailed findings of the Independent International Fact-Finding Mission on Myanmar, United Nations Human Rights Council (UNHRC), September 17, 2018, A/HRC/39/CRP.2, p. 339.

[47] International Criminal Court, Situation in the People’s Republic of Bangladesh/Republic of the Union of Myanmar, July 4, 2019, No ICC-01/19, www.icc-cpi.int/CourtRecords/CR2019_03510.PDF (accessed August 27, 2020), para 175.

[48] See, for example, Human Rights Council, Report of the detailed findings of the Independent International Fact-Finding Mission on Myanmar, A/HRC/39/CRP.2, September 17, 2018, paras. 705-706, 1254, 1310-1319, 1328-1329, 1342-1354, 1422, 1533, 1539; Simon Lewis et al., “Tip of the Spear: The Shock Troops Who Expelled the Rohingya from Myanmar,” Reuters, June 26, 2018, https://www.reuters.com/investigates/special-report/myanmar-rohingya-battalions/ (accessed August 27, 2020); Weiyi Cai and Simon Lewis, “Sharing the Crackdown,” Reuters, December 28, 2018, https://graphics.reuters.com/MYANMAR-ROHINGYA/010081TG394/index.html (accessed August 27, 2020); Steve Stecklow, “Hatebook: Inside Facebook’s Myanmar Operation,” Reuters, August 15, 2018, https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/ (accessed August 27, 2020). See also Human Rights Watch, “Burma: Army Report Whitewashes Ethnic Cleansing,” November 14, 2017, https://www.hrw.org/news/2017/11/14/burma-army-report-whitewashes-ethnic-cleansing; Massacre by the River: Burmese Army Crimes against Humanity in Tula Toli (New York: Human Rights Watch, December 2017), https://www.hrw.org/report/2017/12/19/massacre-river/burmese-army-crimes-against-humanity-tula-toli; “Myanmar: Prosecute Dismissed Officers for Atrocities,” June 29, 2018, https://www.hrw.org/news/2018/06/29/myanmar-prosecute-dismissed-officers-atrocities.

[49] United Nations Human Rights Council, “Report of the detailed finding of the Independent International Fact-Finding Mission on Myanmar,” September 17, 2018, U.N. document A/HRC/39/CRP.2. See also Public sitting in the case concerning Application of the Convention on the Prevention and Punishment of the Crime of Genocide (The Gambia v. Myanmar), International Court of Justice (The Hague), verbatim record.

[50] The Republic of the Gambia v. Facebook, United States District Court for the District of Columbia, June 8, 2020.

[51] Poppy McPherson, “Facebook rejects request to release Myanmar officials’ data for genocide case,” Reuters, August 6, 2020, https://www.reuters.com/article/us-myanmar-facebook/facebook-rejects-request-to-release-myanmar-officials-data-for-genocide-case-idUSKCN2521PI (accessed August 12, 2020); “Facebook’s Opposition to Petitioner’s Application Pursuant to 28 U.S.C. § 1782,” The Republic of the Gambia v. Facebook, Inc., United States District Court for the District of Columbia, case 1:20-mc-00036-JEB-DAR, August 4, 2020.

[52] Mike Becker’s Twitter page, https://twitter.com/mabecker17/status/1291045316125437952 (accessed August 12, 2020).

[53] Ibid.; Megan Cassidy, “Facebook, Twitter hold evidence that could save people from prison. And they’re not giving it up,” San Francisco Chronicle, January 21, 2020, https://www.sfchronicle.com/crime/article/Facebook-Twitter-hold-evidence-that-could-save-14990176.php (accessed August 12, 2020).

[54] Alex Warofka, “An Independent Assessment of the Human Rights Impact of Facebook in Myanmar,” Facebook blog post, November 5, 2018, https://about.fb.com/news/2018/11/myanmar-hria/ (accessed August 26, 2020).

[55] “Iraq: Investigate Possible Mosul Abuse,” Human Rights Watch news release, July 13, 2017, https://www.hrw.org/news/2017/07/13/iraq-investigate-possible-mosul-abuse; “Niger: Video Shows Army Killing Wounded Men,” Human Rights Watch news release, June 12, 2020, https://www.hrw.org/news/2020/06/12/niger-video-shows-army-killing-wounded-men.

[56] Human Rights Watch, “Only Men Need Apply”: Gender Discrimination in Job Advertisements in China (New York: Human Rights Watch, 2018), https://www.hrw.org/report/2018/04/23/only-men-need-apply/gender-discrimination-job-advertisements-china; Human Rights Watch, China’s Algorithms of Repression: Reverse Engineering a Xinjiang Police Mass Surveillance App (New York: Human Rights Watch, 2019), https://www.hrw.org/sites/default/files/report_pdf/china0519_web.pdf; Human Rights Watch, “Eradicating Ideological Viruses”: China’s Campaign of Repression Against Xinjiang’s Muslims (New York: Human Rights Watch, 2018).

[57] US Department of Defense, Memorandum for Record: CIVCAS Allegation Closure Report, February 13, 2017, https://www.documentcloud.org/documents/4242269-The-Coalition-s-Internal-Probe-Into-The-Razzo.html (accessed August 12, 2020).

[58] Azmat Khan and Anand Gopal, “The Uncounted,” New York Times Magazine, November 16, 2017, https://www.nytimes.com/interactive/2017/11/16/magazine/uncounted-civilian-casualties-iraq-airstrikes.html (accessed July 27, 2020).

[59] “Civilian Casualty Airstrike Video Removed by the U.S.-Led Coalition,” November 29, 2016, video clip, YouTube, https://www.youtube.com/watch?v=Kced_hO9w_4 (accessed August 12, 2020); Human Rights Watch telephone and email exchange with Azmat Khan, investigative reporter, August 7, 2020. See also US Department of Defense, Memorandum for Record: CIVCAS Allegation Closure Report.

[60] The Daily podcast, episode “Friday, Nov. 17, 2017,” produced by The New York Times. November 17, 2017, https://podcasts.google.com/feed/aHR0cDovL3Jzcy5hcnQxOS5jb20vdGhlLWRhaWx5/episode/Z2lkOi8vYXJ0MTktZXBpc29kZS1sb2NhdG9yL1YwL0pIN3VvWnFjeU9jWXUwQXhhcFFCX2RFMGNUTERGZ3Q3TDRGcDRjYUtFMU0?sa=X&ved=2ahUKEwjmhbz2yojrAhWW2HMBHSFBDB4QkfYCegQIARAF.

[61] Human Rights Watch telephone interview with Chris Woods, the founder and director of Airwars, January 31, 2020.

[62] Human Rights Watch telephone interview with Chris Woods, the founder and director of Airwars, January 31, 2020.

[63] “Syria: A Year On, Chemical Weapons Attacks Persist,” Human Rights Watch news release, April 4, 2018, https://www.hrw.org/news/2018/04/04/syria-year-chemical-weapons-attacks-persist (accessed July 27, 2020).

[64] https://syrianarchive.org/; https://mnemonic.org/

[65] Rick Gladstone, “Russia Vetoes Stopgap Resolution to Preserve Syria Chemical Weapons Panel,” New York Times, November 17, 2017, https://www.nytimes.com/2017/11/17/world/middleeast/syria-chemical-weapons-united-nations-jim.html (accessed July 27, 2020). See also Syrian Archive, “Chemical Weapons Database,” https://syrianarchive.org/en/datasets/chemical-weapons (accessed July 28, 2020).

[66] Syrian Archive’s Twitter page, https://twitter.com/syrian_archive/status/1271101648082030592 (accessed July 27, 2020).

[67] Yemeni Archive homepage, https://yemeniarchive.org/ (accessed August 14, 2020).

[68] Human Rights Watch email exchange with Hadi al Khatib, executive director of Mnemonic, and Jeff Deutch, director of operations and research at Mnemonic, August 6, 2020.

[69] Human Rights Watch interview with Hadi Al Khatib, a founding member of the Syrian Archive, January 24, 2020.

[70] Organisation for the Prohibition of Chemical Weapons – UN Joint Commission, “First Report by the OPCW Investigation and Identification Team Pursuant to Paragraph 10 of Decision C-ss-2/Dec.3 ‘Addressing the Threat from Chemical Weapon Use’ Lt Amenah (Syrian Arab Republic) 24, 25, and 30 March 2017,” April 8, 2020, UN Document S/1867/2020, https://www.opcw.org/sites/default/files/documents/2020/04/s-1867-2020%28e%29.pdf (accessed July 27, 2020).

[71] “Identifying the Separatists Linked to the Downing of MH17,” Bellingcat, June 19, 2019, https://www.bellingcat.com/news/uk-and-europe/2019/06/19/identifying-the-separatists-linked-to-the-downing-of-mh17/ (accessed July 27, 2020).

[72] “MH17: Four charged with shooting down plane over Ukraine,” BBC, June 19, 2020, https://www.bbc.com/news/world-europe-48691488 (accessed July 27, 2020); Janene Pieters, “Trial Against First MH17 Suspects Starts in Netherlands on Monday,” March 8, 2020, https://nltimes.nl/2020/03/08/trial-first-mh17-suspects-starts-netherlands-monday (accessed July 27, 2020).

[73] Human Rights Watch telephone interview with Nick Waters, an open source investigator at Bellingcat, February 11, 2020. See also Bellingcat Yemen Project, “SAA10010 – Sabr Valley strike,” July 13, 2019, https://yemen.bellingcat.com/investigations/saa10010-sabr-valley-strike (accessed July 27, 2020); Amnesty International, “’Bombs Fall from the Sky Day and Night’: Civilians Under Fire in Northern Yemen,” October 2015, https://www.amnestyusa.org/files/bombs-fall-from-the-sky-day-and-night_civilians-under-fire-in-northern-yemen_final.pdf (accessed July 27, 2020).

[74] Human Rights Watch telephone interview with Nick Waters, an open source investigator at Bellingcat, February 11, 2020.

[75] “Yemen: Saudi-Led Airstrikes Used Cluster Munitions,” Human Rights Watch news release, May 3, 2015, https://www.hrw.org/news/2015/05/03/yemen-saudi-led-airstrikes-used-cluster-munitions (accessed July 27, 2020).

[76] “Memorandum for Secretaries of the Military Departments; DoD Policy on Cluster Munitions and Unintended Harm to Civilians,” Secretary of Defense Robert Gates, June 19, 2008, https://www.globalsecurity.org/military/library/policy/dod/d20080709cmpolicy.htm (accessed July 28, 2020). The Gates policy remained in effect until 2017.

[77] Stephen Snyder, “When the White House said ‘No’ to the Saudis,” Public Radio International (PRI), June 6, 2016, https://www.pri.org/stories/2016-06-06/when-white-house-said-no-saudis (accessed July 27, 2020).

[78] “Video: Protests Leave Deadly Toll in Nicaragua,” Human Rights Watch news release, April 26, 2018, https://www.hrw.org/video-photos/video/2018/04/26/317368.

[79] “Nicaragua: Senior Officials Responsible for Abuse,” Human Rights Watch news release, July 10, 2018, https://www.hrw.org/video-photos/video/2018/07/10/320260; “EU, UK Sanction Top Nicaraguan Officials,” Human Rights Watch news release, May 9, 2020, https://www.hrw.org/news/2020/05/09/eu-uk-sanction-top-nicaraguan-officials. See also “Correction: United States Nicaragua,” Associated Press, June 22, 2019, https://apnews.com/d47ad9b4ae824615b72577bc85c14ad6 (accessed July 27, 2020).

[80] “Law Enforcement Online Requests,” Facebook page, https://www.facebook.com/records/login/ (accessed July 27, 2020); “Information for Law Enforcement Authorities,” Facebook Safety Center, https://www.facebook.com/safety/groups/law/guidelines/ (accessed July 27, 2020); “Request for User Information FAQs,” Transparency Report Help Center, https://support.google.com/transparencyreport/answer/9713961?hl=en&visit_id=637314712335549943-890419305&rd=1 (accessed July 27, 2020).

[81] “Information for Law Enforcement Authorities,” Facebook Safety Center.

[82] Alex Warofka, “An Independent Assessment of the Human Rights Impact of Facebook in Myanmar,” Facebook blog post, November 5, 2018, https://about.fb.com/news/2018/11/myanmar-hria/ (accessed August 26, 2020).

[83] Gabriel J.X. Dance and Jennifer Valentino-DeVries, “Have a Search Warrant for Data? Google Wants You to Pay,” The New York Times, January 24, 2020, https://www.nytimes.com/2020/01/24/technology/google-search-warrants-legal-fees.html

[84] Ugarit News is a Syrian anti-government media outlet that started posting videos of government abuse on its YouTube channel in 2011. In 2017, YouTube took down its channel, https://www.youtube.com/user/UgaritNews, but restored it in 2019. Syrian Archive’s Facebook page, https://www.facebook.com/syrianarchive/posts/2602179510012409 (accessed August 27, 2020).

[85] Twitter Privacy Policy, https://twitter.com/en/privacy. An individual’s Log Data refers to information the company receives when someone views content on or otherwise interacts with its service.

[86] Human Rights Watch telephone interview with two national law enforcement officers, May 20, 2020.

[87] Human Rights Watch telephone interview with two national law enforcement officers, May 20, 2020.

[88] Human Rights Watch telephone interview with two national law enforcement officers, May 20, 2020.

[89] Human Rights Watch interview with a lawyer working on new methodologies for using audiovisual material and publicly available data in legal cases, February 3, 2020; Human Rights Watch telephone interview with an individual within an internationally mandated investigation, January 27, 2020; Human Rights Watch telephone interview with an individual within an internationally mandated investigation, February 3, 2020.

[90] Human Rights Watch interview with a lawyer working on new methodologies for using audiovisual material and publicly available data in legal cases, February 3, 2020; Human Rights Watch telephone interview with an individual within an internationally mandated investigation, January 27, 2020; Human Rights Watch telephone interview with an individual within an internationally mandated investigation, February 3, 2020.

[91] Human Rights Watch telephone interview with an individual within an internationally mandated investigation, February 3, 2020; Human Rights Watch telephone interview with an individual within an internationally mandated investigation, February 14, 2020.

[92] Human Rights Watch email exchange with Hadi al Khatib, executive director of Mnemonic, and Jeff Deutch, director of operations and research at Mnemonic, August 6, 2020.

[93] YouTube Trusted Flagger program, Google Support page, https://support.google.com/youtube/answer/7554338?hl=en (accessed September 2, 2020).

[94] Digital Security Helpline page, Access Now, https://www.accessnow.org/help/ (accessed August 27, 2020).

[95] International Centre for Missing and Exploited Children (ICMEC), “Child Sexual Abuse Material: Model Legislation & Global Review,” https://www.icmec.org/wp-content/uploads/2019/02/One-Pager-9th-Edition.pdf (accessed August 12, 2020); ICMEC, “Child Sexual Abuse Material: Model Legislation and Global Review,” 2018, 9th ed., https://www.icmec.org/wp-content/uploads/2018/12/CSAM-Model-Law-9th-Ed-FINAL-12-3-18.pdf (accessed August 12, 2020).

[96] Human Rights Watch has extensively documented a global trend of countries using overly broad definitions of terrorism that include acts that lack any intent to kill or harm civilians for religious, ideological or political purposes. Most recently see, for example, “Philippines: New Anti-Terrorism Act Endangers Rights,” Human Rights Watch news release, June 5, 2020, https://www.hrw.org/news/2020/06/05/philippines-new-anti-terrorism-act-endangers-rights (accessed August 12, 2020).

[97] Human Rights Watch email exchange with a child protection worker, August 6, 2020.

[98] Austin Davis, “German authorities turn to AI to combat child pornography online,” DW, August 5, 2019, https://www.dw.com/en/germany-new-ai-microsoft-combat-child-porn/a-49899882 (accessed July 27, 2020).

[99] “Landmark data sharing agreement to help safeguard victims of sexual abuse imagery,” Internet Watch Foundation news release, December 12, 2019, https://www.iwf.org.uk/news/landmark-data-sharing-agreement-to-help-safeguard-victims-of-sexual-abuse-imagery (accessed July 27, 2020). See also the homepage of the National Center for Missing and Exploited Children, https://www.missingkids.org/footer/about (accessed July 27, 2020).

[101] Department of Justice, Frequently Asked Questions page of the Child Exploitation and Obscenity Section (CEOS), https://www.justice.gov/criminal-ceos/frequently-asked-questions-faqs (accessed July 27, 2020).

[102] Human Rights Watch telephone interview with a child protection worker, February 5, 2020.

[103] Olivia Fecteau, “Bill introduced in U.S. Congress to fight online child exploitation,” ABC News, December 13, 2019, https://www.news5cleveland.com/news/local-news/cleveland-metro/bill-introduced-in-u-s-congress-to-fight-online-child-exploitation (accessed July 27, 2020). See also “Rep. Gonzalez’s End Child Exploitation Act Passes Senate Judiciary Committee,” Anthony Gonzalez 16th District of Ohio website press release, July 2, 2020, https://anthonygonzalez.house.gov/news/documentsingle.aspx?DocumentID=249 (accessed August 12, 2020).

[104] Olivia Fecteau, “Bill introduced in U.S. Congress to fight online child exploitation,” ABC News, December 13, 2019, https://www.news5cleveland.com/news/local-news/cleveland-metro/bill-introduced-in-u-s-congress-to-fight-online-child-exploitation (accessed July 27, 2020).

[105] Office of Juvenile Justice and Delinquency Prevention (OJJDP), National Center for Missing and Exploited Children, overview page, https://ojjdp.ojp.gov/programs/national-center-missing-and-exploited-children (accessed July 29, 2020).

[106] Human Rights Watch telephone interview with a child protection worker, February 13, 2020.

[107] Zack Whittaker, “Facebook to pay $52 million to content moderators suffering from PTSD,” TechCrunch, May 12, 2020, https://techcrunch.com/2020/05/12/facebook-moderators-ptsd-settlement/?guccounter=1 (accessed July 27, 2020); National Center for Missing and Exploited Children, “The Resilient Employee: A Safeguard Approach,” https://justiceclearinghouse.com/wp-content/uploads/2016/05/ncmec-part-2.pdf (accessed July 27, 2020).

[108] Nathaniel Gleicher, “Coordinated Inauthentic Behavior Explained,” Facebook news release, December 6, 2018, https://about.fb.com/news/2018/12/inside-feed-coordinated-inauthentic-behavior/ (accessed July 28, 2020).

[109] United Nations International Residual Mechanism for Criminal Tribunals, Archives webpage, https://www.irmct.org/en/archives (accessed July 29, 2020).

[110] UN Secretariat, “Information sensitivity, classification and handling,” February 12, 2007, UN document ST/SGB/2007/6, https://www.irmct.org/sites/default/files/documents/ST_SGB_2007_6_eng.pdf (accessed July 29, 2020); UN Secretariat, “International Criminal Tribunals: information sensitivity, classification, handling and access,” July 20, 2012, https://www.irmct.org/sites/default/files/documents/120720_secretary-general-bulletin_en.pdf (accessed July 29, 2020); United Nations International Residual Mechanism for Criminal Tribunals, Access Policy for the Records Held by the International Residuals Mechanism for Criminal Tribunals, January 4, 2019, UN doc MICT/17/Rev.1, https://www.irmct.org/sites/default/files/documents/190104-acces-policy-records-irmct.pdf (accessed July 29, 2020).

[111] Antonio Gonzalez Quintana, “Archival Policies in the Protection of Human Rights,” International Council on Archives, 2009, https://www.ica.org/sites/default/files/Report_Gonzalez-Quintana_EN.pdf (accessed August 27, 2020)

[112] Julian Smith, “A Human Rights Breakthrough in Guatemala,” Smithsonian Magazine, October 2009, https://www.smithsonianmag.com/history/a-human-rights-breakthrough-in-guatemala-138629807/ (accessed August 27, 2020).

[113] Anna-Catherine Brigida, “200,000 Died in Guatemala’s Civil War – This Digital Archive Is Finally Bringing Families Closure,” The Verge, December 5, 2017, https://www.theverge.com/2017/12/5/16733592/guatemala-civil-war-genocide-human-rights-archives (accessed August 27, 2020).

[114] Human Rights Watch telephone interview with Kate Doyle, senior analyst at the National Security Archive, August 24, 2020.

[115] The UN Counter-Terrorism Executive Directorate monitors member states’ implementation of UN Security Council counterterrorism mandates. UN Security Council resolution 2354 (2017) and the UN Counter-Terrorism Committee Comprehensive International Framework to Counter Terrorist Narratives led to the creation of the Terrorist Content Analytics Platform (TCAP). “Unanimously Adopting Resolution 2354 (2017), Security Council Urges Member States to Follow New Guidelines on Countering Terrorist Narratives,” United Nations press release, May 24, 2017, https://www.un.org/press/en/2017/sc12839.doc.html (accessed August 26, 2020); “Project Background,” Tech Against Terrorism, https://www.techagainstterrorism.org/project-background/ (accessed August 27, 2020).

[116] Human Rights Watch telephone interview with Adam Hadley, executive director of Tech Against Terrorism, July 30, 2020. See also https://www.techagainstterrorism.org/2020/07/02/update-initial-version-of-the-terrorist-content-analytics-platform-to-include-far-right-terrorist-content/.

[117] Human Rights Watch telephone interview with Adam Hadley, executive director of Tech Against Terrorism, July 30, 2020.

[118] “Launch of GIFCT Working Groups,” GIFCT press release, April 24, 2020, https://gifct.org/press/launch-gifct-working-groups/ (accessed August 18, 2020).

[120] Human Rights Watch telephone interview with a social media company employee, July 17, 2020; Human Rights Watch telephone interview with a human rights organization representative, July 12, 2020.

[121] Social media platforms themselves rely on user upload data and artificial intelligence to make content discoverable. Those same methods could be used in a searchable private database.

[122] For example, law enforcement investigators and researchers may need access not only to the content but also to the original metadata embedded in it, log files of uploads, analytics, and any other metadata generated by the social media platform at upload or any other time, as well as the related hashes created by the platform and audit logs. Access to such materials, which can limit user privacy, should be governed by international human rights standards of necessity, proportionality, and legality.

[123] Avi Asher-Schapiro and Ben Barkawi, “’Lost memories’: War crimes evidence threatened by AI moderation,” Reuters, June 19, 2020, https://www.reuters.com/article/us-global-socialmedia-rights-trfn/lost-memories-war-crimes-evidence-threatened-by-ai-moderation-idUSKBN23Q2TO (accessed July 28, 2020); Office of the United Nations High Commissioner for Human Rights (OHCHR), “Content Regulation in the Digital Age,” Submission to the UN Special Rapporteur on the right to freedom of opinion and expression by the Association for Progressive Communications (APC), February 2018, https://www.ohchr.org/Documents/Issues/Opinion/ContentRegulation/APC.pdf (accessed July 28, 2020).

[124] Community Standards page, Facebook, https://www.facebook.com/communitystandards/ (accessed July 28, 2020); Community Guidelines page, YouTube, https://www.youtube.com/howyoutubeworks/policies/community-guidelines/ (accessed July 28, 2020); The Twitter Rules page, Twitter, https://help.twitter.com/en/rules-and-policies/twitter-rules (accessed July 28, 2020).

 
