Comprehensive data protection laws are essential for protecting human rights – most obviously, the right to privacy, but also many related freedoms that depend on our ability to make choices about how and with whom we share information about ourselves. The European Union General Data Protection Regulation (GDPR) is one of the strongest and most comprehensive attempts globally to regulate the collection and use of personal data by both governments and the private sector. It was enacted in 2016 by the European Union, and went into effect May 25, 2018, across the EU’s 28 Member States. If robustly implemented and enforced, it will bolster privacy protections in Europe and potentially far beyond.

The regulation’s new safeguards are particularly important for human rights in the digital age. Recent scandals involving Facebook and Cambridge Analytica and public concern about digital data breaches, targeted advertising, and private sector profiling have driven calls for greater controls over how personal data is collected and used.

The following questions and answers summarize key portions of the law and discuss what comes next.

  1. What is the EU General Data Protection Regulation (GDPR) and to whom does it apply?

The EU GDPR is a new set of rules that aims to strengthen protections for personal data and to ensure consistency of such protections across the EU. The regulation builds upon the EU’s existing 1995 Data Protection Directive, an important set of laws that predates ubiquitous smart phones and the rise of social media and other online services (search, email, etc.) that companies offer free-of-charge to users, but finance with data-driven targeted advertising. The EU regulation expands the directive’s privacy protections and introduces new safeguards in response to these technological developments.

In the digital age, everything a person does online generates or implicates data that can be highly revealing about their private life. The GDPR provides new ways people can protect their personal data, and by extension their privacy and other human rights. It gives everyone more control, requires businesses, governments, and other organizations to disclose more about their data practices, and regulates the way they collect, process, and store people’s data.

“Personal data” is defined broadly under the GDPR to include “any information relating to an identified or identifiable natural person.” Thus, even data that does not directly identify a named person but could still help identify them is covered by the law. This definition encompasses online and device identifiers (like IP addresses, cookies, or device IDs), location data, user names, and pseudonymous data.

Although the GDPR is an EU regulation, it will affect the data practices of many organizations outside the EU. It applies to any organization that offers free or paid goods or services to people in the EU, or that monitors the behavior of anyone in the EU, regardless of the organization’s location. This includes, for example, large US internet companies, advertising companies, and data brokers that process the personal data of people in the EU. Under the regulation, “processing data” is defined broadly to include any activity that touches personal data, such as collecting, storing, using, or sharing it.

For the purposes of this Q&A, we refer to the obligations of “organizations” and companies. However, it is important to note that the GDPR’s requirements apply to a broad range of private and public sector entities, including government agencies, companies, and non-governmental organizations.

  2. What are the distinctive protections of the GDPR?

The EU regulation requires all organizations, public and private, that process personal data of people in the EU to put into place certain protections and disclose more information about what data they collect and how they will use and share it. It also provides many more privacy protections for people and the data they may be giving a company or government agency. For example:

  • Companies must ask for consent before collecting or using a person’s data. In most circumstances, companies, governments, and other organizations must now obtain genuine and informed consent before they can collect, use, or share a person’s data [Art. 6(1)(a)]. The request for consent must be clearly distinguishable, presented in an intelligible and easily accessible form, and use clear and plain language [Art. 7(2)]. In other words, the request for consent has to be easy to find and easy to understand.
  • Special protections apply to sensitive information. Processing certain special categories of sensitive data is very tightly regulated. These include information revealing someone’s racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as data about genetics, health, and biometrics (for example, fingerprints, facial recognition and other body measurements) [Art. 9].
  • Companies must treat online identifiers and location data as personal data [Art. 4(1)]. That means the information advertisers and websites use to track online activity – cookies, device identifiers, and IP addresses – is entitled to the same level of protection as other personal data. Such information can be highly revealing about online searches and activity, especially when combined with other data companies hold.
  • Companies must explain how a person’s personal data is used, shared, and stored [Art. 13], even if they obtained their data from another company like a data broker or social media company [Art. 14].
  • Anyone can ask a company what personal data it holds about them, free of charge [Art. 15], and then request that it be deleted [Art. 17].
  • A person can download their personal data and move it to a competitor through a new right to data portability [Art. 20]. For example, everyone should be able to take their data from one social media network or financial institution to another in a format that makes switching easier.
  • Companies are encouraged to build privacy-protecting mechanisms into their systems – a concept known as privacy by design [Art. 25]. Under the regulation, those who process data must implement technical and organizational security measures designed to protect the data from abuse, loss, or misuse – for example, by minimizing the data they collect and considering the use of pseudonymization and encryption. Where the risk to people’s rights seems high, and particularly where the technology is new, companies are required to conduct data protection impact assessments before processing data [Art. 35].
  • Data breaches must be reported to authorities [Art. 33] under almost all circumstances, and people must also be informed if their data is subject to a breach that is likely to result in a “high risk” to their rights and freedoms [Art. 34].

The 1995 EU Data Protection Directive imposed many of the same requirements, but the GDPR strengthens and expands the directive’s obligations.

  3. How does the GDPR protect individuals and human rights?

The GDPR gives people enhanced protections against unnecessary data collection, use of data in unanticipated ways, and biased algorithmic decision-making. In the digital age, personal data is intrinsically linked to people’s private life and other human rights. Everything a person does leaves digital traces that can reveal intimate details of their thoughts, beliefs, movements, associates, and activities. The GDPR seeks to limit abusive intrusions into people’s private lives through their data, which in turn protects a range of other human rights.

The EU regulation gives people in EU member states more control over their personal data, including what information they turn over, how it is used, and with whom it is shared. When a company collects someone’s personal data, it will often need to get consent in plain language, which means the person will often be asked to “opt-in” to collection or use of their data. Companies should collect and process only what data is necessary for the service, whether selling something online or creating a social media account.

Individuals can download and view the data collected on them, ask for corrections, request that their data be erased in some circumstances, and withdraw consent for the data’s continued use. People also have the right to object to online profiling and targeted advertising, and entities must then stop processing their personal data unless they can demonstrate “compelling legitimate grounds” to do otherwise. Though the regulation doesn’t define what will be considered “compelling legitimate grounds,” it does provide an absolute right to object to and stop direct marketing by email, phone calls, and text messages.

After data is collected, companies have to be more transparent about how they share it with others. In theory, this means users may be able to learn more about how companies approach online profiling and ad-targeting partnerships, especially those that offer web analytics, advertising, or social media services.

Finally, the new framework also guarantees some protections from decisions based on profiling and from computer-generated decisions [Art. 22]. Systems that incorporate algorithmic decision-making or other forms of profiling can lead to discrimination based on race, sex, religion, national origin, or other status. Even if individuals consent, they still have the right to human review of significant results from automated decision-making systems. As governments and companies increasingly use algorithms to make important decisions about people’s lives, such as whether a person gets public benefits, health insurance, credit, or a job, these protections promise a degree of transparency and accountability and safeguard against discrimination that affects a person’s human rights.

  4. How clear are the rights and duties under the GDPR?

The GDPR, like any new rule, will become clearer over time as people and companies challenge practices and interpretations of its requirements. There are already certain areas that are likely to be contentious and await further resolution.

Member States of the EU have a certain amount of flexibility in deciding how to apply the law and reflect it in their own national data protection regimes. One area in which some variation is expected is the age at which children can themselves consent to the processing of their data without a parent or guardian. The EU regulation allows member states to set the age of consent to anywhere between ages 13 and 16. This raises the risk of inconsistencies in approaches across the European Union.

Another area of uncertainty is when the regulation permits organizations to obtain and process a person’s data without consent if the entity’s “legitimate interests” outweigh a person’s rights and freedoms. Some of the legitimate interests that entities can rely on include fraud prevention, internal administration, information security, and reporting possible criminal acts. But direct marketing is also a legitimate interest, raising a potentially much broader category against which the individual’s rights would be weighed. Depending on how the “legitimate interests” provision is interpreted, it could create a major loophole allowing data collectors to avoid seeking consent. One safeguard is that the EU member states will still need to apply and enforce the regulation in a way that ensures respect for people’s human rights found in the Charter of Fundamental Rights of the European Union.

  5. What problems will the GDPR not be able to solve?

The EU regulation will not curtail large-scale government surveillance, as it allows for government surveillance under broad exemptions. Government agencies can process personal data without consent if there is a “national security,” “defense,” or “public security” concern, terms the regulation does not define. As the EU’s Court of Justice has established, however, such terms do not provide carte blanche for countries to do whatever they like. International and regional human rights laws (and any national regulations that do not conflict with the EU regulation) still apply to limit the surveillance and data processing activities of intelligence and law enforcement agencies.

However, many European states have expanded their surveillance laws in recent years, undermining protections for privacy and other human rights. In the coming years, the EU Court of Justice is likely to be called on to delineate the regulation’s state interest exceptions in the context of EU, European, and international human rights law.

The strengthened data protections under the EU regulation also highlight how much weaker US data protection rules are in comparison. The gap also heightens concerns about transfers of EU data to the US under the Privacy Shield agreement. Under EU law, US companies can’t transfer EU personal data to the US unless they show it will be protected in ways “essentially equivalent” to protections in Europe. In a 2015 case concerning Facebook brought by a privacy advocate, Max Schrems, the EU’s top court invalidated the Safe Harbor agreement that had allowed such transfers, citing concerns that US intelligence agencies could access European data indiscriminately, without meaningful redress if agencies violated rights.

Under pressure to restore cross-Atlantic data flows, in July 2016, the US Commerce Department and the European Commission reached a new deal, the Privacy Shield, with promises of stronger data protection. The deal relies on written assurances by the US director of national intelligence that European data won’t be subject to “indiscriminate mass surveillance.”

However, this deal was flawed from the start since Privacy Shield doesn’t prevent dragnet surveillance of European data. As a result, Human Rights Watch contends that US surveillance laws and practices make the Privacy Shield invalid.

  6. What happens if companies and other institutions don’t comply with the GDPR?

The EU regulation imposes stiff penalties on public and private sector organizations that violate its terms. For example, regulators can fine companies up to €20 million or 4 percent of annual global turnover (revenue) for non-compliance, whichever is larger.   
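The “whichever is larger” rule means the penalty cap scales with company size. The arithmetic can be sketched as follows (the function name and turnover figures are illustrative, not from the regulation’s text):

```python
def gdpr_fine_cap(annual_global_turnover_eur: float) -> float:
    """Upper limit of a GDPR fine for the most serious infringements:
    the greater of EUR 20 million or 4% of annual global turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# Smaller firm (EUR 50 million turnover): 4% would be only EUR 2 million,
# so the flat EUR 20 million cap applies.
print(gdpr_fine_cap(50_000_000))
# Large multinational (EUR 40 billion turnover): the 4% cap dominates,
# giving a ceiling of EUR 1.6 billion.
print(gdpr_fine_cap(40_000_000_000))
```

For large internet companies, the turnover-based cap is what gives the regulation its deterrent weight; the flat €20 million floor ensures smaller violators still face a substantial maximum.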

  7. What effect will the GDPR have outside the EU?

The EU regulation is likely to become a de facto global standard, much as the previous European Data Protection Directive did, because it applies to any organization that collects or processes the data of people in the EU, regardless of where the organization is based or where the data is processed. It is also possible that non-European countries will copy some or many of its protections as they modernize or establish data protection laws.

The GDPR, or at least elements of it, may become the standard many organizations follow by default everywhere. Some multinational companies may choose to apply the EU regulation to all users worldwide, while others may attempt to identify and apply a separate set of rules for people in the EU. For example, Microsoft, Apple, and Twitter announced that they would extend at least some of the regulation’s protections to their customers worldwide, with varying degrees of detail about which provisions would be applied. Facebook has also said it will extend the GDPR’s protections “in spirit” to users located outside the EU, but has stopped short of a commitment to apply the regulation globally. At the same time, the company took steps to ensure that Facebook users in Africa, Asia, Australia, and Latin America would fall outside the regulation’s ambit.

Still other businesses may exit the EU market altogether or temporarily block people in the EU while they work to come into compliance. In other cases, systems developed in response to the EU regulation, like data portability, could be easily offered for users outside Europe once they are in place.

All countries should adopt comprehensive data protection laws that place individuals’ human rights at their center. The GDPR is not perfect, but it is one of the strongest data protection regimes in force anywhere in the world. Governments should regulate the private sector’s treatment of personal data with clear laws, and limit companies’ collection and use of people’s data to safeguard rights.

  8. What happens next with the GDPR?

In recent weeks, many companies and other institutions have sent out a flurry of notices about changes to their terms of service and privacy policies in preparation for the EU regulation’s deadline. Yet some of these corporate notices have raised questions about whether companies are already circumventing the spirit of the regulation. For example, the EU regulation requires companies to get informed consent from users before collecting or using their data. But journalists previewing Facebook’s privacy policy consent notices criticized them for being designed to encourage unthinking (rather than informed and meaningful) consent and for failing to provide users adequately detailed controls over their data.

On April 30, the EU’s top data protection official, European data protection supervisor Giovanni Buttarelli, warned regulators to be “vigilant about attempts to game the system,” pointing to the stream of privacy policy updates that appeared to press users to consent to broad digital tracking as a “take-it-or-leave-it proposition.” This warning underscores the difficulty of ensuring meaningful, informed consent, even with enhanced transparency.

The GDPR will most likely lead to a flood of court cases and enforcement actions as data protection authorities and companies contest the contours of the new rules and the meaning of ambiguous terms. On May 25, the day the regulation took effect, privacy advocate Max Schrems filed the first complaints against Google and Facebook in France, Belgium, Germany, and Austria, alleging failure to give European users specific control over their data. Schrems contends that the companies’ all-or-nothing approach to their terms of service is a form of “forced consent.” If successful, the complaints could result in up to €7.6 billion (around US$8.8 billion) in fines.

Effective implementation, monitoring, and enforcement are now needed to ensure that the GDPR truly protects the personal information that people share with Internet and technology companies, governments, and others.

  9. What impact will the GDPR have on freedom of expression?

The regulation provides for a right to erasure [Art. 17]. This provision expands what had become known as the “right to be forgotten” that the EU Court of Justice established in 2014 in a case against Google Spain. Under the GDPR, individuals can ask companies to erase personal data in specific circumstances: for example, if the data is no longer necessary for the purposes for which it was collected; if the individual withdraws consent or objects and there is no overriding justification for keeping it; or if the data was otherwise unlawfully processed in breach of the GDPR. This right also applies if the personal data has been made public, raising considerable implementation difficulties given the ease with which online information can be copied and shared across multiple websites in various jurisdictions.

The rules provide exceptions, including if the data processing is necessary for the exercise of freedom of expression and information or for archival or research purposes. However, these exceptions are not well defined in the GDPR, and are left for national legislation to elaborate. Because private platforms risk penalty for non-compliance, the provision may tend to encourage unnecessary or excessive take-downs of content, infringing freedom of expression. In addition, leaving determinations about when processing is necessary for freedom of expression (and other public interest grounds) to the discretion of companies, rather than impartial tribunals, means there is little procedural recourse for those who wish to continue to have access to information that is removed.

The “right to be forgotten” developed in EU Court of Justice rulings has been criticized for enabling people to suppress truthful, non-defamatory information that simply may be unflattering. For example, people in positions of public trust (such as elected officials, priests, and financial professionals) have attempted to use the right to be forgotten to remove news articles discussing their previous criminal convictions from Google search results.

The contours between data protection and freedom of expression will continue to be contested as individuals invoke the GDPR’s right to erasure.

Finally, the EU regulation is not designed to address the spread of disinformation, hate speech, or other illegal content online.

  10. What else needs to be done to protect data and the right to privacy?

The GDPR is a vital step toward stronger privacy protections. However, it will not be effective without interpretation, implementation, and enforcement.

National data protection authorities will need to rigorously respond to complaints, promptly investigate breaches, and actively pursue investigations to enforce the provisions. Many data protection authorities are poorly resourced, particularly in comparison to large companies, and lack the capacity to play a comprehensive enforcement role. Member states should allocate appropriate financial and human resources to data protection authorities.

Even with strong enforcement, there are still many structural challenges to achieving the GDPR’s vision of data privacy and control. For one, while the regulation requires consent before companies can collect or process data, meaningful informed consent is difficult to achieve without choice. Many large online services have few real competitors, so users are faced with either consenting to a social network’s terms or missing out on a central component of modern social or professional life. Though the Schrems case may force some positive changes, the GDPR doesn’t fully address the effects of this kind of monopoly power.

In addition, informed consent will only become more elusive over time as advertising ecosystems become more complex. The EU regulation doesn’t directly challenge ad-driven business models that invite users to trade their personal data for free online services like email, social networking, or search engines – all while using that data to create detailed profiles to sell to advertising networks. The average user may consent to data processing without a true understanding of the complexities of how their data will be used, despite the regulation’s requirement of clear privacy notices. The GDPR’s approach to consent may make it more difficult for future Cambridge Analyticas to gain unsanctioned access to data, but it is far from clear whether the regulation can fully prevent unexpected or abusive use of personal data, such as for “psychometric” election advertising. Human rights are a minimum standard that cannot be waived by consent, even if all potential uses of data could be foreseen. Ultimately, the digital society may require many more substantive protections than a consent-based model can provide.