The $5 billion fine imposed last month by the U.S. government on Facebook for mishandling users’ personal information will do little to rein in the company’s exploitative data collection regime and manipulative practices. For a company that recorded $22 billion in profits in 2018, the penalty offers little guarantee of changed behavior.
In addition to the fine, the Federal Trade Commission (FTC) ordered changes to Facebook’s privacy practices. But it declined to hold the company’s leaders accountable for repeated failures to comply with past orders, raising doubts about whether the latest order will be any different.
The settlement also releases Facebook from liability for a wide range of privacy and consumer protection claims, letting the company off the hook for data collection practices that may have unlawfully targeted children and patients, among other possible violations.
From 2012 to 2015, Facebook appears to have deployed elaborate design tricks known as “dark patterns” to bury key privacy settings in hard-to-reach corners of its website and mobile interface. The social media giant engaged in a pattern of deceptive conduct that misled users about who would be seeing their data, for how long, and for what purposes.
The deception worked: the FTC found that a “very low percentage” of users activated these settings. This privacy loophole is how Cambridge Analytica, a now-defunct British political consulting firm, was able to harvest Facebook data to target voters during the 2016 U.S. elections.
Facebook’s deception did not stop there. According to the FTC, the company also encouraged users to supply their phone numbers for security purposes without disclosing that they would be used for advertising, and misled millions of people about a facial recognition feature on their accounts that was enabled by default. The FTC states that Facebook’s privacy lapses were so rampant, and its recordkeeping so poor, that it struggled to comprehend the “full scale of unauthorized collection, use, and disclosure of consumer information.”
These practices raise serious privacy concerns, of course, but also strike at the right of users to form their beliefs and opinions free from undue coercion or inducement—a right so fundamental that international law forbids it from being suspended even in grave crises.
In his dissent, FTC Commissioner Rohit Chopra argued that the settlement fails to address the root cause of the company’s privacy violations: an advertising-driven model that draws on its extensive collection and analysis of user data to “manipulate us into constant engagement and specific actions aligned with its monetization goals.”
David Kaye, the United Nations’ independent expert on freedom of expression, has warned that this form of manipulation harbors enormous potential to interfere with the “mechanisms and processes” of developing our innermost thoughts and beliefs.
The FTC’s complaint adds to mounting evidence that Facebook’s systematic attempts to undercut users’ privacy are inseparable from its strategy to maximize user engagement. As the company collects more data about its users, it develops deeper understandings of their interests, preferences, and moods that enable it to target ads and other content in a way that induces more clicks and “likes.”
My group, Human Rights Watch, has also raised concerns that governments and non-government entities have exploited these microtargeting capabilities to stoke disinformation, hatred, and violence. The Cambridge Analytica scandal underscores another threat: in the wrong hands, these ever-growing repositories of user data leave us vulnerable to voter manipulation and other forms of social engineering.
As part of the settlement, the FTC has ordered Facebook to routinely conduct privacy risk assessments for new products and services, obtain independent audits of its privacy practices, and establish board-level oversight. Facebook’s Mark Zuckerberg has welcomed these measures, and announced the company will be rolling out a revamped privacy program that holds developers accountable to its policies on user data and establishes “more technical controls to better automate privacy safeguards.”
But these measures tinker around the edges of Facebook’s exploitative practices. They do not set substantive limits on the kinds of information Facebook can collect about its users, or with whom it shares this information. They also give Facebook broad latitude to determine when it should seek user consent for new forms of data collection. That amounts to letting the fox guard the chicken coop.
Fortunately, a separate antitrust investigation opened in June gives the commission another opportunity to examine how the company’s business model—and its dominance on social media—undermines user rights. Comprehensive regulation that protects consumers’ privacy is also long overdue. In this “move fast, break things” era, lawmakers and regulators should take swift, rights-oriented action before our digital world is well and truly broken.