High-tech video cameras equipped with facial recognition technology hang from an office building in downtown Belgrade, Serbia.  © 2019 Darko Vojinovic / AP Photo

If you’re worried about how facial recognition technology is being used, you should be. And things are about to get a lot scarier unless new regulation is put in place.

Already, this technology is being used in many U.S. cities and around the world. Rights groups have raised alarm about its use to monitor public spaces and protests, to track and profile minorities, and to flag suspects in criminal investigations. The screening of travelers, concertgoers and sports fans with the technology has also sparked privacy and civil liberties concerns.

Facial recognition increasingly relies on machine learning, a form of artificial intelligence, to sift through still images or video of people’s faces and obtain identity matches. Even more dubious forms of AI-enabled monitoring are in the works.
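
Under the hood, most of these systems reduce each detected face to a numerical embedding and then compare distances between embeddings to decide whether two images show the same person. The sketch below, which assumes the open-source face_recognition Python library, hypothetical file names and that library's default 0.6 distance threshold, is meant only to illustrate that general pipeline, not any particular vendor's product.

```python
# Minimal sketch of embedding-based face matching, for illustration only.
# Assumes the open-source `face_recognition` library (dlib-based) and two
# hypothetical image files; deployed systems differ greatly in scale and detail.
import face_recognition

# Load a reference photo and a probe photo (hypothetical file names).
known_image = face_recognition.load_image_file("reference.jpg")
probe_image = face_recognition.load_image_file("cctv_frame.jpg")

# Encode each detected face as a 128-dimensional embedding vector.
known_encodings = face_recognition.face_encodings(known_image)
probe_encodings = face_recognition.face_encodings(probe_image)

if known_encodings and probe_encodings:
    # Compare embeddings by Euclidean distance; smaller means more similar.
    distance = face_recognition.face_distance([known_encodings[0]], probe_encodings[0])[0]
    # A fixed threshold (0.6 is this library's default) turns a distance into
    # a yes/no "match" -- the step where the risk of misidentification is decided.
    print("match" if distance <= 0.6 else "no match", f"(distance={distance:.3f})")
```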

Tech companies have begun hawking a range of products to government customers that attempt to infer and predict emotions, intentions and “anomalous” behavior from facial expressions, body language, voice tone and even the direction of a gaze. These technologies are being touted as powerful tools for governments to anticipate criminal activity, head off terrorist threats and police an increasingly amorphous range of suspicious behaviors. But can they really do that?

Applications of AI for emotion and behavior recognition are at odds with scientific studies warning that facial expressions and other external behaviors are not reliable indicators of mental or emotional states. And that is worrying.

One concern is that these technologies could single out racial and ethnic minorities and other marginalized populations for unjustified scrutiny, if how they talk, dress or walk deviates from behavior that the software is programmed to interpret as normal — a standard likely to default to the cultural expressions, behaviors and understandings of the majority.

Perhaps cognizant of these challenges, the Organization for Economic Cooperation and Development and the European Union are formulating ethics-based guidelines for AI. The OECD Principles and the Ethics Guidelines developed by the European Commission’s High-Level Expert Group contain important recommendations. But several key recommendations dealing with human rights obligations should not just be voluntary standards: They should be adopted by governments as legally binding rules.

For example, both sets of guidelines recognize that transparency is key. They say that governments should disclose when someone might interact with an AI system — such as when CCTV cameras in a neighborhood are equipped with facial recognition software. They also call for disclosure of a system’s internal logic and real-life impact: Which faces or behaviors, say, is the software programmed to flag to police? And what might happen when an individual’s face or behavior is flagged?

Such disclosures should not be optional. Transparency is a prerequisite both for protecting individual rights and for assessing whether government practices are lawful, necessary and proportionate.

Both sets of guidelines also emphasize the importance of developing rules for responsible AI deployment with input from those affected. Discussions should take place before the systems are acquired or deployed. Oakland’s surveillance oversight law provides a promising model.

Under Oakland’s law, government agencies must provide public documentation of what the technologies are, how and where they plan to deploy them, why they are needed and whether there are less intrusive means for accomplishing the agency’s objectives. The law also requires safeguards, such as rules for collecting data, and regular audits to monitor and correct misuse. Such information must be submitted for consideration at a public hearing, and approval by the City Council is required to acquire the technology.

This kind of collaborative process ensures a broad discussion of whether a technology threatens privacy or disproportionately affects the rights of marginalized communities. These open discussions may raise enough concerns about the human rights risks of government use of facial recognition that a city decides to ban it outright, as has happened in Oakland, San Francisco and Somerville, Mass.

Companies providing facial recognition for commercial use should also be held legally accountable to high standards. At a minimum, they should be required to maintain comprehensive records about how their software is programmed to sort and identify faces, including logs of the data used to train the software to classify facial features and of changes made to the underlying code that affect how faces are identified or matched.
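
What such records might look like in practice is, of course, for regulators to define. As a purely illustrative sketch, assuming a JSON-lines log file and made-up field names, a provider could be required to append an entry like the following whenever its training data or matching code changes:

```python
# Illustrative sketch of the kind of audit record such a requirement might
# mandate; the field names and JSON-lines format are assumptions, not a
# description of any existing legal standard or vendor practice.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModelChangeRecord:
    model_version: str           # version of the face-matching model deployed
    training_data_source: str    # provenance of the dataset used to train it
    change_description: str      # what changed in the code or model weights
    demographic_eval_notes: str  # accuracy audits across demographic groups
    timestamp: str = ""
    checksum: str = ""

def append_record(log_path: str, record: ModelChangeRecord) -> None:
    """Append a record to a JSON-lines audit log with a content checksum."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    body = json.dumps({k: v for k, v in asdict(record).items() if k != "checksum"},
                      sort_keys=True)
    record.checksum = hashlib.sha256(body.encode()).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example with hypothetical values:
append_record("facial_recognition_audit.jsonl", ModelChangeRecord(
    model_version="2.3.1",
    training_data_source="licensed dataset v7, collected 2018-2019",
    change_description="retrained matcher with additional images",
    demographic_eval_notes="false-match rates re-measured per demographic group",
))
```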

These record-keeping practices are key to fulfilling the transparency and accountability standards proposed by the OECD and the EU. They can be critical to analyzing whether facial recognition software is accurate for some faces but not others, or why someone was misidentified.

To provide time to develop these vital regulatory frameworks, governments should impose a moratorium on the use of facial recognition. Without binding regulations in place, we can’t be sure that governments are meeting their human rights obligations.