
Auditing Algorithms in New York City

Coalition Calls for Stronger Oversight of City’s AI-based Decision-Making

A line of police cars is parked along a street in Times Square, in New York, December 29, 2016. © 2016 AP Photo/Kathy Willens

Algorithmic decision-making is becoming the new norm in New York. City agencies use computerized algorithms to make important decisions about New Yorkers’ daily lives, from school assignments to public benefits evaluations and more. But serious concerns persist about how to monitor these automated systems and prevent human rights abuses.

The city government recently took up this question of how to best evaluate the ways in which city agencies develop and use algorithmic decision-making systems. A city-appointed task force recommended this month that the city establish a central oversight structure to accomplish this. The Mayor’s Office announced the first piece of a new oversight function shortly thereafter.

Today, a coalition of civil society organizations released a report documenting the task force’s process and identifying automated systems currently in use by city agencies. The report proposes audits of each agency’s systems to ensure rights-respecting use, recommending oversight powers far stronger than those the city currently has in place.

For example, law enforcement’s investigative tools are currently exempt from the city’s oversight function, even though such tools can significantly impact New Yorkers’ rights. The NYPD deploys a sophisticated network of surveillance systems, many of which have automated components.

One such system is the NYPD’s Patternizr, a tool disclosed earlier this year that the department designed to identify potential patterns of criminal activity. NYPD analysts built Patternizr by training a computer model on ten years of historical crime data, data collected during the “stop and frisk” regime of racially discriminatory policing. AI systems are only as good as the data that drives them, and Patternizr could very well be replicating those biases.
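
How a model can inherit bias from skewed records is easy to demonstrate. The sketch below is a minimal, hypothetical Python example with synthetic data; it is not Patternizr itself, whose internals have not been published. It trains a simple classifier on recorded incidents from two neighborhoods with identical underlying incident rates but very different levels of past enforcement, and the model ends up rating the heavily policed neighborhood as far riskier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: two neighborhoods with identical true incident rates,
# but neighborhood A was patrolled far more heavily, so far more of its
# incidents ended up recorded in the historical training data.
n = 10_000
neighborhood = rng.integers(0, 2, size=n)   # 0 = A, 1 = B
true_incident = rng.random(n) < 0.05        # same underlying rate everywhere

# Biased recording: incidents in A are logged 90% of the time, in B only 20%.
record_rate = np.where(neighborhood == 0, 0.9, 0.2)
recorded = true_incident & (rng.random(n) < record_rate)

# Train a model on the *recorded* labels, using neighborhood as a feature.
X = neighborhood.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, recorded)

probs = model.predict_proba([[0.0], [1.0]])[:, 1]
print(f"Predicted risk, neighborhood A: {probs[0]:.3f}")
print(f"Predicted risk, neighborhood B: {probs[1]:.3f}")
# The model scores A as several times riskier even though the true rates are
# equal: it has learned the historical enforcement pattern, not the crime.
```

Any system trained on records shaped by past enforcement decisions faces this problem, which is why independent audits of training data, not just of outputs, matter.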

Oversight of NYPD systems like Patternizr could mitigate biased policing, prevent rights-abusing surveillance, and limit intrusions into people’s private lives.

The risks of automated systems aren’t limited to policing. They can impose procedural barriers that limit access to public benefits. Without independent, robust, and ongoing oversight, the public will have trouble holding agencies accountable for errors or abuses linked to algorithmic decision-making.

While the city’s oversight function is an important first step, officials should work with the civil society coalition to implement its recommendations, including extending oversight to law enforcement. New Yorkers deserve to know that the city is auditing its own use of algorithms to protect our rights.
