In late October, a Dutch court held hearings on a lawsuit brought by a coalition of groups challenging the country's draconian welfare surveillance system, with a decision expected in January.
The Netherlands is consistently ranked as one of the world's strongest democracies. You might be surprised to learn that it is also home to one of the most intrusive surveillance systems in the world, one that automates the tracking and profiling of the poor.
On 29 October, the District Court of The Hague held hearings on the legality of Systeem Risico Indicatie (SyRI), the Dutch government's automated system for detecting welfare fraud. The lawsuit, filed by a coalition of civil society groups and activists, argues that the system violates data protection laws and human rights standards.
SyRI is a risk calculation model developed by the Ministry of Social Affairs and Employment to predict an individual's likelihood of committing benefits or tax fraud or violating labor laws. SyRI's calculations tap into vast pools of personal and sensitive data collected by various government agencies, from employment records and benefits information to personal debt reports and education and housing histories.
When the system profiles an individual as a fraud risk, it notifies the relevant government agency, which has up to two years to open an investigation.
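SyRI's exact mechanics have not been made public, so any concrete description is illustrative at best. Purely as a sketch of the general pattern at issue, the hypothetical code below shows how a system of this kind could link records from multiple agencies, compute a risk score, and flag people for investigation; every field, weight, and threshold here is invented for illustration and should not be read as SyRI's actual design.

```python
# Purely illustrative sketch of a cross-agency risk-flagging pipeline.
# SyRI's real model is undisclosed; every field, weight, and threshold
# below is hypothetical, chosen only to show the general pattern:
# link agency records, score them, and flag individuals.

from dataclasses import dataclass


@dataclass
class CitizenRecord:
    # Hypothetical fields standing in for the kinds of data the
    # ministry reportedly links: employment, benefits, debt, housing.
    citizen_id: str
    months_unemployed: int
    receives_benefits: bool
    registered_debts: int
    address_changes_last_year: int


def risk_score(r: CitizenRecord) -> float:
    """Toy linear score; the real algorithm is secret."""
    score = 0.1 * r.months_unemployed
    score += 0.5 if r.receives_benefits else 0.0
    score += 0.2 * r.registered_debts
    score += 0.3 * r.address_changes_last_year
    return score


RISK_THRESHOLD = 1.5  # hypothetical cutoff


def flag_for_investigation(records: list[CitizenRecord]) -> list[str]:
    # In SyRI's case, a flag gives the relevant agency up to two
    # years to open an investigation.
    return [r.citizen_id for r in records if risk_score(r) >= RISK_THRESHOLD]
```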
The selective rollout of SyRI in predominantly low-income neighborhoods has created a surveillance regime that disproportionately targets poorer citizens for more intrusive scrutiny. So far, the ministry has worked with municipal authorities to implement SyRI in Rotterdam, the Netherlands' second-largest city, which has the highest poverty rate in the country, as well as in Eindhoven and Haarlem. During the hearings, the government admitted that SyRI has been targeted at neighborhoods with higher numbers of residents on welfare, despite the lack of evidence that these neighborhoods account for higher rates of benefits fraud.
But SyRI doesn’t just have discriminatory effects on the privacy of welfare beneficiaries. It could also facilitate violations of their right to social security. Because SyRI is shrouded in secrecy, welfare beneficiaries have no meaningful way of knowing when or how the system’s calculations are factored into decisions to cut them off from lifesaving benefits.
The government has refused to disclose how SyRI works, for fear that explaining its risk calculation algorithms would enable fraudsters to game the system. But it has acknowledged that the system generates "false positives": cases in which it erroneously flags individuals as fraud risks.
Without more transparent explanations, it is impossible to know whether these errors have led to improper investigations against welfare beneficiaries or the wrongful suspension of their benefits.
The government claims it uses these “false positives” to rectify flaws in its risk calculation model, but there is also no way to test this claim. In fact, it is anyone’s guess whether the system maintains a high enough accuracy rate to justify risk assessments that keep people under suspicion for up to two years.
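To see why the undisclosed error rate matters, consider a back-of-the-envelope calculation. The numbers below are entirely hypothetical, since the government has released no figures, but they illustrate a well-known base-rate problem: when actual fraud is rare, even a seemingly accurate model flags mostly innocent people.

```python
# Hypothetical numbers only; the government has not disclosed SyRI's
# accuracy, so this simply illustrates the base-rate problem.

population = 100_000        # residents screened in a targeted neighborhood
fraud_rate = 0.01           # assume 1% actually commit benefits fraud
sensitivity = 0.90          # assume the model catches 90% of real fraud
false_positive_rate = 0.05  # assume it wrongly flags 5% of honest people

true_frauds = population * fraud_rate
honest = population - true_frauds

true_positives = true_frauds * sensitivity      # 900 correctly flagged
false_positives = honest * false_positive_rate  # 4,950 wrongly flagged

precision = true_positives / (true_positives + false_positives)
print(f"Share of flagged people actually committing fraud: {precision:.0%}")
# -> roughly 15%: under these assumptions, about 85% of those kept
#    under suspicion for up to two years would be innocent.
```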
SyRI is part of a broader global trend of integrating artificial intelligence and other data-driven technologies into the administration of welfare benefits and other essential services. But these technologies are frequently rolled out without meaningful consultation with welfare beneficiaries or the broader public.
In the case of SyRI, Parliament authorized the system as part of a package of welfare reforms enacted in 2014. But the government had experimented with high-tech fraud detection initiatives for almost a decade before submitting them to legislative scrutiny. Local groups have also complained that the legislative process was inadequate: according to the lawsuit, Parliament failed to meaningfully address privacy and data protection concerns raised by its own legislative advisory council as well as the government's data protection watchdog.
The court will issue its decision in January. We will be watching to see if it protects the rights of the poorest and most vulnerable people from the vagaries of automation.