(Brussels, October 9, 2023) – Last week, Human Rights Watch, La Quadrature du Net and EDRi shared a proposal with the Council of the European Union and the European Parliament to strengthen the regulation’s prohibition on social scoring. Investigations in France, the Netherlands, Austria, Poland and Ireland have revealed that AI-based social scoring systems are disrupting people’s access to social security support, compromising their privacy, and profiling them in discriminatory ways based on stereotypes about poverty. This is making it harder for people to afford housing, buy food, and make a living. The joint proposal urges the EU to adopt critical amendments to the regulation that would address these harms and halt the spread of AI-based social scoring systems.
Proposed amendments to the proposal for an EU AI Act – Social scoring ban
Although the AI Act proposals introduced by EU policymakers seek to restrict AI-based social scoring, their current wording could allow practices and systems that facilitate the spread of AI-based social scoring across the EU.
Current proposals would classify most AI systems used to determine and allocate public assistance benefits and services as “high-risk.” But many of these systems are social scoring systems that draw on a wide range of personal and sensitive data to assess whether beneficiaries pose a fraud “risk” and should therefore be investigated and ultimately sanctioned. The AI Act should ban these systems because they lead to disproportionate and detrimental treatment of people based on their socio-economic status, and unduly restrict their rights to social security, privacy, and non-discrimination.
The undersigned groups urge the Council of the European Union and the European Parliament to adopt the following amendments to the European Commission’s proposal for an AI Act:
Recital 17

Such AI systems evaluate or classify the trustworthiness of natural persons based on multiple data points and time occurrences related to their social behaviour in multiple contexts or known or predicted personal or personality characteristics.
Recital 37

Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services, including healthcare services, and essential services, including but not limited to housing, electricity, heating/cooling and internet, and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the rights to social security, non-discrimination, human dignity or an effective remedy. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should […]
Article 3

(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge; ‘development’ of an AI system means the collection, labeling and analysis of data in connection with the training and fine tuning of the system, and any testing and trialing of the system prior to placing it on the market or putting it into service.
Article 5

1. The following artificial intelligence practices shall be prohibited: […] (c) the development, placing on the market, putting into service or use of AI systems […]

Evaluations, classifications or scores covered by Article 5(1)(c) include but are not limited to those relating to the person or group’s education, employment, housing, public assistance benefits, health, and socio-economic situation.
ANNEX III

HIGH-RISK AI SYSTEMS REFERRED TO IN ARTICLE 6(2)

(a) AI systems […]
(b) AI systems intended to be used for making […]
(c) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score,
(d) AI systems intended to classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by police and law enforcement, firefighters and medical aid, as well as of emergency healthcare patient triage systems.
Justification
The proposed amendment to Article 5 addresses the unacceptable risks posed by automated, wide-ranging social scoring systems, deployed by public administrations and private companies in Europe.
These systems are already operational in several countries. In France, La Quadrature du Net has repeatedly denounced and documented a scoring algorithm used by France’s family benefits fund, known as the CAF. The CAF has entrusted an algorithm with the task of predicting which recipients of social benefits should be considered “untrustworthy” and subjected to further controls by its services. The system gives a score to each recipient, assessing the “risk” they supposedly pose to the social assistance system. Constructed from the hundreds of data points that the CAF holds on each claimant, this automated score is then used to select those who will be investigated and, ultimately, sanctioned.
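For illustration only, a minimal sketch of the kind of weighted risk scoring described above is given below. The data points, weights, and investigation threshold are hypothetical; they are not drawn from the CAF’s actual algorithm, whose full details are not public.

```python
# Illustrative sketch only: a hypothetical weighted "risk" score of the kind
# described above. The data points, weights, and threshold are invented for
# illustration and do not reproduce the CAF's actual model.

claimant = {
    "months_on_benefits": 0.8,        # normalised to a 0-1 range
    "low_income_indicator": 1.0,
    "recent_change_of_address": 1.0,
    "single_parent_household": 1.0,
}

# Hypothetical weights. The concern raised in the text is that such weights
# act as proxies for poverty and other socio-economic characteristics.
weights = {
    "months_on_benefits": 0.3,
    "low_income_indicator": 0.3,
    "recent_change_of_address": 0.2,
    "single_parent_household": 0.2,
}

INVESTIGATION_THRESHOLD = 0.5  # hypothetical cut-off

def risk_score(features: dict) -> float:
    """Combine the claimant's data points into a single 'risk' score."""
    return sum(weights[name] * value for name, value in features.items())

score = risk_score(claimant)
flagged = score >= INVESTIGATION_THRESHOLD  # selected for investigation?
print(f"score={score:.2f}, flagged_for_investigation={flagged}")
```

Because every input in this toy example is a socio-economic characteristic, the score mechanically concentrates scrutiny on the poorest claimants, which is the dynamic the proposed prohibition is meant to address.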
La Quadrature du Net has described this scoring algorithm as “a policy of institutional harassment” of people based on their socio-economic status. The system is a clear example of social control rooted in a police logic of generalized suspicion, with continuous sorting and evaluation of people’s movements and activities; it is alarming that such a system would not be covered by the European Commission’s or the European Parliament’s proposed social scoring bans under the AI Act.
CAF’s scoring algorithm is part of a broader trend. In France, other public institutions, funds, and tax authorities are developing their own rating algorithms. In the Netherlands, SyRI, a now-defunct risk assessment tool developed by the Dutch government, tapped into employment and housing records, benefits information, personal debt reports and other sensitive data held by government agencies to flag people for fraud investigations. In Austria, the government uses an employment profiling algorithm that controls a person’s ability to access job support services and replicates the discriminatory realities of the labor market. All of these systems create an unacceptable risk to people’s rights to social security, privacy, and non-discrimination.
These examples underline the growing deployment of AI scoring systems that involve the large-scale, unstructured, and automated linking of files pertaining to large groups of citizens, coupled with the processing of their personal data. Because these systems are born of mass data collection and discriminatory profiling, their harms cannot be effectively mitigated or prevented through procedural safeguards; as a result, they should be banned.
These AI systems unduly restrict people’s access to social benefits, leading to violations of their right to social security. AI-based techniques that evaluate or classify individuals as trustworthy or risky have no place in a democratic society. As a number of MEPs already noted during the negotiations on the European Parliament’s report on the draft AI Act, when the outcome of an automated AI evaluation is beneficial for one person, it is by the same token unfavorable to others. Social scoring by AI should therefore be prohibited in all circumstances, as it inherently creates harm and unfavorable treatment.
The proposed amendment to Annex III would ensure that non-social-scoring AI systems used to evaluate public benefits, related public services, and health and life insurance benefits would still be classified as “high-risk.” These amendments would also classify all credit scoring systems as “high-risk.” These systems, such as Germany’s SCHUFA scoring, draw on information that is plausibly connected to a person’s finances – such as their history of unpaid bills, loans, and fines – to generate a score that estimates a person’s likelihood of meeting their payment obligations. This score can in turn interfere with a person’s ability to obtain a lease, a credit card, or an internet contract.
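As a purely hypothetical illustration of the credit-scoring mechanism described above, the sketch below maps a payment history to a score and an access decision; the inputs, penalty weights, and cut-off are invented and do not reflect SCHUFA’s proprietary model.

```python
# Hypothetical illustration of a payment-history-based credit score.
# The inputs, penalty weights, and acceptance cut-off are invented; they do
# not reproduce SCHUFA's actual, proprietary model.

def credit_score(unpaid_bills: int, defaulted_loans: int, unpaid_fines: int) -> int:
    """Map negative payment events to a 0-100 score (higher = more creditworthy)."""
    penalty = 10 * unpaid_bills + 25 * defaulted_loans + 5 * unpaid_fines
    return max(0, 100 - penalty)

ACCEPTANCE_CUTOFF = 70  # hypothetical threshold a landlord or ISP might apply

score = credit_score(unpaid_bills=2, defaulted_loans=1, unpaid_fines=0)
print(f"score={score}, contract_offered={score >= ACCEPTANCE_CUTOFF}")
```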
Signatories
La Quadrature du Net
Human Rights Watch
EDRi