Algo Lit / Blog: 05.03.2026

Algorithmic discrimination at the CAF (the French social welfare office): a workshop to identify problematic criteria

Whether for calculating social benefits or during hiring processes, we are increasingly confronted with opaque and discriminatory algorithms. Understanding of these systems is often weakest among those who are most targeted and discriminated against. This scissor effect has been highlighted by many activists.

For example, in France, 15 civil society organizations recently filed a case before the Council of State against the national social welfare office, seeking the repeal of an algorithm that assigns a risk score to benefit recipients and targets the most vulnerable.

At the same time, the Defender of Rights published a report proposing ways to improve the transparency of algorithms.

All digital inclusion workers dealing with the automation of public services, whether caregivers, mediators, social workers, or civil servants, are facing increasing demands from the people they support regarding algorithmic systems.

To better inform these professionals on this subject, Datactivist and the activist associations Changer de Cap and Pas Sans Nous 38 (Isère) came together on March 19, 2025 for a practical workshop. This session was organized as part of the European Algo->Lit project, which aims to train digital mediators to help them explain algorithms and support their audiences in exercising their rights to information about these systems.

Cover of the workshop materials. The complete workshop materials are available here (in French): Atelier score de risques V2.pdf

Jaoued Doudouh, an activist from working-class neighborhoods within the association Pas Sans Nous 38, opened the day by reading an inspiring text he wrote to critique algorithms from a decolonial perspective:

“Today, we are addressing a crucial reality that is often ignored but deeply rooted in our lives: discriminatory algorithms. For us, activists and spokespersons for working-class neighborhoods, alongside people living in precarious situations, on the streets, migrants, or the elderly, this issue goes far beyond the technological dimension. It directly affects our collective dignity, our very existence.

Digital brutality and technological colonialism

For a long time now, working-class neighborhoods and marginalized populations have become, often against their will, testing grounds for these algorithmic approaches. We are subjected to a subtle but deeply violent form of digital brutality, where everyone is reduced to data, numbers, statistics, and security predictions. This dynamic echoes the "brutalism" described by Achille Mbembe (2020), that cold and calculating logic that reduces all life and human complexity to something strictly measurable, controllable, and exploitable.

This logic constitutes a contemporary form of technological colonialism, a direct descendant of the historical colonial systems analyzed by Frantz Fanon (1961), aimed at ordering, classifying, and managing populations perceived as problematic or useless.

In our neighborhoods, on the streets, in migrant reception centers, and in facilities for the elderly, this logic manifests itself in the dramatic intensification of automated social control: facial recognition, predictive video surveillance, algorithms for housing allocation, automated control of social assistance, and automated management of aging and health.

These systems no longer perceive individuals as human beings, but only as risks to be managed, anomalies to be corrected, or threats to be anticipated (Eubanks, 2018; Benjamin, 2019).

For example, since 2021, several French local authorities have been using automated systems to allocate social housing. Research conducted by the Defender of Rights (2022) shows that these systems massively reproduce existing discrimination, particularly penalizing families from working-class neighborhoods.

Similarly, experiments with predictive video surveillance in Marseille and Nice, which have been consistently condemned by La Quadrature du Net, use algorithms that have been criticized for their systematic racial and social biases, exacerbating the stigmatization of racialized populations (Amnesty International, 2021).

Ecocide and widespread surveillance

This dynamic is also linked to the concept of "ecocide" developed by Mbembe (2020), namely the planned destruction not only of natural life, but also of human, social, and cultural life. It methodically destroys our social ties, our solidarity, and our artistic creativity, thereby limiting our ability to imagine a society based on anything other than the logic of profit and widespread surveillance (Zuboff, 2019).

The accelerated digitization of society further amplifies these forms of discrimination. Far from being neutral, it establishes an authoritarian technocratic order that imposes a dehumanizing algorithmic logic across the board in essential areas such as housing, health, employment, education, security, and access to fundamental social rights.

Individuals are constantly evaluated, classified, and sorted according to specific economic and security interests. Institutions use these tools to mask their political responsibility, making decisions that directly affect our lives opaque and unaccountable (Morozov, 2013; Crawford, 2021).

In the current context of an unstable Europe facing multiple and profound crises, this security and technological approach is becoming more radical. Faced with rising geopolitical tensions, open or latent conflicts, and a worrying arms race, Europe is refocusing its priorities in an unprecedented security emergency.

Digital technology and algorithms play a central role in this dynamic, serving as key tools for monitoring, controlling, and predicting social behavior on a large scale. This shift is leading to the gradual militarization of civil society, with algorithmic surveillance technologies becoming instruments of widespread social control (Lyon, 2018; Amoore, 2013).

A recent strategic example is the law relating to the 2024 Paris Olympics, which provides for the widespread use of facial recognition and automated behavioral analysis. This measure raises major concerns about algorithmic bias, ethnic profiling, and widespread suspicion of behavior deemed "suspicious," setting a major precedent in terms of algorithmic security in Europe (League of Human Rights, 2023).

Resistance and alternatives

Thus, an essential aspect, often overlooked but fundamental to my thinking, concerns the internal resistance programmed into digital tools themselves. This built-in resistance, algorithmic self-censorship, and imposed neutrality reveal how these tools are designed to serve primarily the dominant interests of their designers. Radical criticism of technology necessarily includes this in-depth analysis of the economic and political model that underpins it (Morozov, 2013).

The resistance we are waging today from working-class neighborhoods, alongside the precarious, migrants, the homeless, and the elderly, is therefore existential. It aims to reinsert technological tools into a genuine narrative of emancipation and liberation, rather than submission and oppression. Defending this vision means radically asserting that humans can never be reduced to a calculable function, that our dignity will never be negotiable in the face of the imperatives of digital profit. It means promoting a popular ecology in which human, social, and cultural life definitively escapes algorithmic reduction.”

Following this introduction, Loup Cellard, a researcher at Datactivist, presented the following topics:

  • what an algorithm is;
  • why it is important to know how the government and public administrations use computing systems;
  • the legal framework that guarantees our right to information about these systems;
  • examples illustrating the issues raised by the use of algorithms.

Screenshot of the presentation slides. The slides are available here: Intro Algo.pdf

Finally, the last half hour of the workshop was devoted to an exercise: guessing the problematic criteria of the CNAF's risk-scoring algorithm. Within the CAFs, a scoring algorithm assigns a risk score to every beneficiary, regardless of the benefits they receive (housing assistance, family allowances, activity bonus, RSA, adult disability allowance, etc.).

The higher the risk score, the greater the likelihood that a beneficiary will be audited, as they are more likely to be suspected of errors or fraud. The score is based on criteria (or "variables") selected and weighted by the CNAF from among the thousands of data points the CAFs hold on each of their beneficiaries.
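To make the mechanism concrete, here is a minimal illustrative sketch of how such a weighted risk score could be computed. The variable names and weights below are invented for the example; they are not the actual CNAF criteria or coefficients, which have not been fully disclosed.

```python
# Illustrative sketch only: hypothetical variables and weights,
# not the real CNAF criteria.
from dataclasses import dataclass


@dataclass
class Beneficiary:
    # Hypothetical features drawn from a beneficiary's file.
    receives_housing_benefit: bool
    single_parent: bool
    months_since_last_declaration: int


# Hypothetical weights chosen by the administration; a higher weight means
# the variable contributes more to the risk score.
WEIGHTS = {
    "receives_housing_benefit": 0.8,
    "single_parent": 1.2,
    "months_since_last_declaration": 0.1,
}


def risk_score(b: Beneficiary) -> float:
    """Weighted sum of the selected variables: the higher the score,
    the more likely the file is to be selected for an audit."""
    return (
        WEIGHTS["receives_housing_benefit"] * int(b.receives_housing_benefit)
        + WEIGHTS["single_parent"] * int(b.single_parent)
        + WEIGHTS["months_since_last_declaration"] * b.months_since_last_declaration
    )


if __name__ == "__main__":
    # Files are then ranked by score and the highest-scoring ones audited first.
    population = [
        Beneficiary(True, True, 6),
        Beneficiary(False, False, 1),
    ]
    for person in sorted(population, key=risk_score, reverse=True):
        print(person, round(risk_score(person), 2))
```

The point of the sketch is that the discrimination debate hinges on which variables are chosen and how they are weighted, decisions made by the administration and invisible to the people being scored.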

A study of the 2014 scoring algorithm showed that it deliberately targets the most vulnerable. Fifteen organizations filed an appeal with the Council of State in October 2024 to demand its repeal.

The practical workshop was conducted as follows: three family situations were presented, each containing three criteria that increase the risk score assigned to recipients. These variables were selected from among the 94 variables in the 2014-2018 version of the algorithm, the only version that associations and the media were able to obtain. Participants were asked to identify these three criteria by reading a description of the family situation. Below you will find one of the family situations followed by the answer key for the exercise.

Situation 1 of the exercise

Solution to Situation 1 of the exercise

These algorithmic systems, whether used to calculate and allocate benefits or to monitor beneficiaries, are subject to impact and transparency obligations that are often not respected. The overuse of personal data (contrary to the GDPR's data minimization principle) and indirect discrimination are thus at the heart of the appeal filed with the Council of State against the CNAF's scoring algorithm by fifteen associations.

Defending citizens' access to these algorithms, which today resemble black boxes, is a democratic issue, and organizations are taking it up collectively. However, we must not forget the individual rights that users, beneficiaries, and citizens can assert, with the help of social workers and associations if necessary: access to personal data, the right to rectification, and the right to an explanation of the calculation, in particular.

Within the Algo->Lit project, we want to continue running this type of workshop with digital mediators, whether they are digital inclusion professionals, social workers, or activists, in order to eventually build the training courses and tools they need. Our goal is for them to be able to explain to the people they support how to deal with algorithms, the decisions they generate, and their very real consequences. We do have rights when it comes to algorithmic systems, even if those rights are poorly understood and rarely respected.

For more information: algolit(at)datactivi.st, mj.doudouh(at)hotmail.fr, valerie.changerdecap(at)protonmail.com.

For more information on the Algo->Lit project: algolit(at)datactivi.st