Algorithms are omnipresent in our daily lives, shaping access to rights, public services, digital platforms, content prioritization, and automated decision-making. Yet they remain largely invisible to those they affect. Moreover, when transparency does exist, it is often limited to the publication of technical information, reports, or documentation.
However, publishing explanatory documentation does not guarantee real understanding of the decisions produced by an algorithm. A system can be legally transparent while remaining socially opaque. The question, therefore, is not only what is made public, but who is actually able to understand, question, and challenge these systems.
The webinar Algorithmic Transparency in Europe: State of Play and Perspectives, organized by La Mednum, Datactivist, FARI – AI for the Common Good Institute, and Waag Futurelab as part of the European Algo-Lit project (funded by Erasmus+), and held on December 2, made one point clear: algorithmic transparency as it is practiced today is largely insufficient.
During the event, which brought together 290 digital mediation professionals and researchers from France, Belgium, and the Netherlands, participants shared experiences and best practices to address common questions across the three countries: how to make algorithms understandable, and how to support mediators and field practitioners in discussing and explaining them to citizens.
From Formal Transparency to Effective Transparency
The presentations by the French researcher Loup Cellard, Tania Duarte, founder of We and AI in the United Kingdom, and Adrien Pouillet, project manager at La Mednum, emphasized a key distinction: that between formal transparency and effective transparency. The former meets regulatory requirements; the latter focuses on the lived experience of the people concerned. Transparency truly begins when a person can say: “I understand what is happening to me.”
This approach highlights a frequent blind spot in public policy and technological discourse: understanding is not solely a matter for experts. Algorithmic literacy does not mean turning citizens into engineers, but giving them the tools to grasp the underlying logic, identify room for maneuver, and exercise their rights.
Power, Legitimacy, and Social Justice: Tania Duarte’s Perspective
Tania Duarte, founder of the UK-based organization We and AI, offered a distinctly political and critical perspective on algorithmic transparency. For her, the debate cannot be limited to technical understanding of systems: the central issue is power.
Algorithms are never neutral. They embed choices, priorities, and values that have very concrete effects on individuals and social groups. Even when a system is “explainable,” it can still generate discrimination, reinforce existing inequalities, or impose decisions without genuine consent.
Tania Duarte stressed that transparency does not automatically create legitimacy. Understanding how an algorithm works is not enough if those affected have no means to challenge its use or influence decisions made upstream. This leads to an essential question: Should we be using this algorithm, and under what conditions?
She identified three possible stances toward AI systems:
This approach shifts the debate: it is no longer only about making algorithms understandable, but about restoring real power to citizens over the technological choices that shape their daily lives.
The Key Role of Mediation
In the face of such complexity, digital mediators emerge as essential actors. They operate at the intersection of institutions, technical systems, and citizens. Their role is not to oversimplify, but to translate algorithmic decisions into understandable experiences, and above all to empower those affected.
Several important points emerged during the discussions:
Mediation thus becomes a concrete alternative to algorithmic opacity.
A European Ambition in the Making
The discussions also showed that, despite common legal frameworks at the European level, algorithmic transparency practices vary greatly from one country to another.
This diversity reminds us that transparency cannot be established by law alone: it is built through everyday practices, professional cultures, and support for the public.
Participants’ feedback highlighted how much field practitioners need shared resources, common frameworks, and spaces for exchange in order to collectively address these challenges.
This is precisely the dynamic behind the Algo-Lit project.
Through the development of resources, training for mediators, and networking among European actors, the project aims to promote algorithmic transparency that is not only compliant with regulations, but genuinely useful, understandable, and actionable for those who support citizens on a daily basis.
Europe has a major opportunity: to make algorithmic literacy a true pillar of democracy, provided it is grounded in concrete mediation practices.

Acknowledgments
We would like to warmly thank all the speakers for the richness of their analyses and feedback:
Loup Cellard, Researcher (France)
Tania Duarte, Founder of We and AI (United Kingdom)
Adrien Pouillet, Project Manager at La Mednum (France)
As well as the members of the Algo-Lit consortium:
Léa Rogliano and Alice Demaret, FARI – AI for the Common Good Institute (Belgium)
Tessel van Leeuwen, Bente Zwankhuizen, and Danny Lämmerhirt, Waag Futurelab (Netherlands)
Numerous resources from the webinar are available at the following links:
👉 replay
👉 Study on AI literacy training (available in French, English, and Dutch)