
'We'll cross that bridge when we get there': EU AI policy and the AI literacy gap in AI-supported decision-making on asylum

With Gianmarco Gori, Guest Professor at the Law, Science, Technology and Society (LSTS) Research Group at the Vrije Universiteit Brussel (VUB).

Online | 3:30-5:00 PM (CET) | If you're interested in participating, please register via the following link: https://forms.gle/jdtHgzEdXwd4kjJ26

An increasingly optimistic stance has emerged within policy discourse regarding the potential of AI in the context of international protection procedures (IPPs). AI is framed as a tool for streamlining and accelerating IPPs, curbing authorities’ discretion, and enhancing decisions’ fairness and “accuracy”, understood as the capacity to distinguish “genuine” applicants from mala fide irregular immigrants.

Against this background, the EU AI Act classifies AI systems for evidence reliability assessment and decision-making support in IPPs as high-risk products. In the language of product legislation, they may circulate on the EU market only if they conform to the AI Act’s requirements. Within this framework, the AI literacy requirement represents a key mediator between product and human rights logics: AI operators must develop the “knowledge, skills, and understanding” necessary to prevent AI from causing harm in situated contexts of deployment.

Yet, as the AI Act approaches application, operators’ AI literacy needs have remained largely unmet due to the lack of context-specific guidance and learning frameworks for high-risk scenarios such as IPPs. With the recent Digital Omnibus on AI, the Commission has answered this challenge with a policy turnaround, proposing to lift operators’ AI literacy obligations.

The paper argues that the trajectory of the AI literacy requirement foregrounds critical issues at the intersection of EU AI and asylum governance: the techno-solutionism underpinning the AI-as-a-product regulatory paradigm; the lack of engagement with how knowledge is produced and epistemic authority enacted in IPPs; and the failure to account for how AI mediation of these practices may further entrench applicants’ vulnerabilities. Nonetheless, the paper contends that exploring what “skills, knowledge, and understanding” would be necessary to ensure asylum seekers’ protection in concrete AI-supported decision-making settings constitutes a productive exercise: it can help map the sites of discretionary judgment produced throughout AI development and deployment, make visible practitioners’ work of translation, alignment, and formalisation, and clarify how that work may deepen existing asymmetries in IPPs.
