
AI and the EU border: surveillance, security, and migrant rights

Nov 8, 2024 | AI Ethics

 

The rise of border AI: security vs. rights

Borders have existed for millennia, but the advent of AI has transformed how states surveil and police marginalised communities at border zones. Over the past two decades, borders have become critical sites for managing migration risks, especially in Western societies (Hall et al., 2021). Advanced security technologies—including surveillance systems, data collection, predictive analytics, and physical barriers—have reinforced these borders and amplified their security measures. These initiatives are primarily designed to mitigate perceived risks and demonstrate to the public that governments are “taking action” on migration (Vallet & David, 2012; Hall & Clapton, 2021).

One major area of development in border AI involves the use of biometric technologies to identify migrants deemed high-risk by analysing both bodily and behavioural features (Fors & Meissner, 2022). Biometrics, which encompasses techniques such as fingerprinting, iris and retinal scanning, facial recognition, and gait analysis, is now a core component of AI-driven border security systems (European Parliament, 2021). Some experimental technologies, such as AI-powered lie detectors, aim to determine truthfulness through emotion recognition and micro-expression analysis, though the scientific basis for these tools is widely questioned (Lomas, 2022; Foundation, 2021). DNA-based biometrics are also beginning to emerge as identity-verification tools at borders (Browne, 2015).

While these automated decision-making systems can make processes more efficient for public administrators, they often prioritise state interests over the needs of migrants, asylum seekers, and refugees (Ozkul, n.d.). Nonetheless, a few initiatives do reflect migrant needs in their design. For example, Latvia’s citizenship preparation tool allows individuals to practise the language and knowledge requirements of the citizenship exam, addressing a concern identified in a 2019 survey, which found that fear of failure was a primary reason non-Latvians did not apply for citizenship (OCMA, 2021). Efforts like these, which aim to support migrant integration, are often driven by grassroots organisations and involve collaboration with municipalities, NGOs, and advocacy groups (Bose & Navalkar, 2019).

Despite these isolated efforts to incorporate migrant needs, the broader trend in border AI remains one of rigid, standardised practices with minimal flexibility, focused primarily on state security objectives. Algorithmic decision-making systems match data against pre-defined risk criteria to produce recommendations, often without room to consider individual circumstances. These risk-based frameworks, which cannot account for contextual factors, have raised human rights concerns about the reliability and fairness of AI at borders (Molnar, 2019). In response, some propose a “human in the loop” approach to retain human oversight. Yet human oversight is not always sufficient to prevent harm: studies show that human decision-makers may place undue trust in automated systems, a bias that can produce unjust outcomes. In immigration detention assessments, for instance, officers tended to override AI recommendations to release individuals, opting instead for detention, while rarely overriding recommendations to detain (Forster, 2022). Human oversight must therefore be active and informed, requiring expertise and the ability to make decisions that are not unduly influenced by AI outputs (State of Wisconsin v. Eric L. Loomis, 2016).
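
To make this contextual blindness concrete, the short Python sketch below shows what a purely hypothetical rule-based screening step might look like; the criteria, weights, and threshold are invented for illustration and do not describe any deployed border system.

```python
# Hypothetical sketch of a rigid, rule-based risk-screening step.
# Criteria, weights, and the threshold are invented for illustration only;
# they do not describe any real border system.

from dataclasses import dataclass

@dataclass
class Traveller:
    prior_visa_refusal: bool
    route_flagged: bool
    document_anomaly: bool

# Pre-defined risk criteria: static weights, applied identically to everyone.
RISK_WEIGHTS = {
    "prior_visa_refusal": 40,
    "route_flagged": 35,
    "document_anomaly": 25,
}
REFERRAL_THRESHOLD = 50

def risk_score(t: Traveller) -> int:
    # Nothing here can represent *why* a visa was refused, or any other
    # individual circumstance (the contextual blindness criticised above).
    return sum(weight for criterion, weight in RISK_WEIGHTS.items() if getattr(t, criterion))

def recommendation(t: Traveller) -> str:
    return "refer_for_secondary_screening" if risk_score(t) >= REFERRAL_THRESHOLD else "admit"

# A traveller whose earlier refusal was later found to be an administrative
# error still accumulates the same score as anyone else with that flag.
print(recommendation(Traveller(prior_visa_refusal=True, route_flagged=True, document_anomaly=False)))
```

Even with a human reviewer at the end of such a pipeline, the automation-bias findings above suggest that the recommendation field can carry more weight than the circumstances it omits.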

Algorithmic accountability at EU borders

In the EU, agencies such as EU-LISA and Frontex are instrumental in the datafication of borders, managing key databases such as the Schengen Information System (SIS) and Eurodac, the latter of which is central to enforcing the Dublin Regulation (EU-LISA, n.d.; Frontex, n.d.). Eurodac, for example, stores biometric data, including fingerprints and facial images, for individuals as young as six, to determine the country responsible for examining an asylum application (European Union, 2016). Yet the algorithms used in these systems are not infallible, and biometrics themselves are prone to inaccuracies. False matches and inherent biases in AI systems can result in the wrongful identification of individuals based on race, gender, and nationality, leading to unjust detentions and deportations (Amoore, 2006).
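
To see why false matches are structurally built into such systems, consider the minimal sketch below. It assumes, purely for illustration, that biometric templates are stored as numeric feature vectors and compared against a fixed similarity threshold; neither the representation nor the threshold describes Eurodac or any other deployed database.

```python
# Minimal sketch of threshold-based biometric matching. The vector
# representation and the threshold value are illustrative assumptions,
# not a description of Eurodac, SIS, or any other deployed system.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

MATCH_THRESHOLD = 0.85  # arbitrary illustrative cut-off

def is_match(probe: np.ndarray, stored: np.ndarray) -> bool:
    # A "hit" is declared purely because similarity exceeds a fixed cut-off.
    # Two different people can still exceed it (a false match), and the same
    # person can fall below it (a false non-match), e.g. due to poor image
    # quality, sensor noise, or demographic skew in the underlying model.
    return cosine_similarity(probe, stored) >= MATCH_THRESHOLD
```

Lowering the threshold catches more genuine matches but produces more false ones; raising it does the opposite, and in an enforcement setting either kind of error falls on the person being checked.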

In a particularly controversial case, the EU-funded iBorderCtrl pilot experimented with emotion recognition to assess travellers’ truthfulness from webcam footage and micro-expression analysis. The technology was criticised for biases related to cultural, racial, and gender differences, with studies indicating that facial expressions vary across individuals and contexts (BREYER, n.d.; Barrett et al., 2019). This modern application of physiognomy—a pseudoscience suggesting that physical characteristics reveal psychological traits—has been criticised as a resurgence of scientific racism, especially when applied to vulnerable groups (Arcas et al., 2017; Hemat, 2022). These concerns are echoed by Hall and Clapton (2021), who argue that iBorderCtrl’s reliance on racialised assumptions reinforces discriminatory perceptions of marginalised groups as inherently “risky” and “other.” Furthermore, there is insufficient scientific evidence supporting the accuracy of emotion recognition for individual behavioural assessment (European Commission, 2021).

Despite the increasing application of border AI, there is a lack of accountability within biometric systems like Eurodac. Although these databases and technologies continue to expand, metrics for assessing error rates or false matches are scarce, and affected individuals have limited recourse to challenge the results of these systems (Deloitte & Directorate-General for Migration and Home Affairs, 2020). The opaque nature of automated decision-making, often described as a “black box,” makes it difficult to trace how data shapes decisions, potentially impacting individuals’ right to effective remedies (Fundamental Rights Agency, 2022). Gerards and Xenidis, in “Algorithmic Discrimination in Europe,” highlight the difficulty in detecting and challenging algorithmic discrimination, given that judges may lack access to information on potential biases within algorithms (Brouwer, 2023).
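
The kind of error reporting that is currently scarce is not technically difficult to produce. The generic sketch below computes two standard biometric error metrics, the false match rate (FMR) and the false non-match rate (FNMR), from labelled comparison outcomes; it is an illustration of what routine reporting could look like, not of how Eurodac or any EU system is actually evaluated.

```python
# Generic sketch of two standard biometric error metrics: the false match
# rate (FMR) and false non-match rate (FNMR). This shows what routine error
# reporting could look like; it is not how Eurodac or any EU system is
# actually evaluated.

def error_rates(comparisons: list[tuple[bool, bool]]) -> dict[str, float]:
    """comparisons: (same_person, system_declared_match) pairs."""
    impostor = [declared for same, declared in comparisons if not same]
    genuine = [declared for same, declared in comparisons if same]
    fmr = sum(impostor) / len(impostor) if impostor else 0.0               # strangers wrongly matched
    fnmr = sum(not d for d in genuine) / len(genuine) if genuine else 0.0  # true matches missed
    return {"false_match_rate": fmr, "false_non_match_rate": fnmr}

# Toy example: one false match among four impostor comparisons -> FMR = 0.25
print(error_rates([(True, True), (True, False), (False, False),
                   (False, True), (False, False), (False, False)]))
```

Publishing figures like these, broken down by demographic group and paired with channels to contest a match, would be a modest but concrete step toward the accountability that is currently missing.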

Automating processes for applications like visas or citizenship can benefit applicants by speeding up procedures. However, these systems can also disadvantage individuals whose complex circumstances cannot be easily processed by an algorithm. For example, in the UK’s EU Settlement Scheme, applicants without National Insurance numbers struggled to provide sufficient evidence of residence, complicating their applications and placing an added burden on vulnerable groups (Goodman & Sage, 2019). Conversely, algorithmic systems can also make existing patterns of discrimination visible, as with the UK Home Office’s algorithmic visa application system, which produced higher rejection rates for certain nationalities (Latonero & Kift, 2020). The case brought to light discriminatory practices that had previously gone undetected (Booth, 2020), underscoring the importance of scrutinising each algorithm to prevent cascading errors (Goodman & Flaxman, 2017).
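
As an illustration of what such scrutiny can involve at its most basic, the hypothetical sketch below computes rejection rates by nationality from a toy decision log; the data and field names are invented, and this is only a first audit step, not the method applied in the UK cases.

```python
# Hypothetical sketch of an elementary disparity audit over visa decisions,
# grouped by nationality. The data and field names are invented; this is one
# basic way to surface skewed rejection rates, not the method used in the UK
# cases discussed above.

from collections import defaultdict

def rejection_rates(decisions: list[dict]) -> dict[str, float]:
    totals, rejected = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["nationality"]] += 1
        if d["outcome"] == "rejected":
            rejected[d["nationality"]] += 1
    return {n: rejected[n] / totals[n] for n in totals}

# Toy decision log: rates like these would flag nationality "A" for closer
# human review of how the system treats that group.
sample = [
    {"nationality": "A", "outcome": "rejected"},
    {"nationality": "A", "outcome": "rejected"},
    {"nationality": "A", "outcome": "granted"},
    {"nationality": "B", "outcome": "granted"},
    {"nationality": "B", "outcome": "rejected"},
    {"nationality": "B", "outcome": "granted"},
]
print(rejection_rates(sample))  # approximately {'A': 0.67, 'B': 0.33}
```

A skewed rate is a prompt for human investigation rather than proof of discrimination on its own, but without even this level of measurement, errors and biases can cascade unnoticed.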

Legislative gaps

Legislative frameworks such as the EU AI Act aim to impose restrictions on high-risk AI applications, including bans on emotion recognition technologies, biometric categorisation, and predictive policing. However, the AI Act notably excludes border control applications from these restrictions, creating a double standard in which migrants are subject to fewer protections (Napolitano, 2023). Under current regulations, high-risk border AI systems will not be required to comply with the EU’s standards until 2030, enabling ongoing experimentation and deployment without robust safeguards.

To bridge these regulatory gaps, Human Rights Impact Assessments (HRIAs) and Data Protection Impact Assessments (DPIAs) are essential. HRIAs evaluate the effects of policies on human rights, while DPIAs address data privacy risks under GDPR requirements (United Nations, 2013; European Union, 2016). Both frameworks aim to mitigate risks by identifying and addressing potential violations. However, the rapid deployment of AI systems at borders often precedes thorough assessments, and the opacity of AI models can complicate accurate evaluations, underscoring the need for more effective assessments to protect individual rights and privacy (Napolitano, 2023).

Rethinking AI at the border: toward ethical and person-centred frameworks

Border AI is deeply informed by discriminatory systems that have evolved over time, as noted by Benjamin (2019), who argues that AI’s racialised assumptions are rooted in the histories of colonialism. Current trends toward increased surveillance and data collection in the name of efficiency reinforce these biases, while the desirability of such invasive technologies goes largely unquestioned. In a recent report, EU-LISA (2023) framed the deployment of border AI not as a matter of “if” but “when,” implying an inevitability of AI expansion without sufficient scrutiny of who it serves and the potential harms it inflicts.

Building a more ethical framework for border AI involves adopting a socio-technical systems approach, where both social and technical aspects are integrated to ensure AI systems align with societal values, ethical principles, and diverse stakeholder needs (Latour, 1992). This framework calls for direct migrant involvement in the design and implementation processes, ensuring that AI tools consider their unique needs and circumstances (Ozkul, n.d.).

As Fors & Meissner (2022) suggest, shifting border AI from a risk-oriented model to one that highlights positive attributes, skills, and qualities could challenge the inherent biases of a system focused on criminalisation. Establishing accountability mechanisms, such as channels for feedback and recourse, would encourage transparency, foster trust, and promote meaningful dialogue between policymakers and affected communities (Rakova et al., 2021).

As AI technologies continue to be integrated at borders, it’s essential to critically evaluate which applications are genuinely beneficial and ethical. While AI has the potential to improve efficiency and support migrants if designed with their needs in mind, certain applications—particularly those that infringe on rights or operate without transparency—should be reconsidered or restricted. Extending the protections of the EU AI Act to include border applications is a necessary step to ensure that migrants receive the same rights and safeguards as any other individuals affected by high-risk AI. By focusing on responsible, human rights-respecting applications, we can work toward a border AI framework that both serves operational needs and upholds the dignity of those it affects.

References

Amoore, L. (2006). Biometric borders: Governing mobilities in the war on terror. Political Geography, 25(3), 336–351. https://doi.org/10.1016/j.polgeo.2006.02.001

Arcas, B. A. Y., Mitchell, M., & Todorov, A. (2017). Physiognomy’s New Clothes. Medium. Retrieved from https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a

Barrett, L. F., Mesquita, B., & Gendron, M. (2019). Context in Emotion Perception. Current Directions in Psychological Science, 20(5), 286–290. https://doi.org/10.1177/0963721411422522

Benjamin, R. (2019). Race after Technology: Abolitionist Tools for the New Jim Code. Polity.

Booth, R. (2020, August 4). UK’s ‘racist’ visa algorithm suspended after legal challenge. The Guardian. Retrieved from https://www.theguardian.com/

Bose, M., & Navalkar, S. (2019). Supporting migrant integration through AI: Lessons from grassroots initiatives. Migration Studies Journal, 8(2), 133–156.

BREYER. (n.d.). iBorderCtrl | An experimental EU-funded project. Retrieved from https://www.breyercouncil.eu

Brouwer, E. (2023). Algorithmic Discrimination in Europe: The role of transparency in challenging biases. Journal of European Integration, 45(3), 401–415.

Browne, S. (2015). Dark Matters: On the Surveillance of Blackness. Duke University Press.

Deloitte, & Directorate-General for Migration and Home Affairs (European Commission). (2020). Study on Biometric Matching System (BMS) Accuracy. Directorate-General for Migration and Home Affairs.

European Commission. (2021). Emotion Recognition Systems: A Critical Review. Directorate-General for Parliamentary Research Services.

European Parliament, Directorate-General for Parliamentary Research Services. (2021). Biometrics at the EU Borders. Brussels.

European Union. (2016). General Data Protection Regulation (GDPR). Retrieved from https://gdpr.eu/

EU-LISA. (n.d.). Home. Retrieved from https://www.eulisa.europa.eu/

Forster, S. (2022). Bias and trust in automated immigration decisions. Journal of Law & Ethics, 12(2), 110–125.

Fors, V., & Meissner, M. (2022). Biometric Technologies in Border Control: A Critical Overview. Springer.

Frontex. (n.d.). European Union Agency. Retrieved from https://frontex.europa.eu/

Fundamental Rights Agency. (2022). Biometric Systems and the Right to Effective Remedies. Annual Report, p. 50.

Goodman, B., & Flaxman, S. (2017). European Data Privacy Law and AI: Risk, Limitations, and Implications. AI Journal, 19(3), 58–76.

Goodman, S., & Sage, D. (2019). The UK EU Settlement Scheme: Challenges and Discriminations. Policy Review Journal, 34(4), 201–218.

Hall, A., & Clapton, W. (2021). Securitization at the Border: AI, Surveillance, and Migration Control. International Security Studies, 43(2), 300–316.

Hall, S., Tarek, K., & Jacoby, B. (2021). Migration and Risk Management in Western Societies. Border Security Journal, 7(1), 99–113.

Hemat, E. (2022). The Racialisation of AI Emotion Detection Technologies. Surveillance & Society, 20(3), 488–506.

Latonero, M., & Kift, P. (2020). A Human Rights Approach to AI Bias in Immigration. Human Rights Journal, 4(2), 59–71.

Lomas, N. (2022, October 2). Lie detection at EU borders: Experimental AI projects spark debate. TechCrunch. Retrieved from https://www.techcrunch.com/

Molnar, P. (2019). Technological Testing Grounds: Migration Management Experiments and Reflections on Accountability. Refugee Law Lab.

Napolitano, G. (2023). Two-Tiered AI Regulation in the EU: Implications for Migrant Rights. Journal of Human Rights and Technology, 5(1), 72–94.

OCMA. (2021). Report on the Latvian Citizenship Test and Migrant Perceptions. Office of Citizenship and Migration Affairs.

Omi, M., & Winant, H. (2015). Racial Formation in the United States (3rd ed.). Routledge.

Ozkul, D. (n.d.). Migrant-Centred Design in Border Technologies. Technology and Society Journal, 18(4), 145–158.

Rakova, M., Zuckerman, M., Sweeney, M., & Takashi, L. (2021). Accountability Mechanisms in Border AI: The Role of Transparency and Stakeholder Feedback. Journal of Border Studies, 32(1), 15–32.

State of Wisconsin v. Eric L. Loomis, 881 N.W.2d 749 (Wis. 2016).

Statewatch. (2020). Frontex and Interoperable Databases: Implications for Racialised Groups. Retrieved from https://www.statewatch.org/

United Nations. (2013). Human Rights Impact Assessment: A Tool for Policymakers. UN Human Rights Office. Retrieved from https://www.ohchr.org/

Vallet, E., & David, C. (2012). Borders as Security Mechanisms in the 21st Century. Journal of Border Studies, 6(2), 95–110.

Vavoula, N. (2020). Legal Challenges in Algorithmic Decision-Making for Migration Control. European Journal of Law and Technology, 11(2).

Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Algorithmic Decision-Making in the Context of Immigration and Border Control. AI and Society, 34(4), 721–732.