The ethical and legal implications of using AI in human resources, particularly recruiting, are widely debated. The use of AI to analyse video job interviews in particular has sparked widespread controversy and regulatory initiatives. Under the recently established EU AI Act, AI systems used in employment and recruitment are classified as high-risk and will soon be regulated and governed accordingly. Nevertheless, companies providing AI recruiting software still operate under a rhetoric of their systems being “unbiased”, “fair” and “neutral”, notably claiming that AI video interview tools can effectively predict candidates’ emotions and personalities for assessment. Instances where AI-based hiring tools have shown bias or other algorithmic harm have usually been ascribed to technological constraints, such as skewed data or algorithmic distortions.
Despite the introduction of various governance mechanisms and principles for transparency, data protection, and accountability in AI-based hiring tools, revelations of algorithmic harm persist. The AI Incident Database catalogues numerous reports of incidents tied to AI in recruitment. Notable examples include Amazon’s internal recruiting algorithm, discontinued in 2018 after it was shown to be seriously biased against women (AI Incident Database, 2023a), and Workday’s algorithmic screening systems, the subject of a 2023 lawsuit alleging discrimination against African Americans, individuals over 40, and people with disabilities (AI Incident Database, 2023b).
A Harvard Business School study reports that virtually all Fortune 500 companies (99%) use software designed to screen talent (Fuller et al., 2021). Additionally, over half (55%) of HR executives in the U.S. employ predictive algorithms to assist in recruitment processes (Mercer, 2021). AI-based hiring tools take varied forms, including resume screening, chatbots, and gamification. Various justifications are commonly given for the use of AI video interview analysis. One is increased efficiency and productivity, notably for mass hiring. Another is business results: AI-based hiring tools are said to significantly increase company profits by more effectively selecting the most profitable candidates. AI video interview analysis deployed to analyse and determine behavioural or emotional cues in candidates is said to outperform human interviewers’ ability to interpret soft skills and personality traits. AI video interview analysis is thus closely related to affective computing, defined as “computing that relates to, arises from, or influences emotion” (Picard, 1995). Despite having gained significant attention in recent years, this field has been deeply criticised for assuming that emotions can be read validly and reliably from a person’s facial expression alone, and for adopting a simplistic theoretical notion of human emotion that fails to account for its complexity and temporality (Barrett et al., 2019; Bjørnsten & Sørensen, 2020). Schuller (2018) and Ahmed (2014) additionally account for the historical and cultural-political dimensions of affective computing and its perpetuation of biological determinism, biopower regimes, marginalisation and even eugenics.
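To make the object of this criticism concrete, the following is a minimal sketch of the pipeline structure commonly attributed to such emotion-inference tools: each video frame’s face crop is classified into a fixed set of basic-emotion categories, and the per-frame scores are averaged into a candidate “profile”. Every name here (EMOTIONS, score_frame, assess_interview) is a hypothetical illustration, not any vendor’s actual system; the random stand-in classifier simply exposes the structural assumption the critics target, namely that emotion is legible frame by frame, stripped of context and temporality.

```python
# Illustrative sketch only: all names and the classifier are hypothetical.
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "neutral"]

def score_frame(face_pixels: np.ndarray) -> dict[str, float]:
    """Stand-in for a learned classifier mapping one face image to
    per-emotion probabilities. Random output is used here to make the
    structural point: each frame yields a single, context-free label."""
    logits = np.random.randn(len(EMOTIONS))
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    return dict(zip(EMOTIONS, probs))

def assess_interview(frames: list[np.ndarray]) -> dict[str, float]:
    """Aggregate per-frame emotion scores into one candidate 'profile'.
    Note what is lost: temporal dynamics, culture, and the situation the
    candidate is actually in are never represented."""
    per_frame = [score_frame(f) for f in frames]
    return {e: float(np.mean([p[e] for p in per_frame])) for e in EMOTIONS}

if __name__ == "__main__":
    fake_video = [np.zeros((48, 48)) for _ in range(100)]  # 100 dummy frames
    print(assess_interview(fake_video))
```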
However, one of the main rationales for using AI video interview analysis relates to bias. These tools are said to debias recruitment, resulting in more diverse workforces and an improved ability for companies to meet goals related to diversity, inclusion, and equality. Objectivity and neutrality are the usual keywords for why these tools are said to outperform human recruiters and interviewers: they are claimed to eradicate characteristics such as gender and race so as to enable debiased recruiting. AI video interview analysis is claimed to promote impartiality in the phase of the recruitment process most prone to discrimination, since human interviewers might inadvertently exhibit prejudice due to factors like ethnic background, physical attractiveness, or other aspects of appearance. By relying on so-called “culturally invariant facial and posture analysis” of candidates’ non-verbal and emotional reactions, this approach is said to enable transparency and non-discrimination. However, Drage and Mackereth (2022) argue that this idea fundamentally fails to acknowledge that racialised and gendered biases cannot be solved through technological solutions, as they cannot be isolated from broader systems of power. Attempting to “outsource” the work of actually addressing structural power asymmetries to AI may in fact perpetuate inequality and discrimination. It is a technosolutionist approach that upholds a notion of idealised neutrality instead of facing and engaging with the complexities of discrimination and marginalisation that exist in recruiting. D’Ignazio and Klein (2020), in Data Feminism, discuss neutrality and objectivity as illusory idealisations and misconceptions in the context of technology, rule-based systems, and data visualisation, contending that these are in fact highly value-laden and based on masculinist, patriarchal ideals and values.
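A simple synthetic illustration, under fabricated assumptions, of why “eradicating” protected characteristics does not debias a model: when a model is trained on historically biased outcomes and any feature correlated with the protected attribute remains (a proxy), the model reconstructs the attribute it never saw. All data and variable names below are hypothetical and chosen only to exhibit this well-known proxy effect.

```python
# Synthetic proxy-discrimination sketch: all data is fabricated.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # protected attribute (hidden from model)
proxy = group + rng.normal(0, 0.5, n)      # e.g. postcode-like feature correlated with group
skill = rng.normal(0, 1, n)                # genuinely job-relevant signal
# Historical hiring decisions were themselves biased against group 1:
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# "Fair" model: fit on skill and proxy only, never seeing `group`.
X = np.column_stack([skill, proxy, np.ones(n)])
w, *_ = np.linalg.lstsq(X, hired.astype(float), rcond=None)
scores = X @ w

print("mean score, group 0:", scores[group == 0].mean())
print("mean score, group 1:", scores[group == 1].mean())
# The score gap persists: the model recovers the protected attribute
# through the proxy and reproduces the historical bias it never "saw".
```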
An ethical approach to AI hiring tools is to actively work with mechanisms such as those presented in the Terms-we-Serve-with (TwSw) framework (https://termsweservewith.org/; Rakova et al., 2023), including the following:
- Contestability: Mechanisms that enable individuals to voice concerns and share testimonies of failure modes, algorithmic harms, and structural and institutional issues in AI tools, facilitating ongoing identification of risks and harms and leading to effective mitigation strategies.
- Informed consent & refusal: Multifaceted mechanisms that enable users to refuse and opt out of AI video analysis tools, including temporal dynamics of consent and refusal, treated as an ongoing practice rather than a one-time event (see the sketch after this list).
- Co-constitution: Informing users about the conditions and terms of service before they interact with AI video analysis tools, and offering them a chance to alter these terms.
- Disclosure-centred mediation: Accountability and enforcement mechanisms between candidates and the AI video analysis tool provider.
- Addressing friction: Identifying the frictions and tensions that exist among stakeholders, which in this case are i) the candidate, ii) the company behind the AI video interview analysis, iii) the hiring organisation (employer), iv) policymakers, and v) potentially involved HR companies.
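As a concrete reading of the informed consent & refusal dimension above, the following minimal sketch shows what treating consent as an ongoing, revocable practice rather than a one-time checkbox could look like in code. The ConsentLedger class and its methods are illustrative assumptions, not part of the TwSw framework or any real product’s API.

```python
# Illustrative sketch only: ConsentLedger and its API are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentEvent:
    candidate_id: str
    granted: bool   # True = consent given, False = refused/revoked
    scope: str      # e.g. "video-analysis", "emotion-inference"
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class ConsentLedger:
    events: list[ConsentEvent] = field(default_factory=list)

    def record(self, candidate_id: str, granted: bool, scope: str) -> None:
        """Append a consent or refusal event; nothing is overwritten, so
        the full history stays auditable (supporting contestability and
        disclosure-centred mediation)."""
        self.events.append(ConsentEvent(candidate_id, granted, scope))

    def is_permitted(self, candidate_id: str, scope: str) -> bool:
        """Processing is permitted only if the *latest* event for this
        candidate and scope grants consent; a later refusal always wins."""
        relevant = [e for e in self.events
                    if e.candidate_id == candidate_id and e.scope == scope]
        return bool(relevant) and relevant[-1].granted

# Usage: a candidate consents, later revokes; the tool must re-check
# before every processing step rather than caching a one-time answer.
ledger = ConsentLedger()
ledger.record("cand-42", granted=True, scope="video-analysis")
assert ledger.is_permitted("cand-42", "video-analysis")
ledger.record("cand-42", granted=False, scope="video-analysis")
assert not ledger.is_permitted("cand-42", "video-analysis")
```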
These dimensions work in different ways to mitigate algorithmic harm and to introduce mechanisms, in both the design and the regulation of AI video analysis tools, centred on trust, transparency, and human agency. Furthermore, contrasting TwSw’s decentralised, context-sensitive approach with centralised regimes like the EU AI Act, we highlight the benefits of a sociotechnical, decentralised perspective in governing and regulating AI video analysis tools. This owes to the asymmetrical power and information dynamics inherent in the use of video analysis products, and to the accountability mechanisms of TwSw that afford reconciliation.
Ahmed, S. (2014). Cultural politics of emotion. Edinburgh University Press.
AI Incident Database. (2023a). Incident 37: Female Applicants Down-Ranked by Amazon Recruiting Tool. Retrieved December 16, 2023, from https://incidentdatabase.ai/cite/37/#r610
AI Incident Database. (2023b). Incident 489: Workday’s AI Tools Allegedly Enabled Employers to Discriminate against Applicants of Protected Groups. Retrieved December 16, 2023, from https://incidentdatabase.ai/cite/489/#r2777
Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest, 20(1), 1-68.
Bjørnsten, T. B., & Sørensen, M.-M. Z. (2020). Uncertainties of facial emotion recognition technologies and the automation of emotional labour. In The Uncertain Image (pp. 43-53). Routledge.
Danner, M., Hadžić, B., Weber, T., Zhu, X., & Rätsch, M. (2023). Towards equitable AI in HR: Designing a fair, reliable, and transparent human resource management application. In International Conference on Deep Learning Theory and Applications (pp. 308-325).
D’Ignazio, C., & Klein, L. F. (2020). Data feminism. The MIT Press.
Drage, E., & Mackereth, K. (2022). Does AI Debias Recruitment? Race, Gender, and AI’s “Eradication of Difference”. Philosophy & Technology, 35(4), 89.
EU AI Act. (2021, April 21). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Retrieved January 2, 2024, from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
Fernández-Martínez, C., & Fernández, A. (2020). AI and recruiting software: Ethical and legal implications. Paladyn, Journal of Behavioral Robotics, 11(1), 199-216.
Fuller, J. B., Raman, M., Sage-Gavin, E., & Hines, K. (2021). Hidden workers: Untapped talent. Harvard Business School Project on Managing the Future of Work and Accenture.
Haley, L. (2023). The European Union’s Proposed Artificial Intelligence Regulation on Recruiting and Hiring Processes. Scitech Lawyer, 19(3), 26-30.
Heilweil, R. (2020, January 1). Illinois regulates artificial intelligence like HireVue’s used to analyze online job interviews. Vox. Retrieved from https://www.vox.com/recode/2020/1/1/21043000/artificial-intelligence-job-applications-illinios-video-interivew-act
Kammerer, B. (2021). Hired by a Robot: The Legal Implications of Artificial Intelligence Video Interviews and Advocating for Greater Protection of Job Applicants. Iowa Law Review, 107, 817.
Mercer. (2021). Global Talent Trends 2020–2021: Win with empathy.
Picard, R. (1995). Affective computing. MIT Media Laboratory Perceptual Computing Section Technical Report No. 321.
Rakova, B., Shelby, R., & Ma, M. (2023). Terms-we-serve-with: Five dimensions for anticipating and repairing algorithmic harm. Big Data & Society, 10(2), 20539517231211553.
Schuller, K. (2018). The biopolitics of feeling: Race, sex, and science in the nineteenth century. Duke University Press.
Uma, V. R., Velchamy, I., & Upadhyay, D. (2023). Recruitment Analytics: Hiring in the Era of Artificial Intelligence. In The Adoption and Effect of Artificial Intelligence on Human Resources Management, Part A (pp. 155-174). Emerald Publishing Limited.