AI has transformed how we interact with the world: from virtual assistants like Siri and Alexa to the algorithms that curate our social media feeds, its influence is undeniable. However, this transformation is not always equitable. Marginalised groups, particularly people with disabilities, may face challenges from AI technologies designed without inclusive considerations (Ajanaku, 2022). The design process, which shapes the scope, use, and training data of AI, is therefore crucial to ensuring equitable outcomes.
Research by Alharbi et al. (2023) emphasises the need to incorporate disability perspectives into AI design, noting how a narrowly defined scope can overlook the social and environmental barriers people with disabilities face. This exclusion can lead to biased or inaccurate AI outcomes that fail to address the real needs of disabled individuals. By adopting inclusive practices, such as engaging people with disabilities during the design phase, AI developers can prevent reinforcing ableist biases and ensure broader accessibility.
In their paper, “Definition Drives Design: Disability Models and Mechanisms of Bias in AI Technologies,” Alharbi et al. (2023) discuss how varying models of disability (medical, social, and relational) influence the design and outcomes of AI technologies. These models shape not only how AI systems are developed but also how they interact with and impact the lives of people with disabilities. The authors advocate for participatory development, in which individuals with disabilities are active contributors to the design process.
This inclusive approach contrasts sharply with tokenistic practices, in which individuals from marginalised groups are included solely for symbolic purposes rather than for genuine input. Tokenism undermines the value of diverse perspectives and often leaves those included feeling isolated or disregarded (Gillespie, 2022). For AI technologies to be truly inclusive, the participation of disabled individuals must be meaningful and not merely a facade of diversity.
The paper’s exploration of bias in AI is timely, as discussions of bias tend to focus predominantly on gender and race, often overlooking disability. The research team includes professionals from a range of fields, which helps ensure that the complexities of disability, as well as intersecting identities such as race and gender, are considered. This intersectional approach is critical to developing AI technologies that serve the needs of all communities.
Disability models and AI design
Alharbi et al. (2023) draw on three major disability models to frame their analysis of AI technologies: the medical model, which views disability as a problem to be fixed; the social model, which highlights societal and environmental barriers; and the relational model, which sees disability as an identity shaped through interactions with society. These models guide how AI technologies can be designed to support individuals with disabilities.
For example, AI technologies built on the medical model may focus on diagnosing or “curing” disabilities, often neglecting broader social factors. In contrast, the social model emphasises accessibility and the removal of societal barriers, encouraging AI systems to consider the broader environment in which disabled individuals live. Meanwhile, the relational model prioritises collaboration and recognises that disabilities are shaped by relationships and social structures.
The research team applies these models to two use cases—government benefits and healthcare. In the government benefits example, the AI system uses medical records under the medical model, while the social model involves functional assessments to understand the individual’s life challenges. The relational model, on the other hand, considers broader social services and support systems, ensuring a more holistic approach to understanding a disabled individual’s needs. Similarly, in healthcare, AI technologies can be designed to either focus narrowly on a person’s medical condition or, under the social and relational models, consider broader environmental factors and social supports.
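To make this contrast concrete, below is a minimal, hypothetical sketch of how the disability model chosen at design time determines which evidence a benefits-assessment system even considers. It is not taken from the paper; every class, field, and function name is an illustrative assumption.

```python
from dataclasses import dataclass, field

# Hypothetical applicant record; the fields are illustrative assumptions,
# not a schema from Alharbi et al. (2023).
@dataclass
class Applicant:
    medical_records: list[str] = field(default_factory=list)        # diagnoses, clinical notes
    functional_assessments: list[str] = field(default_factory=list) # barriers faced in daily life
    support_context: list[str] = field(default_factory=list)        # services, carers, community

# The disability model chosen at design time decides which evidence
# the system treats as relevant in the first place.
MODEL_INPUTS = {
    "medical": ["medical_records"],
    "social": ["medical_records", "functional_assessments"],
    "relational": ["medical_records", "functional_assessments", "support_context"],
}

def gather_evidence(applicant: Applicant, model: str) -> dict[str, list[str]]:
    """Return only the evidence the chosen disability model deems in scope."""
    return {name: getattr(applicant, name) for name in MODEL_INPUTS[model]}

applicant = Applicant(
    medical_records=["diagnosis: multiple sclerosis"],
    functional_assessments=["fatigue limits commuting beyond 30 minutes"],
    support_context=["relies on a community transport scheme"],
)

for model in MODEL_INPUTS:
    print(model, "->", gather_evidence(applicant, model))
```

The point the sketch illustrates is that a medical-model system never sees the functional or relational evidence at all: the exclusion happens at scoping time, before any model is trained or any decision is made, which is precisely the mechanism of bias Alharbi et al. (2023) describe.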
The cultural model of disability
While the medical, social, and relational models provide valuable insights, the cultural model of disability—still a developing concept (Twardowski, 2022)—adds another dimension. This model views disability not merely as a physical impairment but as a socially constructed phenomenon shaped by cultural beliefs, values, and norms. In many societies, disabilities are interpreted through the lens of religion or superstition. For instance, in India, disability is sometimes seen as karmic retribution for past misdeeds (Gautam, 2020). Such beliefs can lead to exclusion and marginalisation, highlighting the importance of culturally sensitive AI systems.
Incorporating the cultural model into AI design is particularly relevant for global applications, where cultural differences influence how disabilities are perceived and managed. AI systems developed for healthcare, for instance, must account for diverse cultural beliefs around disability to avoid reinforcing harmful stereotypes or alienating certain communities. By recognising these cultural variations, disability-inclusive AI design can be better tailored to people from different backgrounds and support a more inclusive global approach.
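Continuing the earlier hypothetical sketch, the cultural model could be represented as one further layer of context: a record of how disability is locally understood, used to adapt how a system communicates rather than what it decides. As before, every name and field here is an illustrative assumption, not a published design.

```python
from dataclasses import dataclass, field

# Hypothetical cultural-context record; fields are illustrative assumptions.
@dataclass
class CulturalContext:
    locale: str
    local_framings: list[str] = field(default_factory=list)   # e.g. stigma patterns, beliefs
    trusted_channels: list[str] = field(default_factory=list) # e.g. community organisations

def choose_outreach(context: CulturalContext) -> str:
    """Pick a communication route sensitive to local understandings of disability."""
    if context.trusted_channels:
        # Route messages through channels the community already trusts.
        return f"deliver via {context.trusted_channels[0]} ({context.locale})"
    return f"deliver via default channel ({context.locale})"

ctx = CulturalContext(
    locale="hi-IN",
    local_framings=["disability sometimes framed as karmic retribution (Gautam, 2020)"],
    trusted_channels=["local disability advocacy group"],
)
print(choose_outreach(ctx))
```

The design choice this illustrates is that cultural context shapes delivery and framing, so a system can engage communities on their own terms instead of assuming one universal understanding of disability.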
Tokenism in the design process
While the paper advocates for disability-inclusive design, it does not fully address the risks of tokenism. Tokenism, the practice of superficially including members of marginalised groups without valuing their actual contributions, is prevalent in many sectors, including AI development (Gillespie, 2022). This practice can undermine genuine inclusivity, particularly when people with disabilities are brought into the design process merely for symbolic purposes. For AI technologies to genuinely reflect the needs of disabled individuals, their involvement must be substantive and their insights actively integrated into the development process.
Tokenistic practices not only undermine the value of diverse perspectives but also negatively affect the mental health of participants. Studies have shown that tokenism can lead to feelings of isolation, stress, and even burnout among those who are included in superficial ways (Simmons, Umphress, & Watkins, 2019). AI developers must ensure that their inclusion efforts are authentic and not driven by a desire to simply appear diverse.
Alharbi et al. (2023) provide a valuable framework for understanding how different models of disability influence the design and outcomes of AI technologies. By examining the medical, social, relational, and cultural models, we can develop AI systems that are more inclusive and responsive to the diverse needs of people with disabilities. However, it is crucial to avoid tokenism and ensure that the participation of disabled individuals in AI design is meaningful and genuine.
By embracing disability-inclusive design, we can create AI systems that are not only innovative but also equitable, ensuring that everyone, regardless of ability, can benefit from advances in AI technology.
References:
Alharbi, R., Hickman, L., Hochheiser, H., Newman-Griffis, D., & Rauchberg, J. S. (2023). *Definition drives design: Disability models and mechanisms of bias in AI technologies*. First Monday.
Gautam, G. (2020). *Disability in India and what you can do about it—Part 5: Religion and society*. Medium.
Gillespie, C. (2022). *Tokenism isn’t the way to achieve diversity and inclusion*. Health.
Twardowski, A. (2022). *Cultural model of disability – origins, assumptions, advantages*.