Artificial Intelligence and Intersectionality

Europe of Knowledge

Inga Ulnicane

Behind the hype about the numerous benefits of Artificial Intelligence (AI), uncomfortable questions are intensifying about its problematic social impacts on issues such as justice, fairness and equality. While it has been argued that AI has the potential to eliminate human bias, growing evidence suggests quite the opposite – that AI amplifies and exacerbates gender, racial, ethnic and other stereotypes. Widely discussed examples of biased AI applications include hiring algorithms that discriminate against female candidates, facial recognition that performs poorly on black and female faces, and obedient, subservient digital female voice assistants. At the same time, it is very difficult to find examples where AI has helped to detect, reduce or eliminate human bias.

In two recent articles (Ulnicane 2024; Ulnicane and Aden 2023), I analyse how AI documents frame concerns about bias and inequality in AI, and what they recommend for tackling them. For this analysis, I use an intersectional lens to highlight how the interaction between multiple identities – gender, race, class and others – leads to the marginalization, exclusion and discrimination of certain social groups.

 

Social vs technical framing of bias in AI

Bias is one of the key concerns in policy, media and public discussions about AI. While bias in AI is often presented as a technical issue, it is a multifaceted phenomenon with social, technical, political, cultural and historical dimensions. To make sense of discussions about bias in AI, in our recent article (Ulnicane and Aden 2023) we distinguish two competing frames: a technical one and a social (socio-technical) one.

According to the technical frame, AI is objective and neutral and can help to detect and eliminate bias. If bias in AI occurs, it is treated as a glitch that can be addressed with technical measures; AI is offered as a technical fix for human bias. While this technical frame has been quite popular, it has been challenged by an alternative social framing. According to the social frame, AI amplifies and exacerbates human biases and reflects deep-rooted historical and systemic inequalities and power asymmetries. Bias cannot simply be fixed with more technology but requires a systemic and holistic approach. We suggest approaching bias in AI as a complex and uncertain ‘wicked problem’. Tackling such a problem calls for a broader strategy that combines technical and social actions, based on wide-ranging collaborations that include affected communities.
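To make the contrast concrete, below is a minimal, purely illustrative Python sketch of the kind of check the technical frame relies on: measuring a demographic parity gap in invented hiring outcomes. All data, group labels and numbers are hypothetical and are not drawn from the articles discussed here.

```python
# Illustrative sketch only: a simple "technical frame" bias audit on
# invented hiring outcomes. Nothing here comes from the articles above.

def selection_rate(decisions):
    """Share of applicants in a group who received a positive decision."""
    return sum(decisions) / len(decisions)

# Hypothetical outcomes (1 = hired, 0 = rejected) for two applicant groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # majority-group applicants
group_b = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]  # minority-group applicants

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# Demographic parity gap: a common, purely technical fairness metric.
print(f"Selection rates: A {rate_a:.0%}, B {rate_b:.0%}, gap {rate_a - rate_b:.0%}")

# The 'four-fifths rule' used in US employment practice flags a problem
# when one group's selection rate falls below 80% of the other's.
if rate_b / rate_a < 0.8:
    print("Disparate impact flagged under the four-fifths rule.")
```

A check like this can flag a disparity, but it says nothing about why the underlying data encode that disparity in the first place – which is precisely why the social frame insists that bias cannot simply be engineered away.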

 

Intersectionality and AI: concerns and agenda for tackling them

In my recent article on intersectionality and AI (Ulnicane 2024), I examine four high-profile reports on AI and gender, focusing on how they frame concerns and recommendations for action. The reports highlight the systemic nature of equality issues in AI, where the diversity crisis among AI developers and founders leads to the building of biased AI systems, creating a negative feedback loop and a vicious cycle. Concerns are growing that AI might reverse the progress towards equality made in previous decades.

The lack of women and minorities in computing is not a new problem. There have been many diversity initiatives in computing over the past decades, but they have not led to positive change. Sometimes these initiatives have even resulted in a decline in diversity, because they have not sufficiently addressed the underlying cultural and structural issues in the tech sector, which include harassment, discrimination, stereotypes, unfair pay and a lack of promotion opportunities. Despite tech companies' acceptance of diversity rhetoric, it is often poorly understood and has even experienced pushback.

The reports highlight the urgency of the diversity problem in AI. They argue for a broad approach that goes beyond simply increasing the numbers of women and minorities. Instead, the focus should be on shaping culture, power and opportunities to exert influence. Furthermore, it is necessary to involve perspectives from multiple disciplines, sectors and groups. At the same time, it is important to avoid ‘participation washing’, where the participation of a minority representative is used to legitimize a project.

While intersectionality provides an illuminating perspective on some of the key concerns in AI, in an AI landscape dominated by economic issues it can be perceived as a niche perspective concerning mainly women and minorities. It could be enriching to use intersectionality to reimagine AI in more inclusive and participatory ways.

 

References:

Ulnicane, I. (2024) Intersectionality in Artificial Intelligence: Framing Concerns and Recommendations for Action. Social Inclusion, 12: 7543. https://doi.org/10.17645/si.7543

Ulnicane, I. & Aden, A. (2023) Power and politics in framing bias in artificial intelligence policy. Review of Policy Research, 40(5): 665–687. https://doi.org/10.1111/ropr.12567