Reimagining intersectional AI fairness: A decolonial feminist approach to mitigating auto-essentialization in facial recognition technologies

Bibliographic Details
Main Author: Domingo, Rosallia M.
Format: text
Language: English
Published: Animo Repository, 2024
Online Access:https://animorepository.dlsu.edu.ph/etdd_philo/14
https://animorepository.dlsu.edu.ph/context/etdd_philo/article/1014/viewcontent/2024_Domingo_Full_text.pdf
Institution: De La Salle University
Description
Summary: Facial recognition technologies (FRT), widely used in applications such as identity verification, surveillance, and access control, often exhibit algorithmic bias, resulting in the inaccurate identification of women, people of color, and gender-nonconforming individuals. The prevalence of such biases raises concerns about the fairness of facial recognition systems. Current AI fairness methods have attempted to address these issues through an intersectional framework, but the problem of auto-essentialization adds a critical dimension to the bias problem in FRT. Auto-essentialization refers to the perpetuation of racial and gender inequalities through automated technologies rooted in historical and colonial notions of difference, particularly concerning racialized gender. This deep-seated historical bias challenges the dominant algorithmic “de-biasing” approach of current AI fairness methods. If these persistent historical systems of inequality are not recognized and addressed, they may perpetuate and reinforce bias in FRT. The objective of this study is therefore twofold: first, to analyze algorithmic bias in FRT as a process of auto-essentialization rooted in historical colonial projects of gendered and racialized classification; and second, to explore the epistemological and ethical limitations of the prevailing intersectional AI fairness approach in addressing racial and gender bias within FRT as an auto-essentialist tool. In pursuing these objectives, the study emphasizes the critical need for a reimagined intersectional AI fairness approach to mitigating bias in FRT, one that incorporates decolonial feminist perspectives into existing AI fairness frameworks to comprehensively address the problem of auto-essentialization underlying racial and gender bias in FRT.