The Impact of Human-AI Interaction on Discrimination
A large case study on human oversight of AI-based decision support systems in lending and hiring scenarios.
This large-scale study assesses the impact of human oversight on countering discrimination in AI-aided decision-making for sensitive tasks. We use a mixed-methods approach in a sequential explanatory design: a quantitative experiment with HR and banking professionals in Italy and Germany (N=1411) is followed by qualitative analyses through interviews and workshops with volunteer participants from the experiment, fair-AI experts, and policymakers. We find that human overseers are equally likely to follow advice from a generic AI that is discriminatory as from an AI that is programmed to be fair. Human oversight does not prevent discrimination when the generic AI is used. Choices made when the fair AI is used are less gender-biased but are still affected by participants' biases. Interviews with participants show that they prioritize their company's interests over fairness and highlight the need for guidance on overriding AI recommendations. Fair-AI experts emphasize the need for a comprehensive, systemic approach when designing oversight systems.
GAUDEUL Alexia;
ARRIGONI Ottla;
CHARISI Vasiliki;
ESCOBAR PLANAS Marina;
HUPONT TORRES Isabelle;
2025-01-08
Publications Office of the European Union
JRC139127
ISBN 978-92-68-22566-0 (online)
ISSN 1831-9424 (online)
EUR 40136
OP KJ-01-24-180-EN-N (online)
https://publications.jrc.ec.europa.eu/repository/handle/JRC139127
DOI 10.2760/0189570 (online)