TY - GEN
T1 - Enhancing Fairness, Justice and Accuracy of Hybrid Human-AI Decisions by Shifting Epistemological Stances
AU - Daish, Peter
AU - Roach, Matt
AU - Dix, Alan
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - From applications in automating credit to aiding judges in presiding over cases of recidivism, deep-learning-powered AI systems are becoming embedded in high-stakes decision-making processes, as either primary decision-makers or supportive assistants to humans in a hybrid decision-making context, with the aim of improving the quality of decisions. However, the criteria currently used to assess a system’s ability to improve hybrid decisions are driven by a utilitarian desire to optimise accuracy through a phenomenon known as ‘complementary performance’. This desire puts the design of hybrid decision-making at odds with critical subjective concepts that affect the perception and acceptance of decisions, such as fairness. Fairness, as a subjective notion, often has a competitive relationship with accuracy; as such, driving complementary behaviour with a utilitarian belief risks driving unfairness in decisions. It is our position that shifting the epistemological stances taken in the research and design of human-AI environments is necessary to incorporate the relationship between fairness and accuracy into the notion of ‘complementary behaviour’, in order to observe ‘enhanced’ hybrid human-AI decisions.
AB - From applications in automating credit to aiding judges in presiding over cases of recidivism, deep-learning-powered AI systems are becoming embedded in high-stakes decision-making processes, as either primary decision-makers or supportive assistants to humans in a hybrid decision-making context, with the aim of improving the quality of decisions. However, the criteria currently used to assess a system’s ability to improve hybrid decisions are driven by a utilitarian desire to optimise accuracy through a phenomenon known as ‘complementary performance’. This desire puts the design of hybrid decision-making at odds with critical subjective concepts that affect the perception and acceptance of decisions, such as fairness. Fairness, as a subjective notion, often has a competitive relationship with accuracy; as such, driving complementary behaviour with a utilitarian belief risks driving unfairness in decisions. It is our position that shifting the epistemological stances taken in the research and design of human-AI environments is necessary to incorporate the relationship between fairness and accuracy into the notion of ‘complementary behaviour’, in order to observe ‘enhanced’ hybrid human-AI decisions.
KW - Epistemologically Driven Hybrid Human-AI Environment Design
KW - Human-AI Fairness
KW - Human-AI interaction
KW - Justice and Accuracy
UR - http://www.scopus.com/inward/record.url?scp=85215596562&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-74627-7_25
DO - 10.1007/978-3-031-74627-7_25
M3 - Conference contribution
AN - SCOPUS:85215596562
SN - 9783031746260
T3 - Communications in Computer and Information Science
SP - 323
EP - 331
BT - Machine Learning and Principles and Practice of Knowledge Discovery in Databases - International Workshops of ECML PKDD 2023, Revised Selected Papers
A2 - Meo, Rosa
A2 - Silvestri, Fabrizio
PB - Springer Science and Business Media Deutschland GmbH
T2 - Joint European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2023
Y2 - 18 September 2023 through 22 September 2023
ER -