
When expectations shatter: Investigating integrity erosion in human and AI decision-makers after trust violations

Abstract: Integrity violations, such as discrimination, can severely damage trust in decision-makers. As Artificial Intelligence (AI) increasingly takes on decision-making roles in high-stakes domains, questions arise about whether these systems are judged and treated differently than humans when integrity violations occur. To examine this, we conducted three studies on how trust violations affect perceptions of integrity for AI (vs. human) decision-makers. Study 1 found that a trust violation caused significant declines in perceived integrity for both AI and human financial advisors, but the drop was steeper for humans. Study 2 extended these findings to a managerial context, showing that human managers experienced greater integrity losses than AI systems after a violation. Study 3 explored trust recovery, showing that apologies with internal attribution were effective for humans but ineffective for AI. These findings provide insights into trust recalibration in AI decision-making systems.

Keywords: Trust Violation; Trust Repair; Artificial Intelligence

Mingyu Li, The Hong Kong University of Science and Technology | mlidt@connect.ust.hk

Jungmin Choi, University of Cambridge | j.choi@jbs.cam.ac.uk

T. Bradford Bitterly, The Hong Kong University of Science and Technology | bbitterly@ust.hk

Melody Manchi Chao, The Hong Kong University of Science and Technology | mchao@ust.hk