EMAC 2023 Annual

When Humans Collaborate with AI: Issues of Accountability and Scapegoating

Published: May 24, 2023


Tripat Gill, Wilfrid Laurier University; Ammara Mahmood, Wilfrid Laurier University; Chatura Ranaweera, Wilfrid Laurier University; Ali Anwar, Wilfrid Laurier University


Firms are increasingly adopting AI-human teams for customer-facing decisions: an AI algorithm provides the initial appraisal, which human managers then use to make judgments. However, the marketing literature is confined to the paradigm of “algorithms replacing humans”. We address this gap and investigate (a) accountability judgments (blame/credit for negative/positive outcomes), (b) the ethical concerns of scapegoating (a human deflecting blame to AI) and capitalizing (a human claiming more credit than AI), and (c) support for AI-human teams. In four experimental studies in the contexts of medical triage, tax filing, and student grading, we found that the human agent was assigned higher blame (credit) for negative (positive) outcomes when they overrode (vs. followed) the AI appraisal. While we found no evidence of scapegoating for negative outcomes, we found human agents capitalizing (claiming more credit than AI) for positive outcomes. Participants were most supportive of AI-human teams after positive outcomes and when the AI and human assessments were concordant. Negative outcomes or conflicting assessments lowered support for AI-human teams.