Learning to Guide Human Experts via Personalized Large Language Models

Debodeep Banerjee, Stefano Teso, Andrea Passerini

arXiv.org Artificial Intelligence 

Consider the problem of diagnosing lung pathologies from an X-ray scan. This task cannot be fully automated for safety reasons: human supervision is required at some step of the process. At the same time, the sensitivity of the decision makes it difficult for human experts to tackle alone, especially under time pressure. High-stakes tasks like this are natural candidates for hybrid decision making (HDM) approaches that leverage AI technology to support human decision makers, improving decision quality and lowering cognitive effort without compromising control. Most current approaches to HDM rely on a learning to defer (LTD) setup, in which a machine learning model first assesses whether a decision can be taken autonomously (i.e., it is either safe or can be answered with confidence) and defers it to a human partner whenever this is not the case [Madras et al., 2018, Mozannar and Sontag, 2020, Keswani et al., 2022, Verma and Nalisnick, 2022, Liu et al., 2022]. Other forms of HDM, like learning to complement [Wilder et al., 2021], prediction under human assistance [De et al., 2020], and algorithmic triage [Raghu et al., 2019, Okati et al., 2021] follow a similar pattern.
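To make the LTD pattern concrete, here is a minimal Python sketch of the predict-or-defer decision. It uses simple confidence thresholding as the deferral rule, which is an illustrative simplification: the cited works learn the deferral policy jointly with the classifier rather than fixing a threshold. The function name, threshold value, and example inputs are all hypothetical.

```python
import numpy as np

def ltd_decision(probs: np.ndarray, threshold: float = 0.9):
    """Confidence-based deferral: predict autonomously when the model's
    top-class probability clears the threshold, otherwise defer to a human.

    probs: array of shape (n_classes,) holding the model's predictive
    distribution for one input (e.g., one X-ray scan).
    Returns ("predict", label) or ("defer", None).
    """
    confidence = probs.max()
    if confidence >= threshold:
        # Safe enough to decide in autonomy.
        return "predict", int(probs.argmax())
    # Not confident enough: hand the case to the human expert.
    return "defer", None

# Hypothetical usage: the model is only 62% sure of class 1, which is
# below the threshold, so the case is routed to the expert.
action, label = ltd_decision(np.array([0.38, 0.62]), threshold=0.9)
print(action, label)  # -> defer None
```

In a learned LTD system, the hard-coded threshold above would be replaced by a rejector trained together with the classifier, so that deferral accounts for both the model's and the human's expected error on each case.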
