Collaborating Authors

 Banerjee, Debodeep


Learning To Guide Human Decision Makers With Vision-Language Models

arXiv.org Artificial Intelligence

There is increasing interest in developing AIs for assisting human decision-making in high-stakes tasks, such as medical diagnosis, for the purpose of improving decision quality and reducing cognitive strain. Mainstream approaches team up an expert with a machine learning model to which safer decisions are offloaded, thus letting the former focus on cases that demand their attention. This separation-of-responsibilities setup, however, is inadequate for high-stakes scenarios. On the one hand, the expert may end up over-relying on the machine's decisions due to anchoring bias, thus losing the human oversight that is increasingly being required by regulatory agencies to ensure trustworthy AI. On the other hand, the expert is left entirely unassisted on the (typically hardest) decisions on which the model abstained. As a remedy, we introduce learning to guide (LTG), an alternative framework in which - rather than taking control from the human expert - the machine provides guidance useful for decision making, and the human is entirely responsible for coming up with a decision. In order to ensure guidance is interpretable and task-specific, we develop SLOG, an approach for turning any vision-language model into a capable generator of textual guidance by leveraging a modicum of human feedback. Our empirical evaluation highlights the promise of SLOG on a challenging, real-world medical diagnosis task.
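To make the LTG division of labor concrete, the sketch below illustrates the protocol the abstract describes: the model emits only textual guidance, and the human alone produces the decision. This is a minimal Python illustration; the callables vlm_generate and expert_decide and the prompt text are hypothetical placeholders, not SLOG's actual interface.

def ltg_step(vlm_generate, image, expert_decide):
    """Machine provides guidance; the human retains full decision control."""
    # The model's only output is task-specific text, never a label.
    # (vlm_generate is a hypothetical wrapper around a vision-language model.)
    guidance = vlm_generate(image, "List the findings in this scan that matter for diagnosis.")
    # The expert sees both the input and the guidance and makes the final call.
    return expert_decide(image, guidance)

Note that, unlike deferral, control never transfers to the machine here, which is exactly the property the abstract argues preserves human oversight.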


Learning to Guide Human Experts via Personalized Large Language Models

arXiv.org Artificial Intelligence

Consider the problem of diagnosing lung pathologies based on the outcome of an X-ray scan. This task cannot be fully automated, for safety reasons, necessitating human supervision at some step of the process. At the same time, it is difficult for human experts to tackle it alone due to how sensitive the decision is, especially under time pressure. High-stakes tasks like this are natural candidates for hybrid decision making (HDM) approaches that support human decision makers by leveraging AI technology for the purpose of improving decision quality and lowering cognitive effort, without compromising control. Most current approaches to HDM rely on a learning to defer (LTD) setup, in which a machine learning model first assesses whether a decision can be taken in autonomy - i.e., it is either safe or can be answered with confidence - and defers it to a human partner whenever this is not the case [Madras et al., 2018, Mozannar and Sontag, 2020, Keswani et al., 2022, Verma and Nalisnick, 2022, Liu et al., 2022]. Other forms of HDM, like learning to complement [Wilder et al., 2021], prediction under human assistance [De et al., 2020], and algorithmic triage [Raghu et al., 2019, Okati et al., 2021] follow a similar pattern.
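For illustration, here is a minimal sketch of the confidence-based deferral pattern that LTD setups follow, assuming a scikit-learn-style classifier. The function name, the 0.9 threshold, and the ask_human callable are hypothetical, not taken from any of the cited papers.

import numpy as np

def ltd_decide(model, x, ask_human, threshold=0.9):
    """Decide autonomously when confident; otherwise defer to the human."""
    probs = model.predict_proba([x])[0]   # class probabilities for one input
    if probs.max() >= threshold:          # confident: the machine decides
        return int(np.argmax(probs)), "machine"
    return ask_human(x), "human"          # uncertain: defer to the expert

Under this pattern, the hardest inputs are exactly the ones handed back to the human with no assistance, which is the gap that learning to guide aims to fill.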