Towards Human-AI Complementarity in Matching Tasks
Adrian Arnaiz-Rodriguez, Nina Corvelo Benz, Suhas Thejaswi, Nuria Oliver, Manuel Gomez-Rodriguez
Data-driven algorithmic matching systems promise to help human decision makers make better matching decisions in a wide variety of high-stakes application domains, such as healthcare and social service provision. However, existing systems are not designed to achieve human-AI complementarity: decisions made by a human using an algorithmic matching system are not necessarily better than those made by the human or by the algorithm alone. Our work aims to address this gap. To this end, we propose collaborative matching (comatch), a data-driven algorithmic matching system that takes a collaborative approach: rather than making all the matching decisions for a matching task like existing systems, it selects only the decisions that it is the most confident in, deferring the rest to the human decision maker. In the process, comatch optimizes how many decisions it makes and how many it defers to the human decision maker to provably maximize performance. We conduct a large-scale human subject study with 800 participants to validate the proposed approach. The results demonstrate that the matching outcomes produced by comatch outperform those generated by either the human participants or algorithmic matching on their own. The data gathered in our human subject study and an implementation of our system are available as open source at https://github.com/Networks-Learning/human-AI-complementarity-matching.
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Spain > Valencian Community > Alicante Province > Alicante (0.04)
- (2 more...)
- Government (1.00)
- Health & Medicine > Consumer Health (0.46)
- Health & Medicine > Health Care Providers & Services (0.34)
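The selective-deferral idea in the comatch abstract can be illustrated with a minimal sketch: score all candidate pairs, keep only the highest-confidence matches, and defer the rest to the human. The greedy matcher, the score matrix, and the fixed `keep_k` budget below are hypothetical simplifications for illustration, not the authors' actual optimization procedure.

```python
import numpy as np

def deferral_matching(scores: np.ndarray, keep_k: int):
    """Greedily match rows to columns by score, then keep only the
    keep_k most confident matches and defer the rest to a human.
    scores[i, j] is the model's estimated quality of pairing i with j.
    Illustrative sketch, not the comatch algorithm itself."""
    n, m = scores.shape
    pairs = []
    used_rows, used_cols = set(), set()
    # Greedy maximum-score matching: repeatedly take the best unused pair.
    for i, j in sorted(np.ndindex(n, m), key=lambda ij: -scores[ij]):
        if i not in used_rows and j not in used_cols:
            pairs.append((i, j, scores[i, j]))
            used_rows.add(i)
            used_cols.add(j)
    # Keep the keep_k highest-scoring decisions; defer the remainder.
    pairs.sort(key=lambda p: -p[2])
    algorithmic = [(i, j) for i, j, _ in pairs[:keep_k]]
    deferred = [(i, j) for i, j, _ in pairs[keep_k:]]
    return algorithmic, deferred
```

In the paper's framing, `keep_k` is not fixed by hand: the system optimizes the split between algorithmic and deferred decisions to maximize expected team performance.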
Supporting Data-Frame Dynamics in AI-assisted Decision Making
Chengbo Zheng, Tim Miller, Alina Bialkowski, H. Peter Soyer, Monika Janda
High-stakes decision making often requires a continuous interplay between evolving evidence and shifting hypotheses, a dynamic that is not well supported by current AI decision support systems. In this paper, we introduce a mixed-initiative framework for AI-assisted decision making that is grounded in the data-frame theory of sensemaking and the evaluative AI paradigm. Our approach enables both humans and AI to collaboratively construct, validate, and adapt hypotheses. We demonstrate our framework with an AI-assisted skin cancer diagnosis prototype that leverages a concept bottleneck model to facilitate interpretable interactions and dynamic updates to diagnostic hypotheses.
- Oceania > Australia > Queensland > Brisbane (0.06)
- North America > United States > Virginia > Fairfax County > McLean (0.04)
- North America > United States > California > Ventura County > Thousand Oaks (0.04)
- Asia > Middle East > Jordan (0.04)
- Health & Medicine > Therapeutic Area > Dermatology (0.69)
- Health & Medicine > Therapeutic Area > Oncology > Skin Cancer (0.36)
Conformal Prediction and Human Decision Making
Jessica Hullman, Yifan Wu, Dawei Xie, Ziyang Guo, Andrew Gelman
Methods to quantify uncertainty in predictions from arbitrary models are in demand in high-stakes domains like medicine and finance. Conformal prediction has emerged as a popular method for producing a set of predictions with specified average coverage, in place of a single prediction and confidence value. However, the value of conformal prediction sets to assist human decisions remains elusive due to the murky relationship between coverage guarantees and decision makers' goals and strategies. How should we think about conformal prediction sets as a form of decision support? We outline a decision theoretic framework for evaluating predictive uncertainty as informative signals, then contrast what can be said within this framework about idealized use of calibrated probabilities versus conformal prediction sets. Informed by prior empirical results and theories of human decisions under uncertainty, we formalize a set of possible strategies by which a decision maker might use a prediction set. We identify ways in which conformal prediction sets and posthoc predictive uncertainty quantification more broadly are in tension with common goals and needs in human-AI decision making. We give recommendations for future research in predictive uncertainty quantification to support human decision makers.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Minnesota (0.04)
- Europe > France (0.04)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
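The prediction sets the abstract discusses are typically built with split conformal prediction. The sketch below is the standard textbook construction (not code from the paper), using one minus the true-class probability as the nonconformity score:

```python
import numpy as np

def conformal_set(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification.
    cal_probs: (n, K) predicted class probabilities on a held-out
    calibration set; cal_labels: (n,) true labels; test_probs: (K,)
    probabilities for a new input. Returns a set of labels that covers
    the true label with probability >= 1 - alpha on average."""
    n = len(cal_labels)
    # Nonconformity score: one minus the probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected quantile level.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    if level > 1:  # too few calibration points: fall back to all labels
        return list(range(len(test_probs)))
    q = np.quantile(scores, level, method="higher")
    # Include every label whose score would fall below the threshold.
    return [k for k, p in enumerate(test_probs) if 1.0 - p <= q]
```

Note the guarantee is marginal (on average over calibration and test draws), which is exactly the gap between coverage and decision-maker goals that the paper interrogates.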
Confounding-Robust Policy Improvement with Human-AI Teams
Human-AI collaboration has the potential to transform various domains by leveraging the complementary strengths of human experts and Artificial Intelligence (AI) systems. However, unobserved confounding can undermine the effectiveness of this collaboration, leading to biased and unreliable outcomes. In this paper, we propose a novel solution to address unobserved confounding in human-AI collaboration by employing the marginal sensitivity model (MSM). Our approach combines domain expertise with AI-driven statistical modeling to account for potential confounders that may otherwise remain hidden. We present a deferral collaboration framework for incorporating the MSM into policy learning from observational data, enabling the system to control for the influence of unobserved confounding factors. In addition, we propose a personalized deferral collaboration system to leverage the diverse expertise of different human decision-makers. By adjusting for potential biases, our proposed solution enhances the robustness and reliability of collaborative outcomes. The empirical and theoretical analyses demonstrate the efficacy of our approach in mitigating unobserved confounding and improving the overall performance of human-AI collaborations.
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > New York (0.04)
- Asia > Middle East > Jordan (0.04)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.93)
- Banking & Finance (0.68)
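The marginal sensitivity model used in the abstract bounds how far the true propensity of a logged action may deviate from its estimate. Below is a minimal sketch of the resulting worst-case and best-case inverse-propensity-weighted policy values under the standard MSM weight bounds; it is an illustrative construction, not the paper's deferral framework.

```python
import numpy as np

def msm_value_bounds(rewards, nominal_prop, lam):
    """Worst- and best-case IPW policy-value estimates under the marginal
    sensitivity model with parameter lam >= 1. nominal_prop[i] is the
    estimated propensity of the logged action on sample i; the true
    inverse-propensity weight is only known to lie in [lo_i, hi_i].
    Illustrative sketch, not the paper's full method."""
    inv = 1.0 / np.asarray(nominal_prop)
    lo = 1.0 + (inv - 1.0) / lam   # smallest weight allowed by the MSM
    hi = 1.0 + (inv - 1.0) * lam   # largest weight allowed by the MSM
    r = np.asarray(rewards, dtype=float)
    # Adversary shrinks positive contributions and inflates negative ones.
    lower = np.mean(np.where(r >= 0, lo * r, hi * r))
    upper = np.mean(np.where(r >= 0, hi * r, lo * r))
    return lower, upper
```

Setting `lam = 1.0` collapses both bounds to the plain IPW estimate, i.e., the no-unobserved-confounding case.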
Learning When to Advise Human Decision Makers
Artificial intelligence (AI) is increasingly used to support human decision making in high-stakes settings in which the human operator, rather than the AI algorithm, needs to make the final decision. For example, in the criminal justice system, algorithmic risk assessments are being used to assist judges in making pretrial release decisions and at sentencing and parole [20, 69, 65, 18]; in healthcare, AI algorithms are being used to assist physicians in assessing patients' risk factors and to target health inspections and treatments [63, 26, 77, 49]; and in human services, AI algorithms are being used to predict which children are at risk of abuse or neglect, in order to assist decisions made by child-protection staff [79, 16]. In such systems, decisions are often based on risk assessments, and statistical machine-learning algorithms' ability to excel at prediction tasks [60, 21, 34, 68, 62] is leveraged to provide predictions as advice to human decision makers [45]. For example, the decision judges make on whether it is safe to release a defendant before trial is based on their assessment of how likely the defendant is, if released, to violate the release terms, i.e., to commit another crime or to fail to appear in court for trial. For making such risk predictions, judges in the US are assisted by a "risk score" predicted for the defendant by a machine-learning algorithm [20, 69].
- North America > United States > Kentucky (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Asia > Middle East > Israel > Southern District > Eilat (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law > Criminal Law (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Government > Regional Government > North America Government > United States Government (0.93)
Statistical Tests for Replacing Human Decision Makers with Algorithms
Kai Feng, Han Hong, Ke Tang, Jingyuan Wang
This paper proposes a statistical framework with which artificial intelligence can improve human decision making. The performance of each human decision maker is first benchmarked against machine predictions; we then replace the decisions made by a subset of the decision makers with the recommendations of the proposed artificial intelligence algorithm. Using a large nationwide dataset of pregnancy outcomes and doctor diagnoses from pre-pregnancy checkups of reproductive-age couples, we experimented with both a heuristic frequentist approach and a Bayesian posterior loss function approach, with an application to abnormal birth detection. We find that, on a test dataset, our algorithm results in a higher overall true positive rate and a lower false positive rate than the diagnoses made by doctors alone. We also find that the diagnoses of doctors from rural areas are more frequently replaceable, suggesting that artificial intelligence-assisted decision making tends to improve precision more in less developed regions.
- North America > United States > New York (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Africa > Sub-Saharan Africa (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.67)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (0.45)
- Health & Medicine > Health Care Technology > Medical Record (0.45)
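The benchmarking step the abstract describes, comparing each doctor's accuracy against the machine's before deciding whom to replace, can be sketched with a simple frequentist rule. The two-proportion z-test and the 0/1 correctness inputs below are an illustrative heuristic, not the authors' exact test:

```python
import math

def replace_decision_maker(human_correct, machine_correct, alpha=0.05):
    """One-sided two-proportion z-test: recommend replacing the human
    decision maker with the algorithm if the machine's accuracy on the
    same cases is significantly higher. Inputs are 0/1 lists over a
    shared set of cases. Illustrative heuristic, not the paper's
    exact procedure."""
    n = len(human_correct)
    p_h = sum(human_correct) / n
    p_m = sum(machine_correct) / n
    # Pooled standard error for the difference in proportions.
    p = (p_h + p_m) / 2
    se = math.sqrt(2 * p * (1 - p) / n)
    if se == 0:
        return False
    z = (p_m - p_h) / se
    # One-sided p-value via the standard normal CDF.
    p_value = 0.5 * (1 - math.erf(z / math.sqrt(2)))
    return p_value < alpha
```

Since the human and machine judge the same cases, a paired test such as McNemar's would be the more careful choice; the unpaired z-test above keeps the sketch short.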