prudence
Help! My Husband Is Floundering Under The Weight of All the Chores I Refuse to Do.
This week, we're helping you round out your summer reading lists by asking some of our favorite authors to step in as Prudie for the day and give you advice. This is part of our Guest Prudie series. Today's columnist is American author and "King of Horror" Stephen King, who's renowned for his horror, supernatural fiction, suspense, crime, science-fiction, and fantasy novels, including It, The Shining, Carrie, and many more. His iconic books and stories have been adapted into numerous films and television series--including The Boogeyman, which was released just last month. His new novel, Holly, hits shelves this coming September.
Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support
In this paper, we argue for a paradigm shift from the current model of explainable artificial intelligence (XAI), which may be counter-productive to better human decision making. In early decision support systems, we assumed that we could give people recommendations and that they would consider them, and then follow them when required. However, research found that people often ignore recommendations because they do not trust them; or perhaps even worse, people follow them blindly, even when the recommendations are wrong. Explainable artificial intelligence mitigates this by helping people to understand how and why models give certain recommendations. However, recent research shows that people do not always engage with explainability tools enough to help improve decision making. The assumption that people will engage with recommendations and explanations has proven to be unfounded. We argue this is because we have failed to account for two things. First, recommendations (and their explanations) take control from human decision makers, limiting their agency. Second, giving recommendations and explanations does not align with the cognitive processes employed by people making decisions. This position paper proposes a new conceptual framework called Evaluative AI for explainable decision support. This is a machine-in-the-loop paradigm in which decision support tools provide evidence for and against decisions made by people, rather than provide recommendations to accept or reject. We argue that this mitigates issues of over- and under-reliance on decision support tools, and better leverages human expertise in decision making.
- North America > United States > New York > New York County > New York City (0.14)
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > United States > Indiana (0.04)
- (3 more...)
- Health & Medicine > Therapeutic Area > Oncology (0.46)
- Health & Medicine > Therapeutic Area > Dermatology (0.46)
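The "machine-in-the-loop" contract the abstract describes can be made concrete: the tool surfaces evidence for and against each option the human is weighing, instead of emitting a single accept/reject recommendation. A minimal sketch of that output shape, with all names and weights purely illustrative (they are not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    feature: str     # human-readable description of the evidence
    direction: str   # "for" or "against" the option
    weight: float    # assumed model-derived strength; illustrative only

def evidence_report(options: dict) -> str:
    """Render per-option evidence rather than a single verdict,
    leaving the final decision to the human expert."""
    lines = []
    for option, items in options.items():
        lines.append(f"Option: {option}")
        for e in sorted(items, key=lambda e: -e.weight):
            lines.append(f"  {e.direction:>7}: {e.feature} (weight {e.weight:.2f})")
    return "\n".join(lines)

# Hypothetical clinical example: two candidate diagnoses, each with
# evidence pointing both ways, presented side by side.
report = evidence_report({
    "Diagnosis A": [Evidence("marker X elevated", "for", 0.8),
                    Evidence("no family history", "against", 0.3)],
    "Diagnosis B": [Evidence("imaging inconclusive", "against", 0.5)],
})
print(report)
```

The design choice here is that no option is flagged as "recommended," which is the paper's mechanism for avoiding both over-reliance (blind acceptance) and under-reliance (blanket distrust).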
We Asked the Scary-Good Chatbot to Answer an Advice Question. Could It Fool You?
We decided to have some fun with ChatGPT, the scary-good chatbot from OpenAI that's been garnering headlines. We fed it a fake letter, cobbled together with common tropes, and asked it to reply in a few different ways. I'm recently engaged and in the throes of planning my early 2024 wedding. My handsome fiancé, the timing, my mother's own hand-me-down ring--it's all felt like a perfect fairytale. Until I heard what my mother-in-law has in store for us.
Rationally Biased Learning
When we assess pros and cons in decision making, we weigh losses more than gains (Kahneman and Tversky (1979)). We are more frightened by a snake or a spider than by a passing car or an electrical outlet. Such human assessments are qualified as biases, because they depart from physical measurements or objective statistical estimates. Thus, there is "bias" when a behavior is not aligned with a given "rationality benchmark" (like expected utility theory), as documented in the "heuristics and biases" literature (Kahneman et al. (1982); Gilovich et al. (2002)). However, if such biases are found consistently in human behavior, they must certainly have a reason. Some scholars (see (Gigerenzer (2004, 2008); Hutchinson and Gigerenzer (2005))) claim that those "so-called biases" were in
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > France > Occitanie > Haute-Garonne > Toulouse (0.04)
- North America > United States > Massachusetts > Middlesex County > Belmont (0.04)
- (4 more...)
Your AI May Be Ethical, But Is It Prudent?
Do you remember a story about an irate father who marched into a Target to complain that his teenage daughter received maternity coupons, only to find out a few days later that she was pregnant? The story came from a 2012 New York Times article, and it signaled the arrival of predictive analytics. Despite reasonable skepticism over whether the story was real, it helped initiate an ethical debate over consumer privacy that has only intensified. Today, we live in a world with more powerful predictive capabilities and more personal data to be leveraged. We've reached an era in which AI can do more than out a teenage pregnancy.
- Health & Medicine (1.00)
- Banking & Finance (0.73)
- Information Technology > Security & Privacy (0.68)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Data Science > Data Mining (0.76)
Prudence When Assuming Normality: an advice for machine learning practitioners
In a binary classification problem, the feature vector (predictor) is the input to a scoring function that produces a decision value (score), which is compared to a chosen threshold to yield a final class prediction (output). Although the normality assumption on the scoring function is important in many applications, it is sometimes severely violated even under the simple multinormal assumption on the feature vector. This article proves this result mathematically with a counterexample, advising practitioners to avoid blindly assuming normality. On the other hand, the article provides a set of experiments that illustrate some of the expected and well-behaved results of the Area Under the ROC Curve (AUC) under the multinormal assumption on the feature vector. Therefore, the message of the article is not to avoid the normality assumption on either the input feature vector or the output scoring function; rather, prudence is needed when adopting either assumption.
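The abstract's central point is easy to reproduce numerically: even when the feature vector is exactly multivariate normal, a nonlinear scoring function of it need not be normal at all. A small sketch (not the paper's counterexample; the quadratic score below is my own illustrative choice) contrasting a linear score, which is exactly Gaussian, with a quadratic score, which is chi-squared distributed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d = 5000, 3
# Feature vector satisfies the multinormal assumption exactly.
X = rng.multivariate_normal(np.zeros(d), np.eye(d), size=n)

# A linear scoring function of Gaussian features is itself Gaussian.
linear_score = X @ np.array([1.0, -0.5, 2.0])

# A quadratic scoring function ||x||^2 is chi-squared with d degrees
# of freedom -- skewed and decidedly non-normal.
quad_score = np.einsum('ij,ij->i', X, X)

# D'Agostino-Pearson normality test: small p-value => reject normality.
_, p_lin = stats.normaltest(linear_score)
_, p_quad = stats.normaltest(quad_score)
print(f"linear score p-value:    {p_lin:.3g}")
print(f"quadratic score p-value: {p_quad:.3g}")
```

With this sample size the quadratic score's normality is rejected overwhelmingly, while the linear score, being a true Gaussian, typically is not; this is exactly the kind of check the article recommends before assuming normality of a score.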