Collaborating Authors

 wachter


Actionable Counterfactual Explanations Using Bayesian Networks and Path Planning with Applications to Environmental Quality Improvement

Valero-Leal, Enrique, Larrañaga, Pedro, Bielza, Concha

arXiv.org Artificial Intelligence

Counterfactual explanations study what should have changed in order to get an alternative result, enabling end-users to understand machine learning mechanisms with counterexamples. Actionability is defined as the ability to transform the original case to be explained into a counterfactual one. We develop a method for actionable counterfactual explanations that, unlike predecessors, does not directly leverage training data. Rather, the data are used only to learn a density estimator, creating a search landscape in which to apply path planning algorithms to solve the problem, while masking the endogenous data, which can be sensitive or private. We put special focus on estimating the data density using Bayesian networks, demonstrating how their enhanced interpretability is useful in high-stakes scenarios in which fairness is a rising concern. Using a synthetic benchmark comprising 15 datasets, our proposal finds more actionable and simpler counterfactuals than the current state-of-the-art algorithms. We also test our algorithm with a real-world Environmental Protection Agency dataset, facilitating a more efficient and equitable study of policies to improve the quality of life in counties of the United States. Our proposal captures the interaction of variables, ensuring equity in decisions, as policies to improve certain domains of study (air, water quality, etc.) can be detrimental in others. In particular, the sociodemographic domain is often involved, where we find important variables related to the ongoing housing crisis that can potentially have a severe negative impact on communities.
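The core idea of the abstract (search a density landscape with a path-planning algorithm instead of querying the training data) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' method: a toy two-Gaussian mixture stands in for the Bayesian-network density estimator, a linear rule stands in for the black-box classifier, and Dijkstra's algorithm on a grid stands in for the path planner.

```python
import heapq, math

# Toy density: mixture of two Gaussians (an illustrative stand-in for
# the Bayesian-network density estimator described in the abstract).
def density(x, y):
    def g(mx, my):
        return math.exp(-((x - mx) ** 2 + (y - my) ** 2) / 0.02)
    return 0.5 * g(0.2, 0.2) + 0.5 * g(0.8, 0.8) + 1e-6

def classify(x, y):               # toy black-box model to be explained
    return int(x + y > 1.2)

N = 41                            # grid resolution over [0, 1]^2

def neighbors(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < N and 0 <= j + dj < N:
            yield i + di, j + dj

def counterfactual_path(start):
    # Dijkstra: the cost of entering a cell is -log(density), so the
    # cheapest path stays in high-density (plausible, actionable)
    # regions; the first cell of the desired class it reaches is the
    # counterfactual, and no training point is ever exposed.
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[(i, j)]:
            continue
        if classify(i / (N - 1), j / (N - 1)):   # desired class reached
            path = [(i, j)]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return path[::-1]
        for ni, nj in neighbors(i, j):
            nd = d - math.log(density(ni / (N - 1), nj / (N - 1)))
            if nd < dist.get((ni, nj), float("inf")):
                dist[(ni, nj)] = nd
                prev[(ni, nj)] = (i, j)
                heapq.heappush(pq, (nd, (ni, nj)))
    return None

path = counterfactual_path((4, 4))          # start near (0.1, 0.1), class 0
cf = tuple(c / (N - 1) for c in path[-1])   # counterfactual endpoint
```

Because the search cost is negative log-density, the returned path tends to route through the high-density modes rather than jumping straight across a low-density gap, which is what makes the resulting counterfactual actionable rather than merely close.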


Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse

Pawelczyk, Martin, Datta, Teresa, van-den-Heuvel, Johannes, Kasneci, Gjergji, Lakkaraju, Himabindu

arXiv.org Artificial Intelligence

As machine learning models are increasingly being employed to make consequential decisions in real-world settings, it becomes critical to ensure that individuals who are adversely impacted (e.g., loan denied) by the predictions of these models are provided with a means for recourse. While several approaches have been proposed to construct recourses for affected individuals, the recourses output by these methods either achieve low costs (i.e., ease of implementation) or robustness to small perturbations (i.e., noisy implementations of recourses), but not both, due to the inherent trade-offs between recourse costs and robustness. Furthermore, prior approaches do not provide end users with any agency over navigating these trade-offs. In this work, we address the above challenges by proposing the first algorithmic framework which enables users to effectively manage the recourse cost vs. robustness trade-off. More specifically, our framework Probabilistically ROBust rEcourse (PROBE) lets users choose the probability with which a recourse could get invalidated (recourse invalidation rate) if small changes are made to the recourse, i.e., the recourse is implemented somewhat noisily. To this end, we propose a novel objective function which simultaneously minimizes the gap between the achieved (resulting) and desired recourse invalidation rates, minimizes recourse costs, and also ensures that the resulting recourse achieves a positive model prediction. We develop novel theoretical results to characterize the recourse invalidation rates corresponding to any given instance w.r.t. the underlying model. Experimental evaluation with multiple real-world datasets demonstrates the efficacy of the proposed framework.

Machine learning (ML) models are increasingly being deployed to make a variety of consequential decisions in domains such as finance, healthcare, and policy.
Consequently, there is a growing emphasis on designing tools and techniques which can provide recourse to individuals who have been adversely impacted by the predictions of these models (Voigt & Von dem Bussche, 2017).
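The invalidation-rate idea in the abstract can be sketched for the simplest case. All specifics below are assumptions for illustration (a linear model, Gaussian implementation noise, a hand-picked weight vector and noise scale, and a crude line search), not PROBE's actual objective or optimizer; the only borrowed fact is that for a linear score and Gaussian noise the probability of flipping back to the negative class has a closed Gaussian-CDF form.

```python
import math

# Illustrative linear model: positive prediction iff w.x + b > 0.
w, b = [1.0, 1.0], -3.0
sigma = 0.3                       # assumed scale of implementation noise

def margin(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def invalidation_rate(x):
    # For a linear score with noise eps ~ N(0, sigma^2 I),
    # P(invalidated) = P(w.(x + eps) + b < 0)
    #               = Phi(-margin(x) / (sigma * ||w||)).
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    z = -margin(x) / (sigma * norm_w)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def recourse_for_target_rate(x0, target_rate, step=0.01):
    # Walk along w (the minimum-cost direction for a linear model)
    # just far enough that the invalidation rate drops to the
    # user-chosen target: a smaller target_rate buys robustness at
    # the price of a longer, costlier recourse.
    x = list(x0)
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    while invalidation_rate(x) > target_rate:
        x = [xi + step * wi / norm_w for xi, wi in zip(x, w)]
    return x

x0 = [0.5, 0.5]                    # denied individual: margin(x0) = -2
recourse = recourse_for_target_rate(x0, target_rate=0.05)
```

The sketch makes the trade-off in the abstract concrete: asking for a 5% invalidation rate forces the recourse well past the decision boundary (a positive margin with slack), whereas a bare minimum-cost recourse would sit on the boundary and be invalidated by noise about half the time.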


Should we be worried about AI's growing energy use?

New Scientist

Amid the many debates about the potential dangers of artificial intelligence, some researchers argue that an important concern is being overlooked: the energy used by computers to train and run large AI models. Alex de Vries at the VU Amsterdam School of Business and Economics warns that AI's growth is poised to make it a significant contributor to global carbon emissions. He estimates that if Google switched its whole search business to AI, it would end up using 29.3 terawatt hours per year – equivalent to the electricity consumption of Ireland, and almost double the company's total energy consumption of 15.4 terawatt hours in 2020. On one hand, there is good reason not to panic. Making that sort of switch is practically impossible, as it would require more than 4 million powerful computer chips known as graphics processing units (GPUs) that are currently in huge demand, with limited supply.


Artificial intelligence is infiltrating health care. We shouldn't let it make all the decisions.

MIT Technology Review

AI is already being used in health care. Some hospitals use the technology to help triage patients. Some use it to aid diagnosis, or to develop treatment plans. But the true extent of AI adoption is unclear, says Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK. "Sometimes we don't actually know what kinds of systems are being used," says Wachter.


CEnt: An Entropy-based Model-agnostic Explainability Framework to Contrast Classifiers' Decisions

Zini, Julia El, Mansour, Mohammad, Awad, Mariette

arXiv.org Artificial Intelligence

Current interpretability methods focus on explaining a particular model's decision through present input features. Such methods do not inform the user of the sufficient conditions that alter these decisions when they are not desirable. Contrastive explanations circumvent this problem by providing explanations of the form "If the feature $X>x$, the output $Y$ would be different". While different approaches have been developed to find contrasts, not all of these methods deal with mutability and attainability constraints. In this work, we present a novel approach to locally contrast the prediction of any classifier. Our Contrastive Entropy-based explanation method, CEnt, approximates a model locally by a decision tree to compute entropy information of different feature splits. A graph, G, is then built where contrast nodes are found through a one-to-many shortest path search. Contrastive examples are generated from the shortest path to reflect feature splits that alter model decisions while maintaining lower entropy. We perform local sampling on manifold-like distances computed by variational auto-encoders to reflect data density. CEnt is the first non-gradient-based contrastive method generating diverse counterfactuals that do not necessarily exist in the training data while satisfying immutability (e.g., race) and semi-immutability (e.g., age can only change in an increasing direction) constraints. Empirical evaluation on four real-world numerical datasets demonstrates CEnt's ability to generate counterfactuals that achieve better proximity rates than existing methods without compromising latency, feasibility, and attainability. We further extend CEnt to imagery data to derive visually appealing and useful contrasts between class labels on the MNIST and Fashion MNIST datasets. Finally, we show how CEnt can serve as a tool to detect vulnerabilities of textual classifiers.
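The entropy-scoring step at the heart of the abstract can be sketched in isolation. This is not CEnt (there is no surrogate tree, graph, or VAE here); it is only the underlying idea, under illustrative assumptions: sample a local neighborhood of the instance, score each candidate split "feature $f > t$" by its information gain on the black-box labels, and keep only splits whose far side would flip the decision. The toy model, sampling radius, and sample size are all made up.

```python
import math, random

random.seed(0)

def model(x):                      # toy black box: class 1 iff x0 + 2*x1 > 2
    return int(x[0] + 2 * x[1] > 2)

def entropy(labels):
    n = len(labels)
    h = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        h -= p * math.log2(p)
    return h

def best_contrastive_split(x, radius=1.0, n=400):
    # Sample a local neighborhood of x, label it with the black box,
    # then score every candidate split "feature f > t" by information
    # gain, keeping only splits whose opposite side has a majority
    # label different from model(x) -- i.e. crossing it is a contrast.
    X = [[xi + random.uniform(-radius, radius) for xi in x] for _ in range(n)]
    y = [model(p) for p in X]
    base, y0 = entropy(y), model(x)
    best = None
    for f in range(len(x)):
        for t in sorted(set(p[f] for p in X)):
            left = [yi for p, yi in zip(X, y) if p[f] <= t]
            right = [yi for p, yi in zip(X, y) if p[f] > t]
            if not left or not right:
                continue
            side = right if x[f] <= t else left      # the contrast side
            if max(set(side), key=side.count) == y0:
                continue                              # does not flip decision
            gain = base - (len(left) * entropy(left)
                           + len(right) * entropy(right)) / len(y)
            if best is None or gain > best[0]:
                best = (gain, f, t)
    return best                    # (information gain, feature, threshold)

gain, f, t = best_contrastive_split([0.5, 0.5])      # model(x) == 0 here
```

The highest-gain flipping split yields exactly the kind of statement the abstract describes ("if feature $f > t$, the output would be different"); CEnt proper then chains such splits through a shortest-path search, which this sketch omits.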


AI papers from ChatGPT fool scientists

#artificialintelligence

You may have heard the news of ChatGPT fooling professors. Recently, it bamboozled scientists with convincing AI-written papers. The reports come from a preprint posted on the bioRxiv server in December 2022. Researchers asked ChatGPT to create 50 abstracts based on several scientific sources, and found that medical researchers struggled to distinguish the fakes from the originals.


Abstracts written by ChatGPT fool scientists

#artificialintelligence

Scientists and publishing specialists are concerned that the increasing sophistication of chatbots could undermine research integrity and accuracy. An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December. Researchers are divided over the implications for science. "I am very worried," says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. "If we're now in a situation where the experts are not able to determine what's true or not, we lose the middleman that we desperately need to guide us through complicated topics," she adds. The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts.


AI experts question tech industry's ethical commitments

#artificialintelligence

From healthcare and education to finance and policing, artificial intelligence (AI) is becoming increasingly embedded in people's daily lives. Despite being posited by advocates as a dispassionate and fairer means of making decisions, free from the influence of human prejudice, the rapid development and deployment of AI has prompted concern over how the technology can be used and abused. These concerns include how it affects people's employment opportunities, its potential to enable mass surveillance, and its role in facilitating access to basic goods and services, among others. In response, the organisations that design, develop and deploy AI technologies – often with limited input from those most affected by its operation – have attempted to quell people's fears by setting out how they are approaching AI in a fair and ethical manner. Since around 2018, this has led to a deluge of ethical AI principles, guidelines, frameworks and declarations being published by both private organisations and government agencies around the world.


The challenge of making moral machines

#artificialintelligence

As applications for AIs proliferate, so do questions about ethical development and embedded bias. In the waning days of 2020, Timnit Gebru, an artificial intelligence (AI) ethicist at Google, submitted a draft of an academic paper to her employer. Gebru and her collaborators had analysed natural language processing (NLP), and specifically the data-intensive approach of training NLP artificial intelligences (AIs). Such AIs can accurately interpret documents produced by humans, and respond naturally to human commands or queries. In their study, the team found that the process of training an NLP AI requires immense resources and creates a considerable risk of embedding significant bias into the AI. That bias can lead to inappropriate or even harmful responses.


AI researcher says police tech suppliers are hostile to transparency

#artificialintelligence

Artificial intelligence (AI) researcher Sandra Wachter says that although the House of Lords inquiry into police technology "was a great step in the right direction" and succeeded in highlighting the major concerns around police AI and algorithms, the conflict of interest between criminal justice bodies and their suppliers could still hold back meaningful change. Wachter, who was invited to the inquiry as an expert witness, is an associate professor and senior research fellow at the Oxford Internet Institute who specialises in the law and ethics of AI. Speaking with Computer Weekly, Wachter said she is hopeful that at least some of the recommendations will be taken forward into legislation, but is worried about the impact of AI suppliers' hostility to transparency and openness. "I am worried about it mainly from the perspective of intellectual property and trade secrets," she said. "There is an unwillingness or hesitation in the private sector to be completely open about what is actually going on for various reasons, and I think that might be a barrier to implementing the inquiry's recommendations."