Expanding Explainability: Towards Social Transparency in AI systems

arXiv.org Artificial Intelligence

As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially-situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algorithm-centered. We take a developmental step towards socially-situated XAI by introducing and exploring Social Transparency (ST), a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making. To explore ST conceptually, we conducted interviews with 29 AI users and practitioners grounded in a speculative design scenario. We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST's effect and implications at the technical, decision-making, and organizational level. The framework showcases how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective actions, and cultivate holistic explainability. Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.


Qualitative Investigation in Explainable Artificial Intelligence: A Bit More Insight from Social Science

arXiv.org Artificial Intelligence

This paper presents a focused analysis of human studies in explainable artificial intelligence (XAI) entailing qualitative investigation. We draw on the social science corpora of qualitative research to illustrate opportunities for making XAI human studies (those in which researchers used observations, interviews, focus groups, and/or questionnaires to capture qualitative data) more rigorous. We contextualize the presentation of the XAI contributions included in our analysis according to the components of rigor described in the qualitative research literature: 1) underlying theories or frameworks, 2) methodological approaches, 3) data collection methods, and 4) data analysis processes. The results of our analysis support calls from others in the XAI community advocating for collaboration with experts from social disciplines to bolster rigor and effectiveness in human studies.


Google tackles the black box problem with Explainable AI

#artificialintelligence

There is a problem with artificial intelligence. It can be amazing at churning through gigantic amounts of data to solve challenges that humans struggle with. But understanding how it makes its decisions is often very difficult, if not impossible. That means that when an AI model works, making further refinements is harder than it should be, and when it exhibits odd behaviour it can be hard to fix. But at an event in London this week, Google's cloud computing division pitched a new facility that it hopes will give it the edge on Microsoft and Amazon, which dominate the sector.


Explainable AI: what is it and who cares?

#artificialintelligence

In this Q&A on Explainable AI, Andrea Brennen speaks with In-Q-Tel's Peter Bronez about descriptive vs. prescriptive models, "white box" vs. "black box" explanation techniques, and why some models are easier to explain than others. Peter also discusses the reproducibility crisis in Psychology and why good experiment design is so important. Peter is a VP on the technical staff at IQT. Could you tell me about your experience with machine learning and AI? PETER: As an undergraduate, I studied econometrics and operations research, so my exposure to machine learning was in the context of designing models of the world that you could test mathematically -- basically, doing hypothesis testing using statistics. Afterwards, I worked at the Department of Defense and used a lot of the same techniques. From there, I went to the private sector and [worked on] social media and data mining in marketing applications, trying to create mathematical models to categorize people, activities, and messages in order to understand them better.


Why is explainable artificial intelligence a must for the enterprise? (EM360)

#artificialintelligence

Artificial intelligence (AI) is one of the most exciting technologies in the world right now. In particular, it's bringing to life ideas that once existed only in Hollywood films. However, it has also created polarised viewpoints. Many AI experts are working towards reaping its full potential, while others worry about creating a Black Mirror-esque reality. Perhaps the best way to meet in the middle is by exploring explainable AI.


IBM Research launches explainable AI toolkit

#artificialintelligence

IBM Research today introduced AI Explainability 360, an open source collection of state-of-the-art algorithms that use a range of techniques to explain AI model decision-making. The launch follows IBM's release a year ago of AI Fairness 360 for the detection and mitigation of bias in AI models. IBM is sharing its latest toolkit in order to increase trust and verification of artificial intelligence and help businesses that must comply with regulations to use AI, IBM Research fellow and responsible AI lead Saska Mojsilovic told VentureBeat in a phone interview. "That's fundamentally important, because we know people in organizations will not use or deploy AI technologies unless they really trust their decisions. And because we create infrastructure for a good part of this world, it is fundamentally important for us -- not because of our own internal deployments of AI or products that we might have in this space, but it's fundamentally important to create these capabilities because our clients and the world will leverage them," she said.
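
For readers unfamiliar with what such a toolkit automates, below is a minimal sketch of one common post-hoc technique, a global surrogate model. It is written with scikit-learn rather than the AI Explainability 360 API itself, and the dataset and model choices are illustrative assumptions, not IBM's example.

```python
# Minimal sketch of a post-hoc "global surrogate" explanation, one of the
# generic techniques that XAI toolkits package. Not the aix360 API; this
# only illustrates the idea using scikit-learn (assumed installed).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# 1. Train an opaque ("black box") model on the task.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2. Fit a small, human-readable tree to the black box's *predictions*
#    (not the original labels), so the tree approximates the model's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. The surrogate's rules serve as a global explanation of the black box.
print(export_text(surrogate, feature_names=list(data.feature_names)))

# 4. Report fidelity: how often the explanation agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2%}")
```

The fidelity check matters because a surrogate is only a useful explanation to the extent that it actually mimics the model it claims to explain; production toolkits offer many alternatives (contrastive, example-based, rule-based) beyond this simple sketch.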


Explaining Machine Learning and Artificial Intelligence in Collections to the Regulator

#artificialintelligence

There is significant growth in the application of machine learning (ML) and artificial intelligence (AI) techniques within collections, as they have proven to create countless efficiencies: from enhancing the results of predictive models to powering AI bots that interact with customers, leaving staff free to address more complex issues. At present, one of the major constraining factors to using this advanced technology is the difficulty of explaining the decisions made by these solutions to regulators. This regulatory focus is unlikely to diminish, especially with the examples of AI bias that continue to be uncovered across various applications, resulting in discriminatory behaviors towards different groups of people. While collections-specific regulations remain somewhat undefined on the subject, major institutions are resorting to their broader policy, namely that any decision needs to be fully explainable. Although there are explainable artificial intelligence (xAI) techniques that can help us gain deeper insights from ML models, such as FICO's xAI Toolkit, the path to achieving sign-off within an organization can be a challenge.


Why I agree with Geoff Hinton: I believe that Explainable AI is over-hyped by media

#artificialintelligence

Geoffrey Hinton dismissed the need for explainable AI. A range of experts have explained why he is wrong. I actually tend to agree with Geoff. Explainable AI is overrated and hyped by the media. A whole industry has sprung up with a business model of scaring everyone about AI not being explainable.


Metrics for Explainable AI: Challenges and Prospects

arXiv.org Artificial Intelligence

The question addressed in this paper is: If we present to a user an AI system that explains how it works, how do we know whether the explanation works and the user has achieved a pragmatic understanding of the AI? In other words, how do we know that an explainable AI system (XAI) is any good? Our focus is on the key concepts of measurement. We discuss specific methods for evaluating: (1) the goodness of explanations, (2) whether users are satisfied by explanations, (3) how well users understand the AI systems, (4) how curiosity motivates the search for explanations, (5) whether the user's trust and reliance on the AI are appropriate, and finally, (6) how the human-XAI work system performs. The recommendations we present derive from our integration of extensive research literatures and our own psychometric evaluations.