Evaluating a Methodology for Increasing AI Transparency: A Case Study

Piorkowski, David, Richards, John, Hind, Michael

arXiv.org Artificial Intelligence

In reaction to growing concerns about the potential harms of artificial intelligence (AI), societies have begun to demand more transparency about how AI models and systems are created and used. To address these concerns, several efforts have proposed documentation templates containing questions to be answered by model developers. These templates provide a useful starting point, but no single template can cover the needs of diverse documentation consumers. It is possible in principle, however, to create a repeatable methodology to generate truly useful documentation. Richards et al. [25] proposed such a methodology for identifying specific documentation needs and creating templates to address those needs. Although this is a promising proposal, it has not been evaluated. This paper presents the first evaluation of this user-centered methodology in practice, reporting on the experiences of a team in the domain of AI for healthcare that adopted it to increase transparency for several AI models. The methodology was found to be usable by developers not trained in user-centered techniques, guiding them to create a documentation template that addressed the specific needs of their consumers while remaining reusable across different models and use cases. The benefits and costs of this methodology are analyzed, and suggestions for further improvement of both the methodology and its supporting tools are summarized.


The Four Pillars of Trusted AI

#artificialintelligence

I recently started an AI-focused educational newsletter that already has over 100,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Quantifying trust and fairness is one of the most important challenges to ensuring the mainstream adoption of deep learning systems. But what does trust truly mean in the context of deep learning systems?


IBM announces new AI language, explainability, and automation services

#artificialintelligence

During IBM's virtual AI Summit this week, the company announced updates across its Watson family of products in the areas of language, explainability, and workplace automation. A new feature called Reading Comprehension surfaces answers from databases of enterprise documents in response to natural language questions, assigning a confidence score to each response. A novel module in Watson Assistant called FAQ Extraction automatically generates question-and-answer documents. And AI Factsheets automatically captures key facts on a machine learning model's performance and generates reports to "foster transparency and ensure compliance." According to IBM, Reading Comprehension, which was built atop a top-performing question-answering system from IBM Research, is intended to help identify more precise answers in response to queries referring to business documents.


Towards evaluating and eliciting high-quality documentation for intelligent systems

Piorkowski, David, González, Daniel, Richards, John, Houde, Stephanie

arXiv.org Artificial Intelligence

A vital component of trust and transparency in intelligent systems built on machine learning and artificial intelligence is the development of clear, understandable documentation. However, such systems are notorious for their complexity and opaqueness, making quality documentation a non-trivial task. Furthermore, little is known about what makes such documentation "good." In this paper, we propose and evaluate a set of quality dimensions to identify in what ways this type of documentation falls short. Then, using those dimensions, we evaluate three different approaches for eliciting intelligent system documentation. We show how the dimensions identify shortcomings in such documentation and posit how such dimensions can be used to further enable users to provide documentation that is suitable to a given persona or use case.


A Methodology for Creating AI FactSheets

Richards, John, Piorkowski, David, Hind, Michael, Houde, Stephanie, Mojsilović, Aleksandra

arXiv.org Artificial Intelligence

As AI models and services are used in a growing number of high-stakes areas, a consensus is forming around the need for a clearer record of how these models and services are developed in order to increase trust. Several proposals for higher quality and more consistent AI documentation have emerged to address ethical and legal concerns and the general social impacts of such systems. However, there is little published work on how to create this documentation. This is the first work to describe a methodology for creating the form of AI documentation we call FactSheets. We have used this methodology to create useful FactSheets for nearly two dozen models. This paper describes this methodology and shares the insights we have gathered. Within each step of the methodology, we describe the issues to consider and the questions to explore with the relevant people in an organization who will be creating and consuming the AI facts in a FactSheet. This methodology will accelerate the broader adoption of transparent AI documentation.


Financial institutions can gain new AI model risk management

#artificialintelligence

Many financial institutions are rapidly developing and adopting AI models. They're using the models to achieve new competitive advantages, such as making faster and more successful underwriting decisions. However, AI models introduce new risks. In a previous post, I described why AI models increase risk exposure compared to the more traditional, rule-based models that have been in use for decades. In short, if AI models have been trained on biased data, lack explainability, or perform inadequately, they can expose organizations to losses or fines as high as seven figures.


The Four Components of Trusted Artificial Intelligence

#artificialintelligence

Trust and transparency are at the forefront of conversations related to artificial intelligence (AI) these days. While we intuitively understand the idea of trusting AI agents, we are still trying to figure out the specific mechanics to translate trust and transparency into programmatic constructs. After all, what does trust mean in the context of an AI system? Trust is a foundational building block of human socio-economic dynamics. In software development, during the last few decades, we have steadily built mechanisms for asserting trust in specific applications.


Towards AI Transparency: Four Pillars Required to Build Trust in Artificial Intelligence Systems

#artificialintelligence

Trust is a foundational building block of human socio-economic dynamics. In software development, during the last few decades, we have steadily built mechanisms for asserting trust in specific applications. When we get on planes that fly on autopilot, or into cars completely driven by robots, we are intrinsically expressing trust in the creators of a specific software application. In software, trust mechanisms are fundamentally based on the deterministic nature of most applications, whose behavior is uniquely determined by the code workflow, making it intrinsically predictable. The non-deterministic nature of artificial intelligence (AI) systems breaks this pattern of traditional software applications and introduces new dimensions to enabling trust in AI agents.


IBM announces cloud service to help businesses detect and mitigate AI bias

#artificialintelligence

Bias is a serious problem in artificial intelligence (AI). Research shows that popular smart speakers are 30 percent less likely to understand non-native U.S. accents, for example, and that facial recognition systems such as those from Cognitec perform demonstrably worse on African American faces. In fact, according to a recent study commissioned by IBM, two-thirds of businesses are wary of adopting AI because of potential liability concerns. In an effort to help enterprises address this problem, IBM today announced the launch of a cloud-based, fully automated service that "continually provides [insights]" into how AI systems are making their decisions. It also scans for signs of prejudice and recommends adjustments -- such as algorithmic tweaks or counterbalancing data -- that might lessen their impact.


IBM researchers propose 'factsheets' for AI transparency

#artificialintelligence

Google subsidiary DeepMind is leveraging AI to determine how to refer optometry patients. Haven Life is using AI to extend life insurance policies to people who wouldn't traditionally be eligible, such as people with chronic illnesses and non-U.S. And Google self-driving car spinoff Waymo is tapping it to provide mobility to elderly and disabled people. But despite the good AI is clearly capable of doing, doubts abound over its safety, transparency, and bias. IBM thinks part of the problem is a lack of standard practices.