Analyzing the Interpretability Robustness of Self-Explaining Models Artificial Intelligence

Recently, interpretable models called self-explaining models (SEMs) have been proposed with the goal of providing interpretability robustness. We evaluate the interpretability robustness of SEMs and show that the explanations provided by SEMs, as currently proposed, are not robust to adversarial inputs. Specifically, we successfully created adversarial inputs that do not change the model outputs but cause significant changes in the explanations. We find that even though current SEMs use stable coefficients for mapping explanations to output labels, they do not consider the robustness of the first stage of the model, which creates interpretable basis concepts from the input, leading to non-robust explanations. Our findings make a case for future work on generating interpretable basis concepts in a robust way.
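The kind of attack the abstract describes can be illustrated on a toy two-stage model. This is a minimal sketch, not the paper's method: the concept encoder, scoring weights, and random-search attack below are all hypothetical stand-ins for the SEM architectures and gradient-based attacks the paper actually studies. The idea is the same: search for a small perturbation that leaves the predicted label unchanged while shifting the concept vector (the "explanation") as much as possible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-stage "self-explaining" model (hypothetical, for illustration):
# a concept encoder h(x) followed by a fixed linear scoring layer.
W1 = rng.normal(size=(5, 10))   # concept encoder weights
w2 = rng.normal(size=5)         # stable coefficients over concepts

def concepts(x):
    return np.tanh(W1 @ x)

def predict(x):
    return int(np.sign(w2 @ concepts(x)))

x = rng.normal(size=10)
label = predict(x)

# Random search for a small perturbation that preserves the label
# but maximally shifts the concept vector.
best_delta, best_shift = np.zeros(10), 0.0
for _ in range(2000):
    delta = rng.normal(size=10)
    delta *= 0.3 / np.linalg.norm(delta)   # bound the perturbation norm
    if predict(x + delta) == label:
        shift = np.linalg.norm(concepts(x + delta) - concepts(x))
        if shift > best_shift:
            best_shift, best_delta = shift, delta

print(label, predict(x + best_delta), round(best_shift, 3))
```

A nonzero `best_shift` with an unchanged label is exactly the failure mode described: the explanation moved while the prediction did not.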

Unsupervised Multilingual Alignment using Wasserstein Barycenter Machine Learning

We study unsupervised multilingual alignment, the problem of finding word-to-word translations between multiple languages without using any parallel data. One popular strategy is to reduce multilingual alignment to the much simpler bilingual setting by picking one of the input languages as the pivot language that we transit through. However, it is well known that transiting through a poorly chosen pivot language (such as English) may severely degrade the translation quality, since the assumed transitive relations among all pairs of languages may not be enforced in the training process. Instead of going through a rather arbitrarily chosen pivot language, we propose to use the Wasserstein barycenter as a more informative "mean" language: it encapsulates information from all languages and minimizes all pairwise transportation costs. We evaluate our method on standard benchmarks and demonstrate state-of-the-art performance.
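The "mean language" idea rests on computing a Wasserstein barycenter: a distribution minimizing the summed transport costs to several input distributions. As a sketch only, and not the paper's pipeline (which operates on word-embedding distributions), the snippet below computes an entropic-regularized barycenter of toy 1-D histograms via iterative Bregman projections, a standard scheme for this problem; the regularization strength and iteration count are illustrative choices.

```python
import numpy as np

def sinkhorn_barycenter(dists, M, reg=0.05, weights=None, n_iter=200):
    """Fixed-support Wasserstein barycenter via iterative Bregman
    projections under entropic regularization."""
    n, k = M.shape[0], len(dists)
    if weights is None:
        weights = np.full(k, 1.0 / k)
    K = np.exp(-M / reg)                 # Gibbs kernel
    v = np.ones((k, n))
    for _ in range(n_iter):
        u = np.array([a / (K @ v[i]) for i, a in enumerate(dists)])
        # weighted geometric mean of the marginals = current barycenter
        b = np.exp(sum(w * np.log(K.T @ u[i]) for i, w in enumerate(weights)))
        v = np.array([b / (K.T @ u[i]) for i in range(k)])
    return b / b.sum()                   # normalize for numerical safety

# Toy "languages": three histograms on a shared 1-D support.
grid = np.linspace(0, 1, 50)
M = (grid[:, None] - grid[None, :]) ** 2          # squared-distance cost
def gauss(mu):                                     # normalized bump at mu
    d = np.exp(-((grid - mu) ** 2) / 0.01)
    return d / d.sum()

bary = sinkhorn_barycenter([gauss(0.2), gauss(0.5), gauss(0.8)], M)
print(round(grid[bary.argmax()], 2))
```

By symmetry, the barycenter of bumps at 0.2, 0.5, and 0.8 concentrates near the middle of the support, which is the "encapsulates information from all languages" property the abstract appeals to.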

Issues with post-hoc counterfactual explanations: a discussion Artificial Intelligence

Counterfactual post-hoc interpretability approaches have proven to be useful tools for generating explanations of the predictions of a trained black-box classifier. However, the assumptions they make about the data and the classifier make them unreliable in many contexts. In this paper, we discuss three desirable properties and approaches to quantify them: proximity, connectedness, and stability. In addition, we illustrate that post-hoc counterfactual approaches risk failing to satisfy these properties.
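Two of the named properties lend themselves to simple quantitative checks. The sketch below is illustrative only and not the paper's metrics: `proximity` is taken as plain L2 distance, `stability` as the ratio of counterfactual drift to input drift, and the counterfactual generator `cf` is a hypothetical reflection across a fixed linear decision boundary.

```python
import numpy as np

def proximity(x, x_cf):
    """L2 distance between an input and its counterfactual (lower is better)."""
    return float(np.linalg.norm(x_cf - x))

def stability(x1, x2, cf):
    """Ratio of counterfactual drift to input drift for a generator `cf`;
    values near 1 suggest locally stable explanations."""
    return float(np.linalg.norm(cf(x1) - cf(x2)) / np.linalg.norm(x1 - x2))

# Toy generator: reflect the input across the hyperplane w.x = 0,
# which always flips the side of the linear decision boundary.
w = np.array([1.0, -1.0])
def cf(x):
    margin = (w @ x) / (w @ w)
    return x - 2.0 * margin * w

x = np.array([0.4, 0.1])
print(proximity(x, cf(x)), stability(x, x + 1e-3, cf))
```

Because the toy generator is an isometry, its stability ratio is exactly 1; the paper's point is that real counterfactual generators offer no such guarantee, which is why these properties need to be measured rather than assumed.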

What doctor? Why AI and robotics will define New Health

Modern health systems can treat and cure more diseases than ever before. New technology is bringing innovation to old treatments. Yet significant quality, access, and cost issues remain, and our health systems are becoming increasingly unsustainable. The emergence and increasing use of artificial intelligence (AI) and robotics will have a significant impact on healthcare systems around the world. How will AI and robotics define New Health?

Regional Health Plans New Health Campus Serving Spearfish

U.S. News

Thomas Worsley, interim president of Spearfish Regional Hospital and the Spearfish and Belle Fourche markets, tells the newspaper that the facility's construction timeline will largely be determined by fundraising.