Collaborating Authors

Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology

Digital pathology is not only one of the most promising fields of diagnostic medicine, but at the same time a hot topic for fundamental research. Digital pathology is not just the transfer of histopathological slides into digital representations. The combination of different data sources (images, patient records, and *omics data), together with current advances in artificial intelligence/machine learning, makes novel information accessible and quantifiable to a human expert, information that is not yet available and not exploited in current medical settings. The grand goal is to reach a level of usable intelligence that allows the data to be understood in the context of an application task, thereby making machine decisions transparent, interpretable and explainable. The foundation of such an "augmented pathologist" needs an integrated approach: while machine learning algorithms require many thousands of training examples, a human expert is often confronted with only a few data points. Interestingly, humans can learn from such few examples and are able to instantly interpret complex patterns. Consequently, the grand goal is to combine the possibilities of artificial intelligence with human intelligence and to find a well-suited balance between them, enabling what neither could do on its own. This can raise the quality of education, diagnosis, prognosis and prediction of cancer and other diseases. In this paper we describe a (necessarily incomplete) set of research issues which we believe should be addressed in an integrated and concerted effort to pave the way towards the augmented pathologist.

A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop

The goal of Machine Learning is to learn from data automatically, extract knowledge and make decisions without any human intervention. Such automatic machine learning (aML) approaches show impressive success. Recent results even demonstrate that deep learning applied to the automatic classification of skin lesions performs on par with dermatologists, and outperforms the average one. Because human perception is inherently limited, such approaches can discover patterns, e.g. that two objects are similar, in arbitrarily high-dimensional spaces, which no human is able to do. Humans can deal with only limited amounts of data, whilst big data is beneficial for aML; in health informatics, however, we are often confronted with small data sets, where aML suffers from insufficient training samples, and many problems are computationally hard. Here, interactive machine learning (iML) may be of help, with a human-in-the-loop contributing to reduce the complexity of NP-hard problems. A further motivation for iML is that standard black-box approaches lack transparency and hence do not foster trust and acceptance of ML among end-users. Rising legal and privacy requirements, e.g. the new European General Data Protection Regulation, also make black-box approaches difficult to use, because they often cannot explain why a decision was made. In this paper, we present experiments that demonstrate the effectiveness of the human-in-the-loop approach, in particular opening the black box into a glass box and thus enabling a human to interact directly with a learning algorithm. We selected the Ant Colony Optimization framework and applied it to the Traveling Salesman Problem, which is a good example due to its relevance for health informatics, e.g. for the study of protein folding. Fundamental ML research may also benefit from studies of how humans extract so much from so little data.
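The combination the abstract names, Ant Colony Optimization on the TSP with a human allowed to steer the search, can be sketched in a few lines. The sketch below is not the authors' implementation: the `human_feedback` hook, the parameter defaults and the pheromone-update rule are illustrative choices for a minimal glass-box loop, where a human can inspect and modify the pheromone matrix between iterations.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of the closed tour over the distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def construct_tour(dist, pher, alpha, beta, rng):
    """One ant builds a tour, preferring short edges with high pheromone."""
    n = len(dist)
    start = rng.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        choices = sorted(unvisited)
        weights = [(pher[cur][j] ** alpha) * (1.0 / dist[cur][j]) ** beta
                   for j in choices]
        r, acc = rng.random() * sum(weights), 0.0
        nxt = choices[-1]                     # fallback against float round-off
        for j, w in zip(choices, weights):    # roulette-wheel selection
            acc += w
            if acc >= r:
                nxt = j
                break
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def aco_tsp(dist, n_ants=10, n_iter=50, alpha=1.0, beta=2.0, rho=0.5,
            human_feedback=None, seed=0):
    """Basic ACO for the TSP with a glass-box hook.

    After each iteration, `human_feedback` (if given) receives the pheromone
    matrix, the best tour so far and the iteration index, and may modify the
    pheromones in place -- the point where a human-in-the-loop intervenes.
    """
    rng = random.Random(seed)
    n = len(dist)
    pher = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for it in range(n_iter):
        tours = [construct_tour(dist, pher, alpha, beta, rng)
                 for _ in range(n_ants)]
        for row in pher:                      # pheromone evaporation
            for j in range(n):
                row[j] *= (1.0 - rho)
        for t in tours:                       # pheromone deposit
            length = tour_length(t, dist)
            if length < best_len:
                best_tour, best_len = t, length
            for i in range(n):
                a, b = t[i], t[(i + 1) % n]
                pher[a][b] += 1.0 / length
                pher[b][a] += 1.0 / length
        if human_feedback is not None:
            human_feedback(pher, best_tour, it)
    return best_tour, best_len
```

A human intervention here is simply a callback that, for example, multiplies the pheromone on an edge the expert believes must belong to the tour; because the same matrix drives the next iteration's roulette-wheel selection, the intervention directly reshapes the search.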


Los Angeles Times

The ruling, announced by Constitutional Court chief judge Gerhart Holzinger, represents a victory for the Freedom Party, which challenged the May 22 runoff on claims of widespread irregularities. With Britain's pending departure from the European Union, a win by Freedom Party candidate Norbert Hofer would boost not only his party but also kindred movements in France, the Netherlands and elsewhere that lobby for less EU power or outright exits from the bloc. Hofer was leading after the polls closed in May, but final results after a count of absentee ballots put former Green party politician Alexander Van der Bellen ahead by little more than 30,000 votes. The court ruled broadly in line with Freedom Party claims: absentee ballots were sorted before electoral commission officials arrived; some officials stayed away during absentee vote counts but signed documents saying they were present; some ballot envelopes were opened without authorization; and related violations occurred.

TrackIt: A Team-Based Application for Health and Wellness Monitoring

AAAI Conferences

Health and wellness monitoring is an emerging area for mobile applications. One problem that has not been addressed is that tools have mostly focused on the individual, ignoring the many aspects of health and wellness that depend on teams. These teams may be teams of professionals or support networks including friends and family. We present a team-based tool, TrackIt, that enables tracking of both activity duration and location, and lets users be part of multiple teams. TrackIt supports distributed generation of tasks that belong to multiple people, each of whom can report on status and monitor progress, and will provide intelligent collaboration tools and activity recognition capabilities.
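The data model the abstract describes, users who belong to several teams and tasks owned by several people who each report status, can be sketched as plain objects. All class and method names below are hypothetical illustrations of that model, not TrackIt's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A task may belong to several people; each owner reports independently."""
    name: str
    owners: set                                   # user names who own this task
    status: dict = field(default_factory=dict)    # owner -> latest status report

    def report(self, user, update):
        if user not in self.owners:
            raise ValueError(f"{user} does not own task {self.name!r}")
        self.status[user] = update

@dataclass
class Team:
    """Either a team of professionals or a support network of friends/family."""
    name: str
    members: set = field(default_factory=set)
    tasks: list = field(default_factory=list)

class Tracker:
    """Registry of teams; a user may be a member of any number of them."""
    def __init__(self):
        self.teams = {}

    def add_team(self, name, members):
        self.teams[name] = Team(name, set(members))
        return self.teams[name]

    def teams_of(self, user):
        return [t.name for t in self.teams.values() if user in t.members]
```

The key design point the abstract implies is that team membership and task ownership are many-to-many relations, so progress monitoring has to aggregate status per owner rather than assume one person per task.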

Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations

Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically, without any human intervention. Such autonomous approaches can be very convenient. In certain domains, however, e.g. the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML models, and a huge variety of methods already exists. For example, layer-wise relevance propagation can highlight the parts of the input to, and the representations within, a neural network that caused a result. This is an important first step towards ensuring that end users, e.g. medical professionals, assume responsibility for decision making with AI/ML, and it is of interest to professionals and regulators alike. Interactive ML adds the component of human expertise to AI/ML processes by enabling experts to re-enact and retrace AI/ML results, e.g. to check them for plausibility. This requires new human-AI interfaces for explainable AI. In order to build effective and efficient interactive human-AI interfaces, we have to answer the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale (SCS) to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al., 2019) combined with concepts adapted from a widely accepted usability scale.
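As a rough illustration of how a usability-style questionnaire is aggregated into a single number, the sketch below normalizes Likert ratings against the maximum attainable sum. This is an assumption in the spirit of SUS-style scoring, not the SCS itself; the actual SCS items and scoring procedure are defined in the paper.

```python
def scale_score(ratings, scale_max=5):
    """Normalize Likert ratings (each in 1..scale_max) to a score in (0, 1].

    Simplified SUS-style aggregation: sum of item ratings divided by the
    maximum attainable sum. Illustrative only -- not the official SCS rule.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    for r in ratings:
        if not 1 <= r <= scale_max:
            raise ValueError(f"rating {r} outside 1..{scale_max}")
    return sum(ratings) / (len(ratings) * scale_max)
```

Under this scheme, an all-5 questionnaire scores 1.0 and a mixed one scores proportionally lower, which makes scores comparable across explanation interfaces evaluated with the same item set.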