A prospective multicentre study testing the diagnostic accuracy of an automated cough sound centred analytic system for the identification of common respiratory disorders in children


In paediatrics, respiratory disorders represent the second most common reason for attendance at Emergency Departments (ED) [1, 2] and are a significant global disease burden [3]. Common conditions in childhood include croup, upper respiratory tract infections (URTI), and lower respiratory tract diseases (LRTDs) such as asthma/reactive airway disease (RAD), bronchiolitis, pneumonitis and pneumonia [2, 4]. Lower respiratory tract infections are a significant cause of mortality in children aged under 5 years and a leading cause of disability-adjusted life years lost worldwide [5–7]. Asthma represents the leading cause of non-fatal disease burden in Australian children under age 14 years [8, 9]. The differential diagnosis of respiratory disorders can be challenging even for experienced clinicians with access to diagnostic support services.

Scientists develop artificial intelligence system to detect cardiac arrest in sleep


Washington: Scientists have developed a new artificial intelligence (AI) system to monitor people for cardiac arrest while they are asleep, without touching them. People experiencing cardiac arrest will suddenly become unresponsive and either stop breathing or gasp for air, a sign known as agonal breathing, said researchers at the University of Washington (UW) in the US. A new skill for a smart speaker -- like Google Home and Amazon Alexa -- or smartphone lets the device detect the gasping sound of agonal breathing and call for help. Immediate cardiopulmonary resuscitation (CPR) can double or triple someone's chance of survival, but that requires a bystander to be present. CPR is an emergency procedure that combines chest compressions, often with artificial ventilation, in an effort to manually preserve intact brain function.

Extracting Interpretable Concept-Based Decision Trees from CNNs

arXiv.org Machine Learning

In an attempt to gather a deeper understanding of how convolutional neural networks (CNNs) reason about human-understandable concepts, we present a method to infer labeled concept data from hidden layer activations and interpret the concepts through a shallow decision tree. The decision tree can provide information about which concepts a model deems important, as well as provide an understanding of how the concepts interact with each other. Experiments demonstrate that the extracted decision tree is capable of accurately representing the original CNN's classifications at low tree depths, thus encouraging human-in-the-loop understanding of discriminative concepts.
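The surrogate-tree idea described above can be sketched with scikit-learn. This is only an illustration, not the paper's method: the "activations" and the network's labels below are synthetic stand-ins (a real application would record hidden-layer activations and the CNN's own predictions), and all variable names are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden-layer activations of a CNN on 500 inputs.
# Two "concept" dimensions (columns 0 and 1) drive the model's decision here.
activations = rng.normal(size=(500, 16))
cnn_labels = (activations[:, 0] + activations[:, 1] > 0).astype(int)  # the CNN's predictions

# Fit a shallow surrogate tree on the activations to mimic the CNN.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(activations, cnn_labels)

# "Fidelity": how often the shallow tree agrees with the model it explains.
fidelity = surrogate.score(activations, cnn_labels)
print(f"surrogate fidelity at depth 3: {fidelity:.2f}")
```

Even at low depth, the surrogate reproduces most of the model's decisions, which is the property the abstract highlights: the tree's splits then indicate which activation dimensions (concepts) matter.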

ASP-based Discovery of Semi-Markovian Causal Models under Weaker Assumptions

arXiv.org Machine Learning

In recent years the possibility of relaxing the so-called Faithfulness assumption in automated causal discovery has been investigated. The investigation showed (1) that the Faithfulness assumption can be weakened in various ways that in an important sense preserve its power, and (2) that weakening of Faithfulness may help to speed up methods based on Answer Set Programming. However, this line of work has so far only considered the discovery of causal models without latent variables. In this paper, we study weakenings of Faithfulness for constraint-based discovery of semi-Markovian causal models, which accommodate the possibility of latent variables, and show that both (1) and (2) remain the case in this more realistic setting.

Explainable Reinforcement Learning Through a Causal Lens

arXiv.org Artificial Intelligence

Prevalent theories in cognitive science propose that humans understand and represent the knowledge of the world through causal relationships. In making sense of the world, we build causal models in our mind to encode cause-effect relations of events and use these to explain why new events happen. In this paper, we use causal models to derive causal explanations of behaviour of reinforcement learning agents. We present an approach that learns a structural causal model during reinforcement learning and encodes causal relationships between variables of interest. This model is then used to generate explanations of behaviour based on counterfactual analysis of the causal model. We report on a study with 120 participants who observe agents playing a real-time strategy game (Starcraft II) and then receive explanations of the agents' behaviour. We investigated: 1) participants' understanding gained by explanations through task prediction; 2) explanation satisfaction and 3) trust. Our results show that causal model explanations perform better on these measures compared to two other baseline explanation models.

Classification and Regression Analysis with Decision Trees


A decision tree is a supervised machine learning model used to predict a target by learning decision rules from features. As the name suggests, we can think of this model as breaking down our data by making a decision based on asking a series of questions. Consider an example in which we use a decision tree to decide upon an activity on a particular day. Based on the features in our training set, the decision tree model learns a series of questions to infer the class labels of the samples. As we can see, decision trees are attractive models if we care about interpretability. Although the preceding figure illustrates the concept of a decision tree based on categorical targets (classification), the same concept applies if our targets are real numbers (regression).
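The "series of questions" idea can be made concrete with a minimal scikit-learn sketch on the classic iris dataset; the learned rules can be printed as readable text, which is exactly the interpretability the passage describes:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A small labelled dataset: 150 iris flowers, 4 numeric features, 3 species.
iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(iris.data, iris.target)

# Print the learned questions as human-readable if/else rules.
print(export_text(clf, feature_names=list(iris.feature_names)))
print(f"training accuracy: {clf.score(iris.data, iris.target):.2f}")
```

Swapping `DecisionTreeClassifier` for `DecisionTreeRegressor` applies the same idea to real-valued targets, as the last sentence above notes.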

Domain Adaptive Transfer Learning for Fault Diagnosis

arXiv.org Machine Learning

Thanks to the digitization of industrial assets in fleets, the ambitious goal of transferring fault diagnosis models from one machine to another has raised great interest. Solving these domain adaptive transfer learning tasks has the potential to save large efforts on manually labeling data and modifying models for new machines in the same fleet. Although data-driven methods have shown great potential in fault diagnosis applications, their ability to generalize to new machines and new working conditions is limited by their tendency to overfit the training set. One promising solution to this problem is domain adaptation, which aims to improve model performance on the new target machine. Inspired by its successful implementation in computer vision, we introduce Domain-Adversarial Neural Networks (DANN) to our context, along with two other popular methods from previous fault diagnosis research. We then carefully justify the applicability of these methods in realistic fault diagnosis settings, and offer a unified experimental protocol for a fair comparison between domain adaptation methods for fault diagnosis problems.

Optimal Sparse Decision Trees

arXiv.org Machine Learning

Decision tree algorithms have been among the most popular algorithms for interpretable (transparent) machine learning since the early 1980s. The problem that has plagued decision tree algorithms since their inception is their lack of optimality, or lack of guarantees of closeness to optimality: decision tree algorithms are often greedy or myopic, and sometimes produce unquestionably suboptimal models. Hardness of decision tree optimization is both a theoretical and practical obstacle, and even careful mathematical programming approaches have not been able to solve these problems efficiently. This work introduces the first practical algorithm for optimal decision trees for binary variables. The algorithm is a co-design of analytical bounds that reduce the search space and modern systems techniques, including data structures and a custom bit-vector library. We highlight possible steps toward improving the scalability and speed of future generations of this algorithm based on insights from our theory and experiments.

Learning Optimal Decision Trees from Large Datasets

arXiv.org Machine Learning

Inferring a decision tree from a given dataset is one of the classic problems in machine learning. The problem consists of building, from a labelled dataset, a tree such that each leaf corresponds to a class and a path between the tree root and a leaf corresponds to a conjunction of features to be satisfied in this class. Following the principle of parsimony, we want to infer a minimal tree consistent with the dataset. Unfortunately, inferring an optimal decision tree is known to be NP-complete for several definitions of optimality. Hence, the majority of existing approaches rely on heuristics, and the few exact inference approaches do not work on large datasets. In this paper, we propose a novel approach for inferring a decision tree of minimum depth based on the incremental generation of Boolean formulas. The experimental results indicate that it scales well, with running time growing slowly with the size of the dataset.
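The paper's Boolean-formula encoding is beyond a short sketch, but the notion of a "minimal tree consistent with the dataset" can be illustrated with a greedy scikit-learn tree on synthetic binary data. Note the hedge: unlike the exact method above, the depth at which a greedy tree first fits perfectly is only an upper bound on the true minimum depth; all names and the target function below are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# 200 samples of 8 binary features; label is a noise-free Boolean function,
# (x0 AND x1) OR x2, whose true minimum tree depth is 3.
X_int = rng.integers(0, 2, size=(200, 8))
y = (X_int[:, 0] & X_int[:, 1]) | X_int[:, 2]
X = X_int.astype(float)

# Smallest depth at which a *greedy* tree is fully consistent with the data.
for depth in range(1, 9):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    if tree.score(X, y) == 1.0:
        print(f"greedy tree consistent with the data at depth {depth}")
        break
```

An exact method like the one in the paper would certify that no shallower consistent tree exists, which the greedy loop cannot do.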

Facebook announces AI system to detect revenge porn, says accounts posting it will be deleted


Facebook first started trialling a system to combat revenge porn late last year, but it had one rather scary aspect: you had to upload your own nudes so the platform knew which images it should block. The earlier system, which is still in use and set for expanded rollout, relied on users uploading photos they were afraid might be shared, allowing Facebook to create a digital fingerprint to block uploads of matching images. You send the nude to yourself in Messenger, and Facebook creates a hashed digital fingerprint of the photo, an encrypted version of the raw data in the image file. Anytime someone tries to upload a photo, it is checked against that fingerprint and rejected if it matches. Facebook says its new AI-based system is designed to automatically detect nude or near-nude images, before passing them to a human moderator, who decides whether the photo or video should be blocked.