AI4COVID-19: AI Enabled Preliminary Diagnosis for COVID-19 from Cough Samples via an App Machine Learning

Inability to test at scale has become the Achilles' heel in humanity's ongoing war against the COVID-19 pandemic. An agile, scalable and cost-effective test, deployable at a global scale, can act as a game changer in this war. To address this challenge, building on the promising results of our prior work on cough-based diagnosis of a variety of respiratory diseases, we develop an Artificial Intelligence (AI)-based test for COVID-19 preliminary diagnosis. The test is deployable at scale through a mobile app named AI4COVID-19. The AI4COVID-19 app requires 2-second cough recordings of the subject. By analyzing the cough samples through an AI engine running in the cloud, the app returns a preliminary diagnosis within a minute. Unfortunately, cough is a common symptom of over two dozen non-COVID-19 related medical conditions. This makes the COVID-19 diagnosis from cough alone an extremely challenging problem. We solve this problem by developing a novel multi-pronged, mediator-centered, risk-averse AI architecture that minimizes misdiagnosis. At the time of writing, our AI engine can distinguish between COVID-19 patient coughs and several types of non-COVID-19 coughs with over 90% accuracy. AI4COVID-19's performance is likely to improve as more and better data become available. This paper presents a proof of concept to encourage controlled clinical trials and serves as a call for labeled cough data. AI4COVID-19 is not designed to compete with clinical testing. Instead, it offers a complementing tele-testing tool deployable anytime, anywhere, by anyone, so clinical testing and treatment can be channeled to those who need it the most, thereby saving more lives.
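The risk-averse idea in the abstract can be pictured as a unanimity rule over an ensemble of classifiers. The following is a minimal sketch, not the authors' actual engine; the function name, return labels, and threshold are all hypothetical:

```python
# Hypothetical sketch of a risk-averse "mediator" over an ensemble of
# cough classifiers: a diagnosis is returned only when all classifiers
# agree, otherwise the test is declared inconclusive. This trades
# coverage for a lower misdiagnosis rate.

def mediator_diagnose(scores, threshold=0.5):
    """scores: one COVID-19-likelihood probability per classifier."""
    votes = [s >= threshold for s in scores]
    if all(votes):
        return "COVID-19 likely"
    if not any(votes):
        return "COVID-19 unlikely"
    return "inconclusive"   # classifiers disagree: defer to clinical testing

print(mediator_diagnose([0.9, 0.8, 0.95]))  # unanimous positive
print(mediator_diagnose([0.9, 0.2, 0.95]))  # disagreement -> inconclusive
```

Requiring unanimity is one simple way to minimize misdiagnosis at the cost of returning "inconclusive" more often; the paper's actual architecture may combine its classifiers differently.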

New Research Shows How AI Can Act as Mediators


According to VentureBeat, AI researchers at Uber have recently posted a paper to arXiv outlining a new platform intended to assist in the creation of distributed AI models. The platform, called Fiber, can be used to drive both reinforcement learning tasks and population-based learning. Fiber is designed to make large-scale parallel computation more accessible to non-experts, letting them take advantage of the power of distributed AI algorithms and models. Fiber has recently been made open source on GitHub; it is compatible with Python 3.6 and above, and with Kubernetes running on a Linux system in a cloud environment. According to the team of researchers, the platform can easily scale up to hundreds or thousands of individual machines.

Estimating Treatment Effects with Observed Confounders and Mediators Machine Learning

Given a causal graph, the do-calculus can express treatment effects as functionals of the observational joint distribution that can be estimated empirically. Sometimes the do-calculus identifies multiple valid formulae, prompting us to compare the statistical properties of the corresponding estimators. For example, the backdoor formula applies when all confounders are observed and the frontdoor formula applies when an observed mediator transmits the causal effect. In this paper, we investigate the over-identified scenario where both confounders and mediators are observed, rendering both estimators valid. Addressing the linear Gaussian causal model, we derive the finite-sample variance for both estimators and demonstrate that either estimator can dominate the other by an unbounded constant factor depending on the model parameters. Next, we derive an optimal estimator, which leverages all observed variables to strictly outperform the backdoor and frontdoor estimators. We also present a procedure for combining two datasets, with confounders observed in one and mediators in the other. Finally, we evaluate our methods on both simulated data and the IHDP and JTPA datasets.
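The over-identified setting described in the abstract can be reproduced in a small linear Gaussian simulation. The variable names and coefficients below are illustrative, not from the paper: W confounds A and Y, and M mediates the entire effect of A on Y, so both the backdoor and frontdoor estimators are valid:

```python
# Backdoor vs. frontdoor estimation in a linear Gaussian model where
# both a confounder (W) and a mediator (M) are observed.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
W = rng.normal(size=n)                      # observed confounder
A = 0.8 * W + rng.normal(size=n)            # treatment
M = 0.5 * A + rng.normal(size=n)            # observed mediator
Y = 0.7 * M + 0.6 * W + rng.normal(size=n)  # true effect of A on Y = 0.35

def coef(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

one = np.ones(n)
# Backdoor: regress Y on A adjusting for W; take the A coefficient.
backdoor = coef(np.column_stack([A, W, one]), Y)[0]
# Frontdoor: (A -> M coefficient) * (M -> Y coefficient, given A).
frontdoor = coef(np.column_stack([A, one]), M)[0] * \
            coef(np.column_stack([M, A, one]), Y)[0]

print(round(backdoor, 3), round(frontdoor, 3))  # both near 0.35
```

With different noise scales on W, M, and Y, the finite-sample variances of the two estimators diverge, which is the phenomenon the paper quantifies before deriving an estimator that uses all observed variables.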

UN will use AI to learn what people want from peace deals


The UN will help people in warzones to influence peace deals through an AI conversation tool they can access through their smartphones. The system will be launched within the next year, the Financial Times reports. The technology was developed by UN officials alongside a startup called Remesh, which produces a tool that creates online conversations with up to 1,000 participants. Their thoughts are analyzed in real-time through polls and open-ended questions to provide insights at scale. The product is typically used for market research and employee engagement.

Mediation Perspectives: Artificial Intelligence in Conflict Resolution « CSS Blog Network


Mediation Perspectives is a periodic blog entry that's provided by the CSS' Mediation Support Team and occasional guest authors. How is artificial intelligence (AI) affecting conflict and its resolution? Peace practitioners and scholars cannot afford to disregard ongoing developments related to AI-based technologies – both from an ethical and a pragmatic perspective. In this blog, I explore AI as an evolving field of information management technologies that is changing both the nature of armed conflict and the way we can respond to it. AI encompasses the use of computer programmes to analyse large amounts of data (such as online communication and transactions) in order to learn from patterns and predict human behaviour on a massive scale.

Correlated Adversarial Imitation Learning Machine Learning

A novel imitation learning algorithm is introduced by applying the game-theoretic notion of correlated equilibrium to generative adversarial imitation learning. This imitation learning algorithm is equipped with queues of discriminators and agents, in contrast with the classical approach, where there is a single discriminator and a single agent. The correlated equilibrium is achieved through a mediating neural architecture, which augments the observations seen by the queues of discriminators and agents. At every training step, the mediator network computes feedback from the rewards of the discriminators and agents and augments the next observations accordingly. By interacting in the game, it steers the training dynamics towards more suitable regions. The resulting imitation learning provides three important benefits. First, it makes adapting and transferring the learned model to new environments straightforward. Second, it is suitable for imitating a mixture of state-action trajectories. Third, it avoids the difficulties of non-convex optimization faced by the discriminator in generative adversarial architectures.

Nonparametric inference for interventional effects with multiple mediators Machine Learning

Understanding the pathways whereby an intervention has an effect on an outcome is a common scientific goal. A rich body of literature provides various decompositions of the total intervention effect into pathway specific effects. Interventional direct and indirect effects provide one such decomposition. Existing estimators of these effects are based on parametric models with confidence interval estimation facilitated via the nonparametric bootstrap. We provide theory that allows for more flexible, possibly machine learning-based, estimation techniques to be considered. In particular, we establish weak convergence results that facilitate the construction of closed-form confidence intervals and hypothesis tests. Finally, we demonstrate multiple robustness properties of the proposed estimators. Simulations show that inference based on large-sample theory has adequate small-sample performance. Our work thus provides a means of leveraging modern statistical learning techniques in estimation of interventional mediation effects.
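The decomposition the abstract refers to can be illustrated in the simplest possible setting: a linear model with two mediators, where the total effect splits into a direct effect plus mediator-specific indirect effects. This is a toy path-analysis demo with made-up coefficients, not the paper's flexible nonparametric estimator:

```python
# Total-effect decomposition with two mediators in a linear model:
# total = direct + (A->M1)*(M1->Y) + (A->M2)*(M2->Y).
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
A = rng.normal(size=n)
M1 = 0.5 * A + rng.normal(size=n)
M2 = 0.6 * A + rng.normal(size=n)
Y = 0.3 * A + 0.4 * M1 + 0.5 * M2 + rng.normal(size=n)

def coef(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

one = np.ones(n)
full = coef(np.column_stack([A, M1, M2, one]), Y)       # Y ~ A + M1 + M2
total = coef(np.column_stack([A, one]), Y)[0]           # ~0.8
direct = full[0]                                        # ~0.3
indirect1 = coef(np.column_stack([A, one]), M1)[0] * full[1]  # ~0.2
indirect2 = coef(np.column_stack([A, one]), M2)[0] * full[2]  # ~0.3

print(round(total, 2), round(direct + indirect1 + indirect2, 2))
```

The paper's contribution is to put confidence intervals and hypothesis tests around such pathway-specific effects when the nuisance regressions are estimated with machine learning rather than linear models.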

#Cybermediation: What role for blockchain technology and natural language processing AI?


Blockchain technology links records using cryptography in such a way that they are resistant to modification. Mediation efforts often build on keeping records and finding evidence that is acceptable as neutral and accurate to all involved. Can blockchain technology serve mediators in providing a neutral record as the basis for conflict resolution? Recent accomplishments in AI show that it is getting better at understanding, processing, and generating natural language. It has huge potential as a tool for mediators.

Comment on "Blessings of Multiple Causes" Machine Learning

This scenario is directly analogous to longitudinal causal inference problems with multiple time-varying treatments that contain time-varying confounders, variables that serve as confounders for some treatments and as mediators for other treatments. If there is an unmeasured confounder for the R-Y relationship (represented by V and the dashed arrows in Figure 1(a)), then conditioning on R fails to identify the direct effects of A on Y, because it opens a confounding pathway through V. See Hernan and Robins (2020) for an overview of these issues. The answer to the question posed in Appendix B of WB, "Can the causes be causally dependent among themselves?" is therefore "no." If they are causally dependent then the deconfounder, by dint of rendering the causes independent, breaks some of the structure among the causes A, and as was originally established in the time-varying treatment setting, this undermines the identification of joint effects of A on Y by covariate adjustment. Analysis of Lemma 4. This simple argument also serves as a counterexample to Lemma 4, which states that the deconfounder does not pick up any post-treatment variables and can be treated as a pre-treatment covariate. This is necessarily false whenever the causes are causally dependent among themselves, but it need not hold even if the causes are not causally dependent, see below. The proof of Lemma 4 in Appendix I states that "Inferring the substitute confounder Z
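The confounding pathway opened by conditioning on R can be demonstrated numerically. In this sketch (coefficients and structure chosen for illustration, not taken from the comment), V is an unmeasured common cause of R and Y, R depends on both A and V, and A has a direct effect of 0.7 on Y; adjusting for R then biases the estimate:

```python
# Conditioning on R (a collider of A and the unmeasured V) opens the
# path A -> R <- V -> Y and biases the estimated direct effect of A.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
V = rng.normal(size=n)                 # unmeasured confounder of R and Y
A = rng.normal(size=n)
R = A + V + rng.normal(size=n)
Y = 0.7 * A + V + rng.normal(size=n)   # true direct effect of A is 0.7

def coef(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

one = np.ones(n)
unadjusted = coef(np.column_stack([A, one]), Y)[0]   # ~0.7, correct
adjusted = coef(np.column_stack([A, R, one]), Y)[0]  # ~0.2, biased

print(round(unadjusted, 2), round(adjusted, 2))
```

Here leaving R out recovers the direct effect, while "adjusting" for R introduces bias through the unmeasured V, which is the mechanism the comment invokes against treating the deconfounder as a pre-treatment covariate.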

Coarse Correlation in Extensive-Form Games Artificial Intelligence

Coarse correlation models strategic interactions of rational agents complemented by a correlation device, that is, a mediator that can recommend behavior but not enforce it. Despite being a classical concept in the theory of normal-form games for more than forty years, not much is known about the merits of coarse correlation in extensive-form settings. In this paper, we consider two instantiations of the idea of coarse correlation in extensive-form games: normal-form coarse-correlated equilibrium (NFCCE), already defined in the literature, and extensive-form coarse-correlated equilibrium (EFCCE), which we introduce for the first time. We show that EFCCE is a subset of NFCCE and a superset of the related extensive-form correlated equilibrium. We also show that, in two-player extensive-form games, social-welfare-maximizing EFCCEs and NFCCEs are bilinear saddle points, and give new efficient algorithms for the special case of games with no chance moves. In our experiments, our proposed algorithm for NFCCE is two to four orders of magnitude faster than the prior state of the art.
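A standard way to see coarse correlated equilibria in action, in the simpler normal-form setting (NFCCE, not the EFCCE notion the paper introduces), is that when every player runs a no-external-regret algorithm such as regret matching, the empirical joint distribution of play approximates a CCE. A minimal sketch for rock-paper-scissors, with all names hypothetical:

```python
# Regret matching in rock-paper-scissors: low external regret for both
# players means the empirical joint play is an approximate coarse
# correlated equilibrium of the normal-form game.
import random

random.seed(0)
PAY = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # player 0's payoff; player 1 gets the negative

def rm_sample(regrets):
    """Sample an action proportionally to positive cumulative regret."""
    pos = [max(r, 0.0) for r in regrets]
    s = sum(pos)
    if s == 0:
        return random.randrange(3)
    r, acc = random.uniform(0, s), 0.0
    for i, p in enumerate(pos):
        acc += p
        if r <= acc:
            return i
    return 2

T = 20_000
regrets = [[0.0] * 3, [0.0] * 3]
cum = [[0.0] * 3, [0.0] * 3]   # cumulative payoff of each fixed action
realized = [0.0, 0.0]
for _ in range(T):
    a0, a1 = rm_sample(regrets[0]), rm_sample(regrets[1])
    u0 = PAY[a0][a1]
    realized[0] += u0
    realized[1] += -u0
    for a in range(3):
        regrets[0][a] += PAY[a][a1] - u0
        regrets[1][a] += -PAY[a0][a] + u0
        cum[0][a] += PAY[a][a1]
        cum[1][a] += -PAY[a0][a]

# Average external regret per player; near zero => approximate CCE.
avg_regret = [max(cum[p][a] - realized[p] for a in range(3)) / T
              for p in range(2)]
print(avg_regret)
```

Regret matching's average regret shrinks at rate O(1/sqrt(T)), so after 20,000 rounds both players' regrets are small; extending this guarantee efficiently to the extensive-form (EFCCE) case is what the paper's algorithms address.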