Bayesian Inference of Spreading Processes on Networks

arXiv.org Machine Learning

Infectious diseases are studied to understand their spreading mechanisms, to evaluate control strategies and to predict the risk and course of future outbreaks. Because people only interact with a small number of individuals, and because the structure of these interactions matters for spreading processes, the pairwise relationships between individuals in a population can be usefully represented by a network. Although the underlying processes of transmission are different, the network approach can be used to study the spread of pathogens in a contact network or the spread of rumors in an online social network. We study simulated simple and complex epidemics on synthetic networks and on two empirical networks, a social/contact network in an Indian village and an online social network in the U.S. Our goal is to learn simultaneously about the spreading process parameters and the source node (first infected node) of the epidemic, given a fixed and known network structure, and observations about the state of nodes at several points in time. Our inference scheme is based on approximate Bayesian computation (ABC), an inference technique for complex models with likelihood functions that are either expensive to evaluate or analytically intractable. ABC enables us to adopt a Bayesian approach to the problem despite the posterior distribution being very complex. Our method is agnostic about the topology of the network and the nature of the spreading process. It generally performs well and, somewhat counter-intuitively, the inference problem appears to be easier on more heterogeneous network topologies, which enhances its future applicability to real-world settings where few networks have homogeneous topologies.
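The ABC idea described above can be sketched with a toy rejection sampler: draw the unknowns (source node and a spreading parameter) from their priors, simulate the epidemic forward, and keep only draws whose simulated outcome matches the observed node states. This is a minimal illustration, not the paper's method; the network, the discrete-time SI process, and the exact-match acceptance rule are all assumptions chosen for brevity.

```python
import random

# Small fixed network (5-cycle plus one chord), stored as adjacency sets.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
N = 5
NEIGHBORS = {i: set() for i in range(N)}
for u, v in EDGES:
    NEIGHBORS[u].add(v)
    NEIGHBORS[v].add(u)

def simulate_si(source, beta, steps, rng):
    """Discrete-time SI process: each infected node infects each
    susceptible neighbor independently with probability beta per step."""
    infected = {source}
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in NEIGHBORS[u]:
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
    return infected

def abc_rejection(observed, steps, n_draws, rng):
    """Rejection ABC: keep (source, beta) draws whose simulated set of
    infected nodes exactly matches the observation (distance = 0)."""
    accepted = []
    for _ in range(n_draws):
        source = rng.randrange(N)   # uniform prior over nodes
        beta = rng.random()         # uniform prior on [0, 1]
        if simulate_si(source, beta, steps, rng) == observed:
            accepted.append((source, beta))
    return accepted

rng = random.Random(0)
observed = simulate_si(source=0, beta=0.6, steps=2, rng=rng)
posterior = abc_rejection(observed, steps=2, n_draws=5000, rng=rng)
print(len(posterior), "accepted draws")
```

The accepted draws approximate the joint posterior over the source and the infection probability; in practice one replaces the exact-match rule with a tolerance on a summary-statistic distance, since exact matches become vanishingly rare on larger networks.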


Dr. Robot Will See You Now: AI, Blockchain Technology & the Future of Healthcare

#artificialintelligence

Blockchain technology and artificial intelligence, two cutting-edge technologies, have the potential to change the face of healthcare as we know it by improving quality of care and reducing costs through improved efficiency. Most of us are at least somewhat familiar with artificial intelligence, primarily through virtual assistants such as Siri and Alexa. Artificial intelligence automates repetitive learning and discovery through data after initially being set up by a human being. As many people also know, you have to be fairly specific when asking Siri or Alexa any question -- it must be posed in the right way -- to get the answer you are looking for. As an example, our interactions with Alexa, Siri, Google Search and Google Photos are based on deep learning.


Global Bigdata Conference

#artificialintelligence

News concerning Artificial Intelligence (AI) abounds again. Progress with deep learning techniques is quite remarkable, with demonstrations such as self-driving cars, Watson on Jeopardy, and systems beating human Go players. This rate of progress has led some notable scientists and business people to warn about the potential dangers of AI as it approaches a human level. Exascale computers are being considered that would approach what many believe is this level. However, many questions remain unanswered about how the human brain works, and specifically the hard problem of consciousness with its integrated subjective experiences.


A Bayesian Method for Joint Clustering of Vectorial Data and Network Data

arXiv.org Machine Learning

We present a new model-based integrative method for clustering objects given both vectorial data, which describes the features of each object, and network data, which indicates the similarity of connected objects. The proposed general model is able to cluster the two types of data simultaneously within one integrative probabilistic model, while traditional methods can only handle one data type or depend on transforming one data type into another. Bayesian inference of the clustering is conducted based on a Markov chain Monte Carlo algorithm. A special case of the general model combining the Gaussian mixture model and the stochastic block model is extensively studied. We used both synthetic data and real data to evaluate this new method and compare it with alternative methods. The results show that our simultaneous clustering method performs substantially better. This improvement is due to the power of the model-based probabilistic approach for efficiently integrating information.
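The combination of a Gaussian mixture with a stochastic block model can be illustrated by scoring a candidate cluster assignment under a joint likelihood: a Gaussian term for each object's feature vector plus a Bernoulli term for each node pair's edge. This is a hedged sketch of the idea only, not the paper's inference algorithm; the isotropic Gaussians, the toy data, and the fixed parameters `mus`, `sigma`, and `B` are all illustrative assumptions.

```python
import math

def gaussian_loglik(x, mu, sigma):
    """Log density of a point under an isotropic Gaussian."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (xi - mi)**2 / (2 * sigma**2)
               for xi, mi in zip(x, mu))

def joint_loglik(X, adj, z, mus, sigma, B):
    """z[i] is the cluster of object i; B[a][b] is the edge probability
    between clusters a and b. The joint score adds the Gaussian-mixture
    term over objects to the block-model term over all node pairs."""
    ll = sum(gaussian_loglik(X[i], mus[z[i]], sigma) for i in range(len(X)))
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            p = B[z[i]][z[j]]
            ll += math.log(p) if adj[i][j] else math.log(1 - p)
    return ll

# Toy data: four 2-d points in two groups, with denser within-group edges.
X = [[0.0, 0.0], [0.1, -0.1], [5.0, 5.0], [5.1, 4.9]]
adj = [[0, 1, 0, 0],
       [1, 0, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 1, 0]]
mus = [[0.0, 0.0], [5.0, 5.0]]
B = [[0.9, 0.05], [0.05, 0.9]]

good = joint_loglik(X, adj, [0, 0, 1, 1], mus, 0.5, B)
bad = joint_loglik(X, adj, [0, 1, 0, 1], mus, 0.5, B)
print(good > bad)
```

An MCMC sampler for the full model would propose changes to `z` (and to the parameters) and accept or reject them based on exactly this kind of joint score, which is how evidence from the features and the edges gets integrated in one move.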


IBM is funding new Watson AI lab at MIT with $240 Million

#artificialintelligence

IBM said on Thursday it will spend $240 million over the next decade to fund a new artificial intelligence research lab at the Massachusetts Institute of Technology. The resulting MIT–IBM Watson AI Lab will focus on a handful of key AI areas including the development of new "deep learning" algorithms. Deep learning is a subset of AI that aims to bring human-like learning capabilities to computers so they can operate more autonomously. The Cambridge, Mass.-based lab will be led by Dario Gil, vice president of AI for IBM Research, and Anantha Chandrakasan, dean of MIT's engineering school. It will draw upon about 100 researchers from IBM itself and the university.