Watson Will Soon Be a Bus Driver In Washington D.C.

#artificialintelligence

IBM has teamed up with Local Motors, a Phoenix-based automotive manufacturer that made the first 3D-printed car, to create a self-driving electric bus. Named "Olli," the bus has room for 12 people and uses IBM Watson's cloud-based cognitive computing system to provide information to passengers. In addition to automatically driving you where you want to go using Phoenix Wings autonomous driving technology, Olli can respond to questions and provide information, much like Amazon's Echo home assistant. The bus debuts today in the Washington, D.C. area, where the public can ride it during select times over the next several months, and the IBM-Local Motors team hopes to bring Olli to the Miami and Las Vegas areas by the end of the year. By using Watson's speech-to-text, natural language classifier, entity extraction, and text-to-speech APIs, the bus can provide several services beyond taking you to your destination.


Mixture Model Averaging for Clustering

arXiv.org Machine Learning

In mixture model-based clustering applications, it is common to fit several models from a family and report clustering results from only the "best" one. In such circumstances, selection of this best model is achieved using a model selection criterion, most often the Bayesian information criterion. Rather than throw away all but the best model, we average multiple models that are in some sense close to the best one, thereby producing a weighted average of clustering results. Two (weighted) averaging approaches are considered: averaging the component membership probabilities and averaging models. In both cases, Occam's window is used to determine closeness to the best model and weights are computed within a Bayesian model averaging paradigm. In some cases, we need to merge components before averaging; we introduce a method for merging mixture components based on the adjusted Rand index. The effectiveness of our model-based clustering averaging approaches is illustrated using a family of Gaussian mixture models on real and simulated data.
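As a rough illustration of the averaging idea (not the paper's exact procedure), the following Python sketch fits a small family of Gaussian mixtures, applies an Occam's window to their BIC values, forms BIC-based weights, and averages the component membership probabilities. The dataset, family, and cutoff are illustrative assumptions, and the adjusted-Rand-index merging step is omitted.

# Sketch: BIC-weighted averaging of component membership probabilities.
# The Occam's window cutoff and the exp(-0.5 * delta_BIC) weights are
# illustrative, not the paper's exact recipe.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=500, centers=3, cluster_std=1.5, random_state=0)

# Fit a small family of 3-component models differing in covariance structure.
family = ["full", "tied", "diag", "spherical"]
models = [GaussianMixture(n_components=3, covariance_type=c, random_state=0).fit(X)
          for c in family]
bics = np.array([m.bic(X) for m in models])

# Occam's window: discard models whose BIC is far from the best one.
cutoff = 10.0  # illustrative threshold
in_window = (bics - bics.min()) <= cutoff

# Bayesian-model-averaging-style weights from BIC differences
# (smaller BIC -> larger weight).
weights = np.where(in_window, np.exp(-0.5 * (bics - bics.min())), 0.0)
weights /= weights.sum()

# Weighted average of component membership probabilities. This assumes the
# component labels agree across models; in general they must be matched or
# merged first (the paper merges components using the adjusted Rand index).
avg_resp = sum(w * m.predict_proba(X) for w, m in zip(weights, models))
labels = avg_resp.argmax(axis=1)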


Variational Bayes Approximations for Clustering via Mixtures of Normal Inverse Gaussian Distributions

arXiv.org Machine Learning

Parameter estimation for model-based clustering using a finite mixture of normal inverse Gaussian (NIG) distributions is achieved through variational Bayes approximations. Univariate NIG mixtures and multivariate NIG mixtures are considered. The use of variational Bayes approximations here is a substantial departure from the traditional EM approach and alleviates some of the associated computational complexities and uncertainties. Our variational algorithm is applied to simulated and real data. The paper concludes with discussion and suggestions for future work.
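The paper's variational algorithm is specific to NIG mixtures; as a rough point of reference for the general mechanics of variational Bayes in mixture clustering, the sketch below applies scikit-learn's BayesianGaussianMixture to simulated Gaussian data. The data, priors, and component count are illustrative assumptions, and none of this reproduces the NIG-specific updates.

# Sketch: variational Bayes for mixture clustering, shown on the Gaussian
# analogue with scikit-learn's BayesianGaussianMixture. The paper's algorithm
# targets normal inverse Gaussian (NIG) components, which this does not cover.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import BayesianGaussianMixture

X, _ = make_blobs(n_samples=400, centers=3, cluster_std=1.2, random_state=1)

# Deliberately over-specify the number of components; the Dirichlet prior on
# the mixing weights lets the variational posterior switch unused ones off.
vb = BayesianGaussianMixture(
    n_components=8,
    weight_concentration_prior=0.01,
    max_iter=500,
    random_state=1,
).fit(X)

labels = vb.predict(X)
effective_k = np.sum(vb.weights_ > 1e-2)  # components carrying real mass
print("effective number of components:", effective_k)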


Parsimonious Shifted Asymmetric Laplace Mixtures

arXiv.org Machine Learning

A family of parsimonious shifted asymmetric Laplace mixture models is introduced. We extend the mixture of factor analyzers model to the shifted asymmetric Laplace distribution. Imposing constraints on the constituent parts of the resulting decomposed component scale matrices leads to a family of parsimonious models. An explicit two-stage parameter estimation procedure is described, and the Bayesian information criterion and the integrated completed likelihood are compared for model selection. This novel family of models is applied to real data, where it is compared to its Gaussian analogue within clustering and classification paradigms.
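To illustrate the BIC-versus-ICL comparison mentioned above (using the Gaussian analogue rather than the shifted asymmetric Laplace family), the sketch below computes both criteria across candidate numbers of components, approximating ICL as BIC plus twice the entropy of the estimated soft memberships. The data and candidate range are illustrative assumptions.

# Sketch: comparing BIC and ICL for selecting among mixture models, shown on
# Gaussian mixtures as a stand-in for the shifted asymmetric Laplace family.
# ICL is approximated as BIC plus twice the entropy of the soft memberships;
# both criteria are "smaller is better" in this convention.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=2.0, random_state=2)

def icl(model, X, eps=1e-12):
    resp = model.predict_proba(X)
    entropy = -np.sum(resp * np.log(resp + eps))
    return model.bic(X) + 2.0 * entropy

results = []
for k in range(1, 8):
    m = GaussianMixture(n_components=k, n_init=5, random_state=2).fit(X)
    results.append((k, m.bic(X), icl(m, X)))

for k, b, i in results:
    print(f"G = {k}: BIC = {b:.1f}, ICL = {i:.1f}")

best_bic = min(results, key=lambda r: r[1])[0]
best_icl = min(results, key=lambda r: r[2])[0]
print("BIC picks", best_bic, "components; ICL picks", best_icl)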


A Modern Retrospective on Probabilistic Numerics

arXiv.org Machine Learning

The field of probabilistic numerics (PN), loosely speaking, attempts to provide a statistical treatment of the errors and/or approximations that are made en route to the output of a deterministic numerical method, e.g. the approximation of an integral by quadrature, or the discretised solution of an ordinary or partial differential equation. This decade has seen a surge of activity in this field. In comparison with historical developments that can be traced back over more than a hundred years, the most recent developments are particularly interesting because they have been characterised by simultaneous input from multiple scientific disciplines: mathematics, statistics, machine learning, and computer science. The field has, therefore, advanced on a broad front, with contributions ranging from the building of overarching general theory to practical implementations in specific problems of interest. Over the same period of time, and because of increased interaction among researchers coming from different communities, the extent to which these developments were -- or were not -- presaged by twentieth-century researchers has also come to be better appreciated. Thus, the time appears to be ripe for an update of the 2014 Tübingen Manifesto on probabilistic numerics [Hennig, 2014; Osborne, 2014d,c,b,a] and the position paper [Hennig et al., 2015] to take account of the developments between 2014 and 2019, an improved awareness of the history of this field, and a clearer sense of its future directions. In this article, we aim to summarise some of the history of probabilistic perspectives on numerics (Section 2), to place more recent developments into context (Section 3), and to articulate a vision for future research in, and use of, probabilistic numerics (Section 4).
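As a concrete instance of the quadrature example mentioned in the opening sentence, the sketch below implements a minimal one-dimensional Bayesian quadrature rule: a Gaussian process prior with a squared-exponential kernel is conditioned on a few function evaluations, and the posterior mean of the integral follows in closed form. The kernel, length-scale, nodes, and integrand are illustrative choices, not taken from the article.

# Sketch: minimal 1-D Bayesian quadrature, a canonical probabilistic-numerics
# example. A GP prior with a squared-exponential kernel is conditioned on a few
# evaluations of f, and the posterior mean of the integral over [a, b] is
# z @ K^{-1} @ y, where z_i = integral over [a, b] of k(x, x_i) dx (closed form).
import numpy as np
from scipy.special import erf

def sqexp(x1, x2, ell=0.5):
    # Squared-exponential kernel matrix between two sets of 1-D points.
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

def kernel_integral(nodes, a, b, ell=0.5):
    # integral over [a, b] of exp(-(x - x_i)^2 / (2 ell^2)) dx, per node.
    c = ell * np.sqrt(np.pi / 2.0)
    s = ell * np.sqrt(2.0)
    return c * (erf((b - nodes) / s) - erf((a - nodes) / s))

a, b = 0.0, np.pi
nodes = np.linspace(a, b, 8)   # where the integrand is evaluated
y = np.sin(nodes)              # true integral of sin over [0, pi] is 2

K = sqexp(nodes, nodes) + 1e-10 * np.eye(len(nodes))  # jitter for stability
z = kernel_integral(nodes, a, b)
posterior_mean = z @ np.linalg.solve(K, y)

print("Bayesian quadrature estimate:", posterior_mean)  # should be close to 2.0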