Doubly Robust Bayesian Inference for Non-Stationary Streaming Data with $\beta$-Divergences

Neural Information Processing Systems

We present the very first robust Bayesian Online Changepoint Detection algorithm through General Bayesian Inference (GBI) with $\beta$-divergences. The resulting inference procedure is doubly robust for both the predictive and the changepoint (CP) posterior, with linear time and constant space complexity. We provide a construction for exponential models and demonstrate it on the Bayesian Linear Regression model. In so doing, we make two additional contributions: Firstly, we make GBI scalable using Structural Variational approximations that are exact as $\beta \to 0$. Secondly, we give a principled way of choosing the divergence parameter $\beta$ by minimizing expected predictive loss on-line. Reducing false discovery rates of CPs from up to 99% to 0% on real-world data, this offers the state of the art.
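For orientation, here is a minimal sketch of the standard (non-robust) BOCPD run-length recursion that this line of work builds on, assuming a univariate Gaussian stream with known observation variance and a conjugate Normal prior on the mean. The paper's robust procedure replaces the likelihood term in such a recursion with a $\beta$-divergence-based score and adds structural variational approximations; none of that is shown here, and the function name and hyperparameters are our own.

```python
import numpy as np
from scipy.stats import norm

def bocpd_gaussian(x, hazard=1.0 / 100, mu0=0.0, kappa0=1.0, sigma=1.0):
    """Run-length posterior R[t, r] for a 1-D stream x (known variance sigma**2)."""
    T = len(x)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0                                  # before any data, run length is 0
    mu = np.array([mu0])                           # posterior mean, one entry per run length
    kappa = np.array([kappa0])                     # posterior "effective count" per run length
    for t, xt in enumerate(x, start=1):
        # predictive density of x_t under every current run length
        pred = norm.pdf(xt, loc=mu, scale=np.sqrt(sigma**2 + sigma**2 / kappa))
        # grow: the current run continues
        R[t, 1:t + 1] = R[t - 1, :t] * pred * (1 - hazard)
        # change: the run length drops back to 0
        R[t, 0] = np.sum(R[t - 1, :t] * pred * hazard)
        R[t] /= R[t].sum()
        # conjugate update of the segment parameters, plus a fresh prior for r = 0
        mu = np.concatenate(([mu0], (kappa * mu + xt) / (kappa + 1)))
        kappa = np.concatenate(([kappa0], kappa + 1))
    return R

# Example: a synthetic stream with a mean shift halfway through
x = np.concatenate([np.random.normal(0, 1, 100), np.random.normal(3, 1, 100)])
R = bocpd_gaussian(x)
print(R[-1].argmax())   # most probable current run length after the last observation
```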


TensorFlow Object Detection API: basics of detection (2/2)

#artificialintelligence

My first (at all!) post was devoted to two basic questions about training detection models with the TensorFlow Object Detection API: how negative examples are mined and how the loss for training is chosen. This time I'd like to cover three more questions. As before, I totally recommend recapping the SSD architecture features via the same links provided in my previous post. In SSD there is no region-proposal step (in contrast with the R-CNN models), and the set of regions considered by the model is completely predefined by the configuration. In short, the features from the feature head of the network are passed to a pipeline of detection blocks. Every detection block receives a tensor that is reduced in spatial size (but is still a representation of the input image) and overlays it with a regular grid whose nodes are later used as centers for the set of assumed bounding boxes.
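As a toy illustration of that last point, the following sketch generates default ("anchor") boxes centered on the nodes of a regular grid laid over a feature map. The scale and aspect ratios are invented for the example and are not the TensorFlow Object Detection API's actual defaults.

```python
import numpy as np

def default_boxes(grid_h, grid_w, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """(cx, cy, w, h) default boxes in normalized [0, 1] image coordinates."""
    boxes = []
    for i in range(grid_h):
        for j in range(grid_w):
            cx, cy = (j + 0.5) / grid_w, (i + 0.5) / grid_h   # grid node -> box center
            for ar in aspect_ratios:
                boxes.append((cx, cy, scale * np.sqrt(ar), scale / np.sqrt(ar)))
    return np.array(boxes)

# e.g. a 19x19 detection block with a box scale of 0.2 of the image size
anchors = default_boxes(19, 19, scale=0.2)
print(anchors.shape)   # (19 * 19 * 3, 4)
```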


Bayesian Model Selection Approach to Boundary Detection with Non-Local Priors

Neural Information Processing Systems

Based on non-local prior distributions, we propose a Bayesian model selection (BMS) procedure for boundary detection in a sequence of data with multiple systematic mean changes. The BMS method can effectively suppress non-boundary spike points with large instantaneous changes. We establish the consistency of the estimated number and locations of the change points under various prior distributions. Extensive simulation studies are conducted to compare BMS with existing methods, and our approach is illustrated with an application to magnetic resonance imaging-guided radiation therapy data.
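To make the model-selection idea concrete, here is a generic conjugate-Normal sketch that scores each candidate boundary by the log Bayes factor of a two-segment model against a single-segment model. It uses an ordinary (local) Normal prior and handles only a single change, so it illustrates Bayesian model selection for a mean change in general rather than the paper's non-local-prior BMS procedure; all names and hyperparameters are ours.

```python
import numpy as np

def log_marginal(x, sigma=1.0, mu0=0.0, tau=1.0):
    """Log marginal likelihood of a Normal(mu, sigma^2) segment with a N(mu0, tau^2) prior on mu."""
    n, xbar = len(x), np.mean(x)
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.log(1 + n * tau**2 / sigma**2)
            - np.sum((x - xbar)**2) / (2 * sigma**2)
            - n * (xbar - mu0)**2 / (2 * (sigma**2 + n * tau**2)))

def best_boundary(x):
    """Return (candidate boundary, log Bayes factor of a change there vs. no change)."""
    lm0 = log_marginal(x)                         # single-segment (no boundary) model
    scores = [log_marginal(x[:k]) + log_marginal(x[k:]) - lm0
              for k in range(2, len(x) - 1)]      # each segment keeps at least 2 points
    k = int(np.argmax(scores)) + 2
    return k, max(scores)

# Example: one mean change at position 50
x = np.concatenate([np.random.normal(0, 1, 50), np.random.normal(2, 1, 50)])
print(best_boundary(x))   # boundary near 50 with a large positive log Bayes factor
```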


Selective Inference for Change Point Detection in Multi-dimensional Sequences

arXiv.org Machine Learning

We study the problem of detecting change points (CPs) that are characterized by a subset of dimensions in a multi-dimensional sequence. Detecting such CPs can be formulated as a two-stage procedure: one stage for selecting the relevant dimensions, and another for selecting the CPs. It has been difficult to properly control the false detection probability of these CP detection methods because the selection bias in each stage must be corrected. Our main contribution in this paper is to formulate the CP detection problem as a selective inference problem, and to show that exact (non-asymptotic) inference is possible for a class of CP detection methods. We demonstrate the performance of the proposed selective inference framework through numerical simulations and through its application to our motivating medical data analysis problem.
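For intuition, the sketch below implements a naive version of the two-stage procedure described above: CUSUM statistics pick the dimensions with the largest apparent mean shift, and the change point is then located on their aggregated CUSUM curve. This naive pipeline is exactly the kind whose selection bias the paper's selective inference framework is designed to correct; the statistic, the number of selected dimensions, and the function names are ours, not the paper's.

```python
import numpy as np

def cusum(x):
    """CUSUM-type statistic for every candidate split k of a 1-D sequence x."""
    n = len(x)
    k = np.arange(1, n)
    left = np.cumsum(x)[:-1] / k                     # mean of x[:k]
    right = (np.sum(x) - np.cumsum(x)[:-1]) / (n - k)  # mean of x[k:]
    return np.sqrt(k * (n - k) / n) * np.abs(left - right)

def two_stage_cp(X, n_dims=3):
    """X: (T, d) sequence. Returns (selected dimensions, estimated CP index)."""
    # Stage 1: select the dimensions whose own CUSUM peak is largest.
    peaks = np.array([cusum(X[:, j]).max() for j in range(X.shape[1])])
    dims = np.argsort(peaks)[-n_dims:]
    # Stage 2: locate the CP on the selected dimensions' aggregated CUSUM curve.
    agg = sum(cusum(X[:, j]) for j in dims)
    return dims, int(np.argmax(agg)) + 1

# Example: a shift at t = 100 affecting 3 of 10 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X[100:, :3] += 2.0
print(two_stage_cp(X, n_dims=3))   # should recover dimensions {0, 1, 2} and a CP near 100
```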


Apple Reportedly Acquires Xnor

#artificialintelligence

Reports are circulating that the Seattle-based "AI at the edge" company Xnor has been quietly acquired by Apple. An investigation by GeekWire suggests the deal was worth in the region of $200 million. This development could mean Xnor's low-power algorithms for object detection in photos end up on the iPhone. Xnor, a spin-out from the Allen Institute for Artificial Intelligence (AI2), had raised $14.6 million in funding since it was founded three years ago. Xnor's founders, Ali Farhadi and Mohammed Rastegari, are the creators of YOLO, a well-known neural network widely used for object detection.