Detection of Anomalous Crowd Behavior Using Spatio-Temporal Multiresolution Model and Kronecker Sum Decompositions

arXiv.org Machine Learning

In this work we consider the problem of detecting anomalous spatio-temporal behavior in videos. Our approach is to learn the normative multiframe pixel joint distribution and detect deviations from it using a likelihood-based approach. Because training samples are extremely scarce relative to the dimension of the distribution, we use a mean-and-covariance approach and consider methods of learning the spatio-temporal covariance in the low-sample regime. Our approach is to estimate the covariance using parameter reduction and sparse models. The first method considered is the representation of the covariance as a sum of Kronecker products, as in (Greenewald et al. 2013), which is found to be an accurate approximation in this setting, and we propose learning algorithms suited to our problem. We then consider the sparse multiresolution model of (Choi et al. 2010) and apply the Kronecker product methods to it for further parameter reduction, introducing modifications for improved efficiency and greater applicability to spatio-temporal covariance matrices. We apply our methods to the detection of crowd behavior anomalies in the University of Minnesota crowd anomaly dataset and achieve competitive results.
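As a concrete illustration of the covariance-estimation step, the sketch below (a minimal example, not the authors' code) forms a single-term Kronecker approximation of a spatio-temporal covariance via the standard Van Loan/Pitsianis rearrangement and scores a test window by its Mahalanobis distance. The paper itself uses a sum of Kronecker products within a multiresolution model, which this sketch does not reproduce; all function names and dimensions here are hypothetical.

```python
# A minimal sketch, not the authors' code: single-term Kronecker approximation
# of a spatio-temporal covariance via the Van Loan/Pitsianis rearrangement,
# plus a Mahalanobis-style anomaly score for a centered test window.
import numpy as np

def rearrange(S, p, q):
    """Rearrange the (p*q)x(p*q) covariance S so that its best rank-1
    approximation corresponds to the nearest Kronecker product
    A (p x p) kron B (q x q)."""
    rows = []
    for i in range(p):
        for j in range(p):
            rows.append(S[i*q:(i+1)*q, j*q:(j+1)*q].reshape(-1))
    return np.array(rows)                      # shape (p*p, q*q)

def nearest_kron(S, p, q):
    """Best single-term approximation S ~ A kron B in Frobenius norm."""
    U, s, Vt = np.linalg.svd(rearrange(S, p, q), full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(p, p)
    B = np.sqrt(s[0]) * Vt[0].reshape(q, q)
    if np.trace(A) < 0:                        # resolve the SVD sign ambiguity
        A, B = -A, -B
    return A, B

def anomaly_score(x, mu, A, B):
    """Negative log-likelihood (up to constants) of x under N(mu, A kron B)."""
    p, q = A.shape[0], B.shape[0]
    X = (x - mu).reshape(p, q)                 # unstack the centered sample
    # For symmetric A, B, (A kron B)^{-1} x corresponds to A^{-1} X B^{-1},
    # so the quadratic form never requires the full (p*q)x(p*q) inverse.
    M = np.linalg.solve(A, X) @ np.linalg.inv(B)
    return float(np.sum(X * M))
```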



Efficient inference in matrix-variate Gaussian models with i.i.d. observation noise

Neural Information Processing Systems

Inference in matrix-variate Gaussian models has major applications for multi-output prediction and joint learning of row and column covariances from matrix-variate data. Here, we discuss an approach for efficient inference in such models that explicitly accounts for i.i.d. observation noise. Computational tractability can be retained by exploiting the Kronecker product between row and column covariance matrices. Using this framework, we show how to generalize the Graphical Lasso in order to learn a sparse inverse covariance between features while accounting for a low-rank confounding covariance between samples. We show practical utility in applications to biology, where we model covariances with more than 100,000 dimensions. We find greater accuracy in recovering biological network structures and are able to better reconstruct the confounders.
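To make the computational trick concrete, the snippet below gives a hedged sketch (not the paper's code) of the standard eigendecomposition identity for a Kronecker-structured covariance with added i.i.d. noise, K = C kron R + s2*I: two small eigendecompositions yield K^{-1} vec(Y) and log|K| without ever forming the full matrix. Function names and sizes are hypothetical.

```python
# A hedged illustration (not the paper's code) of the eigendecomposition
# identity for K = C kron R + s2 * I: two small eigendecompositions give
# K^{-1} vec(Y) and log|K| without forming the (n*d) x (n*d) matrix.
import numpy as np

def kron_noise_solve(C, R, s2, Y):
    """C: d x d column (feature) covariance, R: n x n row (sample) covariance,
    s2: iid noise variance, Y: n x d data matrix.
    Returns K^{-1} vec(Y) reshaped as n x d, and log det K."""
    lam_c, U_c = np.linalg.eigh(C)
    lam_r, U_r = np.linalg.eigh(R)
    D = np.outer(lam_r, lam_c) + s2        # eigenvalues of K on an n x d grid
    Yt = U_r.T @ Y @ U_c                    # rotate into the joint eigenbasis
    alpha = U_r @ (Yt / D) @ U_c.T          # scale and rotate back
    logdet = float(np.sum(np.log(D)))
    return alpha, logdet

# Hypothetical usage with random positive-definite factors.
n, d = 50, 30
R = np.cov(np.random.randn(2 * n, n), rowvar=False) + np.eye(n)
C = np.cov(np.random.randn(2 * d, d), rowvar=False) + np.eye(d)
Y = np.random.randn(n, d)
alpha, logdet = kron_noise_solve(C, R, 1e-2, Y)
```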



Learning to Discover Sparse Graphical Models

arXiv.org Machine Learning

We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task, often requiring the formulation of priors and sophisticated inference procedures. Popular methods rely on estimating a penalized maximum likelihood of the precision matrix. However, in these approaches structure recovery is an indirect consequence of the data-fit term, the penalty can be difficult to adapt to domain-specific knowledge, and the inference is computationally demanding. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structure properties. We propose to leverage this latter source of information as training data for learning a function, parametrized by a neural network, that maps empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery rather than maximizing data likelihood. Applying this framework, we find that our learnable graph-discovery method, trained on synthetic data, generalizes well, identifying relevant edges in both synthetic and real data that were completely unknown at training time. On genetics, brain imaging, and simulation data, we obtain performance generally superior to analytical methods.
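The training recipe described in the abstract can be sketched in a few lines. The toy example below uses a hypothetical small MLP in PyTorch (not the authors' architecture): it samples sparse precision matrices, draws Gaussian data from them, and trains the network to map empirical covariances to edge indicators.

```python
# A toy sketch of the training recipe in the abstract: sample sparse precision
# matrices, draw Gaussian data, and train a network to map empirical
# covariances to edge indicators. The MLP below is a hypothetical stand-in,
# not the authors' architecture; sizes and hyperparameters are illustrative.
import numpy as np
import torch
import torch.nn as nn

P = 10                                  # number of variables (small, for illustration)
iu_feat = np.triu_indices(P)            # features: upper triangle incl. diagonal
iu_edge = np.triu_indices(P, k=1)       # labels: strictly upper triangle (edges)

def sample_task(n_samples=40, edge_prob=0.15):
    """Random sparse precision matrix -> data -> (empirical covariance, edges)."""
    mask = np.triu(np.random.rand(P, P) < edge_prob, k=1).astype(float)
    mask = mask + mask.T
    Theta = 0.4 * mask + (0.4 * mask.sum(1).max() + 1.0) * np.eye(P)  # diagonally dominant
    X = np.random.multivariate_normal(np.zeros(P), np.linalg.inv(Theta), n_samples)
    emp_cov = np.cov(X, rowvar=False)
    return emp_cov[iu_feat].astype(np.float32), (mask > 0)[iu_edge].astype(np.float32)

model = nn.Sequential(nn.Linear(len(iu_feat[0]), 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, len(iu_edge[0])))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):                # train purely on synthetic graphs
    feats, edges = zip(*[sample_task() for _ in range(32)])
    x, y = torch.tensor(np.stack(feats)), torch.tensor(np.stack(edges))
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```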