Bystander effect: Famous psychology result could be completely wrong

New Scientist

If you were being attacked, would anyone stop to help you? A famous result in psychology known as the bystander effect says probably not, but a review of real-life violent situations now suggests this commonly held view may be wrong. The bystander effect holds that in situations such as a robbery or a stabbing, bystanders are less likely to intervene the more people are present. The idea has its roots in the 1964 case of Kitty Genovese, a 28-year-old woman who was raped and murdered in the early morning in her quiet neighbourhood in Queens, New York.


Divide and Recombine for Large and Complex Data: Model Likelihood Functions using MCMC

arXiv.org Machine Learning

In Divide & Recombine (D&R), big data are divided into subsets, analytic methods are applied to each subset, and the outputs are recombined. This enables deep analysis and practical computational performance. An innovative D&R procedure is proposed to compute likelihood functions of data-model (DM) parameters for big data. The likelihood model (LM) is a parametric probability density function of the DM parameters. The density parameters are estimated by fitting the density to MCMC draws from each subset DM likelihood function, and then the fitted densities are recombined. The procedure is illustrated using normal and skew-normal LMs for the logistic regression DM.
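
To make the normal-LM case concrete, here is a minimal sketch (Python/NumPy, with placeholder MCMC output and a made-up two-parameter DM, not the paper's code): a Gaussian density is fitted to the draws from each subset's likelihood, and since the full-data likelihood is the product of the subset likelihoods, the fitted Gaussians are recombined by summing precisions and precision-weighting the means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for per-subset MCMC output: draws over a
# two-dimensional DM parameter from each subset's likelihood function.
def mcmc_draws_for_subset(center, scale, n_draws=5000):
    # In practice these would come from an MCMC sampler run on one data subset.
    return rng.normal(loc=center, scale=scale, size=(n_draws, 2))

subset_draws = [mcmc_draws_for_subset(np.array([0.5, -1.0]), s)
                for s in (0.8, 1.2, 1.0)]

# Fit a normal likelihood model (LM) to each subset's draws.
fitted = [(d.mean(axis=0), np.cov(d, rowvar=False)) for d in subset_draws]

# Recombine: the full-data likelihood is the product of the subset likelihoods,
# so for normal LMs the precisions add and the mean is precision-weighted.
precisions = [np.linalg.inv(S) for _, S in fitted]
combined_cov = np.linalg.inv(sum(precisions))
combined_mean = combined_cov @ sum(P @ m for (m, _), P in zip(fitted, precisions))

print("recombined mean:", combined_mean)
print("recombined covariance:\n", combined_cov)
```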


r/MachineLearning - [D] How is the log marginal likelihood of generative models reported?

#artificialintelligence

Many papers on generative models report the log marginal likelihood in order to quantitatively compare different generative models. Since the log marginal likelihood is intractable, the Importance Weighted Autoencoder (IWAE) bound is commonly reported instead. I don't understand how the bound is computed. I assume that the IWAE is first trained on the dataset and then some synthetic samples from the model in question are used to compute the marginal LL bound. However, I am not entirely sure about the procedure.
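
Not from the thread, but a minimal sketch of how such a bound is typically evaluated for one data point: draw K samples from the trained encoder q(z|x) as importance proposals, form the log weights log p(x, z_k) - log q(z_k|x), and combine them with a log-sum-exp. The toy linear-Gaussian model below is a hypothetical stand-in chosen so the bound can be checked against the exact log p(x); in practice the per-point bounds are averaged over a held-out test set.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def iwae_bound(x, sample_q, log_q, log_joint, K=64, rng=None):
    """IWAE lower bound on log p(x) for a single data point:
    log (1/K) * sum_k p(x, z_k) / q(z_k | x), with z_k ~ q(z | x),
    computed stably in log space."""
    rng = rng or np.random.default_rng()
    z = sample_q(x, K, rng)                    # K proposals from the encoder
    log_w = log_joint(x, z) - log_q(z, x)      # log importance weights
    return logsumexp(log_w) - np.log(K)

# Toy linear-Gaussian "generative model" where log p(x) is known exactly:
# z ~ N(0, 1), x | z ~ N(z, 1), so marginally x ~ N(0, 2).
log_joint = lambda x, z: norm.logpdf(z, 0, 1) + norm.logpdf(x, z, 1)
# Exact posterior used as the encoder here: q(z | x) = N(x/2, 1/2).
log_q     = lambda z, x: norm.logpdf(z, x / 2, np.sqrt(0.5))
sample_q  = lambda x, K, rng: rng.normal(x / 2, np.sqrt(0.5), size=K)

x = 1.3
print("IWAE bound:     %.4f" % iwae_bound(x, sample_q, log_q, log_joint, K=64))
print("exact log p(x): %.4f" % norm.logpdf(x, 0, np.sqrt(2)))
```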


Characterizing response behavior in multisensory perception with conflicting cues

Neural Information Processing Systems

We explore a recently proposed mixture model approach to understanding interactions between conflicting sensory cues. Alternative model formulations, differing in their sensory noise models and inference methods, are compared based on their fit to experimental data. Heavy-tailed sensory likelihoods yield a better description of the subjects' response behavior than standard Gaussian noise models. We study the underlying cause for this result, and then present several testable predictions of these models.
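
As an illustration of why the tail behaviour matters (a toy forced-fusion sketch contrasting likelihood shapes only, not the paper's mixture model or data): with a Gaussian sensory likelihood, a strongly conflicting cue pulls the fused estimate toward itself, whereas a heavy-tailed Student-t likelihood largely discounts it.

```python
import numpy as np
from scipy.stats import norm, t as student_t

# Two conflicting cue measurements (e.g. visual and auditory location, in degrees),
# with the visual cue assumed to be the more reliable one.
x_vis, x_aud = 0.0, 12.0
sig_vis, sig_aud = 1.0, 3.0
nu = 3.0                          # degrees of freedom of the heavy-tailed likelihood

# Grid over the latent source location, flat prior.
s = np.linspace(-20.0, 30.0, 2001)
ds = s[1] - s[0]

def posterior(logpdf):
    logp = logpdf(x_vis, sig_vis) + logpdf(x_aud, sig_aud)
    p = np.exp(logp - logp.max())
    return p / (p.sum() * ds)

post_gauss = posterior(lambda x, sig: norm.logpdf(x, loc=s, scale=sig))
post_heavy = posterior(lambda x, sig: student_t.logpdf(x, df=nu, loc=s, scale=sig))

# Gaussian fusion puts the posterior mode between the cues, while the
# heavy-tailed likelihood leaves it near the reliable cue, with only a small
# secondary bump of probability mass at the discrepant cue.
for name, post in [("Gaussian", post_gauss), ("Student-t", post_heavy)]:
    print(f"{name:9s}  posterior mode = {s[post.argmax()]:6.2f}")
```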


An Improved EM algorithm

arXiv.org Artificial Intelligence

In this paper, we first give a brief introduction to the expectation maximization (EM) algorithm and then discuss its sensitivity to initial values. Subsequently, we give a short proof of EM's convergence. We then run experiments with the expectation maximization algorithm on the Gaussian mixture model (GMM) in three cases: random initialization, initialization with the result of K-means, and initialization with the result of K-medoids. The experimental results show that the expectation maximization algorithm depends on its initial state or parameters, and that EM initialized with K-medoids performs better than EM initialized with K-means or initialized randomly.
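
A rough sketch of this kind of comparison, using scikit-learn's GaussianMixture on synthetic data rather than the paper's setup; only random and K-means initializations are compared here, with the K-medoids variant noted in a comment since it needs an extra package.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Synthetic data from three Gaussian clusters of differing spread.
X, _ = make_blobs(n_samples=1500, centers=3, cluster_std=[1.0, 2.5, 0.5],
                  random_state=0)

# Fit a GMM with EM under two initialization schemes and compare the
# converged average log-likelihood per sample (higher is better).
for init in ("random", "kmeans"):
    gmm = GaussianMixture(n_components=3, init_params=init, n_init=1,
                          max_iter=200, random_state=0)
    gmm.fit(X)
    print(f"init={init:7s}  converged={gmm.converged_}  "
          f"avg log-likelihood={gmm.score(X):.4f}")

# The paper's K-medoids initialization could be approximated by clustering X
# with, e.g., sklearn_extra.cluster.KMedoids (scikit-learn-extra package) and
# passing the medoids to GaussianMixture via means_init=.
```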