Author Feedback
We have addressed the reviewers' comments by running seven new experiments, which shed useful new light on some of the questions raised, including:

R2: GSP seems intuitively dependent on parametrization; can you discuss?
R3: Does the benefit of aggregation disappear once you take into account the number of responses required?
R3: How do the experimenters avoid subjects merely making the same response 10 times?
R3: It would be worth discussing how the technique differs from e.g. ...

GSP is more mode-seeking than MCMCP, but nonetheless recovers the utility function more reliably (Fig. D).
Review for NeurIPS paper: Gibbs Sampling with People
Weaknesses: Overall, I thought this was a strong paper. The main concerns I had were as follows. (1) Mode-seeking versus showing the distribution: the aggregated results in the first experiment seem to show much more homogeneity than the results for GSP or MCMCP. One limitation of this approach might be that there is limited exploration of the space, perhaps making it hard to move between modes and more difficult to see the full shape of the distribution, which I have often taken to be a goal in work using MCMCP. The movement between optimization and seeking a distribution is discussed to some extent in the paper, but I would be interested in seeing this discussed more (and perhaps whether GSP without aggregation is likely to lead to more optimization than MCMCP). In the author response, the authors have shown additional information suggesting that GSP is more mode-seeking but also does a better job of capturing the distribution.
Gibbs Sampling with People
Harrison, Peter M. C., Marjieh, Raja, Adolfi, Federico, van Rijn, Pol, Anglada-Tort, Manuel, Tchernichovski, Ofer, Larrouy-Maestri, Pauline, Jacoby, Nori
A core problem in cognitive science and machine learning is to understand how humans derive semantic representations from perceptual objects, such as color from an apple, pleasantness from a musical chord, or trustworthiness from a face. Markov Chain Monte Carlo with People (MCMCP) is a prominent method for studying such representations, in which participants are presented with binary choice trials constructed such that the decisions follow a Markov Chain Monte Carlo acceptance rule. However, MCMCP's binary choice paradigm generates relatively little information per trial, and its local proposal function makes it slow to explore the parameter space and find the modes of the distribution. Here we therefore generalize MCMCP to a continuous-sampling paradigm, where in each iteration the participant uses a slider to continuously manipulate a single stimulus dimension to optimize a given criterion such as 'pleasantness'. We formulate both methods from a utility-theory perspective, and show that the new method can be interpreted as 'Gibbs Sampling with People' (GSP). Further, we introduce an aggregation parameter to the transition step, and show that this parameter can be manipulated to flexibly shift between Gibbs sampling and deterministic optimization. In an initial study, we show GSP clearly outperforming MCMCP; we then show that GSP provides novel and interpretable results in three other domains, namely musical chords, vocal emotions, and faces. We validate these results through large-scale perceptual rating experiments. The final experiments combine GSP with a state-of-the-art image synthesis network (StyleGAN) and a recent network interpretability technique (GANSpace), enabling GSP to efficiently explore high-dimensional perceptual spaces, and demonstrating how GSP can be a powerful tool for jointly characterizing semantic representations in humans and machines.
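The GSP transition and its aggregation parameter can be illustrated with a toy simulation. The sketch below assumes a hypothetical participant model, not the paper's implementation: each slider trial returns a noisy sample from the conditional of an axis-aligned 2-D Gaussian "utility" peaked at an illustrative mode, and averaging `aggregate` responses per trial shifts the chain from Gibbs sampling toward deterministic optimization. All names and constants here (`MODE`, `NOISE`, `participant_response`) are assumptions for illustration.

```python
import random
import statistics

# Hypothetical mode and response noise of the underlying utility function.
MODE = (0.3, -0.7)
NOISE = 0.5

def participant_response(state, dim):
    """One slider trial: sample the conditional along dimension `dim`.
    The toy utility is an axis-aligned Gaussian, so the conditional does
    not depend on the other coordinate in `state`."""
    return random.gauss(MODE[dim], NOISE)

def gsp_chain(n_iters, aggregate=1, seed=0):
    """Run a toy GSP chain. `aggregate` averages several responses per
    trial, shifting the method from Gibbs sampling toward optimization."""
    random.seed(seed)
    state = [0.0, 0.0]
    trace = []
    for t in range(n_iters):
        dim = t % 2  # cycle through stimulus dimensions, one per trial
        responses = [participant_response(state, dim) for _ in range(aggregate)]
        state[dim] = statistics.mean(responses)  # aggregated transition
        trace.append(tuple(state))
    return trace

# Averaging k responses shrinks the conditional's standard deviation by
# sqrt(k), so the aggregated chain concentrates near the mode.
sampling = gsp_chain(2000, aggregate=1)
optimizing = gsp_chain(2000, aggregate=10)
spread = lambda tr: statistics.pstdev(x for x, _ in tr[100:])
print(spread(sampling) > spread(optimizing))  # prints True
```

With `aggregate=1` the chain explores the full conditional distribution; with larger values it behaves increasingly like coordinate-wise hill climbing, which mirrors the sampling-to-optimization trade-off the abstract describes.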