BRUNO: A Deep Recurrent Model for Exchangeable Data
We present a novel model architecture which leverages deep learning tools to perform exact Bayesian inference on sets of high dimensional, complex observations. Our model is provably exchangeable, meaning that the joint distribution over observations is invariant under permutation: this property lies at the heart of Bayesian inference. The model does not require variational approximations to train, and new samples can be generated conditional on previous samples, with cost linear in the size of the conditioning set. The advantages of our architecture are demonstrated on learning tasks that require generalisation from short observed sequences while modelling sequence variability, such as conditional image generation, few-shot learning, and anomaly detection.
Reviews: BRUNO: A Deep Recurrent Model for Exchangeable Data
This paper introduces an unsupervised approach to modeling exchangeable data. The proposed method learns an invertible mapping from a latent representation, distributed as correlated-but-exchangeable multivariate-t random variables, to an implicit data distribution that can be evaluated efficiently with recurrent neural networks. I found the paper interesting and well written. The justification and evaluation of the method could, however, be much better. In particular, the authors do not provide good motivation for their choice of a multivariate t-distribution beyond its two standard properties: 1) the posterior variance is data-dependent, and 2) it has heavier tails than the normal distribution.
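The latent structure the review describes, correlated but exchangeable multivariate-t variables, can be illustrated with a compound-symmetric covariance (equal variances on the diagonal, equal pairwise covariances off it): permuting the observations then leaves the joint density unchanged. A minimal sketch of that invariance check, not BRUNO's actual implementation; the dimension `n`, correlation `rho`, and degrees of freedom `df` below are hypothetical values, not taken from the paper:

```python
import numpy as np
from scipy.stats import multivariate_t

# Compound-symmetric (exchangeable) covariance: variance v on the diagonal,
# covariance rho * v everywhere else. All values here are illustrative.
n, v, rho, df = 5, 1.0, 0.6, 3.0
cov = v * (rho * np.ones((n, n)) + (1.0 - rho) * np.eye(n))

x = np.array([0.3, -1.2, 0.7, 2.0, -0.4])
perm = np.random.default_rng(0).permutation(n)

# Joint log-density before and after permuting the coordinates of x.
lp = multivariate_t.logpdf(x, loc=np.zeros(n), shape=cov, df=df)
lp_perm = multivariate_t.logpdf(x[perm], loc=np.zeros(n), shape=cov, df=df)

# Exchangeability: the density is invariant under permutation.
assert np.isclose(lp, lp_perm)
```

Any covariance that is invariant under simultaneous row/column permutation yields this property; the compound-symmetric form is simply the generic exchangeable case.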
Game of Sketches: Deep Recurrent Models of Pictionary-Style Word Guessing
Sarvadevabhatla, Ravi Kiran (Indian Institute of Science) | Surya, Shiv (Indian Institute of Science) | Mittal, Trisha (Indian Institute of Science) | Babu, R. Venkatesh (Indian Institute of Science)
The ability of machine-based agents to play games in a human-like fashion is considered a benchmark of progress in AI. In this paper, we introduce the first computational model aimed at Pictionary, the popular word-guessing social game. We first introduce Sketch-QA, an elementary version of the Visual Question Answering task. Styled after Pictionary, Sketch-QA uses incrementally accumulated sketch stroke sequences as visual data. Notably, Sketch-QA involves asking a fixed question ("What object is being drawn?") and gathering open-ended guess-words from human guessers. To mimic Pictionary-style guessing, we propose a deep neural model that generates guess-words in response to temporally evolving human-drawn sketches. Our model even makes human-like mistakes while guessing, thus amplifying the human-mimicry factor. We evaluate our model on the large-scale guess-word dataset generated via the Sketch-QA task and compare it with various baselines. We also conduct a Visual Turing Test to obtain human impressions of the guess-words generated by humans and by our model. Experimental results demonstrate the promise of our approach for Pictionary and similarly themed games.
- Leisure & Entertainment > Games (1.00)
- Health & Medicine (0.68)
- Information Technology > Artificial Intelligence > Vision (0.94)
- Information Technology > Artificial Intelligence > Natural Language (0.89)
- Information Technology > Communications > Social Media (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.49)