Altitude Training: Strong Bounds for Single-Layer Dropout

Neural Information Processing Systems

Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks. This paper proposes a theoretical explanation for this phenomenon: we show that, under a generative Poisson topic model with long documents, dropout training improves the exponent in the generalization bound for empirical risk minimization. Dropout achieves this gain much like a marathon runner who practices at altitude: once a classifier learns to perform reasonably well on training examples that have been artificially corrupted by dropout, it will do very well on the uncorrupted test set. We also show that, under similar conditions, dropout preserves the Bayes decision boundary and should therefore induce minimal bias in high dimensions.
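The corruption step behind this "altitude training" can be sketched in a few lines, assuming a bag-of-words feature matrix; the toy counts and the inverted-scaling convention below are illustrative, not taken from the paper:

```python
import numpy as np

def dropout_corrupt(X, p, rng):
    """Zero out each feature independently with probability p,
    rescaling survivors by 1/(1 - p) so expected feature values
    match the uncorrupted data."""
    mask = rng.random(X.shape) >= p
    return X * mask / (1.0 - p)

# Toy bag-of-words counts: 4 "documents" over a 6-word vocabulary.
X = np.array([[3, 0, 1, 0, 2, 0],
              [0, 4, 0, 1, 0, 2],
              [2, 1, 0, 0, 3, 0],
              [0, 2, 1, 3, 0, 1]], dtype=float)

rng = np.random.default_rng(0)
X_train = dropout_corrupt(X, p=0.5, rng=rng)
# A classifier fit on X_train (corrupted) is then evaluated on the
# clean test distribution: train at altitude, race at sea level.
```

With p = 0.5, each surviving count is doubled, so the corrupted features are unbiased estimates of the originals.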


Generating Videos with Scene Dynamics

Neural Information Processing Systems

We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.
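The foreground/background untangling can be illustrated by the compositing step such a two-stream generator performs: a mask blends a per-frame foreground stream with a static background frame. A minimal numpy sketch, where the tensor shapes and names are our assumptions rather than the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the two generator streams, shaped
# (time, height, width, channels) for a tiny 4-frame clip.
foreground = rng.random((4, 8, 8, 3))  # per-frame foreground stream
background = rng.random((1, 8, 8, 3))  # single static background frame
mask = rng.random((4, 8, 8, 1))        # per-pixel blend weights in [0, 1]

# Composite: video = m * f + (1 - m) * b, with the lone background
# frame broadcast across the time axis.
video = mask * foreground + (1.0 - mask) * background
```

Because the background stream is a single frame broadcast over time, anything that moves must be explained by the foreground stream, which is what encourages the untangling.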


Convolutional neural network with The Simpsons

#artificialintelligence

A convolutional neural network (CNN) is a type of neural network especially useful for image classification tasks. I applied a CNN to thousands of Simpsons images, training the classifier to recognise 10 characters from the TV show with an accuracy of more than 90 percent.
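The pipeline behind a result like this — convolution, nonlinearity, pooling, then a score per character — can be sketched from scratch. Below is a single-filter toy forward pass with random weights; the image, filter, and layer sizes are illustrative only, not the project's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy 12x12 grayscale "frame" and one random 3x3 filter.
image = rng.random((12, 12))
kernel = rng.standard_normal((3, 3))

features = np.maximum(conv2d(image, kernel), 0.0)   # conv + ReLU
pooled = max_pool(features)                         # 2x2 max pooling
logits = rng.standard_normal((10, pooled.size)) @ pooled.ravel()
probs = softmax(logits)                             # scores for 10 characters
```

A real classifier stacks many such filters and layers and learns the weights by gradient descent; this sketch only shows the shape of the computation.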


Building AI to inform people's fashion choices with Fashion

#artificialintelligence

An AI system that proposes easy changes to a person's outfit to make it more fashionable. Our Fashion system uses a deep image-generation neural network to recognize garments and offer suggestions on what to remove, add, or swap. It can also recommend ways to adjust a piece of clothing, such as tucking in a shirt or rolling up the sleeves. Whereas previous work in this area has explored ways to recommend an entirely new outfit or to identify garments that are similar to one another, Fashion instead aims to suggest subtle alterations to an existing outfit that will make it more stylish. Fashion focuses specifically on minimal edits, suggesting adjustments that are more realistic and practical than buying an entirely new outfit.


Structural Causal Bandits: Where to Intervene?

Neural Information Processing Systems

We study the problem of identifying the best action in a sequential decision-making setting when the reward distributions of the arms exhibit a non-trivial dependence structure, which is governed by the underlying causal model of the domain where the agent is deployed. In this paper, we show that whenever the underlying causal model is not taken into account during the decision-making process, the standard strategies of simultaneously intervening on all variables or on all the subsets of the variables may, in general, lead to suboptimal policies, regardless of the number of interventions performed by the agent in the environment. We formally acknowledge this phenomenon and investigate structural properties implied by the underlying causal model, which lead to a complete characterization of the relationships between the arms' distributions. We leverage this characterization to build a new algorithm that takes as input a causal structure and finds a minimal, sound, and complete set of qualified arms that an agent should play to maximize its expected reward. We empirically demonstrate that the new strategy learns an optimal policy and leads to orders of magnitude faster convergence rates when compared with its causal-insensitive counterparts.
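To see why ignoring the causal structure can waste interventions, consider a hand-built toy structural causal model — our illustration, not the paper's algorithm. A hidden coin U drives both a reward-irrelevant variable X1 and the reward Y, while X2 is the only variable Y actually responds to:

```python
from itertools import product

# Toy SCM:  U ~ Bernoulli(0.5) (unobserved),  X1 := U,
#           X2 ~ Bernoulli(0.5),  Y := 1 if (X2 or U) else 0.
# Arms are hard interventions do(...) on subsets of {X1, X2}.

def expected_reward(do):
    """Exact E[Y] under intervention `do`, averaging over U and the
    exogenous coin behind any unintervened X2."""
    total = 0
    for u, x2_coin in product([0, 1], repeat=2):
        x2 = do.get("X2", x2_coin)  # do() overrides X2's mechanism
        total += 1 if (x2 or u) else 0
    return total / 4

arms = [{}, {"X1": 0}, {"X1": 1}, {"X2": 0}, {"X2": 1},
        {"X1": 0, "X2": 1}, {"X1": 1, "X2": 0}]

values = {str(a): expected_reward(a) for a in arms}
# do(X2=1) is optimal (E[Y] = 1.0), while every arm touching X1
# merely duplicates the value of its X2-part, so the agent can
# restrict play to three qualified arms: {}, do(X2=0), do(X2=1).
```

In this toy model a causal-insensitive bandit explores seven arms while a structure-aware one explores three, which is the flavor of the convergence gap the abstract reports, though the paper's characterization of qualified arms is far more general.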