Review for NeurIPS paper: Counterfactual Vision-and-Language Navigation: Unravelling the Unseen


Summary and Contributions: This paper introduces a method for generating *counterfactual* visual features to augment the training of vision-and-language navigation (VLN) models, which predict a sequence of actions to carry out a natural language instruction while conditioning on a sequence of visual inputs. Counterfactual training examples are produced by perturbing the visual features of an original training example with a linear combination of visual features from a similar training example. The weights of the linear combination (exogenous variables) are optimized to jointly minimize the edit to the original features and maximize the probability that a separate speaker (instruction-generation) model assigns to the true instruction conditioned on the resulting counterfactual features, subject to the constraint that the counterfactual features change the interpretation model's predicted action at every timestep. Once these counterfactual features are produced, the model is trained to assign equal probability to the actions of the original example when conditioning on either the original or the counterfactual features (in imitation learning), or to obtain equal reward under both (in reinforcement learning). The method improves performance on unseen environments in the R2R benchmark for VLN, and also shows improvements on embodied question answering.
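The counterfactual generation step described above can be sketched as a small optimization problem. The following is a minimal, hypothetical illustration (not the authors' implementation): exogenous mixing weights `u` interpolate between the original features `f` and a similar example's features `f_sim`, and are optimized to keep the edit small while keeping a speaker model's log-probability of the true instruction high. The `speaker_logprob` callable, the sigmoid parameterization of `u`, and all hyperparameter values are assumptions for illustration; the paper's additional constraint that the counterfactual must change the predicted action is omitted for brevity.

```python
import torch

def counterfactual_features(f, f_sim, speaker_logprob,
                            n_steps=200, lam=5.0, lr=0.05):
    """Optimize exogenous mixing weights u in (0, 1) so that
    f_cf = (1 - u) * f + u * f_sim stays close to f (small edit)
    while the speaker still assigns high log-probability to the
    true instruction given f_cf. Hypothetical sketch only."""
    # Parameterize u through a sigmoid; start near u ~ 0 (f_cf ~ f).
    u_raw = torch.full_like(f, -4.0, requires_grad=True)
    opt = torch.optim.Adam([u_raw], lr=lr)
    for _ in range(n_steps):
        u = torch.sigmoid(u_raw)
        f_cf = (1 - u) * f + u * f_sim      # linear combination
        edit = ((f_cf - f) ** 2).sum()       # minimize the edit
        nll = -speaker_logprob(f_cf)         # maximize instruction prob.
        loss = edit + lam * nll
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        u = torch.sigmoid(u_raw)
        return (1 - u) * f + u * f_sim

# Toy usage with a stand-in "speaker": its log-prob peaks at the
# midpoint of f and f_sim (an assumption purely for demonstration).
torch.manual_seed(0)
f = torch.randn(8)
f_sim = torch.randn(8)
target = 0.5 * (f + f_sim)
speaker = lambda x: -((x - target) ** 2).sum()
f_cf = counterfactual_features(f, f_sim, speaker)
```

The resulting `f_cf` lies between the original and the similar example's features, closer to the original when the edit penalty dominates and closer to the speaker's preferred region when `lam` is large.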