A Continuous Relaxation of Beam Search for End-to-End Training of Neural Sequence Models
Goyal, Kartik (Carnegie Mellon University, Language Technologies Institute) | Neubig, Graham (Carnegie Mellon University, Language Technologies Institute) | Dyer, Chris (DeepMind) | Berg-Kirkpatrick, Taylor (Carnegie Mellon University, Language Technologies Institute)
Beam search is a desirable choice of test-time decoding algorithm for neural sequence models because it potentially avoids search errors made by simpler greedy methods. However, typical cross-entropy training procedures for these models do not directly consider the behaviour of the final decoding method. As a result, for cross-entropy trained models, beam decoding can sometimes yield reduced test performance when compared with greedy decoding. In order to train models that can more effectively make use of beam search, we propose a new training procedure that focuses on the final loss metric (e.g. Hamming loss) evaluated on the output of beam search. While well-defined, this "direct loss" objective is itself discontinuous and thus difficult to optimize. Hence, in our approach, we form a sub-differentiable surrogate objective by introducing a novel continuous approximation of the beam search decoding procedure. In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross-entropy trained greedy decoding and cross-entropy trained beam decoding baselines.
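The core obstacle the abstract describes is that selecting the best-scoring candidate at each decoding step (argmax) is discontinuous. A minimal sketch of the kind of continuous surrogate involved, assuming a temperature-controlled soft selection (a "peaked softmax"); the paper's actual relaxation extends this idea over the full set of beam candidates, and the function name and shapes here are illustrative, not the authors' implementation:

```python
import numpy as np

def soft_argmax(scores, embeddings, temperature=1.0):
    """Differentiable surrogate for hard argmax candidate selection.

    Instead of picking the single best-scoring candidate (a discontinuous
    operation), return a softmax-weighted average of the candidate
    embeddings. As temperature -> 0 the weights approach a one-hot vector,
    recovering the hard argmax choice.

    scores:     (K,)   score of each of K candidate successors
    embeddings: (K, D) representation of each candidate
    """
    z = scores / temperature
    z = z - z.max()                       # subtract max for numerical stability
    w = np.exp(z) / np.exp(z).sum()       # soft selection weights, sum to 1
    return w @ embeddings                 # convex combination of candidates
```

At a low temperature the output is nearly the embedding of the top-scoring candidate, while remaining a smooth function of the scores, which is what makes end-to-end training through the decoding procedure possible.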
Gated-Attention Architectures for Task-Oriented Language Grounding
Chaplot, Devendra Singh (Carnegie Mellon University) | Sathyendra, Kanthashree Mysore (Carnegie Mellon University, Language Technologies Institute) | Pasumarthi, Rama Kumar (Carnegie Mellon University, Language Technologies Institute) | Rajagopal, Dheeraj (Carnegie Mellon University, Language Technologies Institute) | Salakhutdinov, Ruslan (Carnegie Mellon University)
To perform tasks specified by natural language instructions, autonomous agents need to extract semantically meaningful representations of language and map them to visual elements and actions in the environment. This problem is called task-oriented language grounding. We propose an end-to-end trainable neural architecture for task-oriented language grounding in 3D environments which assumes no prior linguistic or perceptual knowledge and requires only raw pixels from the environment and the natural language instruction as input. The proposed model combines the image and text representations using a Gated-Attention mechanism and learns a policy to execute the natural language instruction using standard reinforcement and imitation learning methods. We show the effectiveness of the proposed model on unseen instructions as well as unseen maps, both quantitatively and qualitatively. We also introduce a novel environment based on a 3D game engine to simulate the challenges of task-oriented language grounding over a rich set of instructions and environment states.
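The fusion step described above can be sketched as gating each channel of the convolutional image features by a sigmoid function of the instruction embedding. This is an illustrative sketch under assumed shapes (the projection matrix `W` and the exact tensor layout are assumptions, not the authors' published implementation):

```python
import numpy as np

def gated_attention(image_feats, instruction_emb, W):
    """Minimal sketch of a Gated-Attention fusion of image and text.

    image_feats:     (C, H, Wd) convolutional feature maps from raw pixels
    instruction_emb: (D,)       encoding of the natural language instruction
    W:               (C, D)     assumed linear map from text embedding to
                                one gate value per image channel
    """
    # Sigmoid gate in [0, 1] per channel, derived from the instruction.
    gates = 1.0 / (1.0 + np.exp(-(W @ instruction_emb)))   # shape (C,)
    # Broadcast each channel's gate over all spatial locations and modulate
    # the image representation by elementwise multiplication.
    return image_feats * gates[:, None, None]
```

The elementwise product lets the instruction suppress or emphasize whole feature channels (e.g. those responding to a mentioned color or object type) before the policy network acts on the fused representation.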