This is the second of a two-part post in which I describe four broad research trends that I observed at ACL 2017. In Part One I explored the shifting assumptions we make about language, at both the sentence and the word level, and how these shifts are prompting both a comeback of linguistic structure and a re-evaluation of word embeddings. In this part, I will discuss two more closely interrelated themes: interpretability and attention.

Throughout, green links are ordinary hyperlinks, while blue links lead to papers and offer bibliographic information when you hover over them (not supported on mobile).

I've been thinking about interpretability a lot recently, and I'm not alone: among deep learning practitioners, the dreaded "black box" quality of neural networks makes them notoriously hard to control, hard to debug, and thus hard to develop.
Sep-14-2017, 05:25:18 GMT