These machine learning systems continually learn and readjust in order to carry out the tasks set before them. At the 2017 Conference on Empirical Methods in Natural Language Processing earlier this month, MIT researchers demonstrated a new general-purpose technique for making sense of neural networks trained to perform natural-language-processing tasks, in which the networks attempt to extract information from text written in ordinary language, as opposed to a structured language such as a database query language.

As part of the research, Jaakkola and colleague David Alvarez-Melis, an MIT graduate student in electrical engineering and computer science and first author on the paper, used a neural net to generate test sentences with which to probe black-box neural nets. As training continues, the encoder and decoder are evaluated jointly, according to how closely the decoder's output matches the encoder's input.
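That joint training objective, in which encoder and decoder are judged together by how well the decoder reconstructs the encoder's input, can be illustrated with a toy example. The sketch below is a hypothetical, minimal linear autoencoder on random vectors, not the recurrent sentence model from the paper; all names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for data: 20 samples of 8-dimensional vectors (an assumption;
# the actual research operates on real sentences with a recurrent network).
X = rng.normal(size=(20, 8))

latent_dim = 3
W_enc = rng.normal(scale=0.1, size=(8, latent_dim))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(latent_dim, 8))   # decoder weights

lr = 0.01
for step in range(2000):
    Z = X @ W_enc        # encode input into a low-dimensional representation
    X_hat = Z @ W_dec    # decode back toward the original input
    err = X_hat - X      # reconstruction error: decoder output vs. encoder input

    # Both weight matrices are updated from the same reconstruction loss,
    # so encoder and decoder are evaluated simultaneously during training.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(f"final mean squared reconstruction loss: {loss:.4f}")
```

Because the latent dimension (3) is smaller than the input dimension (8), the reconstruction cannot be perfect; training drives the loss down toward the best low-rank approximation of the data.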
Sep-14-2017, 10:25:17 GMT