Lecture 8: Recurrent Neural Networks and Language Models

#artificialintelligence

Lecture 8 covers traditional language models, RNNs, and RNN language models. It also reviews important training problems and tricks, RNNs for other sequence tasks, and bidirectional and deep RNNs. This lecture series provides a thorough introduction to cutting-edge research in deep learning applied to NLP, an approach that has recently achieved very high performance across many NLP tasks, including question answering and machine translation. It emphasizes how to implement, train, debug, visualize, and design neural network models, covering the main technologies of word vectors, feed-forward models, recurrent neural networks, recursive neural networks, convolutional neural networks, and recent models involving a memory component. For additional learning opportunities, please visit: http://stanfordonline.stanford.edu/
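To make the core idea concrete, here is a minimal sketch of an RNN language model in PyTorch. The vocabulary size, dimensions, and training snippet are illustrative assumptions, not taken from the lecture; gradient clipping stands in for the kind of training trick the lecture discusses.

```python
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    """Minimal RNN language model: predict the next token at every position."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)   # (batch, seq_len, embed_dim)
        h, _ = self.rnn(x)       # (batch, seq_len, hidden_dim)
        return self.proj(h)      # logits over the vocabulary

# Hypothetical usage: shift the sequence by one so position t predicts token t+1.
vocab_size = 1000
model = RNNLanguageModel(vocab_size)
tokens = torch.randint(0, vocab_size, (8, 20))  # fake batch of token ids
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
loss.backward()
# One common trick against exploding gradients in RNN training:
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```

Because the same weights are reused at every time step, the backward pass here is backpropagation through time, which is where the vanishing- and exploding-gradient problems the lecture reviews come from.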


My Process for Learning Natural Language Processing with Deep Learning

#artificialintelligence

I currently work as a Data Scientist for Informatica, and I thought I'd share my process for learning new things. Recently I've been wanting to dig deeper into Deep Learning, especially Machine Vision and Natural Language Processing. I've been procrastinating a lot, mostly because it's been summer, but now that it's fall and starting to cool down and get dark early, I'm going to spend more of that time learning. And the thing that deeply interests me is Deep Learning and Artificial Intelligence, partly out of intellectual curiosity and partly out of greed, since most businesses and products will incorporate Deep Learning/ML in some way. I started doing research and realized that a working knowledge of Deep Learning was within my reach, but also that I still have a lot to learn, more than I initially thought.


Getting started with deep learning in R

#artificialintelligence

There are good reasons to get into deep learning: it has been outperforming classical techniques in areas like image recognition and natural language processing for a while now, and it has the potential to bring interesting insights even to the analysis of tabular data. For many R users interested in deep learning, the hurdle is not so much the mathematical prerequisites (many have a background in statistics or the empirical sciences) but how to get started in an efficient way.


Deep Recursive Neural Networks for Compositionality in Language

Neural Information Processing Systems

Recursive neural networks comprise a class of architectures that can operate on structured input. They have previously been applied successfully to modeling compositionality in natural language using parse-tree-based structural representations. Even though these architectures are deep in structure, they lack the capacity for hierarchical representation that exists in conventional deep feed-forward networks as well as in recently investigated deep recurrent neural networks. In this work we introduce a new architecture, the deep recursive neural network (deep RNN), constructed by stacking multiple recursive layers. We evaluate the proposed model on the task of fine-grained sentiment classification.
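A minimal sketch of the stacking idea, in PyTorch: each layer composes the same binary parse tree bottom-up, and every node additionally receives its own representation from the layer below. The tree encoding, dimensions, and names here are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class DeepRecursiveNN(nn.Module):
    """Sketch of a deep recursive net: stacked layers each re-compose the parse
    tree, conditioning every node on its representation from the layer below."""
    def __init__(self, vocab_size, dim=50, num_layers=3, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Per-layer composition of the two children: [left; right] -> node.
        self.compose = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(num_layers))
        # Per-layer connection from the layer below at the same node.
        self.lift = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))
        self.classify = nn.Linear(dim, num_classes)

    def node_reps(self, tree):
        """A tree is an int (leaf word id) or a pair (left_subtree, right_subtree).
        Returns one representation of this node per layer."""
        if isinstance(tree, int):
            leaf = self.embed(torch.tensor([tree])).squeeze(0)
            return [leaf] * len(self.compose)  # the leaf vector feeds every layer
        left = self.node_reps(tree[0])
        right = self.node_reps(tree[1])
        reps, below = [], None
        for i in range(len(self.compose)):
            h = self.compose[i](torch.cat([left[i], right[i]]))
            if below is not None:
                h = h + self.lift[i](below)  # input from the previous layer
            h = torch.tanh(h)
            reps.append(h)
            below = h
        return reps

    def forward(self, tree):
        # Fine-grained sentiment logits read off the top layer at the root.
        return self.classify(self.node_reps(tree)[-1])

# Hypothetical usage on the parse tree ((1, 2), 3):
model = DeepRecursiveNN(vocab_size=100)
logits = model(((1, 2), 3))
```

Reading the prediction off the top layer reflects the abstract's motivation: higher layers can form increasingly abstract views of the same tree, the hierarchical capacity that a single recursive layer lacks.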


Language, trees, and geometry in neural networks

#artificialintelligence

Left image in each pair: a traditional parse-tree view, except that the vertical length of each branch represents embedding distance. Right images: PCA projection of context embeddings, where color shows deviation from the expected distance.
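A rough sketch of how one might reproduce the right-hand visualization: project a sentence's context embeddings to 2-D with PCA, then compute, for each token pair, how far the embedding distance deviates from the parse-tree distance. The embeddings and tree distances below are random stand-ins, and all names are mine, not the article's.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-ins: context embeddings for the tokens of one sentence, plus
# pairwise parse-tree distances (both random here, for illustration only).
rng = np.random.default_rng(0)
n_tokens, dim = 12, 768
embeddings = rng.normal(size=(n_tokens, dim))
upper = np.triu(rng.integers(1, 6, size=(n_tokens, n_tokens)), 1)
tree_dist = upper + upper.T  # symmetric, zero diagonal

# 2-D PCA projection of the context embeddings (the "right images").
points = PCA(n_components=2).fit_transform(embeddings)

# Deviation of embedding distance from the expected (tree) distance,
# the quantity the figure encodes as color for each token pair.
emb_dist = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
deviation = emb_dist - tree_dist
print(points.shape, deviation.shape)  # (12, 2) (12, 12)
```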