Text Classification with Deep Neural Network in TensorFlow -- Simple Explanation

#artificialintelligence

Text classification implementation with TensorFlow can be simple. One of the areas where text classification can be applied is chatbot text processing and intent resolution. In this post, I will describe step by step how to build a TensorFlow model for text classification and how the classification is done. Please refer to my previous post on a similar topic -- Contextual Chatbot with TensorFlow, Node.js and Oracle JET -- Steps How to Install and Get It Working. I would also recommend going through this great post about chatbot implementation -- Contextual Chatbots with Tensorflow.
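
To make the idea concrete, here is a minimal sketch of this kind of model: a bag-of-words encoding fed into a small feed-forward network that predicts an intent. The vocabulary, intent set, and training sentences below are made-up illustrations, not the post's actual model, so treat this only as a starting point:

```python
import numpy as np
import tensorflow as tf

# Illustrative sketch: intent classification from a bag-of-words encoding.
# The vocabulary, intents, and training sentences are invented examples.
vocab = ["hi", "hello", "bye", "thanks", "help", "order", "pizza"]
intents = ["greeting", "goodbye", "thanks", "order"]

def bow(sentence):
    """Encode a sentence as a binary bag-of-words vector over `vocab`."""
    tokens = sentence.lower().split()
    return np.array([1.0 if w in tokens else 0.0 for w in vocab])

train = [
    ("hi there", "greeting"), ("hello", "greeting"),
    ("bye for now", "goodbye"), ("thanks a lot", "thanks"),
    ("help me order pizza", "order"),
]
X = np.stack([bow(s) for s, _ in train])
y = np.array([intents.index(i) for _, i in train])

# small feed-forward network: one hidden layer, softmax over intents
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(len(vocab),)),
    tf.keras.layers.Dense(len(intents), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=200, verbose=0)

# classification picks the highest-probability intent
probs = model.predict(bow("hello there")[None, :], verbose=0)[0]
print(intents[int(np.argmax(probs))])
```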


Theano Implementation of LambProp? • /r/MachineLearning

#artificialintelligence

Does anyone have an implementation of LambProp, preferably in Theano? The arXiv paper isn't out yet, but I think enough details have been leaked that a strong PhD student should be able to reproduce it. I incorporated gradient noise in my implementation, since it seems to help.
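
Since LambProp itself is unpublished, no reference code can be shown for it. Below is only a small Theano sketch of the gradient-noise trick mentioned above (annealed Gaussian noise added to the gradient, in the spirit of Neelakantan et al., 2015), applied to plain SGD on a toy least-squares problem; all names and constants are illustrative:

```python
import numpy as np
import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

# Toy sketch: SGD with annealed Gaussian gradient noise on a
# 3-parameter least-squares objective.
floatX = theano.config.floatX
srng = RandomStreams(seed=42)

w = theano.shared(np.zeros(3, dtype=floatX), name="w")
t = theano.shared(np.asarray(1.0, dtype=floatX), name="t")

x = T.vector("x")
y = T.scalar("y")
loss = (T.dot(x, w) - y) ** 2
grad = T.grad(loss, w)

# noise variance anneals as eta / (1 + t)^gamma
lr, eta, gamma = 0.1, 0.01, 0.55
sigma = T.sqrt(eta / (1.0 + t) ** gamma)
noisy_grad = grad + srng.normal(size=(3,)) * sigma

step = theano.function(
    [x, y], loss,
    updates=[(w, w - lr * noisy_grad), (t, t + 1)],
)

rng = np.random.RandomState(0)
true_w = np.array([1.0, -2.0, 0.5])
for _ in range(500):
    xi = rng.randn(3).astype(floatX)
    step(xi, float(xi.dot(true_w)))
print(w.get_value())  # should approach true_w, up to noise
```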


ofirnachum/tree_rnn

@machinelearnbot

Includes an implementation of TreeLSTMs as described in "Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks" by Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Also includes an implementation of TreeGRUs derived using similar methods. Code for evaluation on the Stanford Sentiment Treebank (used by the paper) is also available in sentiment.py. To run this, you'll need to download the relevant data. After that, you'll also need to prepare the GloVe vectors as .npy files.
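
The GloVe preparation step is essentially a text-to-NumPy conversion. A minimal sketch, assuming a standard GloVe text file with one "word v1 v2 ... vD" line per word (the filenames here are placeholders, not the repo's actual paths):

```python
import numpy as np

# Sketch: convert a GloVe text file into a .npy matrix plus a vocab file.
# Filenames are placeholders, not the repo's actual paths.
words, vectors = [], []
with open("glove.6B.300d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        words.append(parts[0])
        vectors.append(np.asarray(parts[1:], dtype=np.float32))

np.save("glove.npy", np.stack(vectors))  # (vocab_size, 300) matrix
with open("glove.vocab.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(words))
```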



Convexified Convolutional Neural Networks – implementation

#artificialintelligence

We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented as a low-rank matrix, which can be relaxed to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layer-wise manner. Empirically, CCNNs achieve performance competitive with CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.
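
As a rough illustration of the construction (my own toy code, not the authors' implementation): once image patches are lifted into an approximate RKHS via random Fourier features, the model becomes linear in its parameters and training is a convex problem. The binary, single-output sketch below therefore omits the low-rank / nuclear-norm machinery the paper uses for multiple filters and classes:

```python
import numpy as np

# Toy CCNN-flavored sketch: kernelize patches with random Fourier features,
# average-pool, then train a linear (hence convex) logistic classifier.
rng = np.random.default_rng(0)

n, d, p = 200, 8, 3                      # images, image side, patch side
X = rng.normal(size=(n, d, d))           # toy "images"
true_filter = rng.normal(size=(p, p))    # hidden rule that generates labels

def patches(img):
    """All p x p patches of img, flattened, one per row."""
    rows = []
    for i in range(d - p + 1):
        for j in range(d - p + 1):
            rows.append(img[i:i + p, j:j + p].ravel())
    return np.array(rows)

def label(img):
    """Binary label from a hidden nonlinear convolutional rule."""
    s = np.tanh(patches(img) @ true_filter.ravel()).sum()
    return 1.0 if s > 0 else -1.0

y = np.array([label(x) for x in X])

# random Fourier features approximating a Gaussian kernel on patches
m = 128
W = rng.normal(size=(p * p, m))
b = rng.uniform(0.0, 2.0 * np.pi, size=m)

def feats(img):
    Phi = np.sqrt(2.0 / m) * np.cos(patches(img) @ W + b)
    return Phi.mean(axis=0)              # average pool over patch positions

F = np.array([feats(x) for x in X])      # (n, m) design matrix

# convex training: logistic loss, linear in w -> a global optimum exists
w, lr = np.zeros(m), 1.0
for _ in range(500):
    margins = y * (F @ w)
    grad = -(F * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad

print("train accuracy:", np.mean(np.sign(F @ w) == y))
```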