jxieeducation/DIY-Data-Science

#artificialintelligence

Please make Pull Requests for good resources, or create Issues for any feedback! Seq2Seq solves the fixed-size input problem that prevents traditional DNNs from mastering sequence-based tasks such as translation and question answering. It has been shown to achieve state-of-the-art performance in English-French and English-German translation and in responding to short questions. Seq2Seq was first introduced in late 2014 by two papers (Sequence to Sequence Learning with Neural Networks and Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation) from Google Brain and Yoshua Bengio's group. The two papers took a similar approach to machine translation, and it is upon this approach that Seq2Seq was developed.
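To make the encoder-decoder idea concrete, here is a minimal sketch in Python with tf.keras (a modern API, not the setup of the original papers); the vocabulary sizes and layer widths are placeholder assumptions. The encoder compresses a variable-length source sequence into a fixed-size state, which then initializes the decoder.

```python
import tensorflow as tf
from tensorflow import keras

SRC_VOCAB, TGT_VOCAB, EMB, UNITS = 8000, 8000, 128, 256  # assumed sizes

# Encoder: read the variable-length source, keep only the final LSTM state.
encoder_in = keras.Input(shape=(None,), dtype="int32")
enc_emb = keras.layers.Embedding(SRC_VOCAB, EMB)(encoder_in)
_, state_h, state_c = keras.layers.LSTM(UNITS, return_state=True)(enc_emb)

# Decoder: generate the target sequence, conditioned on the encoder state.
decoder_in = keras.Input(shape=(None,), dtype="int32")
dec_emb = keras.layers.Embedding(TGT_VOCAB, EMB)(decoder_in)
dec_out = keras.layers.LSTM(UNITS, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
logits = keras.layers.Dense(TGT_VOCAB)(dec_out)

model = keras.Model([encoder_in, decoder_in], logits)
model.compile(optimizer="adam",
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

Training pairs source sequences with right-shifted target sequences as decoder input; at inference time the decoder is run one step at a time starting from the encoder state.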


TensorFlow – Sequence to Sequence – Illia Polosukhin – Medium

#artificialintelligence

Today I want to show an example of a Sequence to Sequence model with all the latest TensorFlow APIs [as of TF 1.3]. Seq2Seq models are very useful when both your input and output have some structure or time component. The most popular applications are in the language domain, but one can use them to process time series, trees, and many other kinds of intrinsically structured data. Translation has been the domain where these models have advanced the most, as it has datasets large enough to train big, complicated models and offers clear value from advancing the state of the art. If you haven't seen them, here are a few papers on neural language translation with Seq2Seq: https://arxiv.org/abs/1409.3215,
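As a rough illustration of the TF 1.x-era API the post refers to, here is a minimal training-graph sketch using tf.contrib.seq2seq; the placeholder names, vocabulary sizes, and unit counts are assumptions, and details varied across 1.x releases.

```python
import tensorflow as tf  # TF 1.x

SRC_VOCAB, TGT_VOCAB, EMB, UNITS = 8000, 8000, 128, 256  # assumed sizes

src = tf.placeholder(tf.int32, [None, None])     # [batch, src_time]
src_len = tf.placeholder(tf.int32, [None])
tgt_in = tf.placeholder(tf.int32, [None, None])  # decoder inputs, shifted right
tgt_len = tf.placeholder(tf.int32, [None])

src_emb = tf.get_variable("src_emb", [SRC_VOCAB, EMB])
tgt_emb = tf.get_variable("tgt_emb", [TGT_VOCAB, EMB])

# Encoder: run an LSTM over the embedded source sequence.
_, enc_state = tf.nn.dynamic_rnn(
    tf.nn.rnn_cell.LSTMCell(UNITS),
    tf.nn.embedding_lookup(src_emb, src),
    sequence_length=src_len, dtype=tf.float32)

# Decoder: teacher-forced training via TrainingHelper + BasicDecoder.
helper = tf.contrib.seq2seq.TrainingHelper(
    tf.nn.embedding_lookup(tgt_emb, tgt_in), tgt_len)
decoder = tf.contrib.seq2seq.BasicDecoder(
    tf.nn.rnn_cell.LSTMCell(UNITS), helper, enc_state,
    output_layer=tf.layers.Dense(TGT_VOCAB))
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
logits = outputs.rnn_output                      # feed to a sequence loss
```

At inference time the TrainingHelper would be swapped for a GreedyEmbeddingHelper so the decoder feeds its own predictions back in step by step.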


Chinese Pinyin Aided IME, Input What You Have Not Keystroked Yet

arXiv.org Artificial Intelligence

Chinese pinyin input method engines (IMEs) convert pinyin into characters so that Chinese characters can be conveniently input into a computer through a common keyboard. IMEs rely on their core component, pinyin-to-character conversion (P2C). Usually, Chinese IMEs simply predict a list of character sequences for the user to choose from, according only to the pinyin input at each turn. However, Chinese inputting is a multi-turn online procedure, which can be exploited to further improve the user experience. This paper thus introduces, for the first time, a sequence-to-sequence model with a gated-attention mechanism for the core task in IMEs. The proposed neural P2C model is learned by encoding the previous input utterance as extra context, making our IME capable of predicting a character sequence from incomplete pinyin input. Our model is evaluated on different benchmark datasets and shows great user-experience improvement compared to traditional models, demonstrating the first engineering practice of building a Chinese aided IME.
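The paper's exact formulation isn't reproduced here, but the following NumPy sketch shows one common form of a gated-attention step: attend over the encoded previous-utterance context, then use a learned sigmoid gate to decide how much of that extra context to mix in. All shapes and parameter names (Wg, bg) are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_attention(dec_state, ctx_states, Wg, bg):
    """dec_state: (d,) decoder state; ctx_states: (T, d) encoded context."""
    scores = ctx_states @ dec_state   # dot-product attention scores, (T,)
    weights = softmax(scores)         # attention distribution over context
    context = weights @ ctx_states    # weighted context summary, (d,)
    # Sigmoid gate conditioned on both the state and the attended context.
    gate = 1.0 / (1.0 + np.exp(-(Wg @ np.concatenate([dec_state, context]) + bg)))
    return gate * context             # gated context fed to the decoder

# Toy usage with random parameters (d=4, T=3).
rng = np.random.default_rng(0)
d, T = 4, 3
out = gated_attention(rng.normal(size=d), rng.normal(size=(T, d)),
                      rng.normal(size=(d, 2 * d)), np.zeros(d))
print(out.shape)  # (4,)
```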


CHIME: An Efficient Error-Tolerant Chinese Pinyin Input Method

AAAI Conferences

Chinese Pinyin input methods are very important for Chinese language processing. In many cases, users may make typing errors. For example, a user who wants to type "shenme" (什么, meaning "what" in English) may type "shenem" instead. Existing Pinyin input methods fail to convert such a Pinyin sequence with errors into the right Chinese words. To solve this problem, we developed an efficient error-tolerant Pinyin input method called "CHIME" that can handle typing errors. By incorporating state-of-the-art techniques and language-specific features, the method achieves better performance than state-of-the-art input methods. It can efficiently find relevant words in milliseconds for an input Pinyin sequence.
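CHIME's actual pipeline is more sophisticated (it combines error-tolerant matching with language-specific features), but the core idea of tolerating typos can be sketched as matching a mistyped pinyin string against a lexicon by Damerau-Levenshtein distance, so that a transposition like "shenem" still retrieves "shenme". The mini-lexicon below is a made-up example.

```python
def edit_distance(a, b):
    # Damerau-Levenshtein (optimal string alignment): insertion, deletion,
    # substitution, and adjacent transposition each cost 1.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = int(a[i - 1] != b[j - 1])
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[-1][-1]

# Hypothetical mini-lexicon: pinyin string -> Chinese word.
LEXICON = {"shenme": "什么", "shende": "神的", "shenma": "神马"}

def candidates(pinyin, max_dist=2):
    scored = sorted((edit_distance(pinyin, p), p, w) for p, w in LEXICON.items())
    return [(p, w, d) for d, p, w in scored if d <= max_dist]

print(candidates("shenem"))  # ('shenme', '什么', 1) ranks first
```

A real IME would combine this distance with a language model over candidate words rather than ranking by edit distance alone.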


Natural Language Processing: the age of Transformers

#artificialintelligence

This article is the first installment of a two-post series on building a machine reading comprehension system using the latest advances in deep learning for NLP. Stay tuned for the second part, where we'll introduce a pre-trained model called BERT that will take your NLP projects to the next level! In the recent past, if you specialized in natural language processing (NLP), there may have been times when you felt a little jealous of your colleagues working in computer vision. It seemed as if they had all the fun: the annual ImageNet classification challenge, Neural Style Transfer, Generative Adversarial Networks, to name a few. At last, the dry spell is over, and the NLP revolution is well underway!