Natural Language: Overviews


Comparing Machine Learning as a Service: Amazon, Microsoft Azure, Google Cloud AI, IBM Watson

#artificialintelligence

For most businesses, machine learning seems close to rocket science, appearing expensive and talent-demanding. And, if you're aiming at building another Netflix recommendation system, it really is. But the trend of making everything-as-a-service has affected this sophisticated sphere, too. You can jump-start an ML initiative without much investment, which would be the right move if you are new to data science and just want to grab the low-hanging fruit. One of ML's most inspiring stories is the one about a Japanese farmer who decided to sort cucumbers automatically to help his parents with this painstaking operation. Unlike the stories that abound about large enterprises, the farmer had neither expertise in machine learning nor a big budget. But he did manage to get familiar with TensorFlow and employed deep learning to recognize different classes of cucumbers. By using machine learning cloud services, you can start building your first working models, yielding valuable insights from predictions with a relatively small team. We've already discussed machine learning strategy. Now let's have a look at the best machine learning platforms on the market and consider some of the infrastructural decisions to be made.


End-to-End Content and Plan Selection for Data-to-Text Generation

arXiv.org Artificial Intelligence

Learning to generate fluent natural language from structured data with neural networks has become a common approach for NLG. This problem can be challenging when the form of the structured data varies between examples. This paper presents a survey of several extensions to sequence-to-sequence models to account for the latent content selection process, particularly variants of copy attention and coverage decoding. We further propose a training method based on diverse ensembling to encourage models to learn distinct sentence templates during training. An empirical evaluation of these techniques shows an increase in the quality of generated text across five automated metrics, as well as in human evaluation.
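As a toy illustration of the copy-attention idea the survey covers, a pointer-generator-style decoding step mixes the decoder's vocabulary distribution with its attention distribution over source tokens. All names here, including the mixing scalar `p_gen`, are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def copy_attention_step(vocab_logits, attn_weights, src_token_ids, p_gen):
    """One decoding step of a pointer-generator style copy mechanism.

    The output distribution is p_gen * softmax(vocab_logits) plus
    (1 - p_gen) times the attention mass scattered onto the vocabulary
    slots of the source tokens, letting the model copy rare names
    (e.g. entities from a table) directly from the input."""
    exp = np.exp(vocab_logits - vocab_logits.max())
    gen_dist = exp / exp.sum()                    # generation distribution
    final = p_gen * gen_dist
    for weight, tok in zip(attn_weights, src_token_ids):
        final[tok] += (1.0 - p_gen) * weight      # copy distribution
    return final
```

A coverage penalty would additionally track the cumulative attention per source token across steps, discouraging the decoder from copying the same field twice.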


Toward Human-Understandable, Explainable AI

IEEE Computer

Recent increases in computing power, coupled with rapid growth in the availability and quantity of data, have rekindled our interest in the theory and applications of artificial intelligence (AI). However, for AI to be confidently rolled out by industries and governments, users want greater transparency through explainable AI (XAI) systems. The author introduces XAI concepts and gives an overview of areas in need of further exploration--such as type-2 fuzzy logic systems--to ensure such systems can be fully understood and analyzed by the lay user.



New Amazon Echos: A guide to all the Alexa products that were just released

The Independent

Amazon has revealed a whole host of new Echo devices, as well as products that you can control by talking to the Alexa assistant that lives inside them. In all, it announced more than 70 updates – which touched on almost every part of its Alexa line-up. The very short version is this: Amazon updated just about every Echo to give it a better-looking grey mesh on the outside and to make it louder and better sounding on the inside. If you want the slightly less short version of each of the updates, then read on. Here's what happened to each of those products in slightly more detail.


How is AI transforming the future of the Healthcare industry?

#artificialintelligence

Over the last 10 years, we have come to see robots perform and execute jobs that were once exclusive to humans, be it manufacturing cars or filling warehouse orders. As of today, we are no strangers to the fact that AI/ML have significantly impacted multiple industries over the last couple of years. However, the integration of Artificial Intelligence in Healthcare, with a chatbot as your doctor, is set to bring a significant paradigm shift. We are already seeing image recognition algorithms assist in detecting diseases at an astounding rate, and we are only beginning to scratch the surface. Chatbots are slowly being adopted within healthcare, albeit still in their nascent stage.


Deep Learning for NLP: An Overview of Recent Trends

#artificialintelligence

In a timely new paper, Young and colleagues discuss some of the recent trends in deep learning based natural language processing (NLP) systems and applications. The focus of the paper is on the review and comparison of models and methods that have achieved state-of-the-art (SOTA) results on various NLP tasks such as visual question answering (QA) and machine translation. In this comprehensive review, the reader will get a detailed understanding of the past, present, and future of deep learning in NLP. In addition, readers will also learn some of the current best practices for applying deep learning in NLP. Natural language processing (NLP) deals with building computational algorithms to automatically analyze and represent human language.


Decision-support for the Masses by Enabling Conversations with Open Data

arXiv.org Artificial Intelligence

Open data refers to data that is freely available for reuse. Although there has been a rapid increase in the availability of open data to the public over the last decade, this has not translated into better decision-support tools for them. We propose intelligent conversation generators as a grand challenge that would automatically create data-driven conversation interfaces (CIs), also known as chatbots or dialog systems, from open data and deliver personalized analytical insights to users based on their contextual needs. Such generators will not only help bring Artificial Intelligence (AI)-based solutions for important societal problems to the masses but also advance AI by providing an integrative testbed for human-centric AI and filling gaps in the state of the art towards this aim.


Answering Science Exam Questions Using Query Rewriting with Background Knowledge

arXiv.org Artificial Intelligence

Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques. Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction. In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams. These questions are selected to be the most challenging for current QA systems, and current state-of-the-art performance is only slightly better than random chance. We present a system that rewrites a given question into queries that are used to retrieve supporting text from a large corpus of science-related text. Our rewriter is able to incorporate background knowledge from ConceptNet and -- in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results -- outperforms several strong baselines on the end-to-end QA task despite being trained only to identify essential terms in the original source question. We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer. By combining query rewriting, background knowledge, and textual entailment, our system is able to outperform several strong baselines on the ARC dataset.
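The pipeline the abstract describes (rewrite the question into per-candidate queries, retrieve supporting text, score the support, pick the best-supported answer) can be caricatured as follows. The stopword filter and the term-overlap score are crude stand-ins for the learned essential-term identifier and the SciTail-trained entailment model; every function name here is hypothetical:

```python
STOPWORDS = {"the", "a", "an", "which", "of", "is", "what", "to", "in", "and", "for"}

def essential_terms(question):
    """Stand-in for the learned essential-term identifier."""
    return [w for w in question.lower().split() if w not in STOPWORDS]

def rewrite_queries(question, candidates):
    """One query per answer candidate: essential terms plus the candidate."""
    terms = essential_terms(question)
    return {c: terms + c.lower().split() for c in candidates}

def overlap_support(query_terms, corpus):
    """Stand-in for retrieval + entailment scoring: best term overlap
    between the query and any corpus sentence."""
    return max(len(set(query_terms) & set(s.lower().split())) for s in corpus)

def answer(question, candidates, corpus):
    queries = rewrite_queries(question, candidates)
    return max(candidates, key=lambda c: overlap_support(queries[c], corpus))
```

The real system replaces the overlap score with an entailment probability that the retrieved sentence supports the question-plus-candidate hypothesis, and augments the query terms with related concepts from ConceptNet.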


Weakly-Supervised Neural Text Classification

arXiv.org Machine Learning

Deep neural networks are gaining increasing popularity for the classic text classification task, due to their strong expressive power and reduced need for feature engineering. Despite such attractiveness, neural text classification models suffer from the lack of training data in many real-world applications. Although many semi-supervised and weakly-supervised text classification models exist, they cannot be easily applied to deep neural models and, meanwhile, support only limited types of supervision. In this paper, we propose a weakly-supervised method that addresses the lack of training data in neural text classification. Our method consists of two modules: (1) a pseudo-document generator that leverages seed information to generate pseudo-labeled documents for model pre-training, and (2) a self-training module that bootstraps on real unlabeled data for model refinement. Our method has the flexibility to handle different types of weak supervision and can be easily integrated into existing deep neural models for text classification. We have performed extensive experiments on three real-world datasets from different domains. The results demonstrate that our proposed method achieves inspiring performance without requiring excessive training data and outperforms baseline methods significantly.
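A heavily simplified sketch of the two-module recipe, with a keyword-count classifier standing in for the deep neural model; the function names and the count-based "pre-training" are illustrative inventions, not the paper's method:

```python
from collections import Counter

def generate_pseudo_docs(seed_keywords, n_per_class=3):
    """Module 1 (toy stand-in for the pseudo-document generator): repeat
    each class's seed keywords as pseudo-labeled documents."""
    return [(label, " ".join(words))
            for label, words in seed_keywords.items()
            for _ in range(n_per_class)]

def pretrain(pseudo_docs):
    """'Pre-training' here just accumulates per-class word counts."""
    profiles = {}
    for label, doc in pseudo_docs:
        profiles.setdefault(label, Counter()).update(doc.split())
    return profiles

def predict(doc, profiles):
    scores = {c: sum(cnt[w] for w in doc.split()) for c, cnt in profiles.items()}
    label = max(scores, key=scores.get)
    return label, scores[label]

def self_train(profiles, unlabeled, threshold=1, rounds=2):
    """Module 2: bootstrap on real unlabeled data, folding sufficiently
    confident predictions back into the class profiles."""
    for _ in range(rounds):
        for doc in unlabeled:
            label, score = predict(doc, profiles)
            if score >= threshold:
                profiles[label].update(doc.split())
    return profiles
```

In the actual method, the generator samples pseudo-documents from a language model around seed-keyword embeddings, and self-training refines a neural classifier's soft predictions rather than word counts.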