Natural Language: Overviews


Artificial Intelligence Research and Application Advancement

#artificialintelligence

Recent advances in the field of artificial intelligence are gaining widespread attention worldwide because of the impact they can have on our lives. From speech recognition and virtual home assistants to learning platforms, things have gotten very interesting in the tech industry. Tech giants have been racing against each other to incorporate AI into their newest creations to make the human experience more comfortable. By adding features that understand children, convey empathy and adapt to work routines, artificial intelligence technology is set to become revolutionary. Here are some of the recent advances in the field of artificial intelligence, in terms of research and technology.


A framework for AI-powered agile project management

#artificialintelligence

Researchers at the University of Wollongong, Deakin University, Monash University and Kyushu University have developed a framework that could be used to build a smart, AI-powered agile project management assistant. Their paper, pre-published on arXiv, has been accepted at the 41st International Conference on Software Engineering (ICSE) 2019, in the New Ideas and Emerging Results track. "Our research was driven by our experience working in and with the industry," Hoa Khanh Dam, one of the researchers who carried out the study, told TechXplore. "We saw the real challenges in running agile software projects and the serious lack of meaningful support for software teams and practitioners. We also saw the potential of AI in offering significant support for managing agile projects, not only in automating routine tasks, but also in learning and harvesting valuable insights from project data for making predictions and estimations, planning and recommending concrete actions."


AI has come of age, but is British business ready to embrace it?

#artificialintelligence

Artificial intelligence is starting to deliver on its promise, but widespread adoption is essential to help drive the UK economy, says IBM's Bill Kelleher. There is little doubt about the transformative benefit of AI. CBI research from last year shows that business leaders see the adoption of AI and other technologies as vital for increasing levels of productivity across the economy. This is echoed by a recent IBM Institute for Business Value study of 5,000 C-suite executives, which revealed that 82 per cent of enterprises are now either implementing or considering an AI solution. This year Prime Minister Theresa May also reaffirmed her commitment to AI. Her vision is for the UK to become the best place in the world for businesses developing and deploying AI to start, grow and thrive.


Modern Deep Learning Techniques Applied to Natural Language Processing

#artificialintelligence

This project contains an overview of recent trends in deep learning-based natural language processing (NLP). It covers the theoretical descriptions and implementation details behind deep learning models, such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), and reinforcement learning, used to solve various NLP tasks and applications. The overview also contains a summary of state-of-the-art results for NLP tasks such as machine translation, question answering, and dialogue systems. There are various ways to contribute to this project; refer to the issue section of the GitHub repository to learn more about how you can help.
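
To make the kinds of models the overview discusses concrete, here is a minimal sketch of an LSTM-based sentence classifier in PyTorch. It is not taken from the project itself; the vocabulary size, dimensions, and toy batch are illustrative assumptions.

```python
# Minimal sketch of an RNN-style NLP model: an LSTM sentence classifier.
# All sizes and the toy batch below are illustrative, not from the overview.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embedding(token_ids)
        _, (h_n, _) = self.lstm(embedded)   # h_n: (1, batch, hidden_dim)
        return self.fc(h_n.squeeze(0))      # logits: (batch, num_classes)

if __name__ == "__main__":
    model = LSTMClassifier()
    batch = torch.randint(0, 10000, (4, 20))  # 4 toy sentences of 20 tokens
    print(model(batch).shape)                 # torch.Size([4, 2])
```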


Artificial intelligence will be a major theme at the world's largest tech show next week

#artificialintelligence

Techies and gadget geeks alike have been talking about it for years, but artificial intelligence made serious waves in 2018, showing up prominently in pop culture and our everyday devices. With companies like Apple, Google, Amazon, and Microsoft investing millions in AI, it will be one of the major themes to look out for at the annual Consumer Electronics Show, which kicks off in Las Vegas next week. CES is an opportunity to showcase consumer uses of that technology, so much of what will be displayed are "smart" devices or "smart" products -- take, for instance, this smart bathroom with voice-enabled lighting technology. While there are dozens of players in the AI space, we can expect Google Assistant and Amazon's Alexa to dominate the show this year. Both voice assistants are compatible with more than 10,000 devices, which -- as Wired noted -- will make the showroom floor quite noisy.


How do chatbots work? An overview of the architecture of a chatbot

#artificialintelligence

Humans are constantly fascinated by self-operating, AI-driven gadgets. The latest trend catching the eye of much of the tech industry is chatbots. With so much research and advancement in the field, these programs are becoming more human-like on top of being automated. The blend of immediate responses and constant connectivity makes them an engaging alternative to traditional web applications. In general terms, a bot is simply software that performs automated tasks.
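
To make the architecture concrete, here is a minimal, dependency-free sketch of the classic chatbot pipeline: understand the message (intent detection), decide what to do, and generate a reply. The keyword rules and canned responses are invented placeholders, not taken from the article.

```python
# Toy chatbot pipeline: intent detection -> response selection.
# Intents, keywords and replies are illustrative placeholders.
import re

INTENT_KEYWORDS = {
    "greeting": {"hello", "hi", "hey"},
    "hours":    {"hours", "open", "close"},
    "goodbye":  {"bye", "goodbye"},
}

RESPONSES = {
    "greeting": "Hello! How can I help you today?",
    "hours":    "We are open 9am-5pm, Monday to Friday.",
    "goodbye":  "Goodbye! Have a great day.",
    "fallback": "Sorry, I didn't understand that. Could you rephrase?",
}

def detect_intent(message: str) -> str:
    """Very simple NLU: pick the first intent whose keywords appear in the text."""
    tokens = set(re.findall(r"[a-z]+", message.lower()))
    for intent, keywords in INTENT_KEYWORDS.items():
        if tokens & keywords:
            return intent
    return "fallback"

def reply(message: str) -> str:
    """Dialogue management and response generation collapsed into one lookup."""
    return RESPONSES[detect_intent(message)]

if __name__ == "__main__":
    for msg in ["Hi there", "What are your hours?", "asdf"]:
        print(f"> {msg}\n{reply(msg)}")
```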


Unsupervised Cross-Modal Alignment of Speech and Text Embedding Spaces

Neural Information Processing Systems

Recent research has shown that word embedding spaces learned from text corpora of different languages can be aligned without any parallel data supervision. Inspired by the success in unsupervised cross-lingual word embeddings, in this paper we target learning a cross-modal alignment between the embedding spaces of speech and text learned from corpora of their respective modalities in an unsupervised fashion. The proposed framework learns the individual speech and text embedding spaces, and attempts to align the two spaces via adversarial training, followed by a refinement procedure. We show how our framework could be used to perform the tasks of spoken word classification and translation, and the experimental results on these two tasks demonstrate that the performance of our unsupervised alignment approach is comparable to its supervised counterpart. Our framework is especially useful for developing automatic speech recognition (ASR) and speech-to-text translation systems for low- or zero-resource languages, which have little parallel audio-text data for training modern supervised ASR and speech-to-text translation models, but account for the majority of the languages spoken across the world.
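
As a rough illustration of the adversarial-alignment step described above (not the authors' actual architecture, and omitting their refinement procedure), the sketch below trains a linear map W from a placeholder speech-embedding space into a text-embedding space so that a discriminator cannot tell mapped speech vectors from real text vectors. Dimensions, optimizers, and the random "embeddings" are assumptions.

```python
# Sketch of unsupervised adversarial alignment between two embedding spaces.
# The random matrices stand in for learned speech and text word embeddings.
import torch
import torch.nn as nn

dim = 100
speech_emb = torch.randn(5000, dim)   # placeholder speech-word embeddings
text_emb = torch.randn(5000, dim)     # placeholder text-word embeddings

W = nn.Linear(dim, dim, bias=False)   # mapping from speech space to text space
disc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

opt_w = torch.optim.SGD(W.parameters(), lr=0.1)
opt_d = torch.optim.SGD(disc.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    s = speech_emb[torch.randint(0, 5000, (32,))]
    t = text_emb[torch.randint(0, 5000, (32,))]

    # 1) Train the discriminator: mapped speech -> label 0, real text -> label 1.
    opt_d.zero_grad()
    d_loss = bce(disc(W(s).detach()), torch.zeros(32, 1)) + \
             bce(disc(t), torch.ones(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the mapping W to fool the discriminator (mapped speech labeled as text).
    opt_w.zero_grad()
    g_loss = bce(disc(W(s)), torch.ones(32, 1))
    g_loss.backward()
    opt_w.step()
```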


Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models

Neural Information Processing Systems

Neural language models (NLMs) have recently gained renewed interest by achieving state-of-the-art performance across many natural language processing (NLP) tasks. However, NLMs are very computationally demanding, largely due to the cost of the decoding process, which consists of a softmax layer over a large vocabulary. We observe that in the decoding of many NLP tasks, only the probabilities of the top-K hypotheses need to be calculated precisely, and K is often much smaller than the vocabulary size. This paper proposes a novel softmax layer approximation algorithm, called Fast Graph Decoder (FGD), which quickly identifies, for a given context, a set of K words that are most likely to occur according to an NLM. We demonstrate that FGD reduces the decoding time by an order of magnitude while attaining accuracy close to the full softmax baseline on neural machine translation and language modeling tasks. We also prove a theoretical guarantee on the softmax approximation quality.
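
The core decoding idea, retrieving the top-K candidate words and renormalizing a softmax over only those, can be illustrated with a toy NumPy sketch. Here the retrieval step is a brute-force inner-product top-k; FGD's contribution is replacing that step with a fast graph-based search, which is not reproduced here. All sizes are illustrative.

```python
# Toy top-K softmax: score only the K most likely words, then renormalize.
import numpy as np

vocab_size, hidden = 50000, 512
rng = np.random.default_rng(0)
output_emb = rng.standard_normal((vocab_size, hidden)).astype(np.float32)  # softmax weights
context = rng.standard_normal(hidden).astype(np.float32)                   # decoder hidden state

def topk_softmax(context, output_emb, k=10):
    logits = output_emb @ context                 # (vocab_size,) inner products
    top_idx = np.argpartition(-logits, k)[:k]     # indices of the K largest logits
    top_logits = logits[top_idx]
    probs = np.exp(top_logits - top_logits.max()) # stable softmax over K entries only
    probs /= probs.sum()
    return top_idx, probs

words, probs = topk_softmax(context, output_emb, k=10)
print(words, probs.round(3))
```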


A Neural Compositional Paradigm for Image Captioning

Neural Information Processing Systems

Mainstream captioning models often follow a sequential structure to generate captions, leading to issues such as introduction of irrelevant semantics, lack of diversity in the generated captions, and inadequate generalization performance. In this paper, we present an alternative paradigm for image captioning, which factorizes the captioning procedure into two stages: (1) extracting an explicit semantic representation from the given image; and (2) constructing the caption based on a recursive compositional procedure in a bottom-up manner. Compared to conventional ones, our paradigm better preserves the semantic content through an explicit factorization of semantics and syntax. By using the compositional generation procedure, caption construction follows a recursive structure, which naturally fits the properties of human language. Moreover, the proposed compositional procedure requires less data to train, generalizes better, and yields more diverse captions.
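
A schematic sketch of the two-stage factorization (not the authors' code): stage one is mocked as a fixed list of noun phrases, and stage two merges neighbouring phrases bottom-up, with a trivial connective rule standing in for the learned connecting module.

```python
# Schematic two-stage captioning: explicit semantics, then recursive composition.
# The phrases and the merge rule are placeholders for learned components.

def extract_semantics(image) -> list[str]:
    # Stage 1 placeholder: in the paper this comes from a learned detector.
    return ["a brown dog", "a red frisbee", "the park"]

def merge(left: str, right: str) -> str:
    # Stage 2 placeholder: a learned connecting module would choose the connective.
    connective = "catching" if "frisbee" in right else "in"
    return f"{left} {connective} {right}"

def compose_caption(phrases: list[str]) -> str:
    # Bottom-up recursion: merge neighbouring phrases until one caption remains.
    while len(phrases) > 1:
        phrases = [merge(phrases[0], phrases[1])] + phrases[2:]
    return phrases[0]

print(compose_caption(extract_semantics(image=None)))
# -> "a brown dog catching a red frisbee in the park"
```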


Query Complexity of Bayesian Private Learning

Neural Information Processing Systems

We study the query complexity of Bayesian Private Learning: a learner wishes to locate a random target within an interval by submitting queries, in the presence of an adversary who observes all of her queries but not the responses. How many queries are necessary and sufficient in order for the learner to accurately estimate the target, while simultaneously concealing the target from the adversary? Our main result is a query complexity lower bound that is tight up to the first order. We show that if the learner wants to estimate the target within an error of $\epsilon$, while ensuring that no adversary estimator can achieve a constant additive error with probability greater than $1/L$, then the query complexity is on the order of $L\log(1/\epsilon)$ as $\epsilon \to 0$. Our result demonstrates that increased privacy, as captured by $L$, comes at the expense of a \emph{multiplicative} increase in query complexity. The proof builds on Fano's inequality and properties of certain proportional-sampling estimators.
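
For a quick numerical reading of the bound (illustrative only; the constant factor and the base of the logarithm are assumptions, since the result is stated up to first order as $\epsilon \to 0$):

```latex
% Illustrative arithmetic only: the constant factor and the log base are
% assumptions, since the theorem is stated up to first order as \epsilon \to 0.
\[
  N(\epsilon, L) \;\approx\; L \log_2 \frac{1}{\epsilon}
  \quad\Longrightarrow\quad
  N(2^{-10}, 1) \approx 10, \qquad
  N(2^{-10}, 4) \approx 40 .
\]
```

In words: demanding that no adversary succeed with probability greater than $1/4$ multiplies the number of queries by roughly four relative to plain bisection ($L = 1$), matching the multiplicative price of privacy described in the abstract.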