Generation


Google is bringing automatic summaries to Docs and Chat

Engadget

Google is making it easy to catch up on long documents with a new automatic summarization feature, which will soon be available in Google Docs. It relies on machine learning to distill the key points of a file and generate a readable briefing. Think of it as automatic CliffsNotes for all of those work reports you never read. Google originally announced the feature in March, but now it's closer to reaching the public. As Alphabet CEO Sundar Pichai explained at today's Google I/O keynote, the automatic summary feature relies on language understanding, information compression, and natural language generation to work its magic.
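
Google hasn't published the pipeline's internals, but the same understand-compress-generate idea is what off-the-shelf abstractive summarizers approximate. A minimal sketch using the Hugging Face transformers library (the model checkpoint is an illustrative choice, not Google's system):

```python
# A minimal abstractive-summarization sketch using Hugging Face
# transformers. This is NOT Google's Docs feature -- just an
# off-the-shelf model that performs the same task.
from transformers import pipeline

# "sshleifer/distilbart-cnn-12-6" is an illustrative model choice;
# any seq2seq summarization checkpoint would work here.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

document = (
    "Google is making it easy to catch up on long documents with a new "
    "automatic summarization feature for Google Docs. It relies on "
    "machine learning to distill the key points of a file and generate "
    "a readable briefing."
)

result = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```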


How Language-Generation AIs Could Transform Science

Scientific American: Technology

Machine-learning algorithms that generate fluent language from vast amounts of text could change how science is done -- but not necessarily for the better, says Shobita Parthasarathy, a specialist in the governance of emerging technologies at the University of Michigan in Ann Arbor. In a report published on 27 April, Parthasarathy and other researchers try to anticipate the societal impacts of emerging artificial-intelligence (AI) technologies called large language models (LLMs). These can churn out astonishingly convincing prose, translate between languages, answer questions and even produce code. The corporations building them -- including Google, Facebook and Microsoft -- aim to use them in chatbots and search engines, and to summarize documents. But the models can also parrot errors or problematic stereotypes found in the millions or billions of documents they're trained on.


Natural Language Generation (NLG): Everything You Need to Know

#artificialintelligence

Chatbots, voice assistants, and AI blog writers (to name a few) all use natural language generation. NLG systems can turn numbers into narratives based on pre-set templates. They can predict which words need to be generated next (in, say, an email you're actively typing). Or, the most sophisticated systems can formulate entire summaries, articles, or responses.
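
The first, template-based flavor is simple enough to sketch in a few lines of Python; the field names and phrasing below are invented for illustration:

```python
# A toy template-based NLG system: numbers in, narrative out.
# Field names and templates are invented for illustration.
def sales_narrative(region: str, revenue: float, change_pct: float) -> str:
    direction = "rose" if change_pct >= 0 else "fell"
    return (
        f"Revenue in {region} {direction} {abs(change_pct):.1f}% "
        f"to ${revenue:,.0f} this quarter."
    )

print(sales_narrative("EMEA", 1_250_000, 4.2))
# -> Revenue in EMEA rose 4.2% to $1,250,000 this quarter.
```

Systems of the other two kinds replace the fixed templates with a learned model that predicts or generates text token by token.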


Neural Natural Language Generation: A Survey on Multilinguality, Multimodality, Controllability and Learning

Journal of Artificial Intelligence Research

Developing artificial learning systems that can understand and generate natural language has been one of the long-standing goals of artificial intelligence. Recent decades have witnessed impressive progress on both of these problems, giving rise to a new family of approaches. In particular, advances in deep learning over the past few years have led to neural approaches to natural language generation (NLG). These methods combine generative language learning techniques with neural network-based frameworks. With a wide range of applications in natural language processing, neural NLG (NNLG) is a new and fast-growing field of research. In this state-of-the-art report, we investigate recent developments and applications of NNLG in their full extent from a multidimensional view, covering critical perspectives such as multimodality, multilinguality, controllability, and learning strategies. We summarize the fundamental building blocks of NNLG approaches from these aspects and provide detailed reviews of commonly used preprocessing steps and basic neural architectures. The report also covers seminal applications of these NNLG models, such as machine translation, description generation, automatic speech recognition, abstractive summarization, text simplification, question answering and generation, and dialogue generation. Finally, we conclude with a thorough discussion of the described frameworks, pointing out some open research directions.
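
As a concrete illustration of the basic building block such surveys begin from, here is a minimal, untrained autoregressive language model with a greedy decoding loop in PyTorch; the vocabulary and layer sizes are toy values chosen for the sketch:

```python
# Minimal autoregressive neural language model with greedy decoding.
# Untrained and toy-sized: it only illustrates the generation loop
# that most NNLG systems share.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 100, 32, 64  # toy sizes

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tokens, state=None):
        h, state = self.rnn(self.emb(tokens), state)
        return self.out(h), state  # logits over the next token

@torch.no_grad()
def greedy_generate(model, prefix, max_len=10):
    tokens = torch.tensor([prefix])
    logits, state = model(tokens)
    generated = list(prefix)
    for _ in range(max_len):
        next_tok = logits[0, -1].argmax().item()  # most likely next token
        generated.append(next_tok)
        logits, state = model(torch.tensor([[next_tok]]), state)
    return generated

print(greedy_generate(TinyLM(), prefix=[1, 2, 3]))
```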


Arria NLG Appoints Mark Goodey to Lead Arria's Investment Analyst Business

#artificialintelligence

Arria NLG, a leading provider of natural language generation (NLG) technologies, has appointed Mark Goodey as Managing Director and Innovation Strategist to cement Arria Investment Analyst's position as the leader in the Banking, Financial Services, and Insurance (BFSI) industry. Arria Investment Analyst uses natural language technologies to bring 100 percent accuracy to investment analysis and to create data-driven investment commentary. "I am excited to lead this initiative," said Goodey. "Arria's Investment Analyst uses natural language technology to analyze investment portfolio performance. It's a technology uniquely placed to support asset managers, asset owners, and the financial services industry, so what used to take hours or days can now be accomplished in seconds."


Personalized Prompt Learning for Explainable Recommendation

arXiv.org Artificial Intelligence

Providing user-understandable explanations to justify recommendations can help users better understand the recommended items, increase the system's ease of use, and gain users' trust. A typical approach to realizing this is natural language generation. However, previous works have mostly adopted recurrent neural networks to this end, leaving the potentially more effective pre-trained Transformer models under-explored. In fact, user and item IDs, as important identifiers in recommender systems, are inherently in a different semantic space from the words on which pre-trained models were trained. Thus, how to effectively fuse IDs into such models becomes a critical issue. Inspired by recent advances in prompt learning, we propose two solutions: finding alternative words to represent IDs (called discrete prompt learning), and directly inputting ID vectors to a pre-trained model (termed continuous prompt learning). In the latter case, the ID vectors are randomly initialized while the model has already been trained on large corpora, so the two are effectively at different learning stages. To bridge the gap, we further propose two training strategies: sequential tuning and recommendation as regularization. Extensive experiments show that our continuous prompt learning approach, equipped with the training strategies, consistently outperforms strong baselines on three datasets of explainable recommendation.
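
The continuous-prompt idea can be sketched with GPT-2's inputs_embeds interface: randomly initialized user/item ID vectors are prepended to the word embeddings of the explanation text. This is a schematic reconstruction under assumed dataset sizes, not the authors' code:

```python
# Sketch of continuous prompt learning for explainable recommendation:
# trainable user/item ID vectors are prepended to the word embeddings
# of a pretrained Transformer. Schematic only -- not the paper's
# actual implementation.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

N_USERS, N_ITEMS = 1000, 5000        # assumed dataset sizes
d = model.config.n_embd              # 768 for gpt2

user_emb = nn.Embedding(N_USERS, d)  # randomly initialized ID prompts
item_emb = nn.Embedding(N_ITEMS, d)

def forward_with_id_prompts(user_id, item_id, text):
    ids = tok(text, return_tensors="pt").input_ids
    word_embs = model.transformer.wte(ids)            # (1, T, d)
    prompts = torch.stack(
        [user_emb(torch.tensor(user_id)), item_emb(torch.tensor(item_id))]
    ).unsqueeze(0)                                    # (1, 2, d)
    inputs = torch.cat([prompts, word_embs], dim=1)   # prepend ID vectors
    return model(inputs_embeds=inputs).logits

logits = forward_with_id_prompts(0, 42, "great battery life")
print(logits.shape)  # (1, 2 + num_text_tokens, vocab_size)
```

The gap the paper describes is visible here: user_emb and item_emb start from random weights while the rest of the model is pretrained, which is what the proposed training strategies are meant to reconcile.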


Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text

arXiv.org Artificial Intelligence

Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted. This issue has become more urgent since neural NLG models have improved to the point where they can often no longer be distinguished based on the surface-level features that older metrics rely on. This paper surveys the issues with human and automatic model evaluations, and with commonly used datasets in NLG, that have been pointed out over the past 20 years. We summarize, categorize, and discuss how researchers have been addressing these issues and what their findings mean for the current state of model evaluations. Building on those insights, we lay out a long-term vision for NLG evaluation and propose concrete steps for researchers to improve their evaluation processes. Finally, we analyze how well 66 NLG papers from recent NLP conferences already follow these suggestions and identify which areas require more drastic changes to the status quo.
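
The "surface-level features" problem is easy to demonstrate with an n-gram overlap metric such as BLEU: a semantically wrong hypothesis that copies the reference's wording can outscore a faithful paraphrase. The sentences below are invented; the metric call uses the sacrebleu library:

```python
# BLEU rewards surface n-gram overlap, not meaning -- the kind of
# metric weakness this survey catalogues. Example sentences invented.
import sacrebleu

reference = ["The cat sat on the mat."]
paraphrase = ["A feline was sitting on the rug."]   # correct, low overlap
near_copy = ["The cat sat on the hat."]             # wrong, high overlap

print(sacrebleu.corpus_bleu(paraphrase, [reference]).score)  # low score
print(sacrebleu.corpus_bleu(near_copy, [reference]).score)   # high score
```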


Typical Decoding for Natural Language Generation

arXiv.org Artificial Intelligence

Despite achieving incredibly low perplexities on myriad natural language corpora, today's language models still often underperform when used to generate text. This dichotomy has puzzled the language generation community for the last few years. In this work, we posit that the abstraction of natural language as a communication channel (à la Shannon, 1948) can provide new insights into the behaviors of probabilistic language generators, e.g., why high-probability texts can be dull or repetitive. Humans use language as a means of communicating information, and do so in an efficient yet error-minimizing manner, choosing each word in a string with this (perhaps subconscious) goal in mind. We propose that generation from probabilistic models should mimic this behavior. Rather than always choosing words from the high-probability region of the distribution--which have low Shannon information content--we sample from the set of words with an information content close to its expected value, i.e., close to the conditional entropy of our model. This decision criterion can be realized through a simple and efficient implementation, which we call typical sampling. Automatic and human evaluations show that, in comparison to nucleus and top-k sampling, typical sampling offers competitive performance in terms of quality while consistently reducing the number of degenerate repetitions.
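
The proposed criterion translates almost directly into code: compute the conditional entropy of the next-token distribution, rank tokens by how far their information content deviates from it, and sample from the smallest such set covering a target probability mass. The sketch below follows the abstract's description; the truncation-mass parameter tau and its default value are assumptions of this sketch, not quoted from the paper:

```python
# Typical sampling sketch: keep the tokens whose information content
# (-log p) is closest to the distribution's conditional entropy,
# up to cumulative mass tau, then renormalize and sample.
import torch

def typical_sample(logits: torch.Tensor, tau: float = 0.95) -> int:
    probs = torch.softmax(logits, dim=-1)
    log_probs = torch.log_softmax(logits, dim=-1)
    entropy = -(probs * log_probs).sum()            # H = E[-log p]
    # Distance of each token's information content from the entropy.
    deviation = (-log_probs - entropy).abs()
    order = deviation.argsort()                     # most "typical" first
    cum_mass = probs[order].cumsum(dim=-1)
    # Smallest set of typical tokens covering probability mass tau.
    cutoff = int((cum_mass < tau).sum().item()) + 1
    keep = order[:cutoff]
    kept = probs[keep] / probs[keep].sum()          # renormalize
    return keep[torch.multinomial(kept, 1)].item()

# Toy usage: a fake next-token distribution over a 10-word vocabulary.
torch.manual_seed(0)
print(typical_sample(torch.randn(10)))
```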


Stoyanchev

AAAI Conferences

A modular conversational dialog system, in contrast to an end-to-end one, includes separate natural language understanding, dialog management, and natural language generation components.
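
The contrast with end-to-end systems is easiest to see as three separately replaceable stages wired in sequence; the stubbed Python pipeline below is a toy illustration, not the paper's system:

```python
# Toy sketch of the modular pipeline: NLU -> dialog management -> NLG.
# All three stages are placeholders invented for illustration.
def nlu(utterance: str) -> dict:
    """Map text to a structured intent (stubbed keyword matcher)."""
    if "weather" in utterance.lower():
        return {"intent": "ask_weather"}
    return {"intent": "unknown"}

def dialog_manager(intent: dict) -> dict:
    """Pick a system action from the intent (a real system would also
    consult the dialog state and history)."""
    actions = {"ask_weather": {"act": "inform", "slot": "forecast"}}
    return actions.get(intent["intent"], {"act": "clarify"})

def nlg(action: dict) -> str:
    """Surface-realize the chosen action as text."""
    templates = {"inform": "Today's forecast is sunny.",
                 "clarify": "Sorry, could you rephrase that?"}
    return templates[action["act"]]

print(nlg(dialog_manager(nlu("What's the weather like?"))))
```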


Briggs

AAAI Conferences

One of the hallmarks of humans as social agents is the ability to adjust their language to the norms of the particular situational context. When necessary, they can be terse, direct, and task-oriented; in other situations they can be more indirect and polite. For future robots to truly earn the label "social," it is necessary to develop mechanisms that enable robots with natural language (NL) capabilities to adjust their language in similar ways. In this paper, we highlight the various dimensions involved in this challenge and discuss how socially sensitive natural language generation can be implemented in a cognitive robotic architecture.
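
One simple way to picture such adjustment is to condition surface realization on a social register; the toy sketch below (with invented templates, not the authors' cognitive robotic architecture) shows the idea:

```python
# Toy sketch of socially sensitive NLG: the same request is realized
# differently depending on a register setting. Templates invented;
# this is not the architecture described in the paper.
TEMPLATES = {
    "direct": "Hand me the {obj}.",
    "polite": "Could you please hand me the {obj}?",
    "indirect": "It would be great if someone could pass the {obj}.",
}

def realize(request_obj: str, register: str = "polite") -> str:
    return TEMPLATES[register].format(obj=request_obj)

for register in TEMPLATES:
    print(f"{register:>9}: {realize('wrench', register)}")
```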