"As for why I tell a lot of stories, there's a joke about that. There was once a man who had a computer, and he asked it, 'Do you compute that you will ever be able to think like a human being?' And after assorted grindings and beepings, a slip of paper came out of the computer that said, 'That reminds me of a story . . .'"
– from ANGELS FEAR: TOWARDS AN EPISTEMOLOGY OF THE SACRED. Gregory Bateson & Mary Catherine Bateson. (Part III 'Metalogue').
Jury is an evaluation package for NLG systems that lets you run many metrics in one go. It evaluates metrics concurrently and supports evaluation with multiple predictions per input. Jury uses the datasets package for its metrics, and thus supports any metric that the datasets package provides. The default evaluation metrics are BLEU, METEOR, and ROUGE-L. As of today, 28 metrics are available in the datasets package; to see all supported metrics, see datasets/metrics.
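To make the "many metrics in one go, concurrently" idea concrete, here is a minimal sketch of that pattern using only the standard library. The two metric functions (`exact_match`, `length_ratio`) and the `evaluate` helper are hypothetical stand-ins, not Jury's actual API; Jury itself delegates to the datasets package for real BLEU/METEOR/ROUGE-L implementations.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in metrics; real metrics would come from the
# "datasets" package (e.g., BLEU, METEOR, ROUGE-L).
def exact_match(predictions, references):
    hits = sum(p == r for p, r in zip(predictions, references))
    return hits / len(predictions)

def length_ratio(predictions, references):
    return (sum(len(p.split()) for p in predictions)
            / sum(len(r.split()) for r in references))

def evaluate(predictions, references, metrics):
    """Run every metric concurrently and collect one score dict."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, predictions, references)
                   for name, fn in metrics.items()}
        return {name: f.result() for name, f in futures.items()}

preds = ["the cat sat", "hello world"]
refs = ["the cat sat", "hello there world"]
scores = evaluate(preds, refs, {"exact_match": exact_match,
                                "length_ratio": length_ratio})
print(scores)
```

Running each metric in its own worker is what lets a suite of slow metrics finish in roughly the time of the slowest one, rather than their sum.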
Figure 1: Document Grounded Generation – An example of a conversation that is grounded in the given document (text in green shows information from the document that was used to generate the response).

Natural language generation (NLG) systems are increasingly expected to be naturalistic, content-rich, and situation-aware due to their popularity and pervasiveness in human life. This is particularly relevant in dialogue systems, machine translation systems, story generation, and question answering systems. Despite these mainstream applications, NLG systems face the challenges of being bland, devoid of content, generating generic outputs, and hallucinating information (Wiseman et al., EMNLP 2017; Li et al., NAACL 2016; Holtzman et al., ICLR 2020). Grounding the generation in different modalities like images, videos, and structured data alleviates some of these issues. Generating natural language from schematized or structured data such as database records, slot-value pairs, and Wikipedia infoboxes has been explored extensively in prior work.
Synechron, a leading digital transformation consulting firm, launched an annual report, Top Strategic Technology Trends. The report named data science as one of its eight major trends for 2021, and the company's experts put forward three critical trends within it. The first trend concerns the business applications of self-supervised models, where AI teaches itself to solve problems without human labeling of data. The second refers to increased adoption of natural language generation, which uses AI to create the many documents that would otherwise be produced by hand every day. The third and final trend concerns technologies like ML, Optical Character Recognition, and NLP that will increase efficiency, reduce costs, and detect financial crimes during KYC.
Controlling neural network-based models for natural language generation (NLG) has broad applications in numerous areas such as machine translation, document summarization, and dialog systems. Approaches that enable such control in a zero-shot manner would be of great importance as, among other reasons, they remove the need for additional annotated data and training. In this work, we propose novel approaches for controlling encoder-decoder transformer-based NLG models in a zero-shot manner. This is done by introducing three control knobs, namely attention biasing, decoder mixing, and context augmentation, that are applied to these models at generation time. These knobs control the generation process by directly manipulating trained NLG models (e.g., biasing cross-attention layers) to realize the desired attributes in the generated outputs. We show not only that these NLG models are robust to such manipulations, but also that their behavior can be controlled without an impact on their generation performance. These results, to the best of our knowledge, are the first of their kind. Through these control knobs, we also investigate the role of the transformer decoder's self-attention module and show strong evidence that its primary role is maintaining the fluency of sentences generated by these models. Based on this hypothesis, we show that alternative architectures for transformer decoders could be viable options. We also study how this hypothesis could lead to more efficient ways of training encoder-decoder transformer models.
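The attention-biasing knob can be illustrated with a small NumPy sketch. This is not the paper's implementation; it is a minimal example, under the assumption that "biasing cross-attention" means adding a bias matrix to the scaled dot-product attention logits before the softmax, so that attention mass is steered toward chosen source positions at generation time.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def biased_cross_attention(q, k, v, bias):
    """Scaled dot-product cross-attention with an additive bias on the
    attention logits; bias has shape (num_queries, num_keys)."""
    d = q.shape[-1]
    weights = softmax(q @ k.T / np.sqrt(d) + bias)
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(1, 4))   # one decoder query
k = rng.normal(size=(3, 4))   # three encoder (source) positions
v = rng.normal(size=(3, 4))

no_bias = np.zeros((1, 3))
bias = np.zeros((1, 3))
bias[0, 2] = 5.0              # steer attention toward source position 2

_, w_plain = biased_cross_attention(q, k, v, no_bias)
_, w_biased = biased_cross_attention(q, k, v, bias)
```

Because the bias enters before the softmax, the weights still form a valid distribution over source positions; only their allocation changes, which is why a trained model can tolerate this manipulation.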
Applied AI is put to work in various forms, depending on its purpose. These forms include natural language generation, chatbots, speech or image recognition, and sentiment analysis. The technology has become so omnipresent that it has found its way even into CRM platforms, where it enables better customer handling and increased customer satisfaction. Marketing uses applied AI to target the right advertisement to the right audience; education uses it to shape the right curriculum; law enforcement uses chatbots for threat detection; finance uses it to analyze trade trends; manufacturing uses it for logistical support; and healthcare uses it for early detection and diagnosis of disease, amongst many other uses.
Automatic evaluation for natural language generation (NLG) conventionally relies on token-level or embedding-level comparisons with text references. This is different from human language processing, in which visual imagination often improves comprehension. In this work, we propose ImaginE, an imagination-based automatic evaluation metric for natural language generation. With the help of CLIP and DALL-E, two cross-modal models pre-trained on large-scale image-text pairs, we automatically generate an image as the embodied imagination for a text snippet and compute the imagination similarity using contextual embeddings. Experiments spanning several text generation tasks demonstrate that adding imagination with ImaginE shows great potential for introducing multi-modal information into NLG evaluation, and improves existing automatic metrics' correlations with human similarity judgments in many circumstances.
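The final scoring step — "compute the imagination similarity using contextual embeddings" — typically comes down to cosine similarity between embedding vectors. The sketch below shows only that step, with plain arrays standing in for the CLIP embeddings of the generated images; producing the actual embeddings with CLIP/DALL-E is outside the scope of this example.

```python
import numpy as np

def imagination_similarity(emb_a, emb_b):
    """Cosine similarity between two embedding vectors, e.g., the
    (stand-in) image embeddings for a candidate text and its reference."""
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

same = imagination_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
orthogonal = imagination_similarity([1.0, 0.0], [0.0, 1.0])
```

Cosine similarity is the usual choice for contextual embeddings because it compares direction rather than magnitude, yielding a bounded score in [-1, 1].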
This tutorial on Deep Learning on Graphs for Natural Language Processing (DLG4NLP) is timely for the computational linguistics community and covers relevant and interesting topics, including automatic graph construction for NLP, graph representation learning for NLP, various advanced GNN-based models (e.g., graph2seq, graph2tree, and graph2graph) for NLP, and the applications of GNNs in various NLP tasks (e.g., machine translation, natural language generation, information extraction, and semantic parsing). The intended audience for this tutorial mainly includes graduate students and researchers in the field of Natural Language Processing, as well as industry professionals who want to know how state-of-the-art deep learning on graphs techniques can help solve important yet challenging Natural Language Processing problems.
Microsoft has been making major investments in very large language models, from the hardware to run them in Azure (which it talks about as an 'AI supercomputer') to the DeepSpeed library, which speeds up training and running machine-learning models with billions of parameters by spreading them across multiple GPUs. In 2020, Microsoft got an exclusive licence for the powerful (and sometimes controversial) GPT-3 natural language generation model from OpenAI, which uses 175 billion parameters to produce what can look very much like something written by a person. OpenAI has a GPT-3 API that's trained and run on Azure, but it's in private beta, and researchers and academics have to apply individually to join a waitlist. Similarly, Microsoft hasn't yet started even a private preview for what it calls the Open AI GPT and Azure Service, and the page to sign up for notifications says there is no release date yet. But Microsoft is already using GPT-3 and other natural language generation in its products for features that are much more sophisticated than writing automatic captions for images.
Despite recent advancements in NLP research, cross-lingual transfer for natural language generation is relatively understudied. In this work, we transfer supervision from a high-resource language (HRL) to multiple low-resource languages (LRLs) for natural language generation (NLG). We consider four NLG tasks (text summarization, question generation, news headline generation, and distractor generation) and three syntactically diverse languages, i.e., English, Hindi, and Japanese. We propose an unsupervised cross-lingual language generation framework (called ZmBART) that does not use any parallel or pseudo-parallel/back-translated data. In this framework, we further pre-train the mBART sequence-to-sequence denoising auto-encoder model with an auxiliary task using monolingual data from the three languages. The objective function of the auxiliary task is close to that of the target tasks, which enriches the multi-lingual latent representation of mBART and provides good initialization for the target tasks. The model is then fine-tuned with task-specific supervised English data and directly evaluated on the low-resource languages in the zero-shot setting. To overcome catastrophic forgetting and spurious correlation issues, we applied model-component freezing and data augmentation approaches, respectively. This simple modeling approach gave us promising results. We also experimented with few-shot training (with 1,000 supervised data points), which boosted model performance further. We performed several ablations and cross-lingual transferability analyses to demonstrate the robustness of ZmBART.
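The "freezing model components" step mentioned above can be sketched in a few lines of PyTorch. This is a hypothetical miniature, not ZmBART's code: a toy two-component model stands in for mBART's encoder and decoder, and the pattern shown is the standard one of disabling gradients on a component so fine-tuning cannot overwrite it (guarding against catastrophic forgetting).

```python
import torch
from torch import nn

# Toy stand-in for an encoder-decoder model (mBART in the paper).
model = nn.ModuleDict({
    "encoder": nn.Linear(8, 8),
    "decoder": nn.Linear(8, 8),
})

# Freeze one component: its parameters receive no gradient updates
# during fine-tuning, so its pre-trained representation is preserved.
for p in model["encoder"].parameters():
    p.requires_grad = False

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)
```

An optimizer built afterwards would be given only the still-trainable parameters, e.g. `torch.optim.Adam(p for p in model.parameters() if p.requires_grad)`.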
Natural language generation software produces narratives and reports from input data, and can also render that text as audible speech. Here is a list of the top companies providing natural language generation services. About: Arria NLG is a form of artificial intelligence that transforms structured data into natural language. Through data analysis, knowledge automation, language generation, and tailored information delivery, Arria software replicates the human process of expertly analysing and communicating data insights. The Arria NLG Platform automatically writes rich, compelling narratives based on insights extracted from datasets.