"As for why I tell a lot of stories, there's a joke about that. There was once a man who had a computer, and he asked it, 'Do you compute that you will ever be able to think like a human being?' And after assorted grindings and beepings, a slip of paper came out of the computer that said, 'That reminds me of a story . . . "
– from ANGELS FEAR: TOWARDS AN EPISTEMOLOGY OF THE SACRED. Gregory Bateson & Mary Catherine Bateson. (Part III 'Metalogue').
In this paper, we propose a novel pretraining-based encoder-decoder framework that generates the output sequence from the input sequence in two stages. For the encoder, we encode the input sequence into context representations using BERT. The decoder operates in two stages: in the first stage, a Transformer-based decoder generates a draft output sequence; in the second stage, we mask each word of the draft sequence and feed it to BERT, then, combining the input sequence with the draft representation produced by BERT, a Transformer-based decoder predicts the refined word for each masked position. To the best of our knowledge, our approach is the first to apply BERT to text generation tasks. As a first step in this direction, we evaluate the proposed method on the text summarization task. Experimental results show that our model achieves a new state of the art on both the CNN/Daily Mail and New York Times datasets.
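The draft-then-refine decode described above can be sketched as a loop that masks one draft position at a time and re-predicts it from the source context. This is only a toy illustration: `encode`, `draft_decode`, and `refine_predict` are hypothetical stand-ins for the BERT encoder and the two Transformer decoders, not the paper's actual models.

```python
# Toy sketch of the two-stage decode: generate a draft, then refine
# each position by masking it and re-predicting from context.

MASK = "[MASK]"

def encode(tokens):
    # Stand-in for the BERT encoder: just return the tokens as "context".
    return list(tokens)

def draft_decode(context):
    # Stage 1: a Transformer decoder would generate a draft here.
    # Toy version: copy the context as the draft.
    return list(context)

def refine_predict(context, masked_draft, position):
    # Stage 2: predict the word at `position` given the source context
    # and the masked draft. Toy version: recover it from the context.
    return context[position % len(context)]

def two_stage_generate(source_tokens):
    context = encode(source_tokens)
    draft = draft_decode(context)
    refined = []
    for i in range(len(draft)):
        masked = draft[:i] + [MASK] + draft[i + 1:]  # mask one position
        refined.append(refine_predict(context, masked, i))
    return refined

print(two_stage_generate(["the", "cat", "sat"]))  # → ['the', 'cat', 'sat']
```

The key structural point the sketch preserves is that every refined word is predicted with full (bidirectional) visibility of the rest of the draft, which is what feeding the masked draft through BERT buys the second stage.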
Hierarchical Neural Story Generation had the most realistic paragraphs of any text generation paper I came across last summer while researching this topic. I also find it fun that they used a subreddit (writingprompts) as their training set. The architecture is a mixture of self-attention layers and convolutional layers, and generation is two-step: generate a prompt, then generate a full story from it. I tried to extend their work to a prompt, outline, story pipeline, but the results were meh (similar quality to not bothering with the outline step).
Cross-domain natural language generation (NLG) is still a difficult task within spoken dialogue modelling. Given a semantic representation provided by the dialogue manager, the language generator should produce sentences that convey the desired information. Traditional template-based generators can produce sentences with all necessary information, but these sentences are not sufficiently diverse. With RNN-based models, the diversity of the generated sentences can be high; however, some information is lost in the process. In this work, we improve an RNN-based generator by considering latent information at the sentence level during generation, using the conditional variational autoencoder architecture. We demonstrate that our model outperforms the original RNN-based generator while yielding highly diverse sentences. In addition, our model performs better when the training data is limited.
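The sentence-level latent variable in a conditional VAE is drawn with the reparameterisation trick, z = mu + sigma * eps, conditioned on the dialogue act. A minimal sketch of just that sampling step, with hypothetical stand-ins (`encode_condition` and the toy "decoder" are not the paper's networks):

```python
import math
import random

def encode_condition(dialogue_act):
    # Hypothetical recognition network: map the semantic representation
    # to the parameters of a Gaussian over the latent z.
    mu = [len(dialogue_act) / 10.0] * 4   # toy 4-dim mean
    log_var = [-1.0] * 4                  # toy log-variance
    return mu, log_var

def sample_latent(mu, log_var, rng):
    # Reparameterisation: z = mu + sigma * eps with eps ~ N(0, 1),
    # so gradients can flow through mu and log_var during training.
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def generate(dialogue_act, rng):
    mu, log_var = encode_condition(dialogue_act)
    z = sample_latent(mu, log_var, rng)
    # A real RNN decoder would condition on both z and the dialogue act;
    # the toy "decoder" just reports the latent it received.
    return f"<sentence conditioned on {dialogue_act}, z[0]={z[0]:.2f}>"

rng = random.Random(0)
print(generate("inform(name=Bar, food=Thai)", rng))
print(generate("inform(name=Bar, food=Thai)", rng))  # new z, diverse surface form
```

Because each call draws a fresh z, the same dialogue act can yield different realisations, which is the mechanism behind the diversity claim in the abstract.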
Humans have a tendency to make things smarter and smarter: the telephone became a smartphone, the wristwatch became a smartwatch. Another example is where humans enabled a computer to ingest data, process it, provide an outcome, then learn from additional new data and provide an improved outcome. In layman's terms, this is cognition, and technologies that enable cognition are cognitive technologies such as Machine Learning, Natural Language Processing, Natural Language Generation, etc. This, in my view, is one of the most important changes that will impact the human race. I believe this will complement humans, not replace them.
Gmail's Smart Compose can save you valuable time when you're firing off a quick message, but don't expect it to refer to people as "him" or "her" -- Google is playing it safe on that front. Product leaders have revealed to Reuters that Google removed gender pronouns from Smart Compose's phrase suggestions after realizing that the AI-guided feature could be biased. When a scientist talked about meeting an investor in January, for example, Gmail offered the follow-up "do you want to meet him" -- not considering the possibility that the investor could be a woman. The problem is a typical one with natural language generation systems like Google's: it's based on huge volumes of historical data. As certain fields tend to be dominated by people from one gender, the AI can sometimes assume that a person belongs to that gender.
If you've spent much time on Nanalyze, you know that we're passionate about technology and believe that we're living in the most exciting times in history. Our job is to keep you up-to-date about these changes in a variety of fields, so you can make informed financial decisions about where to invest or not--and learn some pretty cool stuff along the way. We talk about the good, the bad, and the ugly no matter what. Then we came across Narrative Science and its natural language generation (NLG) platform Quill, which uses artificial intelligence technology to write everything from financial reports to sports news. We knew that English degree would be obsolete someday.
Workday outlined People Analytics, an application that will filter through human resources data and give executives an at-a-glance view of workforce trends they need to act on. Using artificial intelligence, machine learning and analytics, Workday People Analytics will look for enterprise employee patterns within Workday Human Capital Management. People Analytics will aim to find connections, predict which issues are most important to see, and explain workplace trends in a narrative powered by natural language generation. People Analytics, which was highlighted at the Workday Rise conference in Las Vegas, falls under a category Workday calls augmented analytics. The gist is that People Analytics will dynamically generate HR analytics to surface issues such as organizational composition, diversity, hiring, retention and attrition, talent gaps and performance.
This endowment will empower the renowned academic institution with industry specific knowledge and resources to help create solutions to accelerate the growth and adoption of Data Science and AI globally, the company said in a statement. Through this endowment, Mindtree will help accelerate the development of technology innovation in fields like AI, data analytics, and machine learning, it added. "AI and Data Science are key priorities for our clients as these technologies offer immense potential to create new business opportunities. IIT Madras is one of the global leaders in this field and the collaboration between Mindtree and IIT Madras will help accelerate innovation and push the boundaries of knowledge," the company's CEO and Managing Director Rostow Ravanan said. Mindtree will further extend the partnership with IIT Madras to include research projects focusing on related topics such as personalisation, conversational interfaces and natural language generation, the statement said.
We present a data resource that can be useful for research on language grounding tasks in the context of geographical referring expression generation. The resource is composed of two data sets that encompass 25 different geographical descriptors and a set of associated graphical representations, drawn as polygons on a map by two groups of human subjects: teenage students and expert meteorologists.