Salesforce teases its emerging AI capabilities


Since introducing its Einstein AI platform a few years ago, Salesforce has built AI into more and more of its tools. At this year's Dreamforce conference, for instance, the CRM giant announced new tools for customizing voice assistants and for incorporating AI into contact centers. The new capabilities showcase how Salesforce is progressively making work easier for its customers -- albeit in incremental steps. To give Dreamforce attendees a more forward-looking glimpse into its product capabilities, the Salesforce Research team demonstrated some of its breakthroughs in areas like conversational AI and natural language generation. Their research is focused on building an AI-driven world so far only found in sci-fi, said Salesforce Chief Scientist Dr. Richard Socher.

News Automation – The rewards, risks and realities of 'machine journalism' - WAN-IFRA


This report focuses on a specific part of news automation: the automated generation of news texts based on structured data. This is not about crystal ball gazing. News automation is already making itself felt in the daily life of newsrooms, and the examples presented in this report show how automation can aid journalism, as well as the implications and ethics involved. Media outlets face ever-growing commercial pressure to extract higher margins from dwindling resources, and that is a key driver for news automation. Right now, one of the main goals of automated content is to save journalistic effort, especially on repetitive tasks, while increasing output volume.

AI wrote fake Trump speeches and 60% of people couldn't tell the difference


In a test of how online technology could be used to interfere with the upcoming presidential election, 6 in 10 people could not tell the difference between a real speech from President Trump and a fake one generated through artificial intelligence. In a unique project shared with Secrets, a computer program dubbed "RoboTrump" successfully wrote passages of Trump-like speeches that tricked Americans, especially Trump supporters. Overall, the correct source -- Trump or RoboTrump -- was picked only 40% of the time, according to the project's manager. The analysis said, "While Trump's rambling style probably makes differentiating between real and fake more difficult than it would be for a more eloquent and talented speaker, today's new natural language generation AI models have reached a tipping point in their ability to generate fake, real-sounding text." The project tested 20 different paragraphs on 10 topics.

How insurance can mitigate AI risks


There is a growing consensus that artificial intelligence (AI) will fundamentally transform our economy and society. A wide range of commercial applications are being used across many industries. Among these are anomaly detection (e.g., for fraud mitigation), image recognition (e.g., for public safety), speech recognition and natural language generation (e.g., for virtual assistants), recommendation engines (e.g., for robo-advice), and automated decision-making systems (e.g., for workflow applications). While AI's potential benefits are huge, the concerns are substantial as well. Fears exist regarding potential discrimination, safety, privacy, ethics, and accountability for undesired outcomes.

Can Machine Learning Really Flag False News? New Research Says No


Research is still being done on how to detect fake news without manual intervention. One accepted approach is stylometry-based provenance: tracing a text's writing style back to its original source. Earlier, researchers from Harvard University and the MIT-IBM Watson AI Lab came up with an AI-powered tool to recognise AI-generated text. Known as the Giant Language Model Test Room (GLTR), the system works out whether a particular piece of writing was produced by a language model algorithm -- that is, by a computer -- or by a human. With AI and natural language generation models being used to make fake news, GLTR can help non-expert readers differentiate machine-generated text from human-written text.
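The intuition behind GLTR is that language models tend to emit tokens the model itself ranks as highly probable, whereas humans more often pick lower-ranked words. A minimal sketch of that rank-based signal, using a toy word-frequency model in place of GLTR's actual contextual GPT-2 probabilities (the function names and the toy model are illustrative, not GLTR's real API):

```python
from collections import Counter

def token_ranks(tokens, model_probs):
    """For each token, find its rank in the model's probability ordering.
    Rank 1 = the model's most likely word. Assumes every token is in vocab."""
    ranked_vocab = [word for word, _ in model_probs.most_common()]
    return [ranked_vocab.index(t) + 1 for t in tokens]

def fraction_top_k(ranks, k=10):
    """GLTR-style signal: the share of tokens drawn from the model's top-k.
    Machine-generated text typically scores much higher than human text."""
    return sum(r <= k for r in ranks) / len(ranks)

# Toy "language model": word frequencies standing in for probabilities.
model = Counter({"the": 10, "cat": 5, "sat": 3, "zebra": 1})
ranks = token_ranks(["the", "cat", "zebra"], model)   # [1, 2, 4]
```

The real system computes each token's rank under GPT-2 given its left context and color-codes the text by rank band, which is what lets a human reviewer spot suspiciously "top-heavy" passages at a glance.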

Microsoft's UniLM AI achieves state-of-the-art performance on summarization and language generation


Language model pretraining, a technique that "teaches" machine learning systems contextualized text representations by having them predict words based on their contexts, has advanced the state of the art across a range of natural language processing objectives. However, models like Google's BERT, which are bidirectional in design (meaning they draw on left-of-word and right-of-word context to form predictions), aren't well-suited to the task of natural language generation without substantial modification. That's why scientists at Microsoft Research investigated an alternative approach dubbed UNIfied pre-trained Language Model (UniLM), which completes unidirectional, sequence-to-sequence, and bidirectional prediction tasks and which can be fine-tuned for both natural language understanding and generation. They claim it compares favorably to BERT on popular benchmarks, achieving state-of-the-art results on a sampling of abstractive summarization, generative question answering, and language generation data sets. At its core, UniLM is a multi-layer Transformer network, jointly pretrained on large amounts of text and optimized for language modeling.
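What makes UniLM "unified" is that all three prediction tasks share one Transformer; only the self-attention mask changes, controlling which context each position may see. A minimal sketch of those three mask shapes (the function and its arguments are illustrative, not UniLM's actual code):

```python
import numpy as np

def attention_mask(mode, src_len, tgt_len=0):
    """Build the self-attention mask selecting UniLM's prediction task.
    Entry [i, j] = 1 means position i may attend to position j."""
    n = src_len + tgt_len
    if mode == "bidirectional":
        # BERT-style: every position sees every other position.
        return np.ones((n, n), dtype=int)
    if mode == "unidirectional":
        # GPT-style left-to-right: each position sees only itself and the left.
        return np.tril(np.ones((n, n), dtype=int))
    if mode == "seq2seq":
        # Source segment is fully visible to everyone; target is causal.
        m = np.zeros((n, n), dtype=int)
        m[:, :src_len] = 1  # all positions attend to the whole source
        m[src_len:, src_len:] = np.tril(np.ones((tgt_len, tgt_len), dtype=int))
        return m
    raise ValueError(f"unknown mode: {mode}")

# seq2seq with a 2-token source and 2-token target:
# the first target token sees the source plus itself, nothing ahead.
mask = attention_mask("seq2seq", src_len=2, tgt_len=2)
```

Because the network weights are shared across all three masks during pretraining, a single fine-tuned checkpoint can serve both understanding tasks (bidirectional mask) and generation tasks (unidirectional or seq2seq mask).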

This AI Can Produce 20,000 Articles for 44 Cents : Fanatics Media


Chris Penn Explains how AI and Natural Language Generation will listen to your customers and create content for them… Automatically!



Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides state-of-the-art general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet...) for Natural Language Understanding (NLU) and Natural Language Generation (NLG), with over 32 pretrained models in 100+ languages and deep interoperability between TensorFlow 2.0 and PyTorch, so you can choose the right framework for every part of a model's lifetime. This repo is tested on Python 2.7 and 3.5+ (examples are tested only on Python 3.5+), PyTorch 1.0.0+ and TensorFlow 2.0.0-rc1. First you need to install one of, or both, TensorFlow 2.0 and PyTorch; please refer to the TensorFlow installation page and/or the PyTorch installation page for the specific install command for your platform. Once TensorFlow 2.0 and/or PyTorch has been installed, Transformers can be installed using pip.
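The install command itself did not survive the excerpt; for the library described here it is the standard pip invocation (assuming TensorFlow 2.0 and/or PyTorch are already installed, as the README requires):

```shell
# Install the Transformers library from PyPI into the current environment.
pip install transformers
```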

This AI tool is smart enough to spot AI-generated articles and tweets


Researchers from Harvard University and the MIT-IBM Watson AI Lab have created an AI-powered tool for spotting AI-generated text. Dubbed Giant Language Model Test Room (GLTR), the system aims to detect whether a specific piece of text was generated by a language model algorithm. With AI and natural language generation models already employed to produce fake news and spread misinformation, GLTR has the potential to help non-expert readers distinguish machine-generated text from human-written text. According to results shared by the researchers, GLTR improved the human detection rate of fake text from 54 percent to 72 percent without any prior training.

Natural Language Generation for Non-Expert Users Artificial Intelligence

Motivated by the difficulty of presenting computational results, especially when the results are a collection of atoms in a logical language, to users who are not proficient in computer programming and/or the logical representation of the results, we propose a system for the automatic generation of natural language descriptions for applications targeting mainstream users. Unlike many earlier systems with the same aim, the proposed system does not employ templates for the generation task. It assumes that some natural language sentences exist in the application domain and uses this repository for the natural language description. It does not, however, require a large corpus, as is often needed in machine learning approaches. The system consists of two main components. The first analyzes the sentences and constructs a Grammatical Framework (GF) grammar for the given sentences; it is implemented using the Stanford parser and an answer set program. The second component handles sentence construction and relies on the GF Library. The paper includes two use cases to demonstrate the capability of the system. As the sentence construction is done via GF, the paper includes a use-case evaluation showing that the proposed system could also be utilized in addressing the challenge of creating an abstract Wikipedia, which was recently discussed in the BlueSky session of the 2018 International Semantic Web Conference.