"As for why I tell a lot of stories, there's a joke about that. There was once a man who had a computer, and he asked it, 'Do you compute that you will ever be able to think like a human being?' And after assorted grindings and beepings, a slip of paper came out of the computer that said, 'That reminds me of a story . . . "
– from ANGELS FEAR: TOWARDS AN EPISTEMOLOGY OF THE SACRED, Gregory Bateson & Mary Catherine Bateson (Part III, 'Metalogue').
CHICAGO, Jan. 17, 2018 (GLOBE NEWSWIRE) -- Narrative Science, the leader in Advanced Natural Language Generation (Advanced NLG) for the enterprise, today announced the availability of its third annual research report, "Outlook on Artificial Intelligence in the Enterprise 2018." In partnership with the National Business Research Institute (NBRI), Narrative Science surveyed business executives from a wide array of functions, including business intelligence, finance, and product management, to understand the use, value, and impact of AI throughout their businesses. Narrative Science's analysis of the data revealed key findings, including the compelling discovery that almost two-thirds of enterprises utilized AI in 2017.
Ready or not, the future is here. For enterprise organizations, it must be a data-driven one. Whoever can use data and technology to transform the customer experience, and be the first to discover and deliver on new business models, will be the disruptor; those who can't will be the disrupted, in this period known as the "era of Digital Darwinism." As we hone and focus our organizations' 2020 (and even 2030) vision, MicroStrategy has compiled the top trends we should all be watching today and in the near future, gathered from leading influencers in business intelligence, data analytics, and digital transformation. From the Internet of Things and artificial intelligence, to machine learning and natural language generation, to some less-discussed and very human factors, the insights gathered here will serve as a resource to shape enterprise strategy and planning in 2018 and beyond.
The Seventh International Workshop on Natural Language Generation was held from 21 to 24 June 1994 in Kennebunkport, Maine. Sixty-seven people from 13 countries attended this successful 4-day meeting, coming from as far away as Japan, Australia, and Europe. The goal of the workshop was to introduce new, cutting-edge work on natural language generation in computational linguistics and AI to the community and to provide an atmosphere in which discussion and exchange would flourish. The study of language generation in computational linguistics and AI is still overshadowed by the study of parsing and analysis.
The PROSENET/TEXTNET approach is designed to facilitate the generation of polished prose by an expert system. For many expert systems, one component of the user interface involves generating English prose to communicate the system's conclusions and recommendations. The approach uses the augmented transition network (ATN) formalism to help structure prose generation at the phrase, sentence, and paragraph levels, and uses expressive frames to give the expert system builder considerable freedom to organize material flexibly at the paragraph level. The PROSENET/TEXTNET approach has been used in a number of prototype expert systems in medical domains, and has proved to be a convenient and powerful tool.
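The general idea of ATN-structured generation can be sketched very loosely as a set of networks whose arcs either emit a literal word, fill a slot from a content frame, or push into a sub-network. Everything below (the network names, arc types, and frame) is an illustrative toy, not the actual PROSENET/TEXTNET implementation:

```python
# Toy ATN-style generator: each "network" is a sequence of arcs.
# An arc emits a literal word, fills a slot from the content frame
# (standing in for the expressive frames mentioned above), or pushes
# into a sub-network, mirroring phrase/sentence-level structuring.
NETWORKS = {
    "S":       [("push", "NP_subj"), ("push", "VP")],
    "NP_subj": [("word", "the"), ("reg", "subject")],
    "VP":      [("reg", "verb"), ("word", "the"), ("reg", "object")],
}

def generate(net, frame, networks=NETWORKS):
    """Traverse one network's arcs, returning a list of words."""
    words = []
    for kind, value in networks[net]:
        if kind == "word":      # literal terminal
            words.append(value)
        elif kind == "reg":     # fill from the content frame
            words.append(frame[value])
        elif kind == "push":    # recurse into a sub-network
            words.extend(generate(value, frame, networks))
    return words

frame = {"subject": "patient", "verb": "shows", "object": "improvement"}
print(" ".join(generate("S", frame)))  # the patient shows the improvement
```

A fuller version would add arc tests and register-setting actions, which is what makes a transition network "augmented" rather than a plain recursive grammar.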
It was planned and coordinated by Kristiina Jokinen (Nara Institute of Science and Technology [NAIST]), Mark Maybury (The MITRE Corporation), Michael Zock (LIMSI-CNRS), and Ingrid Zukerman (Monash University). Thirty scholars from Europe, the United States, Australia, and Japan participated in the workshop. The purpose of the workshop was to clarify the role of rational and cooperative planning in generation in general and to bridge the gaps that seem to exist between theoretical models of planning agents and practical aspects of natural language generation (NLG) architecture. In recent years, there has been a focus shift in NLG from the study of well-formedness conditions (grammars) to the exploration of the communicative adequacy of linguistic forms: Speaking is viewed as an indirect means for achieving communicative goals. The workshop's presentations attempted to provide further material for building bridges, and it finished with a panel on the gaps and bridges theme, summarizing the topics of the ...
Text planning is one of the most rapidly growing subfields of language generation. Until the 1988 AAAI conference, no workshop had concentrated on text planning and its relationship to realization. This report is a summary of that workshop. Traditionally, systems that automatically generate natural language have been conceived as consisting of two principal components: a text planner and a realization grammar. Recent advances in the art, especially the incorporation of generation systems into large computer applications, have prompted researchers to question this traditional categorization and the architectures used to embody generator systems.
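The traditional two-component architecture the report describes can be sketched as a planner that selects and orders messages, feeding a realizer that maps each message to a surface sentence. The planner, templates, and message format below are hypothetical stand-ins, not any particular system's design:

```python
# Sketch of the traditional two-stage NLG pipeline:
# text planner (content ordering) -> realization grammar (surface form).

def text_planner(facts):
    """Order content into a simple document plan: a list of
    messages, most important fact first."""
    return sorted(facts, key=lambda f: -f["importance"])

# A trivial "realization grammar": one template per message type.
TEMPLATES = {
    "rise": "{subject} rose by {amount}.",
    "fall": "{subject} fell by {amount}.",
}

def realizer(plan):
    """Map each planned message to a surface sentence."""
    return " ".join(TEMPLATES[m["type"]].format(**m) for m in plan)

facts = [
    {"type": "fall", "subject": "Costs", "amount": "2%", "importance": 1},
    {"type": "rise", "subject": "Revenue", "amount": "8%", "importance": 2},
]
print(realizer(text_planner(facts)))
# Revenue rose by 8%. Costs fell by 2%.
```

The questioning of this architecture mentioned above concerns exactly this rigid separation: in practice, realization choices (e.g. whether two messages can be aggregated into one sentence) can feed back into planning.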
These collocations are used by native speakers of a language almost without thought, yet they must be learned by nonnative speakers of the language. A native speaker of English might say that he/she drinks "strong coffee," but a nonnative speaker might say either "powerful coffee" or "sturdy coffee." Collocations tend to vary among languages and topic domains. Unfortunately, the task of correctly identifying lexical collocations, even by native speakers of the language, has been shown to be difficult. Computer systems that translate natural languages, or machine-translation systems, need to know about lexical collocation information to produce natural-sounding or colloquially proper text.
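One standard way to score candidate collocations automatically is pointwise mutual information (PMI) over adjacent word pairs, with a minimum-frequency cutoff because PMI is unreliable for rare pairs. The function and toy corpus below are illustrative only, not drawn from any specific system:

```python
import math
from collections import Counter

def pmi_collocations(tokens, min_count=2):
    """Score adjacent word pairs by pointwise mutual information,
    keeping only pairs seen at least min_count times."""
    n = len(tokens)
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    scores = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        p_pair = c / (n - 1)               # bigram probability
        p1, p2 = unigrams[w1] / n, unigrams[w2] / n
        scores[(w1, w2)] = math.log2(p_pair / (p1 * p2))
    return sorted(scores, key=scores.get, reverse=True)

corpus = ("she drinks strong coffee . he drinks strong coffee . "
          "she drinks tea . he drinks tea .").split()
print(pmi_collocations(corpus)[0])  # ('strong', 'coffee')
```

Here "strong" and "coffee" occur only next to each other, so the pair gets the highest PMI, capturing the intuition that "strong coffee" is the conventional combination rather than "powerful coffee."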
What is Natural Language Generation? Since the beginning of 2017, the term Natural Language Generation has become more and more common. But its rise in popularity also led to some confusion. Even the Forbes article that declared NLG one of the top trends for 2017 listed several companies that do Natural Language Processing, not Natural Language Generation! How can you blame them?
As the New Year begins and we look forward to what 2017 will bring for business, it makes sense to look at one of the most captivating but least understood areas of technology. I am of course talking about Artificial Intelligence. However, to understand what the top Artificial Intelligence trends in 2017 will be, we need both to look back at 2016 and to cut through clever marketing and misinformation. Or, to put it another way, we need to separate the buzzwords from the trends. In 2016, Machine Learning became a trendy buzzword, with software vendors jockeying to make use of the term even if their technology didn't really use machine learning.
Just as Echo and Alexa have invaded our homes, conversational interfaces will become increasingly common when it comes to interacting with technology in a business environment. According to one report, next year 20% of firms will look to add voice-enabled interfaces to their existing point-and-click dashboards and systems. After all, it's the way most of us communicate most naturally – we can generally structure any query in a matter of seconds. As computers have become more adept at understanding us, there's less need for us to spend time learning their complicated formal languages. Natural language generation and natural language processing algorithms are constantly learning to become better at understanding us, and at talking to us in a way we understand.