Broad Coverage, Domain-Generic Deep Semantic Parsing

AAAI Conferences

The TRIPS parser is a broad-coverage, domain-general deep semantic parser that produces logical forms grounded in a general ontology. While it uses many techniques from modern syntactic theory, the system is semantically driven and incorporates many ideas from construction grammar. Unlike most work in semantic parsing, which is limited to specific simple domains, the TRIPS parser performs well in many diverse domains, after incorporating domain-specific named entity recognition where needed. The TRIPS grammar uses syntactic, semantic and ontological constraints simultaneously to construct semantically accurate parses, and includes many rules that capture the common constructions of everyday spoken language.
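The sketch below is a toy illustration of that last point, not the TRIPS grammar itself: a made-up lexical rule for the verb "chase" that only applies when the candidate argument fills the expected syntactic role and its (equally made-up) ontology type satisfies the rule's selectional restriction, i.e. syntactic and ontological constraints are checked together.

    # Toy illustration only: a hypothetical rule that combines a syntactic
    # constraint (the argument's role) with an ontological one (its type).
    ONTOLOGY = {
        "dog": "ANIMAL",
        "ball": "PHYS-OBJECT",
        "idea": "ABSTRACT",
    }

    RULE = {  # hypothetical lexical entry for "chase"
        "verb": "chase",
        "object_role": "dobj",                      # syntactic constraint
        "object_type": {"ANIMAL", "PHYS-OBJECT"},   # ontological constraint
    }

    def rule_applies(verb, dep_role, dep_word):
        return (verb == RULE["verb"]
                and dep_role == RULE["object_role"]
                and ONTOLOGY.get(dep_word) in RULE["object_type"])

    print(rule_applies("chase", "dobj", "ball"))   # True
    print(rule_applies("chase", "dobj", "idea"))   # False: ontological mismatch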


Generating Animations from Screenplays

arXiv.org Artificial Intelligence

Automatically generating animation from natural language text finds application in a number of areas, e.g., movie script writing, instructional videos, and public safety. However, translating natural language text into animation is a challenging task. Existing text-to-animation systems can handle only very simple sentences, which limits their applications. In this paper, we develop a text-to-animation system that is capable of handling complex sentences. We achieve this by introducing a text simplification step into the process. Building on an existing animation generation system for screenwriting, we create a robust NLP pipeline to extract information from screenplays and map it to the system's knowledge base. We develop a set of linguistic transformation rules that simplify complex sentences. Information extracted from the simplified sentences is used to generate a rough storyboard and video depicting the text. Our sentence simplification module outperforms existing systems in terms of BLEU and SARI metrics. We further evaluated our system via a user study: 68% of participants believe that our system generates reasonable animation from input screenplays.
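As a rough illustration of the kind of linguistic transformation rule such a simplification step might use (the rule and the spaCy-based code below are our own sketch, not the paper's implementation), the following splits a sentence containing two conjoined independent clauses into two simpler sentences:

    # Sketch of one hypothetical simplification rule: split "X does A and Y
    # does B" into two sentences. Assumes spaCy with its small English model.
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def _clause(tokens):
        return " ".join(t.text for t in tokens).strip(" ,.") + "."

    def split_conjoined(sentence):
        doc = nlp(sentence)
        root = next((t for t in doc if t.dep_ == "ROOT"), None)
        if root is None:
            return [sentence]
        for conj in root.children:
            # A conjoined verb with its own subject marks an independent clause.
            if conj.dep_ == "conj" and any(c.dep_ == "nsubj" for c in conj.children):
                second = sorted(conj.subtree, key=lambda t: t.i)
                first = [t for t in doc if t.i < second[0].i and t.dep_ != "cc"]
                return [_clause(first), _clause(second)]
        return [sentence]

    print(split_conjoined("John walks into the kitchen and Mary pours the coffee."))
    # ['John walks into the kitchen.', 'Mary pours the coffee.'] (parse permitting)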


Dependency-based Text Graphs for Keyphrase and Summary Extraction with Applications to Interactive Content Retrieval

arXiv.org Artificial Intelligence

We build a bridge between neural network-based machine learning and graph-based natural language processing and introduce a unified approach to keyphrase, summary and relation extraction by aggregating dependency graphs from links provided by a deep learning-based dependency parser. We reorganize dependency graphs to focus on the most relevant content elements of a sentence, integrate sentence identifiers as graph nodes, and, after ranking the graph, extract our keyphrases and summaries from its largest strongly connected component. We take advantage of the implicit structural information that dependency links bring to extract subject-verb-object, is-a and part-of relations. We put it all together into a proof-of-concept dialog engine that specializes the text graph with respect to a query and interactively reveals the document's most relevant content elements. The open-source code of the integrated system is available at https://github.com/ptarau/DeepRank.
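The snippet below is a minimal sketch of the general recipe rather than the DeepRank implementation linked above: it aggregates dependency links from a spaCy parse into a graph, adds sentence identifiers as extra nodes, ranks the graph with PageRank via networkx, and reads off the top-ranked lemmas as keyphrase candidates.

    # Sketch only (not the paper's DeepRank code): build a text graph from
    # dependency links plus sentence-identifier nodes, then rank it.
    import networkx as nx
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def keyphrase_candidates(text, k=5):
        doc = nlp(text)
        g = nx.DiGraph()
        for sent_id, sent in enumerate(doc.sents):
            for tok in sent:
                if not tok.is_alpha or tok.is_stop:
                    continue
                # Edge from dependent to head, mirroring the dependency link.
                if tok.head is not tok and tok.head.is_alpha and not tok.head.is_stop:
                    g.add_edge(tok.lemma_, tok.head.lemma_)
                # Sentence identifiers become graph nodes as well.
                g.add_edge(("SENT", sent_id), tok.lemma_)
        ranks = nx.pagerank(g)
        words = [(n, r) for n, r in ranks.items() if isinstance(n, str)]
        return [w for w, _ in sorted(words, key=lambda x: -x[1])[:k]]

    print(keyphrase_candidates(
        "The TRIPS parser produces logical forms grounded in a general ontology. "
        "The parser performs well in many diverse domains."))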


Understanding Language Syntax and Structure: A Practitioner's Guide to NLP

#artificialintelligence

For any language, syntax and structure usually go hand in hand: a set of specific rules, conventions, and principles governs the way words are combined into phrases, phrases are combined into clauses, and clauses are combined into sentences. We will be talking specifically about English syntax and structure in this section. In English, words usually combine to form other constituent units. These constituents include words, phrases, clauses, and sentences.
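A quick way to see these constituent units in practice (using spaCy here; any toolkit with part-of-speech tagging and noun-phrase chunking would do) is to tag the words of a sentence and list its noun phrases:

    # Words with part-of-speech tags, and noun chunks as one kind of phrase.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The brown fox is quick and he is jumping over the lazy dog.")

    print([(tok.text, tok.pos_) for tok in doc])        # word-level units
    print([chunk.text for chunk in doc.noun_chunks])    # phrase-level units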