An environment representation (ER) is a substantial part of every autonomous system. It introduces a common interface between perception and other system components, such as decision making, and allows downstream algorithms to deal with abstracted data without knowledge of the underlying sensors. In this work, we propose and evaluate a novel architecture that generates an egocentric, grid-based, predictive, and semantically-interpretable ER. In particular, we provide a proof of concept for the spatio-temporal fusion of multiple camera sequences and short-term prediction in such an ER. Our design utilizes a strong semantic segmentation network together with depth and egomotion estimates to first extract semantic information from multiple camera streams and then transform these separately into egocentric, temporally-aligned bird's-eye view grids. A deep encoder-decoder network is trained to fuse a stack of these grids into a unified semantic grid representation and to predict the dynamics of its surroundings. We evaluate this representation on real-world sequences of the Cityscapes dataset and show that our architecture can make accurate predictions in complex sensor fusion scenarios and significantly outperforms a model-driven baseline in a category-based evaluation.
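The two central operations of the pipeline above, aligning past semantic grids into the current ego frame and fusing the aligned stack into one grid, can be sketched minimally as follows. This is an illustrative sketch only: the function names, the restriction of egomotion to a pure cell-wise translation, and the per-cell majority vote (a stand-in for the learned encoder-decoder fusion) are all simplifying assumptions, not the paper's method.

```python
import numpy as np

def egocentric_align(grid, ego_shift):
    """Shift a past semantic grid into the current ego frame.
    grid: (H, W) integer array of class ids.
    ego_shift: (dy, dx) ego displacement in grid cells (translation only,
    a simplification of full egomotion compensation).
    Cells shifted in from outside the grid are marked unknown (-1)."""
    H, W = grid.shape
    aligned = np.full((H, W), -1, dtype=grid.dtype)
    dy, dx = ego_shift
    # destination region in the current frame
    ys = slice(max(0, dy), min(H, H + dy))
    xs = slice(max(0, dx), min(W, W + dx))
    # corresponding source region in the past grid
    src_ys = slice(max(0, -dy), min(H, H - dy))
    src_xs = slice(max(0, -dx), min(W, W - dx))
    aligned[ys, xs] = grid[src_ys, src_xs]
    return aligned

def fuse(grids, num_classes):
    """Fuse a stack of temporally-aligned semantic grids by per-cell
    majority vote; unknown cells (-1) cast no vote. In the actual
    architecture this step is a learned encoder-decoder network."""
    votes = np.zeros(grids[0].shape + (num_classes,), dtype=int)
    for g in grids:
        for c in range(num_classes):
            votes[..., c] += (g == c)
    return votes.argmax(axis=-1)
```

The vote-based fusion makes the role of temporal alignment explicit: only after all grids share the current ego frame do per-cell statistics across time become meaningful.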
These methods have generally been successful at processing simple declarative sentences, but are less suited to other kinds of sentence structures, or to more complex ones. Among the main problems are the difficulty of coping with different word orders and the fact that the burden of untangling complex structures falls on item-specific semantic information rather than on general syntactic rules, which increases the amount of specification needed for the vocabulary of a given domain. For this reason, other authors, such as Heidorn (1972), Sowa and Way (1986), and Boguraev and Sparck Jones (1983), have preferred to add a semantic component to a syntactic parser. However, purely semantic parsers have other advantages, notably the potential for greater robustness, which justify their further study. This paper presents a new method for semantic parsing called "dual frames", which attempts to solve or lessen the above problems.
Most computational story generation systems lack the ability to generate new types of imaginary objects that play functional roles in stories, such as lightsabers in Star Wars. We present an algorithm that generates such imaginary objects, which we call gadgets, in order to extend the ontological expressivity of existing, planning-based story generation systems. The behavior of a gadget is represented as a plan including typical events that happen when the gadget is used. Our algorithm creates gadgets by extrapolating and merging one or more commonly known objects in order to achieve a narrative goal provided by an existing story generator. We extend partial-order planning to establish open conditions based on analogies between concepts related respectively to common objects and the gadget. We show that the algorithm is capable of generating gadgets similar to those created by humans.
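The core idea of merging known objects until a narrative goal is covered can be sketched in a few lines. This is a deliberately reduced sketch under strong assumptions: objects are represented only as sets of effect predicates, the merge is a greedy cover rather than the paper's analogy-driven extension of partial-order planning, and the names (`make_gadget`, the example predicates) are hypothetical.

```python
# Illustrative sketch, not the paper's planner: a gadget is formed by
# greedily merging known objects whose effects cover a narrative goal.
def make_gadget(goal_effects, known_objects):
    """known_objects: dict mapping object name -> set of effect predicates.
    Returns a merged gadget schema covering goal_effects, or None if the
    available objects cannot cover the goal."""
    uncovered = set(goal_effects)
    parts, effects = [], set()
    for name, eff in known_objects.items():
        if uncovered & eff:      # object contributes a needed effect
            parts.append(name)
            effects |= eff       # gadget inherits all effects of its parts
            uncovered -= eff
        if not uncovered:
            return {"parts": parts, "effects": effects}
    return None
```

For example, a goal requiring both cutting and light emission would merge a sword-like and a torch-like object into a single lightsaber-like gadget schema; the paper's algorithm additionally reasons by analogy over the concepts involved rather than by literal set cover.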
The need to handle the semantic heterogeneity of resources is a key problem of the Semantic Web. State-of-the-art ontology matching techniques are the key technology for addressing this issue. However, they only partially exploit the natural language descriptions of ontology entities, and they are mostly unable to find correspondences between entities having different logical types (e.g. mapping properties to classes). We introduce a novel approach aimed at finding correspondences between ontology entities according to the intensional meaning of their models, hence abstracting from their logical types. Lexical linked open data and frame semantics play a crucial role in this proposal. We argue that this approach may lead to a step ahead in the state of the art of ontology matching, and positively affect related applications such as question answering and knowledge reconciliation.
With the avalanche of electronic text collections descending from all over the web, new forms of document processing that facilitate automatic extraction of useful information from texts are required. One approach for understanding the key aspects of a document or of a set of documents is to analyze the events in the document(s) and to automatically find scenarios of related events.