Existing approaches to text generation fail to consider how interactions with the user may be managed within a coherent explanation or description. This paper presents an approach to generating such interactive explanations based on two levels of discourse planning: content planning and dialogue planning. The system developed allows aspects of the changing context to be monitored within an explanation, and the developing explanation to depend on this changing context. Interruptions from the user are allowed, dealt with, and resumed from within the context of that explanation.
What do Siri and machine translation have in common? They both produce strange, sometimes ridiculous language that leaves us shaking our heads in confusion. Here at IVANNOVATION we frequently use Siri as well as Google's dictation function to get our work done. Siri instantly adds items to our to-do lists, adds events to our calendars, and answers important questions like, "Siri, how much wood would a woodchuck chuck if a woodchuck could chuck wood?" (Ask Siri yourself.) Likewise, Google dictation helps us avoid the ruthless onslaught of carpal tunnel syndrome by typing up our articles and emails for us.
Given a knowledge base, expanding a query consists of determining all the ways of deriving it from atoms built on some distinguished predicates. In this paper, we address the problem of determining the expansions of a query in description logics and CARIN. Description logics are logical formalisms for representing classes of objects (called concepts) and their relationships (expressed by binary relations called roles). Much of the research in description logics has concentrated on algorithms for checking subsumption between concepts and satisfiability of knowledge bases (see e.g.
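As a toy illustration of the general notion of query expansion (not the paper's description-logic or CARIN algorithm), one can enumerate all ways of deriving a query predicate from distinguished base predicates using propositional Horn rules. All predicate names below are invented for the sketch, and a depth bound stands in for proper termination handling of recursive rules:

```python
# Toy query expansion over propositional Horn rules.
# A rule maps a head predicate to a list of alternative bodies.
rules = {
    "Ancestor": [["Parent"], ["Parent", "Ancestor"]],
}
base = {"Parent"}  # distinguished predicates: expansion bottoms out here


def expand(pred, depth=3):
    """Return all derivations of `pred` as lists of base predicates,
    up to a recursion depth (the recursive rule is otherwise infinite)."""
    if pred in base:
        return [[pred]]
    if depth == 0:
        return []
    results = []
    for body in rules.get(pred, []):
        # Combine the expansions of each atom in the body.
        partial = [[]]
        for atom in body:
            partial = [p + e for p in partial for e in expand(atom, depth - 1)]
        results.extend(partial)
    return results


print(expand("Ancestor"))
# → [['Parent'], ['Parent', 'Parent'], ['Parent', 'Parent', 'Parent']]
```

Each returned list is one expansion of the query: a conjunction of distinguished atoms from which the query predicate can be derived.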
Autonomous or driverless vehicles are a hot topic on the AI scene right now. Google, Volvo, Tesla, Uber… these are just some of the big names in the race to prove that driverless vehicles are better, and maybe even safer, than human-driven ones. I was at a family event recently where two guests were chatting about the artificial intelligence (AI) component of driverless vehicles, and more specifically how these vehicles are currently unable to detect human movement at high speed. One cited the example of a child stepping into the road whilst a vehicle was approaching at high speed. Some debate ensued about the width of the lanes surrounding the vehicle and the impact they have on the judgement of driverless vehicles.
When the amount of RDF data is very large, it becomes more likely that the triples describing entities will contain errors and may not include the specification of a class from a known ontology. The work presented here explores the use of machine learning methods to develop classifiers that identify the semantic category of an entity from the property names used to describe it. The goal is to develop classifiers that are accurate but robust to errors and noise. The training data comes from DBpedia, where entities are categorized by type and densely described with RDF properties. The initial experiments reported here indicate that the approach is promising.
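The core idea, predicting an entity's type from the bag of property names describing it, can be sketched as follows. This is a minimal illustration with made-up training data and an off-the-shelf scikit-learn pipeline, not the paper's actual experimental setup or its DBpedia data:

```python
# Sketch: classify an entity's semantic type from the RDF property names
# used to describe it.  Training data below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each training example is the space-joined property names of one entity.
train_props = [
    "birthPlace birthDate occupation spouse",      # Person
    "birthPlace deathDate occupation almaMater",   # Person
    "populationTotal areaTotal country mayor",     # City
    "populationTotal elevation country timeZone",  # City
]
train_types = ["Person", "Person", "City", "City"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_props, train_types)

# Predict the type of an unseen, noisily described entity:
# one property (elevation) points the wrong way, but the majority signal wins.
print(model.predict(["birthDate spouse elevation"])[0])  # → Person
```

Bag-of-properties features make the classifier tolerant of individual missing or erroneous properties, which matches the stated goal of robustness to noise.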