Discovering Implicational Knowledge in Wikidata

arXiv.org Artificial Intelligence

Knowledge graphs have recently become the state-of-the-art tool for representing the diverse and complex knowledge of the world. Examples include the proprietary knowledge graphs of companies such as Google, Facebook, IBM, or Microsoft, but also freely available ones such as YAGO, DBpedia, and Wikidata. A distinguishing feature of Wikidata is that the knowledge is collaboratively edited and curated. While this greatly enhances the scope of Wikidata, it also makes it impossible for a single individual to grasp complex connections between properties or understand the global impact of edits in the graph. We apply Formal Concept Analysis to efficiently identify comprehensible implications that are implicitly present in the data. Although the complex structure of data modelling in Wikidata is not amenable to a direct approach, we overcome this limitation by extracting contextual representations of parts of Wikidata in a systematic fashion. We demonstrate the practical feasibility of our approach through several experiments and show that the results may lead to the discovery of interesting implicational knowledge.
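
To make the idea concrete, here is a minimal sketch (not the paper's implementation) of the core check behind such implication mining: a formal context maps objects to the attributes they carry, and an implication P -> Q holds when every object carrying all of P also carries all of Q. The toy items, properties, and function names below are invented for illustration.

```python
# Minimal sketch of attribute-implication checking over a formal context
# extracted from Wikidata-style data: objects are items, attributes are
# the properties they use. Items and properties here are toy examples.

# Formal context as a mapping: item -> set of properties it uses.
context = {
    "Q1 (person A)":   {"P31 instance of", "P569 date of birth", "P106 occupation"},
    "Q2 (person B)":   {"P31 instance of", "P569 date of birth", "P106 occupation"},
    "Q3 (mountain C)": {"P31 instance of", "P2044 elevation"},
}

def extent(attrs, ctx):
    """All objects that have every attribute in `attrs`."""
    return {obj for obj, props in ctx.items() if attrs <= props}

def implication_holds(premise, conclusion, ctx):
    """P -> Q holds iff every object with all of P also has all of Q."""
    return all(conclusion <= ctx[obj] for obj in extent(premise, ctx))

# "Every item with a date of birth also has an occupation" -- true here.
print(implication_holds({"P569 date of birth"}, {"P106 occupation"}, context))
```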


Here's why artificial intelligence isn't out to get us

#artificialintelligence

AI has a long way to go before people can or should worry about turning the world over to machines. Elon Musk's new plan to go all-in on self-driving vehicles puts a lot of faith in the artificial intelligence needed to ensure his Teslas can read and react to different driving situations in real time. AI is doing some impressive things--last week, for example, makers of the AlphaGo computer program reported that their software has learned to navigate the intricate London subway system like a native. Even the White House has jumped on the bandwagon, releasing a report days ago to help prepare the U.S. for a future when machines can think like humans. But AI has a long way to go before people can or should worry about turning the world over to machines, says Oren Etzioni, a computer scientist who has spent the past few decades studying and trying to solve fundamental problems in AI.


Why Quartz's news app is so much bigger than news

#artificialintelligence

Tom Popomaronis is the founder and CEO of OpiaTalk. Have you tried the Quartz news app yet? Imagine a text conversation with a bot that sends you a news topic. You're then presented with two choices: either tap a string of relevant (and surprisingly entertaining) emojis, which is like pressing "learn more," or tap an "anything else?" button to have another topic served. If you opt for the emojis, the app sends 1-3 follow-on texts that provide a high-level summary of the story and a link to the article. If you're lucky, you'll even see a pertinent and entertaining GIF in the mix for added value.


A Discourse Approach to Explanation Aware Knowledge Representation

AAAI Conferences

This study describes a discourse approach to explanation aware knowledge representation. It presents a reasoning model that adheres to argumentation as found in written discourse, intended for use in intelligent human-computer collaboration and inter-agent deliberation. The approach integrates the Toulmin model with Rhetorical Structure Theory and Perelman and Olbrechts-Tyteca's (1958) strategic forms of argumentative processes to define a set of constraints for governing argumentative interactions and formulating explanations in an ontologically normalized manner. Arguments, when satisfied, are instantiated into a dynamic rhetorical network that represents the system's model of the situation. Two modalities of instantiation are proposed. Inferential instantiation is used when a claim may be inferred from a ground, and synthetic instantiation is used for descriptive argumentation where both ground and claim must be satisfied for the argument to be instantiated.
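
As a rough illustration only, and not the paper's formalism, the distinction between the two instantiation modalities can be sketched in a few lines of Python; the Argument class, the believed set, and the infer callable are invented stand-ins for the ground/claim machinery described above.

```python
# Minimal sketch of the two instantiation modalities for a Toulmin-style
# argument with a ground and a claim. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Argument:
    ground: str
    claim: str
    mode: str  # "inferential" or "synthetic"

def instantiate(arg, believed, infer):
    """Return True if the argument may enter the rhetorical network.

    believed: set of propositions currently held in the situation model.
    infer:    callable deciding whether a claim follows from a ground.
    """
    if arg.mode == "inferential":
        # Inferential instantiation: the ground is satisfied and the
        # claim can be inferred from it.
        return arg.ground in believed and infer(arg.ground, arg.claim)
    if arg.mode == "synthetic":
        # Synthetic instantiation: descriptive argumentation, so both
        # ground and claim must independently be satisfied.
        return arg.ground in believed and arg.claim in believed
    raise ValueError(f"unknown mode: {arg.mode}")

# Toy usage with a trivial inference relation.
rules = {("the road is wet", "it rained")}
arg = Argument("the road is wet", "it rained", "inferential")
print(instantiate(arg, {"the road is wet"}, lambda g, c: (g, c) in rules))
```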


FS94-04-002.pdf

AAAI Conferences

In this paper, I discuss a formalism that I devised and implemented to deal with complex instructions, in particular those containing Purpose Clauses. I argue that such a formalism supports inferences that are an important pragmatic aspect of natural language, and that at the same time are related to surface reasoning based on the syntactic structure of Natural Language; and moreover, that by using the kind of approach I propose, namely, first defining linguistic terms and then using them in the part of the KB concerning the semantics of the domain, we can start bridging the gap between the two representation languages that the organizers of the symposium mention, the first used to capture the semantics of a sentence, the second used to capture general knowledge about the domain. Details on all the topics discussed here can be found in [Di Eugenio, 1993; Di Eugenio, 1994].

2 Motivations for the representation language

The characteristics of the formalism I propose derive from an analysis of an extensive corpus of Purpose Clauses, infinitival "to" constructions as in "Do α to do β"; as some of these characteristics stem from the inferences necessary to interpret Purpose Clauses, it is with such inferences that I will start. Interpreting "Do α to do β", where β describes the goal to be achieved, in computational terms amounts to: (1a) use β as an index into the KB; (1b) find a collection of methods M_i that achieve β; (1c) try to match α to an action γ_{i,j} that appears as a component in M_i. These are typical plan recognition inferences, e.g. see [Wilensky, 1983; Pollack, 1986; Charniak, 1988; Litman and Allen, 1990]. In all the work on plan recognition I know of, with the exception of [Charniak, 1988], the match in step (1c) is taken to mean that α is an instance of γ_{i,j}. However, given the variability of NL action descriptions, we can't assume that the input description exactly matches the knowledge that an agent has about actions and their mutual relations: my research focuses on computing a more flexible match between α and γ_{i,j}. The two kinds of discrepancy between input and stored action descriptions I have examined so far concern structural consistency, and expectations that may need to be satisfied for a certain relation R to hold between α and γ_{i,j}.
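
As a rough illustration of steps (1a)-(1c) and of the flexible match in (1c), here is a minimal sketch; it is not Di Eugenio's implemented formalism, and the toy knowledge base and the relaxed_match criterion are invented stand-ins for the structural-consistency and expectation checks mentioned above.

```python
# Minimal sketch mirroring steps (1a)-(1c) for interpreting
# "Do alpha to do beta", with a relaxed match in step (1c) instead of a
# strict instance-of test. KB contents and match criterion are invented.

# The KB indexes each goal (beta) to methods that achieve it, where a
# method is just a list of component action descriptions.
knowledge_base = {
    "open the door": [
        ["grasp the doorknob", "turn the doorknob", "pull the door"],
        ["push the door"],
    ],
}

def relaxed_match(alpha, gamma):
    """Flexible match: exact identity, or at least two shared words,
    standing in for the structural-consistency / expectation checks."""
    return alpha == gamma or len(set(alpha.split()) & set(gamma.split())) >= 2

def interpret_purpose_clause(alpha, beta, kb):
    """Return (method, component) pairs where alpha matches a component
    gamma_{i,j} of some method M_i that achieves beta."""
    matches = []
    for method in kb.get(beta, []):      # (1a), (1b): index the KB by beta
        for gamma in method:             # (1c): match alpha to a component
            if relaxed_match(alpha, gamma):
                matches.append((method, gamma))
    return matches

# "Turn the knob to open the door": matches "turn the doorknob" only
# under the relaxed criterion, not under strict identity.
print(interpret_purpose_clause("turn the knob", "open the door", knowledge_base))
```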