Collaborating Authors

A Review of Relational Machine Learning for Knowledge Graphs

Machine Learning

Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be "trained" on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive datasets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to obtain improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. As an example of such a combination, we discuss Google's Knowledge Vault project.
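The latent feature approach the abstract mentions can be illustrated with a minimal sketch. The code below hand-rolls a tiny DistMult-style bilinear model (one common tensor-factorization scorer for knowledge graphs): each entity and relation gets a learned embedding, a triple (s, r, o) is scored as the trilinear product of the three vectors, and the embeddings are fit by gradient descent on a logistic loss over observed facts and corrupted (false) triples. The entities, relation, and triples here are invented toy data, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge graph (hypothetical data for illustration only)
entities = ["alice", "bob", "paris", "london"]
relations = ["lives_in"]
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

dim = 8
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

positives = [("alice", "lives_in", "paris"), ("bob", "lives_in", "london")]
negatives = [("alice", "lives_in", "london"), ("bob", "lives_in", "paris")]

def score(s, r, o):
    """DistMult score: trilinear product of the three embeddings."""
    return float(np.sum(E[e_idx[s]] * R[r_idx[r]] * E[e_idx[o]]))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for epoch in range(500):
    examples = [(t, 1.0) for t in positives] + [(t, 0.0) for t in negatives]
    for (s, r, o), y in examples:
        si, ri, oi = e_idx[s], r_idx[r], e_idx[o]
        # Logistic loss; d(loss)/d(score) = sigmoid(score) - label
        g = sigmoid(np.sum(E[si] * R[ri] * E[oi])) - y
        gs = g * R[ri] * E[oi]
        gr = g * E[si] * E[oi]
        go = g * E[si] * R[ri]
        E[si] -= lr * gs
        R[ri] -= lr * gr
        E[oi] -= lr * go

# After training, observed facts score higher than corrupted triples,
# which is exactly the "predict new edges" use the abstract describes.
print(score("alice", "lives_in", "paris") > score("alice", "lives_in", "london"))
```

In a real system the same scoring function would be applied to unobserved triples to rank candidate new facts; the review covers richer factorizations and the observable-pattern models this sketch omits.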

Foundations of Explainable Knowledge-Enabled Systems

Artificial Intelligence

Explainability has been an important goal since the early days of Artificial Intelligence, and several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled to the capabilities of the artificial intelligence systems of their time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying the gaps that must be addressed to make explanations user- and context-focused, we propose new definitions for explanations and for explainable knowledge-enabled systems.

AAAI 2006 Spring Symposium Reports

AI Magazine

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Computer Science Department, was pleased to present its 2006 Spring Symposium Series held March 27-29, 2006, at Stanford University, California. The titles of the eight symposia were (1) Argumentation for Consumers of Health Care (chaired by Nancy Green); (2) Between a Rock and a Hard Place: Cognitive Science Principles Meet AI Hard Problems (chaired by Christian Lebiere); (3) Computational Approaches to Analyzing Weblogs (chaired by Nicolas Nicolov); (4) Distributed Plan and Schedule Management (chaired by Ed Durfee); (5) Formalizing and Compiling Background Knowledge and Its Applications to Knowledge Representation and Question Answering (chaired by Chitta Baral); (6) Semantic Web Meets e-Government (chaired by Ljiljana Stojanovic); (7) To Boldly Go Where No Human-Robot Team Has Gone Before (chaired by Terry Fong); and (8) What Went Wrong and Why: Lessons from AI Research and Applications (chaired by Dan Shapiro).

Developing Semantic Classifiers for Big Data

AAAI Conferences

When the amount of RDF data is very large, it becomes more likely that the triples describing entities will contain errors and may not include the specification of a class from a known ontology. The work presented here explores the use of machine learning methods to develop classifiers that identify the semantic category of an entity based on the property names used to describe it. The goal is to develop classifiers that are accurate, but robust to errors and noise. The training data comes from DBpedia, where entities are categorized by type and densely described with RDF properties. The initial experiments reported here indicate that the approach is promising.
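The core idea of classifying an entity by the property names that describe it can be sketched in a few lines. The snippet below builds, for each type, a profile of how often each property name appears in that type's training entities, and assigns a new entity to the type whose profile best overlaps its properties. The property names and type labels are invented examples loosely modeled on DBpedia-style data, not the paper's actual training set or classifier.

```python
from collections import Counter

# Hypothetical training data: (set of property names, type label).
train = [
    ({"birthDate", "birthPlace", "occupation"}, "Person"),
    ({"birthDate", "deathDate", "spouse"}, "Person"),
    ({"population", "country", "areaKm"}, "Place"),
    ({"population", "elevation", "country"}, "Place"),
]

def train_profiles(examples):
    """Count how often each property name co-occurs with each type."""
    profiles, totals = {}, Counter()
    for props, label in examples:
        totals[label] += 1
        prof = profiles.setdefault(label, Counter())
        for p in props:
            prof[p] += 1
    return profiles, totals

def classify(props, profiles, totals):
    """Pick the type whose property profile best overlaps the entity's
    properties. Unknown or noisy property names contribute nothing,
    which gives the robustness to errors the abstract aims for."""
    def overlap(label):
        prof = profiles[label]
        return sum(prof[p] / totals[label] for p in props if p in prof)
    return max(profiles, key=overlap)

profiles, totals = train_profiles(train)
# A noisy, partially described entity is still typed correctly:
print(classify({"birthPlace", "occupation", "someNoisyProp"}, profiles, totals))
# → Person
```

The paper's classifiers are trained on DBpedia at scale with standard machine learning methods; this overlap scorer is only a minimal stand-in to show why property names alone carry a strong type signal.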