Witbrock, Michael


D2KE: From Distance to Kernel and Embedding

arXiv.org Machine Learning

For many machine learning problem settings, particularly with structured inputs such as sequences or sets of objects, a distance measure between inputs can be specified more naturally than a feature representation. However, most standard machine learning models are designed for inputs with a vector feature representation. In this work, we consider the estimation of a function $f:\mathcal{X} \rightarrow \mathbb{R}$ based solely on a dissimilarity measure $d:\mathcal{X}\times\mathcal{X} \rightarrow \mathbb{R}$ between inputs. In particular, we propose a general framework to derive a family of positive definite kernels from a given dissimilarity measure, which subsumes the widely used representative-set method as a special case, and relates to the well-known distance substitution kernel in a limiting case. We show that functions in the corresponding Reproducing Kernel Hilbert Space (RKHS) are Lipschitz-continuous w.r.t. the given distance metric. We provide a tractable algorithm to estimate a function from this RKHS, and show that it enjoys better generalizability than nearest-neighbor estimates. Our approach draws on the literature of Random Features, but instead of deriving feature maps from an existing kernel, we construct novel kernels from a random feature map that we specify given the distance measure. We conduct classification experiments in domains as disparate as strings, time series, and sets of vectors, where our proposed framework compares favorably to existing distance-based learning methods such as $k$-nearest-neighbors, distance-substitution kernels, pseudo-Euclidean embedding, and the representative-set method.
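The abstract describes the construction only at a high level, so the following minimal Python sketch is offered purely for concreteness. The exponential feature map exp(-gamma * d(x, omega)), the use of resampled training inputs as the random objects, and all names here (d2ke_features, R, gamma) are illustrative assumptions made for this sketch, not details taken from the paper:

```python
import numpy as np

def d2ke_features(X, d, R=128, gamma=1.0, seed=None):
    """Map structured inputs to R-dimensional random-feature vectors.

    X     : list of arbitrary structured inputs (strings, time series, sets, ...)
    d     : dissimilarity function d(x, y) -> float
    R     : number of random features (larger R -> closer kernel approximation)
    gamma : bandwidth of the assumed exponential feature map
    """
    rng = np.random.default_rng(seed)
    # Draw R "random objects"; here, hypothetically, resampled training inputs.
    omegas = [X[i] for i in rng.integers(0, len(X), size=R)]
    # phi_omega(x) = exp(-gamma * d(x, omega)); stack into a feature matrix.
    Z = np.array([[np.exp(-gamma * d(x, w)) for w in omegas] for x in X])
    return Z / np.sqrt(R), omegas

# Toy usage: a crude string dissimilarity (edit distance would be a natural choice).
X = ["kernel", "embedding", "distance", "map"]
Z, _ = d2ke_features(X, d=lambda a, b: abs(len(a) - len(b)), R=16, gamma=0.5)
K = Z @ Z.T  # approximate positive definite Gram matrix; Z can feed any linear model
```

Under this reading, fixing the random objects to a chosen representative set rather than sampling them would correspond to the representative-set special case the abstract mentions.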


Reports of the 2016 AAAI Workshop Program

AI Magazine

The Workshop Program of the Association for the Advancement of Artificial Intelligence’s Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) was held at the beginning of the conference, February 12-13, 2016. Workshop participants met and discussed issues with a selected focus — providing an informal setting for active exchange among researchers, developers and users on topics of current interest. To foster interaction and exchange of ideas, the workshops were kept small, with 25-65 participants. Attendance was sometimes limited to active participants only, but most workshops also allowed general registration by other interested individuals. The AAAI-16 Workshops were an excellent forum for exploring emerging approaches and task areas, for bridging the gaps between AI and other fields or between subfields of AI, for elucidating the results of exploratory research, or for critiquing existing approaches. The fifteen workshops held at AAAI-16 were Artificial Intelligence Applied to Assistive Technologies and Smart Environments (WS-16-01), AI, Ethics, and Society (WS-16-02), Artificial Intelligence for Cyber Security (WS-16-03), Artificial Intelligence for Smart Grids and Smart Buildings (WS-16-04), Beyond NP (WS-16-05), Computer Poker and Imperfect Information Games (WS-16-06), Declarative Learning Based Programming (WS-16-07), Expanding the Boundaries of Health Informatics Using AI (WS-16-08), Incentives and Trust in Electronic Communities (WS-16-09), Knowledge Extraction from Text (WS-16-10), Multiagent Interaction without Prior Coordination (WS-16-11), Planning for Hybrid Systems (WS-16-12), Scholarly Big Data: AI Perspectives, Challenges, and Ideas (WS-16-13), Symbiotic Cognitive Systems (WS-16-14), and World Wide Web and Population Health Intelligence (WS-16-15).


Cyc and the Big C: Reading that Produces and Uses Hypotheses about Complex Molecular Biology Mechanisms

AAAI Conferences

Systems biology, the study of the intricate, ramified, complex and interacting mechanisms underlying life, often proves too complex for unaided human understanding, even by groups of people working together. This difficulty is exacerbated by the high volume of publications in molecular biology. The Big C (‘C’ for Cyc) is a system designed to (semi-)automatically acquire, integrate, and use complex mechanism models, specifically related to cancer biology, via automated reading and a hyper-detailed refinement process resting on Cyc’s logical representations and powerful inference mechanisms. We aim to assist cancer research and treatment by achieving elements of biologist-level reasoning, but with the scale and attention to detail that only computer implementations can provide.


Harnessing Cyc to Answer Clinical Researchers' Ad Hoc Queries

AI Magazine

By extending Cyc’s ontology and knowledge base (KB) by approximately 2%, Cycorp and Cleveland Clinic Foundation (CCF) have built a system to answer clinical researchers’ ad hoc queries. A query may be long and complex, and hence only partially understood at first; it is parsed into a set of CycL (higher-order logic) fragments with open variables. Surprisingly often, though, after applying various constraints (medical domain knowledge, common sense, discourse pragmatics, syntax), there is only one way to fit those fragments together into a semantically meaningful formal query P. The system, SRA (for Semantic Research Assistant), dispatches a series of database calls and then combines their results, logically and arithmetically, into answers to P. Seeing the first few answers stream back, the user may realize that they need to abort, modify, and re-ask their query. Even before they push ASK, just knowing approximately how many answers would be returned can spark such editing. Besides real-time ad hoc query answering, queries can be bundled and persist over time. One bundle of 275 queries is rerun quarterly by CCF to produce the procedures and outcomes data it needs to report to STS (the Society of Thoracic Surgeons, an external hospital accreditation and ranking body); another bundle covers ACC (American College of Cardiology) reporting. Until full articulation and answering of precise, analytical queries becomes as straightforward and ubiquitous as text search, even partial understanding of a query empowers semantic search over semi-structured data (ontology-tagged text), avoiding many of the false positives and false negatives that standard text searching suffers from.


Project Halo: Towards a Digital Aristotle

AI Magazine

Project Halo is a multistage effort, sponsored by Vulcan Inc., aimed at creating Digital Aristotle, an application that will encompass much of the world's scientific knowledge and be capable of applying sophisticated problem solving to answer novel questions. Vulcan envisions two primary roles for Digital Aristotle: as a tutor to instruct students in the sciences and as an interdisciplinary research assistant to help scientists in their work. As a first step towards this goal, we have just completed a six-month pilot phase designed to assess the state of the art in applied knowledge representation and reasoning (KR&R). Vulcan selected three teams, each of which was to formally represent 70 pages from the advanced placement (AP) chemistry syllabus and deliver knowledge-based systems capable of answering questions on that syllabus. The evaluation quantified each system's coverage of the syllabus in terms of its ability to answer novel, previously unseen questions and to provide human-readable answer justifications. These justifications will play a critical role in building user trust in the question-answering capabilities of Digital Aristotle. Prior to the final evaluation, a "failure taxonomy" was collaboratively developed in an attempt to standardize failure analysis and to facilitate cross-platform comparisons. Despite differences in approach, all three systems did very well on the challenge, achieving performance comparable to the human median. The analysis also provided key insights into how the approaches might be scaled, while at the same time suggesting how the cost of producing such systems might be reduced. This outcome leaves us highly optimistic that the technical challenges facing this effort in the years to come can be identified and overcome. This article presents the motivation and long-term goals of Project Halo, describes in detail the six-month first phase of the project -- the Halo Pilot -- its KR&R challenge, empirical evaluation, results, and failure analysis. The pilot's outcome is used to define challenges for the next phase of the project and beyond.


AAAI 1997 Spring Symposium Reports

AI Magazine

The Association for the Advancement of Artificial Intelligence (AAAI) held its 1997 Spring Symposium Series on 24 to 26 March at Stanford University in Stanford, California. This article contains summaries of the seven symposia that were conducted: (1) Artificial Intelligence in Knowledge Management; (2) Computational Models for Mixed-Initiative Interaction; (3) Cross-Language Text and Speech Retrieval; (4) Intelligent Integration and Use of Text, Image, Video, and Audio Corpora; (5) Natural Language Processing for the World Wide Web; (6) Ontological Engineering; and (7) Qualitative Preferences in Deliberation and Practical Reasoning.

