Small Is Beautiful: Computing Minimal Equivalent EL Concepts

AAAI Conferences

… Rudolph 2012; Lutz, Seylan, and Wolter 2012), ontology learning (Konev, Ozaki, and Wolter 2016; Lehmann and Hitzler 2010), rewriting ontologies into less expressive logics (Carral et al. 2014; Lutz, Piro, and Wolter 2011), abduction (Du, Wang, and Shen 2015; Klarman, Endriss, and Schlobach 2011), and knowledge revision (Grau, Kharlamov, and Zheleznyakov 2012; Qi, Liu, and Bell 2006). Logics allow equivalent facts to be expressed in many different ways. The fact that ontologies are developed by a number of different people and grow over time can lead to concepts that are more complex than necessary. For example, below is a simplified definition of the medical concept Clotting from the Galen ontology (Rector et al. 1994):

Forgetting and Uniform Interpolation in Large-Scale Description Logic Terminologies

AAAI Conferences

We develop a framework for forgetting concepts and roles (aka uniform interpolation) in terminologies in the lightweight description logic EL extended with role inclusions and domain and range restrictions. We investigate three notions of forgetting, preserving concept inclusions, concept instances, and answers to conjunctive queries, respectively, together with corresponding languages for uniform interpolants. Experiments based on SNOMED CT (Systematised Nomenclature of Medicine Clinical Terms) and NCI (National Cancer Institute Ontology) demonstrate that forgetting is often feasible in practice for large-scale terminologies.
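The core idea of forgetting can be sketched on a toy case, assuming a TBox that contains only atomic concept inclusions A ⊑ B (real EL forgetting, with existential restrictions, role inclusions, and domain/range axioms, is far more involved). Forgetting a symbol then amounts to computing the transitive closure of the inclusions and keeping only those over the remaining vocabulary. The concept names below are made up for illustration:

```python
# Minimal sketch of concept forgetting for a toy TBox of atomic
# inclusions, each represented as a pair (A, B) meaning A <= B.

def forget(tbox, symbol):
    """Return the inclusions over the remaining symbols entailed by `tbox`."""
    # Saturate: transitive closure of the inclusion relation.
    closure = set(tbox)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    # Project: drop every inclusion that mentions the forgotten symbol.
    return {(a, b) for (a, b) in closure if symbol not in (a, b)}

# Hypothetical axioms: Cystitis <= Inflammation, Inflammation <= Disease.
tbox = {("Cystitis", "Inflammation"), ("Inflammation", "Disease")}
print(forget(tbox, "Inflammation"))  # {('Cystitis', 'Disease')}
```

The uniform interpolant {Cystitis ⊑ Disease} entails the same inclusions over the remaining symbols as the original TBox, which is exactly the preservation property the paper studies for richer languages.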

Zero-shot Medical Entity Retrieval without Annotation: Learning From Rich Knowledge Graph Semantics

Artificial Intelligence

Medical entity retrieval is an integral component for understanding and communicating information across various health systems. Current approaches tend to work well on specific medical domains but generalize poorly to unseen sub-specialties. This is of increasing concern under a public health crisis, as new medical conditions and drug treatments come to light frequently. Zero-shot retrieval is challenging due to the high degree of ambiguity and variability in medical corpora, making it difficult to build an accurate similarity measure between mentions and concepts. Medical knowledge graphs (KGs), however, contain rich semantics, including large numbers of synonyms as well as their curated graph structure. To take advantage of this valuable information, we propose a suite of learning tasks designed for training efficient zero-shot entity retrieval models. Without requiring any human annotation, our knowledge-graph-enriched architecture significantly outperforms common zero-shot benchmarks, including BM25 and Clinical BERT, with 7% to 30% higher recall across multiple major medical ontologies, such as UMLS, SNOMED, and ICD-10.
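The BM25 baseline mentioned above can be sketched as follows, assuming a hypothetical concept dictionary: each concept is represented by the bag of tokens of all its knowledge-graph synonyms, and a mention is matched against it lexically. The concept IDs and synonym lists are made up for illustration, and BM25 is implemented from scratch rather than taken from any specific library:

```python
# Minimal sketch of zero-shot lexical retrieval over KG synonyms
# using a from-scratch BM25 ranking function.

import math
from collections import Counter

concepts = {  # hypothetical KG entries: concept id -> synonyms
    "C0020538": ["hypertension", "high blood pressure", "HTN"],
    "C0011849": ["diabetes mellitus", "DM", "diabetes"],
}

docs = {cid: Counter(" ".join(syns).lower().split()) for cid, syns in concepts.items()}
N = len(docs)
avgdl = sum(sum(d.values()) for d in docs.values()) / N

def bm25(query, k1=1.5, b=0.75):
    """Return the concept whose synonym bag best matches the mention."""
    tokens = query.lower().split()
    scores = {}
    for cid, d in docs.items():
        dl = sum(d.values())
        score = 0.0
        for t in tokens:
            df = sum(1 for dd in docs.values() if t in dd)
            if df == 0:
                continue
            idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
            tf = d[t]
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avgdl))
        scores[cid] = score
    return max(scores, key=scores.get)

print(bm25("high blood pressure"))  # C0020538
```

A purely lexical matcher like this fails on unseen surface forms, which is precisely the gap the proposed KG-trained embedding retriever is meant to close.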

Biomedical Concept Relatedness -- A large EHR-based benchmark

Artificial Intelligence

A promising application of AI to healthcare is the retrieval of information from electronic health records (EHRs), e.g. to aid clinicians in finding relevant information for a consultation or to recruit suitable patients for a study. This requires search capabilities far beyond simple string matching, including the retrieval of concepts (diagnoses, symptoms, medications, etc.) related to the one in question. The suitability of AI methods for such applications is tested by predicting the relatedness of concepts with known relatedness scores. However, all existing biomedical concept relatedness datasets are notoriously small and consist of hand-picked concept pairs. We open-source a novel concept relatedness benchmark overcoming these issues: it is six times larger than existing datasets and concept pairs are chosen based on co-occurrence in EHRs, ensuring their relevance for the application of interest. We present an in-depth analysis of our new dataset and compare it to existing ones, highlighting that it is not only larger but also complements existing datasets in terms of the types of concepts included. Initial experiments with state-of-the-art embedding methods show that our dataset is a challenging new benchmark for testing concept relatedness models.
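The evaluation protocol such a benchmark implies can be sketched with made-up data: score each concept pair by the cosine similarity of (hypothetical) concept embeddings, then correlate the predicted scores against the gold relatedness ratings with a rank correlation. The embeddings, pairs, and ratings below are invented for illustration:

```python
# Minimal sketch of evaluating an embedding model on a concept
# relatedness benchmark via Spearman rank correlation.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def spearman(xs, ys):
    """Spearman rank correlation (no ties assumed in this toy example)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

emb = {  # hypothetical 2-d concept embeddings
    "fever": [1.0, 0.1], "pyrexia": [0.9, 0.2],
    "aspirin": [0.1, 1.0], "fracture": [0.3, 0.8],
}
pairs = [("fever", "pyrexia"), ("fever", "aspirin"), ("aspirin", "fracture")]
gold = [3.9, 0.5, 1.2]  # made-up gold relatedness ratings
pred = [cosine(emb[a], emb[b]) for a, b in pairs]
print(spearman(pred, gold))  # 1.0: predicted ordering matches gold
```

With pairs drawn from EHR co-occurrence rather than hand-picked, as the abstract describes, this same protocol stresses the model on exactly the concept combinations that matter in practice.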

Consequence-Based Reasoning beyond Horn Ontologies

AAAI Conferences

Consequence-based ontology reasoning procedures have so far been known only for Horn ontology languages. A difficulty in extending such procedures is that non-Horn axioms seem to require reasoning by case, which causes non-determinism in tableau-based procedures. In this paper we present a consequence-based procedure for ALCH that overcomes this difficulty by using rules similar to ordered resolution to deal with disjunctive axioms in a deterministic way; it retains all the favourable attributes of existing consequence-based procedures, such as goal-directed “one pass” classification, optimal worst-case complexity, and “pay-as-you-go” behaviour. Our preliminary empirical evaluation suggests that the procedure scales well to non-Horn ontologies.
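The deterministic saturation that consequence-based procedures build on can be sketched for the Horn logic EL (not the full ALCH calculus of the paper): completion rules monotonically derive subsumptions until a fixpoint, with no case splits. Axioms are encoded as ("sub", A, B) for A ⊑ B, ("ex", A, r, B) for A ⊑ ∃r.B, and ("exl", r, A, B) for ∃r.A ⊑ B; the concept and role names are made up for illustration:

```python
# Minimal sketch of consequence-based (completion-rule) classification
# for EL: saturate derived superclasses S and role links R to a fixpoint.

def classify(concepts, axioms):
    S = {A: {A} for A in concepts}   # S[A]: derived superclasses of A
    R = set()                        # derived role links (A, r, B)
    changed = True
    while changed:
        changed = False
        for ax in axioms:
            if ax[0] == "sub":                      # B <= C
                _, B, C = ax
                for A in concepts:
                    if B in S[A] and C not in S[A]:
                        S[A].add(C); changed = True
            elif ax[0] == "ex":                     # B <= exists r.C
                _, B, r, C = ax
                for A in concepts:
                    if B in S[A] and (A, r, C) not in R:
                        R.add((A, r, C)); changed = True
            else:                                   # exists r.B <= D
                _, r, B, D = ax
                for (A, r2, C) in list(R):
                    if r2 == r and B in S[C] and D not in S[A]:
                        S[A].add(D); changed = True
    return S

# Pericarditis <= exists locatedIn.Heart; exists locatedIn.Heart <= HeartDisease
axioms = [("ex", "Pericarditis", "locatedIn", "Heart"),
          ("exl", "locatedIn", "Heart", "HeartDisease")]
S = classify(["Pericarditis", "Heart", "HeartDisease"], axioms)
print("HeartDisease" in S["Pericarditis"])  # True
```

Every rule only adds consequences, so the procedure never backtracks; the paper's contribution is extending this style to disjunctive (non-Horn) axioms via resolution-like rules while keeping the derivation deterministic.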