
Knowledge Engineering

Two minutes NLP -- Quick Intro to Knowledge Base Question Answering


Knowledge base question answering (KBQA) aims to answer a natural language question using a knowledge base (KB) as its knowledge source. A knowledge base is a structured database containing a collection of facts of the form (subject, relation, object), where each fact can have attached properties called qualifiers. For example, the sentence "Barack Obama got married to Michelle Obama on 3 October 1992 at Trinity United Church" can be represented by the triple (Barack Obama, Spouse, Michelle Obama), with the qualifiers "start time: 3 October 1992" and "place of marriage: Trinity United Church". Popular knowledge bases include DBpedia and Wikidata. Early work on KBQA focused on simple question answering, where only a single fact is involved.
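The fact-with-qualifiers structure and a single-fact ("simple") lookup can be sketched in a few lines of Python. This is an illustrative toy, not Wikidata's actual data model or API; the `Fact` class and `answer_simple_question` helper are invented for this example:

```python
from dataclasses import dataclass, field

@dataclass
class Fact:
    """A single KB fact: (subject, relation, object) plus optional qualifiers."""
    subject: str
    relation: str
    object: str
    qualifiers: dict = field(default_factory=dict)

# The marriage example from the text, encoded as one fact with two qualifiers.
fact = Fact(
    subject="Barack Obama",
    relation="Spouse",
    object="Michelle Obama",
    qualifiers={
        "start time": "3 October 1992",
        "place of marriage": "Trinity United Church",
    },
)

def answer_simple_question(kb, subject, relation):
    """Answer a simple (single-fact) question by matching subject and relation."""
    for f in kb:
        if f.subject == subject and f.relation == relation:
            return f.object
    return None

print(answer_simple_question([fact], "Barack Obama", "Spouse"))  # Michelle Obama
```

Real KBQA systems must first map the natural language question ("Who is Barack Obama married to?") onto the subject and relation; the lookup itself is the easy part.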

Knowledge-based Entity Prediction for Improved Machine Perception in Autonomous Systems


For example, consider the case where the perception module detects a pedestrian (PCV) on the road. It does not, however, recognize that the pedestrian is jaywalking. Even if no jaywalking events have been seen while training the CV perception module, representing knowledge of this event – i.e. (Pedestrian, participatesIn, Jaywalking) – in the scene KG could provide a new insight or cue for handling this edge case with KEP (i.e.
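The intuition can be sketched as a lookup over a scene knowledge graph stored as (subject, relation, object) triples. The triple store and `possible_events` helper below are illustrative assumptions, not the paper's actual KEP implementation:

```python
# Hypothetical scene knowledge graph as a set of (subject, relation, object) triples.
scene_kg = {
    ("Pedestrian", "participatesIn", "Jaywalking"),
    ("Pedestrian", "locatedOn", "Road"),
}

def possible_events(kg, entity):
    """Return the events an entity can participate in, according to the KG."""
    return {o for (s, r, o) in kg if s == entity and r == "participatesIn"}

# Even if the CV module never saw jaywalking during training, the KG still
# surfaces it as a candidate event for the detected pedestrian.
print(possible_events(scene_kg, "Pedestrian"))  # {'Jaywalking'}
```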


Communications of the ACM

The trend toward an aging population, typical of Europe and other high-income regions, brings with it a sharp increase in the number of chronic patients and a shortage of clinicians and hospital beds. Evidence-based clinical decision-support systems are one of the promising solutions to this problem.15 In the 1990s, different research groups started to develop computer-interpretable clinical guidelines (CIGs)7 as a form of evidence-based decision-support system (DSS). Narrative evidence-based clinical guidelines, focused on a single disease and containing recommendations for the disease's diagnosis and management, were manually represented in CIG formalisms such as Asbru,11 GLIF,1 or PROforma.3 The CIGs formed a network of clinical decisions and actions and served as a knowledge base.

OpenKBP-Opt: An international and reproducible evaluation of 76 knowledge-based planning pipelines Artificial Intelligence

We establish an open framework for developing plan optimization models for knowledge-based planning (KBP) in radiotherapy. Our framework includes reference plans for 100 patients with head-and-neck cancer and high-quality dose predictions from 19 KBP models that were developed by different research groups during the OpenKBP Grand Challenge. The dose predictions were input to four optimization models to form 76 unique KBP pipelines that generated 7600 plans. The predictions and plans were compared to the reference plans via three measures: the dose score, which is the mean absolute voxel-by-voxel difference in dose that a model achieved, averaged over patients; the deviation in dose-volume histogram (DVH) criteria; and the frequency with which clinical planning criteria were satisfied. We also performed a theoretical investigation to justify our dose mimicking models. The rank order correlation of the dose score between predictions and their KBP pipelines ranged from 0.50 to 0.62, which indicates that the quality of the predictions is generally positively correlated with the quality of the plans. Additionally, compared to the input predictions, the KBP-generated plans performed significantly better (P<0.05; one-sided Wilcoxon test) on 18 of 23 DVH criteria. Similarly, each optimization model generated plans that satisfied a higher percentage of criteria than the reference plans. Lastly, our theoretical investigation demonstrated that the dose mimicking models generated plans that are also optimal for a conventional planning model. This was the largest international effort to date for evaluating the combination of KBP prediction and optimization models. In the interest of reproducibility, our data and code are freely available at
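The dose score described above (mean absolute voxel-by-voxel dose difference, averaged over patients) can be sketched with NumPy. The function name, array shapes, and toy data are assumptions for illustration, not the official OpenKBP evaluation code:

```python
import numpy as np

def dose_score(predicted_doses, reference_doses):
    """Average over patients of the mean absolute voxel-wise dose difference.

    predicted_doses, reference_doses: lists of same-shape 3D dose arrays,
    one per patient (a sketch of the metric, not the official implementation).
    """
    per_patient = [
        np.mean(np.abs(pred - ref))
        for pred, ref in zip(predicted_doses, reference_doses)
    ]
    return float(np.mean(per_patient))

rng = np.random.default_rng(0)
ref = [rng.uniform(0, 70, size=(4, 4, 4)) for _ in range(3)]   # toy dose grids in Gy
pred = [r + rng.normal(0, 1, size=r.shape) for r in ref]       # noisy "predictions"
print(dose_score(pred, ref))
```

Lower scores are better; a perfect prediction would score 0.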


AAAI Conferences

MetaShare is a knowledge-based system that supports the creation of data management plans and provides the functionality to support researchers as they implement those plans. MetaShare is a community-based, user-driven system that is being designed around the parallels of the scientific data life cycle and the development cycle of knowledge-based systems. MetaShare will provide recommendations and guidance to researchers based on the practices and decisions of similar projects. Using formal knowledge representation in the form of ontologies and rules, the system will be able to generate data collection, dissemination, and management tools to facilitate tasks with respect to using and sharing scientific data. MetaShare, which is initially targeting the research community at the University of Texas at El Paso, is being developed on a Web platform, using Semantic Web technologies. This paper presents a roadmap for the development of MetaShare, justifying the functionality and implementation decisions. In addition, the paper presents an argument concerning the return on investment for researchers and the planned evaluation for the system.

Artificial Intelligence - Expert Systems


Expert systems (ES) are one of the prominent research domains of AI. They were introduced by researchers in the Computer Science Department at Stanford University. Expert systems are computer applications developed to solve complex problems in a particular domain, at the level of extraordinary human intelligence and expertise. They contain domain-specific, high-quality knowledge. Knowledge is required to exhibit intelligence.
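At its core, an expert system pairs a knowledge base of if-then rules with an inference engine. The minimal forward-chaining sketch below uses invented toy rules for illustration and is not any particular historical system:

```python
# Each rule: (set of required facts, fact to conclude).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "shortness_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "shortness_of_breath"}, rules)
print(derived)  # includes 'flu_suspected' and 'refer_to_doctor'
```

Note how the second rule fires only because the first rule's conclusion was added to the fact set: this chaining is what lets the domain knowledge, rather than procedural code, drive the reasoning.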

A Survey on AI Assurance Artificial Intelligence

Artificial Intelligence (AI) algorithms are increasingly providing decision making and operational support across multiple domains. AI includes a wide library of algorithms for different problems. One important notion for the adoption of AI algorithms into operational decision processes is the concept of assurance. The literature on assurance, unfortunately, conceals its outcomes within a tangled landscape of conflicting approaches, driven by contradicting motivations, assumptions, and intuitions. Accordingly, although this is a rising and novel area, this manuscript provides a systematic review of research works relevant to AI assurance between 1985 and 2021, and aims to provide a structured alternative to that landscape. A new AI assurance definition is adopted and presented, and assurance methods are contrasted and tabulated. Additionally, a ten-metric scoring system is developed and introduced to evaluate and compare existing methods. Lastly, in this manuscript, we provide foundational insights, discussions, future directions, a roadmap, and applicable recommendations for the development and deployment of AI assurance.

Shared Model of Sense-making for Human-Machine Collaboration Artificial Intelligence

We present a model of sense-making that greatly facilitates the collaboration between an intelligent analyst and a knowledge-based agent. It is a general model grounded in the science of evidence and the scientific method of hypothesis generation and testing, where sense-making hypotheses that explain an observation are generated, relevant evidence is then discovered, and the hypotheses are tested based on the discovered evidence. We illustrate how the model enables an analyst to directly instruct the agent to understand situations involving the possible production of weapons (e.g., chemical warfare agents) and how the agent becomes increasingly more competent in understanding other situations from that domain (e.g., possible production of centrifuge-enriched uranium or of stealth fighter aircraft).

Cross-Domain Reasoning via Template Filling Artificial Intelligence

In this paper, we explore the ability of sequence-to-sequence models to perform cross-domain reasoning. Towards this, we present a prompt-template-filling approach that enables sequence-to-sequence models to perform cross-domain reasoning. We also present a case study with the commonsense and health and well-being domains, where we study how prompt-template filling enables pretrained sequence-to-sequence models to reason across domains. Our experiments across several pretrained encoder-decoder models show that cross-domain reasoning is challenging for current models. We also present an in-depth error analysis and avenues for future research on reasoning across domains.
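In general, prompt-template filling means instantiating a fixed textual template with task-specific slots before feeding the result to a pretrained encoder-decoder. The template format and slot names below are illustrative assumptions, not the paper's actual prompts:

```python
# Illustrative template with named slots; a real pipeline would pass the
# filled prompt to a pretrained encoder-decoder (e.g. T5 or BART) to generate
# the answer conditioned on both the target domain and the context.
TEMPLATE = "domain: {domain} | question: {question} | context: {context}"

def fill_template(domain, question, context):
    """Fill the prompt template's slots with the inputs for one example."""
    return TEMPLATE.format(domain=domain, question=question, context=context)

prompt = fill_template(
    domain="health and well-being",
    question="Why should someone who feels dizzy sit down?",
    context="Dizziness can lead to falls.",
)
print(prompt)
```

Because the domain is an explicit slot rather than baked into the model, the same pretrained model can be steered toward different domains by changing only the template inputs.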

Creating and evolving knowledge graphs at scale for explainable AI - Safe & Trusted AI


Knowledge graphs and knowledge bases are forms of symbolic knowledge representation used across AI applications. Both refer to a set of technologies that organise data for easier access, capture information about people, places, events, and other entities of interest, and forge connections between them. As AI (re-)conquered the world, symbolic knowledge representations became ubiquitous, and are now extensively used in everything from search engines and chatbots to product recommenders and autonomous systems, especially in the context of neuro-symbolic approaches. Knowledge engineering is the field that encompasses the technical and social aspects of building knowledge-based AI systems. In its most recent manifestation, it involves complex human-machine workflows including knowledge acquisition from experts, crowdsourced entity typing and reconciliation, argumentation and discussion support, information extraction algorithms across different data modalities, and database lifting.