Padia, Ankur
Jointly Identifying and Fixing Inconsistent Readings from Information Extraction Systems
Padia, Ankur, Ferraro, Francis, Finin, Tim
KGCleaner is a framework for identifying and correcting errors in data produced and delivered by an information extraction system. These tasks have been understudied, and KGCleaner is the first to address both. We introduce a multi-task model that jointly learns to predict whether an extracted relation is credible and to repair it if it is not. We evaluate our approach and other models, as instances of our framework, on two collections: a Wikidata corpus of nearly 700K facts and 5M fact-relevant sentences, and a collection of 30K facts from the 2015 TAC Knowledge Base Population task. For credibility classification, a parameter-efficient shallow neural network achieves an absolute gain of 30 $F_1$ points on Wikidata and comparable performance on TAC. For the repair task, significant gains (more than 2x) can be obtained, depending on the nature of the dataset and the model.
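The joint model can be sketched as a shared encoder with two task heads. The sketch below is a minimal, hypothetical PyTorch rendering; the layer sizes, feature dimension, and the framing of repair as relation re-classification are our assumptions, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class JointCredibilityRepair(nn.Module):
        """Shared shallow encoder with two heads: credibility (binary)
        and repair (re-classify the relation). All sizes are illustrative."""
        def __init__(self, feat_dim=300, hidden=128, num_relations=50):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
            self.credibility = nn.Linear(hidden, 2)         # credible vs. not
            self.repair = nn.Linear(hidden, num_relations)  # corrected relation

        def forward(self, x):
            h = self.encoder(x)
            return self.credibility(h), self.repair(h)

    # Joint training step: the two task losses are summed and backpropagated
    # through the shared encoder, which is what "jointly learns" amounts to here.
    model = JointCredibilityRepair()
    x = torch.randn(32, 300)                                # a batch of fact features
    cred_logits, repair_logits = model(x)
    loss = nn.functional.cross_entropy(cred_logits, torch.randint(2, (32,))) \
         + nn.functional.cross_entropy(repair_logits, torch.randint(50, (32,)))
    loss.backward()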
Knowledge Graph Fact Prediction via Knowledge-Enriched Tensor Factorization
Padia, Ankur, Kalpakis, Konstantinos, Ferraro, Francis, Finin, Tim
We present a family of novel methods for embedding knowledge graphs into real-valued tensors. These tensor-based embeddings capture the ordered relations that are typical in the knowledge graphs represented by semantic web languages like RDF. Unlike many previous models, our methods can easily use prior background knowledge provided by users or extracted automatically from existing knowledge graphs. In addition to providing more robust methods for knowledge graph embedding, we provide a provably-convergent, linear tensor factorization algorithm. We demonstrate the efficacy of our models for the task of predicting new facts across eight different knowledge graphs, achieving between 5% and 50% relative improvement over existing state-of-the-art knowledge graph embedding techniques. Our empirical evaluation shows that all of the tensor decomposition models perform well when the average degree of an entity in a graph is high, with constraint-based models doing better on graphs with a small number of highly similar relations and regularization-based models dominating for graphs with relations of varying degrees of similarity.
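As a toy illustration of regularization-based tensor factorization for knowledge graphs, each relation slice $X_k$ of a binary fact tensor can be approximated as $A R_k A^\top$. The NumPy sketch below uses plain gradient descent with L2 regularization; it is a generic RESCAL-style sketch under illustrative sizes, not the paper's provably-convergent algorithm:

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, r = 20, 3, 4                                 # entities, relations, rank
    X = (rng.random((k, n, n)) < 0.1).astype(float)    # toy binary fact tensor

    A = 0.1 * rng.standard_normal((n, r))              # entity embeddings
    R = 0.1 * rng.standard_normal((k, r, r))           # one mixing matrix per relation
    lam, lr = 0.1, 0.05                                # L2 weight, step size

    for _ in range(200):
        for i in range(k):                             # minimize 0.5*||A R_i A^T - X_i||^2 + L2
            E = A @ R[i] @ A.T - X[i]                  # reconstruction error for slice i
            gA = E @ A @ R[i].T + E.T @ A @ R[i] + lam * A
            gR = A.T @ E @ A + lam * R[i]
            A -= lr * gA
            R[i] -= lr * gR

    scores = A @ R[0] @ A.T                            # predicted fact scores for relation 0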
UCO: A Unified Cybersecurity Ontology
Syed, Zareen (University of Maryland, Baltimore County) | Padia, Ankur (University of Maryland, Baltimore County) | Finin, Tim (University of Maryland, Baltimore County) | Mathews, Lisa (University of Maryland, Baltimore County) | Joshi, Anupam (University of Maryland, Baltimore County)
In this paper we describe the Unified Cybersecurity Ontology (UCO), which is intended to support information integration and cyber situational awareness in cybersecurity systems. The ontology incorporates and integrates heterogeneous data and knowledge schemas from different cybersecurity systems and the most commonly used cybersecurity standards for information sharing and exchange. The UCO ontology has also been mapped to a number of existing cybersecurity ontologies as well as to concepts in the Linked Open Data cloud (Berners-Lee, Bizer, and Heath 2009). Just as DBpedia (Auer et al. 2007) serves as the core for general knowledge in the Linked Open Data cloud, we envision UCO serving as the core for the cybersecurity domain, evolving and growing over time as additional cybersecurity data sets become available. We also present a prototype system and concrete use cases supported by the UCO ontology. To the best of our knowledge, this is the first cybersecurity ontology that has been mapped to general world ontologies to support broader and more diverse security use cases. We compare the resulting ontology with previous efforts, discuss its strengths and limitations, and describe potential directions for future work.
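Mapping an ontology concept into the Linked Open Data cloud can be sketched with rdflib as below; the namespace and class names are placeholders for illustration, not UCO's actual IRIs:

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    UCO = Namespace("http://example.org/uco#")       # placeholder, not the real UCO IRI
    DBR = Namespace("http://dbpedia.org/resource/")

    g = Graph()
    g.bind("uco", UCO)
    g.add((UCO.Malware, RDF.type, OWL.Class))        # a cybersecurity concept
    g.add((UCO.Malware, RDFS.seeAlso, DBR.Malware))  # link into the Linked Open Data cloud
    print(g.serialize(format="turtle"))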
Formal Ontology Learning on Factual IS-A Corpus in English using Description Logics
Dasgupta, Sourish, Padia, Ankur, Shah, Kushal, Majumder, Prasenjit
Ontology Learning (OL) is the computational task of generating a knowledge base in the form of an ontology from an unstructured corpus whose content is in natural language (NL). Most existing work in this area is limited to statistical and lexico-syntactic pattern-matching techniques, collectively called light-weight OL. These techniques do not lead to very accurate learning, mostly because of the many linguistic nuances of NL. Formal OL is an alternative, less explored methodology in which deep linguistic analysis is performed, using theory and tools from computational linguistics, to generate formal axioms and definitions instead of simply inducing a taxonomy. In this paper we propose a Description Logic (DL) based formal OL framework for learning factual IS-A sentences in English. We claim that the semantic construction of IS-A sentences is non-trivial, and hence that such sentences require special study in the context of OL before any truly formal OL can be proposed. We introduce a learner tool, called DLOL_IS-A, that generates such ontologies in the OWL format. We adopted Gold Standard based OL evaluation on the IS-A-rich WCL v.1.1 dataset and our own community-representative IS-A dataset, and observed significant improvement of DLOL_IS-A over the light-weight OL tool Text2Onto and the formal OL tool FRED.
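As an illustration of why the semantic construction of IS-A sentences is non-trivial, note that surface-similar "X is a Y" sentences map to different DL constructions. The sentences and axioms below are our own worked examples in standard DL notation, not drawn from the paper:

    "A dog is an animal"                   ->  $Dog \sqsubseteq Animal$                                    (T-box subsumption)
    "Rex is a dog"                         ->  $Dog(Rex)$                                                  (A-box assertion)
    "A mother is a woman who has a child"  ->  $Mother \equiv Woman \sqcap \exists hasChild.\top$          (T-box definition)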
Description Logics based Formalization of Wh-Queries
Dasgupta, Sourish, KaPatel, Rupali, Padia, Ankur, Shah, Kushal
The problem of Natural Language Query Formalization (NLQF) is to translate a user query given in natural language (NL) into a formal language such that the formal semantic interpretation is equivalent to the NL interpretation. Formalizing NL queries enables logic-based reasoning during information retrieval, database querying, question answering, etc. Formalization also helps in Web query normalization and indexing, query intent analysis, and related tasks. In this paper we propose a Description Logics based formal methodology for wh-query intent (also called desire) identification and the corresponding formal translation. We evaluated the scalability of our proposed formalism using the Microsoft Encarta 98 query dataset and the OWL-S TC v.4.0 dataset.
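As a hedged illustration of the idea (the query and symbols are our own, not taken from the paper): for the wh-query "Which rivers flow through Paris?", the identified desire is the concept $River$, and the query can be formalized as the DL concept expression $River \sqcap \exists flowsThrough.\{Paris\}$, whose instances, retrieved by a DL reasoner over a knowledge base, are the answers to the query.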
DLOLIS-A: Description Logic based Text Ontology Learning
Dasgupta, Sourish, Padia, Ankur, Shah, Kushal, KaPatel, Rupali, Majumder, Prasenjit
Ontology Learning has been the subject of intensive study for the past decade. Researchers in this field have been motivated by the possibility of automatically building a knowledge base on top of text documents so as to support reasoning-based knowledge extraction. While most work in this field has been primarily statistical (known as light-weight Ontology Learning), little attempt has been made at axiomatic Ontology Learning (called heavy-weight Ontology Learning) from natural language text documents. Heavy-weight Ontology Learning supports more precise, formal logic-based reasoning than statistical ontology learning. In this paper we propose a sound Ontology Learning tool, DLOL_(IS-A), that maps English IS-A sentences into their equivalent Description Logic (DL) expressions in order to automatically generate a consistent pair of T-box and A-box, thereby forming both a regular (definitional form) and a generalized (axiomatic form) DL ontology. The current scope of the paper is strictly limited to IS-A sentences, excluding: (i) implicative IS-A sentences, and (ii) "wh" IS-A questions. Other linguistic nuances that arise from the pragmatics and epistemics of IS-A sentences are beyond the scope of this work. We adopted Gold Standard based Ontology Learning evaluation on selected IS-A-rich Wikipedia documents.
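A minimal worked example of the T-box/A-box pairing (our own sentences and names, not from the paper's dataset): from "Every planet is a celestial body" the tool would produce the T-box axiom $Planet \sqsubseteq CelestialBody$, and from "Mars is a planet" the A-box assertion $Planet(Mars)$. A DL reasoner can then derive the new fact $CelestialBody(Mars)$, which is the kind of logic-based inference heavy-weight Ontology Learning enables.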