Roukos, Salim
Maximum Bayes Smatch Ensemble Distillation for AMR Parsing
Lee, Young-Suk, Astudillo, Ramon Fernandez, Hoang, Thanh Lam, Naseem, Tahira, Florian, Radu, Roukos, Salim
AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning. Self-learning techniques have also played a role in pushing performance forward. However, for most recent high-performing parsers, the effect of self-learning and silver data generation seems to be fading. In this paper we show that it is possible to overcome these diminishing returns of silver data by combining Smatch-based ensembling techniques with ensemble distillation. In an extensive experimental setup, we push single-model English parser performance above 85 Smatch for the first time and return to substantial gains. We also attain a new state-of-the-art for cross-lingual AMR parsing for Chinese, German, Italian and Spanish. Finally we explore the impact of the proposed distillation technique on domain adaptation, and show that it can produce gains rivaling those of human-annotated data for QALD-9 and achieve a new state-of-the-art for BioAMR.
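The selection step behind Smatch-based ensembling can be illustrated with a minimal sketch. Assuming each sentence has candidate parses from several ensemble members, a Maximum Bayes Risk style choice keeps the candidate with the highest average Smatch against the other candidates; the simplified triple-overlap F1 below stands in for full Smatch (which also searches over variable mappings), and the selected graphs become silver training data for distillation. All names here are illustrative, not the paper's code.

def triple_f1(g1: set, g2: set) -> float:
    """Simplified stand-in for Smatch: F1 over shared triples.
    Real Smatch also searches over an alignment of graph variables."""
    overlap = len(g1 & g2)
    if not overlap:
        return 0.0
    p, r = overlap / len(g1), overlap / len(g2)
    return 2 * p * r / (p + r)

def mbr_select(candidates: list) -> set:
    """Keep the candidate whose average similarity to all other
    ensemble members is highest (the consensus graph)."""
    best, best_score = candidates[0], float("-inf")
    for i, g in enumerate(candidates):
        others = [c for j, c in enumerate(candidates) if j != i]
        score = sum(triple_f1(g, o) for o in others) / max(len(others), 1)
        if score > best_score:
            best, best_score = g, score
    return best

def build_silver(per_sentence_candidates: list) -> list:
    """One consensus graph per sentence: the silver data used to
    distill the ensemble into a single model."""
    return [mbr_select(c) for c in per_sentence_candidates]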
SYGMA: System for Generalizable Modular Question Answering Over Knowledge Bases
Neelam, Sumit, Sharma, Udit, Karanam, Hima, Ikbal, Shajith, Kapanipathi, Pavan, Abdelaziz, Ibrahim, Mihindukulasooriya, Nandana, Lee, Young-Suk, Srivastava, Santosh, Pendus, Cezar, Dana, Saswati, Garg, Dinesh, Fokoue, Achille, Bhargav, G P Shrivatsa, Khandelwal, Dinesh, Ravishankar, Srinivas, Gurajada, Sairam, Chang, Maria, Uceda-Sosa, Rosario, Roukos, Salim, Gray, Alexander, Lima, Guilherme, Riegel, Ryan, Luus, Francois, Subramaniam, L Venkata
Knowledge Base Question Answering (KBQA) tasks that involve complex reasoning are emerging as an important research direction. However, most KBQA systems struggle with generalizability, particularly on two dimensions: (a) across multiple reasoning types, where both datasets and systems have primarily focused on multi-hop reasoning, and (b) across multiple knowledge bases, where KBQA approaches are specifically tuned to a single knowledge base. In this paper, we present SYGMA, a modular approach facilitating generalizability across multiple knowledge bases and multiple reasoning types. Specifically, SYGMA contains three high-level modules: 1) a KB-agnostic question understanding module that is common across KBs; 2) rules to support additional reasoning types; and 3) a KB-specific question mapping and answering module to address the KB-specific aspects of answer extraction. We demonstrate the effectiveness of our system by evaluating on datasets belonging to two distinct knowledge bases, DBpedia and Wikidata. In addition, to demonstrate extensibility to additional reasoning types, we evaluate on multi-hop reasoning datasets and a new temporal KBQA benchmark dataset on Wikidata, named TempQA-WD, introduced in this paper. We show that our generalizable approach achieves competitive performance on multiple datasets on DBpedia and Wikidata that require both multi-hop and temporal reasoning.
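As a rough illustration of the three-module decomposition, the pipeline can be stubbed out as below; the interfaces, names, and intermediate representation are ours, not SYGMA's actual code.

from dataclasses import dataclass

@dataclass
class IntermediateRepr:
    """KB-agnostic logical form; SYGMA derives it from an AMR parse."""
    triples: list          # (subject, relation, object) placeholders
    answer_var: str = "?x"

def understand(question: str) -> IntermediateRepr:
    """Module 1: KB-agnostic question understanding, shared across KBs."""
    ...

def apply_reasoning_rules(ir: IntermediateRepr, reasoning_type: str) -> IntermediateRepr:
    """Module 2: rewrite rules that extend the base multi-hop
    representation with e.g. temporal constraints."""
    ...

def map_and_answer(ir: IntermediateRepr, kb: str) -> list:
    """Module 3: KB-specific entity/relation mapping (DBpedia or
    Wikidata) plus query execution."""
    ...

def answer(question: str, kb: str, reasoning_type: str = "multi-hop") -> list:
    ir = understand(question)                       # common across KBs
    ir = apply_reasoning_rules(ir, reasoning_type)  # extensible step
    return map_and_answer(ir, kb)                   # KB-specific step

The point of the decomposition: swapping knowledge bases replaces only the third module, and supporting a new reasoning type only adds rules to the second.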
Combining Rules and Embeddings via Neuro-Symbolic AI for Knowledge Base Completion
Sen, Prithviraj, Carvalho, Breno W. S. R., Abdelaziz, Ibrahim, Kapanipathi, Pavan, Luus, Francois, Roukos, Salim, Gray, Alexander
Recent interest in Knowledge Base Completion (KBC) has led to a plethora of approaches based on reinforcement learning, inductive logic programming and graph embeddings. In particular, rule-based KBC has led to interpretable rules while being comparable in performance with graph embeddings. Even within rule-based KBC, there exist different approaches that lead to rules of varying quality, and previous work has not always been precise in highlighting these differences. Another issue that plagues most rule-based KBC is the non-uniformity of relation paths: some relation sequences occur in very few paths while others appear very frequently. In this paper, we show that not all rule-based KBC models are the same and propose two distinct approaches that learn, in one case, a mixture of relations and, in the other, a mixture of paths. When implemented on top of neuro-symbolic AI, which learns rules by extending Boolean logic to real-valued logic, the latter model leads to superior KBC accuracy, outperforming state-of-the-art rule-based KBC by 2-10% in terms of mean reciprocal rank. Furthermore, to address the non-uniformity of relation paths, we combine rule-based KBC with graph embeddings, thus improving our results even further and achieving the best of both worlds.
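A minimal sketch of the "mixture of paths" idea, assuming a real-valued logic where conjunction is a product t-norm; this is a simplification of the paper's LNN-based model, and all names are illustrative.

import math

def path_score(edge_scores: list) -> float:
    """Conjoin the edges of one grounded relation path with a
    product t-norm (real-valued AND)."""
    out = 1.0
    for s in edge_scores:
        out *= s
    return out

def softmax(xs: list) -> list:
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def mixture_of_paths(paths: list, logits: list) -> float:
    """Score a candidate fact as a learned mixture over paths: each
    path contributes its t-norm score, weighted by a softmax over
    one learnable logit per path."""
    weights = softmax(logits)
    return sum(w * path_score(p) for w, p in zip(weights, paths))

# Two grounded paths supporting a candidate (head, relation, tail):
print(mixture_of_paths([[0.9, 0.8], [0.6]], logits=[1.2, 0.3]))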
Bootstrapping Multilingual AMR with Contextual Word Alignments
Sheth, Janaki, Lee, Young-Suk, Astudillo, Ramon Fernandez, Naseem, Tahira, Florian, Radu, Roukos, Salim, Ward, Todd
We develop high-performance multilingual Abstract Meaning Representation (AMR) systems by projecting English AMR annotations to other languages with weak supervision. We achieve this goal by bootstrapping transformer-based multilingual word embeddings, in particular those from cross-lingual RoBERTa (XLM-R large). We develop a novel technique for foreign-text-to-English AMR alignment, using the contextual word alignment between English and foreign language tokens. This word alignment is weakly supervised and relies on the contextualized XLM-R word embeddings. We achieve a highly competitive performance that surpasses the best published results for German, Italian, Spanish and Chinese.
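The alignment signal can be illustrated with a short sketch using Hugging Face transformers: embed both sentences with XLM-R and greedily match subwords by cosine similarity. The paper's alignment is weakly supervised; the unsupervised greedy argmax below only illustrates the underlying idea.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModel.from_pretrained("xlm-roberta-large")

@torch.no_grad()
def embed(sentence: str) -> torch.Tensor:
    """Contextual subword embeddings, including <s> and </s>."""
    enc = tok(sentence, return_tensors="pt")
    return model(**enc).last_hidden_state[0]      # (subwords, dim)

def align(src: str, tgt: str) -> list:
    """For each source subword, the index of its most similar
    target subword under cosine similarity."""
    h_src = torch.nn.functional.normalize(embed(src), dim=-1)
    h_tgt = torch.nn.functional.normalize(embed(tgt), dim=-1)
    return (h_src @ h_tgt.T).argmax(dim=-1).tolist()

English AMR node-to-token alignments can then be projected through this token map onto the foreign-language sentence.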
Question Answering over Knowledge Bases by Leveraging Semantic Parsing and Neuro-Symbolic Reasoning
Kapanipathi, Pavan, Abdelaziz, Ibrahim, Ravishankar, Srinivas, Roukos, Salim, Gray, Alexander, Astudillo, Ramon, Chang, Maria, Cornelio, Cristina, Dana, Saswati, Fokoue, Achille, Garg, Dinesh, Gliozzo, Alfio, Gurajada, Sairam, Karanam, Hima, Khan, Naweed, Khandelwal, Dinesh, Lee, Young-Suk, Li, Yunyao, Luus, Francois, Makondo, Ndivhuwo, Mihindukulasooriya, Nandana, Naseem, Tahira, Neelam, Sumit, Popa, Lucian, Reddy, Revanth, Riegel, Ryan, Rossiello, Gaetano, Sharma, Udit, Bhargav, G P Shrivatsa, Yu, Mo
Knowledge base question answering (KBQA) is an important task in Natural Language Processing. Existing approaches face significant challenges including complex question understanding, the necessity for reasoning, and a lack of large training datasets. In this work, we propose a semantic parsing and reasoning-based Neuro-Symbolic Question Answering (NSQA) system that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a novel path-based approach to transform AMR parses into candidate logical queries that are aligned to the KB; (3) a neuro-symbolic reasoner called Logical Neural Network (LNN) that executes logical queries and reasons over KB facts to provide an answer; (4) a system-of-systems approach, which integrates multiple, reusable modules that are trained specifically for their individual tasks (e.g. semantic parsing, entity linking, and relationship linking) and do not require end-to-end training data. NSQA achieves state-of-the-art performance on QALD-9 and LC-QuAD 1.0. NSQA's novelty lies in its modular neuro-symbolic architecture and its task-general approach to interpreting natural language questions.
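The path-based transformation step, reduced to its essence (function and variable names are ours, not NSQA's): once entity and relation linking have replaced AMR nodes and edges with KB vocabulary, each path from the question's unknown becomes a triple pattern in a query.

def amr_paths_to_sparql(patterns: list) -> str:
    """Turn (subject, kb_relation, object) patterns, already linked
    to KB vocabulary, into a SPARQL SELECT query over ?x."""
    body = " ".join(f"{s} {r} {o} ." for s, r, o in patterns)
    return f"SELECT DISTINCT ?x WHERE {{ {body} }}"

# "Who directed Inception?" after entity/relation linking:
print(amr_paths_to_sparql([("dbr:Inception", "dbo:director", "?x")]))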
End-to-End QA on COVID-19: Domain Adaptation with Synthetic Training
Reddy, Revanth Gangi, Iyer, Bhavani, Sultan, Md Arafat, Zhang, Rong, Sil, Avi, Castelli, Vittorio, Florian, Radu, Roukos, Salim
End-to-end question answering (QA) requires both information retrieval (IR) over a large document collection and machine reading comprehension (MRC) on the retrieved passages. Recent work has successfully trained neural IR systems using only supervised QA examples from open-domain datasets. However, despite impressive performance on Wikipedia, neural IR lags behind traditional term-matching approaches such as BM25 in more specific and specialized target domains such as COVID-19. Furthermore, given little or no labeled data, effective adaptation of QA systems can also be challenging in such target domains. In this work, we explore the application of synthetically generated QA examples to improve performance on closed-domain retrieval and MRC. We combine our neural IR and MRC systems and show significant improvements in end-to-end QA on the CORD-19 collection over a state-of-the-art open-domain QA baseline.
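A minimal sketch of the synthetic-training idea: mine (question, answer, passage) triples from unlabeled in-domain text and fine-tune the retriever and reader on them. The generator below is a hypothetical stand-in, not the paper's actual model, and the naive answer selection is only for illustration.

import random

def generate_question(passage: str, answer_span: str) -> str:
    """Hypothetical answer-aware question generator; in practice a
    seq2seq model fine-tuned for question generation."""
    return f"What does the passage say about {answer_span}?"

def make_synthetic_qa(passages: list, spans_per_passage: int = 1) -> list:
    examples = []
    for passage in passages:
        words = passage.split()
        for _ in range(spans_per_passage):
            # Naive answer selection: sample a short span; real systems
            # use NER or a learned answer selector instead.
            i = random.randrange(max(len(words) - 3, 1))
            answer = " ".join(words[i:i + 3])
            examples.append({"question": generate_question(passage, answer),
                             "answer": answer, "passage": passage})
    return examples

# The resulting examples fine-tune both the neural retriever (IR)
# and the reader (MRC) on the target domain, e.g. CORD-19.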
Leveraging Semantic Parsing for Relation Linking over Knowledge Bases
Mihindukulasooriya, Nandana, Rossiello, Gaetano, Kapanipathi, Pavan, Abdelaziz, Ibrahim, Ravishankar, Srinivas, Yu, Mo, Gliozzo, Alfio, Roukos, Salim, Gray, Alexander
Knowledge base question answering (KBQA) systems are heavily dependent on relation extraction and linking modules. However, the task of extracting and linking relations from text to knowledge bases faces two primary challenges: the ambiguity of natural language and the lack of training data. To overcome these challenges, we present SLING, a relation linking framework which leverages semantic parsing using Abstract Meaning Representation (AMR) and distant supervision. SLING integrates multiple relation linking approaches that capture complementary signals such as linguistic cues, rich semantic representation, and information from the knowledge base. The experiments on relation linking using three KBQA datasets, QALD-7, QALD-9, and LC-QuAD 1.0, demonstrate that the proposed approach achieves state-of-the-art performance on all benchmarks.
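At a high level, SLING's combination of complementary signals can be sketched as score aggregation across independent linkers; the weighted sum below is illustrative, not the paper's actual aggregation scheme.

def combine_linkers(scores_per_linker: list, weights: list) -> list:
    """Weighted sum of per-linker scores for each candidate KB
    relation, returned as a ranking (best first)."""
    combined = {}
    for w, scores in zip(weights, scores_per_linker):
        for rel, s in scores.items():
            combined[rel] = combined.get(rel, 0.0) + w * s
    return sorted(combined.items(), key=lambda kv: -kv[1])

# e.g. an AMR-predicate cue and a distant-supervision statistic:
amr_cues = {"dbo:director": 0.9, "dbo:producer": 0.4}
distant  = {"dbo:director": 0.7, "dbo:writer": 0.5}
print(combine_linkers([amr_cues, distant], weights=[0.6, 0.4]))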