Brown, Eric
Multi-step Inference over Unstructured Data
Kalyanpur, Aditya, Saravanakumar, Kailash, Barres, Victor, McFate, CJ, Moon, Lori, Seifu, Nati, Eremeev, Maksim, Barrera, Jose, Brown, Eric, Ferrucci, David
The advent of Large Language Models (LLMs) and Generative AI has revolutionized natural language applications across various domains. However, high-stakes decision-making tasks in fields such as medicine, law, and finance require a level of precision, comprehensiveness, and logical consistency that pure LLM or Retrieval-Augmented Generation (RAG) approaches often fail to deliver. At Elemental Cognition (EC), we have developed a neuro-symbolic AI platform to tackle these problems. The platform integrates fine-tuned LLMs for knowledge extraction and alignment with a robust symbolic reasoning engine for logical inference, planning, and interactive constraint solving. We describe Cora, a Collaborative Research Assistant built on this platform, designed to perform complex research and discovery tasks in high-stakes domains. This paper discusses the multi-step inference challenges inherent in such domains, critiques the limitations of existing LLM-based methods, and demonstrates how Cora's neuro-symbolic approach effectively addresses these issues. We provide an overview of the system architecture and key algorithms for knowledge extraction and formal reasoning, and present preliminary evaluation results that highlight Cora's superior performance compared to well-known LLM and RAG baselines.
Building Watson: An Overview of the DeepQA Project
Ferrucci, David (IBM T. J. Watson Research Center) | Brown, Eric (IBM T. J. Watson Research Center) | Chu-Carroll, Jennifer (IBM T. J. Watson Research Center) | Fan, James (IBM T. J. Watson Research Center) | Gondek, David (IBM T. J. Watson Research Center) | Kalyanpur, Aditya A. (IBM T. J. Watson Research Center) | Lally, Adam (IBM T. J. Watson Research Center) | Murdock, J. William (IBM T. J. Watson Research Center) | Nyberg, Eric (Carnegie Mellon University) | Prager, John (IBM T. J. Watson Research Center) | Schlaefer, Nico (Carnegie Mellon University) | Welty, Chris (IBM T. J. Watson Research Center)
IBM Research undertook a challenge to build a computer system that could compete at the human champion level in real time on the American TV quiz show Jeopardy! The extent of the challenge includes fielding a real-time automatic contestant on the show, not merely a laboratory exercise. The Jeopardy! Challenge helped us address requirements that led to the design of the DeepQA architecture and the implementation of Watson. After three years of intense research and development by a core team of about 20 researchers, Watson is performing at human expert levels in terms of precision, confidence, and speed on the Jeopardy! quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that may be used as a foundation for combining, deploying, evaluating, and advancing a wide range of algorithmic techniques to rapidly advance the field of QA.