Ramachandran, Deepak
Tackling Provably Hard Representative Selection via Graph Neural Networks
Kazemi, Mehran, Tsitsulin, Anton, Esfandiari, Hossein, Bateni, MohammadHossein, Ramachandran, Deepak, Perozzi, Bryan, Mirrokni, Vahab
Representative Selection (RS) is the problem of finding a small subset of exemplars from a dataset that is representative of the dataset. In this paper, we study RS for attributed graphs, and focus on finding representative nodes that optimize the accuracy of a model trained on the selected representatives. Theoretically, we establish a new hardness result for RS (in the absence of a graph structure) by proving that a particular, highly practical variant of it (RS for Learning) is hard to approximate in polynomial time within any reasonable factor, which implies a significant potential gap between the optimal solutions of widely-used surrogate functions and the actual accuracy of the model. We then study the setting where a (homophilous) graph structure is available, or can be constructed, between the data points. We show that with an appropriate modeling approach, the presence of such a structure can turn a hard RS (for learning) problem into one that can be effectively solved. To this end, we develop RS-GNN, a representation-learning-based RS model built on Graph Neural Networks. Empirically, we demonstrate the effectiveness of RS-GNN on problems with predefined graph structures as well as problems with graphs induced from node feature similarities, showing that RS-GNN achieves significant improvements over established baselines on a suite of eight benchmarks.
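As a rough illustration of the core idea (not the authors' RS-GNN implementation; the toy graph, features, and untrained weights below are hypothetical), one can score nodes with a single graph-convolution pass over a homophilous graph and take the top-k scorers as representatives:

```python
# A minimal sketch: one GCN-style propagation step produces per-node
# scores; the k highest-scoring nodes are selected as representatives.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 6, 4, 2                      # nodes, feature dim, representatives
X = rng.normal(size=(n, d))            # node features
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:  # toy undirected edges
    A[i, j] = A[j, i] = 1.0
A_hat = A + np.eye(n)                  # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization

W = rng.normal(size=(d, 1))            # untrained scoring weights (illustrative)
scores = (A_norm @ X @ W).ravel()      # one graph-convolution pass -> node scores
representatives = np.argsort(-scores)[:k]
print("selected representative nodes:", representatives)
```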
BoardgameQA: A Dataset for Natural Language Reasoning with Contradictory Information
Kazemi, Mehran, Yuan, Quan, Bhatia, Deepti, Kim, Najoung, Xu, Xin, Imbrasaite, Vaiva, Ramachandran, Deepak
Automated reasoning with unstructured natural text is a key requirement for many potential applications of NLP and for developing robust AI systems. Recently, Language Models (LMs) have demonstrated complex reasoning capacities even without any finetuning. However, existing evaluations of automated reasoning assume access to a consistent and coherent set of information over which models reason. When reasoning in the real world, the available information is frequently inconsistent or contradictory, so models need to be equipped with a strategy to resolve such conflicts when they arise. One widely-applicable way of resolving conflicts is to impose preferences over information sources (e.g., based on source credibility or information recency) and adopt the claim from the source with higher preference. In this paper, we formulate the problem of reasoning with contradictory information guided by preferences over sources as the classical problem of defeasible reasoning, and develop a dataset called BoardgameQA for measuring the reasoning capacity of LMs in this setting. BoardgameQA also incorporates reasoning with implicit background knowledge, to better reflect reasoning problems in downstream applications. We benchmark various LMs on BoardgameQA, and the results reveal a significant gap in the reasoning capacity of state-of-the-art LMs on this problem, showing that reasoning with conflicting information does not surface out-of-the-box in LMs. While performance can be improved with finetuning, it nevertheless remains poor.
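The preference-based conflict resolution described above can be illustrated with a toy example (the facts and source ranking below are hypothetical, not drawn from BoardgameQA): when two sources assert contradictory claims, keep the claim from the higher-preference source.

```python
# Toy defeasible-reasoning step: resolve contradictory claims by
# adopting the claim whose source has the higher preference rank.
facts = [
    {"claim": ("game_is_won", True),  "source": "rulebook"},
    {"claim": ("game_is_won", False), "source": "forum_post"},
]
preference = {"rulebook": 2, "forum_post": 1}   # higher = more trusted

resolved = {}
for fact in facts:
    key, value = fact["claim"]
    rank = preference[fact["source"]]
    if key not in resolved or rank > resolved[key][1]:
        resolved[key] = (value, rank)           # keep the preferred claim

print({k: v for k, (v, _) in resolved.items()})  # {'game_is_won': True}
```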
LAMBADA: Backward Chaining for Automated Reasoning in Natural Language
Kazemi, Mehran, Kim, Najoung, Bhatia, Deepti, Xu, Xin, Ramachandran, Deepak
Remarkable progress has been made on automated reasoning with natural text, by using Language Models (LMs) and methods such as Chain-of-Thought and Selection-Inference. These techniques search for proofs in the forward direction, from axioms to the conclusion, an approach that suffers from a combinatorial explosion of the search space and thus high failure rates on problems requiring longer chains of reasoning. The classical automated reasoning literature has shown that reasoning in the backward direction (i.e., from the intended conclusion to supporting axioms) is significantly more efficient at proof-finding. Importing this intuition into the LM setting, we develop a Backward Chaining algorithm, called LAMBADA, that decomposes reasoning into four sub-modules, each of which is implemented simply by few-shot prompted LM inference. We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods on challenging logical reasoning datasets, particularly when deep and accurate proof chains are required.
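A minimal sketch of classical backward chaining makes the contrast with forward search concrete (the facts and rules below are hypothetical; in LAMBADA itself, the lookup and decomposition steps are performed by few-shot prompted LM calls rather than dictionary accesses):

```python
# Goal-driven proof search: start from the intended conclusion and
# recursively reduce it to sub-goals until known facts are reached.
facts = {"socrates_is_a_man"}
rules = {"socrates_is_mortal": [["socrates_is_a_man"]]}  # goal -> premise lists

def prove(goal, depth=0, max_depth=5):
    if depth > max_depth:                    # bound the recursion depth
        return False
    if goal in facts:                        # goal is a known fact
        return True
    for premises in rules.get(goal, []):     # try each rule concluding the goal
        if all(prove(p, depth + 1, max_depth) for p in premises):
            return True
    return False

print(prove("socrates_is_mortal"))  # True
```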
Pushing the Accuracy-Group Robustness Frontier with Introspective Self-play
Liu, Jeremiah Zhe, Dvijotham, Krishnamurthy Dj, Lee, Jihyeon, Yuan, Quan, Strobel, Martin, Lakshminarayanan, Balaji, Ramachandran, Deepak
Standard empirical risk minimization (ERM) training can produce deep neural network (DNN) models that are accurate on average but under-perform in underrepresented population subgroups, especially when there are imbalanced group distributions in the long-tailed training data. Therefore, approaches that improve the accuracy-group robustness trade-off frontier of a DNN model (i.e., improving worst-group accuracy without sacrificing average accuracy, or vice versa) are of crucial importance. Uncertainty-based active learning (AL) can potentially improve the frontier by preferentially sampling underrepresented subgroups to create a more balanced training dataset. However, the quality of uncertainty estimates from modern DNNs tends to degrade in the presence of spurious correlations and dataset bias, compromising the effectiveness of AL for sampling tail groups. In this work, we propose Introspective Self-play (ISP), a simple approach to improve the uncertainty estimation of a deep neural network under dataset bias, by adding an auxiliary introspection task requiring the model to predict the bias for each data point in addition to the label. We show that ISP provably improves the bias-awareness of the model representation and the resulting uncertainty estimates. On two real-world tabular and language tasks, ISP serves as a simple "plug-in" for AL model training, consistently improving both the tail-group sampling rate and the final accuracy-fairness trade-off frontier of popular AL methods.
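A minimal sketch of the auxiliary-introspection idea (hypothetical shapes and loss weight, not the paper's exact architecture or training setup): a shared encoder feeds two heads, one predicting the label and one predicting a per-example bias indicator, so both losses shape the shared representation.

```python
# Two-head setup: the introspection (bias-prediction) loss backpropagates
# into the shared encoder alongside the main label loss.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
label_head = nn.Linear(32, 2)   # main task head
bias_head = nn.Linear(32, 2)    # introspection head: predict the bias

x = torch.randn(8, 16)
y_label = torch.randint(0, 2, (8,))
y_bias = torch.randint(0, 2, (8,))   # e.g. a spurious-attribute indicator

h = encoder(x)
loss = nn.functional.cross_entropy(label_head(h), y_label) \
     + 0.5 * nn.functional.cross_entropy(bias_head(h), y_bias)
loss.backward()  # gradients from both heads flow into the shared encoder
```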
Understanding Finetuning for Factual Knowledge Extraction from Language Models
Kazemi, Mehran, Mittal, Sid, Ramachandran, Deepak
Language models (LMs) pretrained on large corpora of text from the web have been observed to contain large amounts of various types of knowledge about the world. This observation has led to a new and exciting paradigm in knowledge graph construction where, instead of manual curation or text mining, one extracts knowledge from the parameters of an LM. Recently, it has been shown that finetuning LMs on a set of factual knowledge makes them produce better answers to queries from a different set, making finetuned LMs a good candidate for knowledge extraction and, consequently, knowledge graph construction. In this paper, we analyze finetuned LMs for factual knowledge extraction. We show that along with its previously known positive effects, finetuning also leads to a (potentially harmful) phenomenon we call Frequency Shock, where at test time the model over-predicts rare entities that appear in the training set and under-predicts common entities that do not appear in the training set enough times. We show that Frequency Shock leads to a degradation in the predictions of the model, and that beyond a point, the harm from Frequency Shock can even outweigh the positive effects of finetuning, making finetuning harmful overall. We then consider two solutions to remedy the identified negative effect: (1) model mixing and (2) mixture finetuning with the LM's pre-training task. The two solutions combined lead to significant improvements compared to vanilla finetuning.
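The model-mixing remedy can be sketched as a simple interpolation of predictive distributions (the candidate answers, probabilities, and mixing weight below are illustrative, not the paper's exact formulation):

```python
# Mix the pretrained and finetuned models' answer distributions so that
# the finetuned model's frequency distortions are damped.
import numpy as np

candidates = ["paris", "lyon", "nice"]
p_pretrained = np.array([0.70, 0.20, 0.10])  # hypothetical P(answer | query)
p_finetuned  = np.array([0.30, 0.60, 0.10])  # over-predicts a rare entity

alpha = 0.5                                  # mixing weight (a tunable choice)
p_mixed = alpha * p_pretrained + (1 - alpha) * p_finetuned
print(candidates[int(np.argmax(p_mixed))])   # "paris"
```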
Discovering Personalized Semantics for Soft Attributes in Recommender Systems using Concept Activation Vectors
Göpfert, Christina, Chow, Yinlam, Hsu, Chih-wei, Vendrov, Ivan, Lu, Tyler, Ramachandran, Deepak, Boutilier, Craig
Interactive recommender systems (RSs) allow users to express intent, preferences and contexts in a rich fashion, often using natural language. One challenge in using such feedback is inferring a user's semantic intent from the open-ended terms used to describe an item, and using it to refine recommendation results. Leveraging concept activation vectors (CAVs) [21], we develop a framework to learn a representation that captures the semantics of such attributes and connects them to user preferences and behaviors in RSs. A novel feature of our approach is its ability to distinguish objective and subjective attributes and to associate different senses with different users. Using synthetic and real-world datasets, we show that our CAV representation accurately interprets users' subjective semantics, and can improve recommendations via interactive critiquing.
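A minimal sketch of how a CAV can be computed, in the spirit of Kim et al. [21] (synthetic embeddings and an invented "cozy" attribute; not the paper's personalization machinery): fit a linear classifier separating embeddings of items with and without an attribute, and use its weight vector as the concept direction.

```python
# The classifier's weight vector is the concept activation vector; its
# dot product with an item embedding scores that item on the attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
emb_with = rng.normal(loc=1.0, size=(50, 8))     # items tagged "cozy"
emb_without = rng.normal(loc=-1.0, size=(50, 8)) # items without the tag
X = np.vstack([emb_with, emb_without])
y = np.array([1] * 50 + [0] * 50)

cav = LogisticRegression().fit(X, y).coef_[0]    # the concept direction
item = rng.normal(size=8)
print("attribute score:", item @ cav)            # higher = more "cozy"
```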
An End-to-End Conversational Second Screen Application for TV Program Discovery
Yeh, Peter Z. (Nuance Communications) | Ramachandran, Deepak (Nuance Communications) | Douglas, Benjamin (Nuance Communications) | Ratnaparkhi, Adwait (Nuance Communications) | Jarrold, William (Nuance Communications) | Provine, Ronald (Nuance Communications) | Patel-Schneider, Peter F. (Nuance Communications) | Laverty, Stephen (Nuance Communications) | Tikku, Nirvana (Nuance Communications) | Brown, Sean (Nuance Communications) | Mendel, Jeremy (Nuance Communications) | Emfield, Adam (Nuance Communications)
In this article, we report on a multiphase R&D effort to develop a conversational second screen application for TV program discovery. Our goal is to share with the community the breadth of artificial intelligence (AI) and natural language (NL) technologies required to develop such an application along with learnings from target end-users. We first give an overview of our application from the perspective of the end-user. We then present the architecture of our application along with the main AI and NL components, which were developed over multiple phases. The first phase focuses on enabling core functionality such as effectively finding programs matching the user's intent. The second phase focuses on enabling dialog with the user. Finally, we present two user studies, corresponding to these two phases. The results from both studies demonstrate the effectiveness of our application in the target domain.
The Dialog State Tracking Challenge Series
Williams, Jason D. (Microsoft Corporation) | Henderson, Matthew (Cambridge University) | Raux, Antoine (Lenovo Labs) | Thomson, Blaise (VocalIQ, Ltd) | Black, Alan (Carnegie Mellon University) | Ramachandran, Deepak (Nuance Communications, Inc.)
Dialog state tracking is difficult because automatic speech recognition (ASR) and spoken language understanding (SLU) errors are common and can cause the system to misunderstand the user. At the same time, state tracking is crucial because the system relies on the estimated dialog state to choose actions -- for example, which restaurants to suggest. Figure 1 shows an illustration of the dialog state tracking task. Historically, dialog state tracking has been done with handcrafted rules. More recently, statistical methods have been found to be superior by effectively overcoming some SLU errors, resulting in better dialogs. Despite this progress, direct comparisons between methods have not been possible because past studies use different domains, system components, and evaluation measures, hindering progress.
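The advantage of statistical tracking over trusting a single noisy SLU hypothesis can be sketched with a toy tracker (hypothetical SLU output and a deliberately simple additive update, not any challenge entry): evidence for slot values accumulates across turns, so one misrecognized turn does not flip the state.

```python
# Toy belief update: combine per-turn SLU confidences into a running
# score per slot value instead of keeping only the latest hypothesis.
from collections import defaultdict

belief = defaultdict(float)          # slot value -> accumulated score
turns = [                            # (value, SLU confidence) per turn
    [("italian", 0.6), ("indian", 0.3)],
    [("indian", 0.5), ("italian", 0.4)],
    [("italian", 0.7)],
]
for hypotheses in turns:
    for value, conf in hypotheses:
        belief[value] += conf        # simple additive evidence combination

print(max(belief, key=belief.get))   # "italian", despite the noisy second turn
```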
Improving Hybrid Vehicle Fuel Efficiency Using Inverse Reinforcement Learning
Vogel, Adam (Stanford University) | Ramachandran, Deepak (Honda Research Institute (USA) Inc.) | Gupta, Rakesh (Honda Research Institute (USA) Inc.) | Raux, Antoine (Honda Research Institute (USA) Inc.)
Deciding what mix of engine and battery power to use is critical to hybrid vehicles' fuel efficiency. Current solutions consider several factors such as the charge of the battery and how efficiently the engine operates at a given speed. Previous research has shown that by taking into account the future power requirements of the vehicle, a more efficient balance of engine vs. battery power can be attained. In this paper, we utilize a probabilistic driving route prediction system, trained using Inverse Reinforcement Learning, to optimize the hybrid control policy. Our approach considers routes the driver is likely to take, computing an optimal mix of engine and battery power. This approach has the potential to increase vehicle power efficiency while not requiring any hardware modification or change in driver behavior. Our method outperforms a standard hybrid control policy, yielding an average of 1.22% fuel savings.
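The route-aware control idea can be sketched as an expected-cost minimization (all route probabilities and fuel costs below are hypothetical): weight each candidate engine/battery split's fuel cost by the predicted route distribution from the IRL model, and pick the split with the lowest expected cost.

```python
# Choose the battery/engine split minimizing expected fuel consumption
# under the route predictor's probability distribution.
route_probs = {"highway": 0.7, "city": 0.3}      # from the IRL route predictor
fuel_cost = {                                    # liters per split per route
    0.2: {"highway": 4.1, "city": 5.0},          # key = battery fraction
    0.5: {"highway": 4.4, "city": 4.2},
    0.8: {"highway": 5.0, "city": 3.9},
}

def expected_cost(split):
    return sum(p * fuel_cost[split][r] for r, p in route_probs.items())

best = min(fuel_cost, key=expected_cost)
print("battery fraction:", best, "expected fuel:", round(expected_cost(best), 2))
```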