Patwardhan, Siddharth
EELBERT: Tiny Models through Dynamic Embeddings
Cohn, Gabrielle, Agarwal, Rishika, Gupta, Deepanshu, Patwardhan, Siddharth
We introduce EELBERT, an approach for compressing transformer-based models (e.g., BERT) with minimal impact on downstream task accuracy. This is achieved by replacing the input embedding layer of the model with dynamic, i.e., on-the-fly, embedding computations. Since the input embedding layer accounts for a significant fraction of the model size, especially for the smaller BERT variants, replacing this layer with an embedding computation function reduces the model size significantly. Empirical evaluation on the GLUE benchmark shows that our BERT variants (EELBERT) suffer minimal regression compared to the traditional BERT models. Through this approach, we develop our smallest model, UNO-EELBERT, which achieves a GLUE score within 4% of fully trained BERT-tiny while being 15x smaller (1.2 MB).
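The core idea of computing embeddings on the fly rather than storing a lookup table can be sketched as follows. This is a minimal illustration using hashed character n-grams with signed pseudo-random components; the n-gram hashing scheme here is an assumption for illustration, not EELBERT's actual embedding function.

```python
import hashlib

def dynamic_embedding(token, dim=8):
    """Compute a token embedding on the fly from hashed character n-grams.

    Illustrative sketch only: a deterministic hash of each trigram seeds
    +/-1 components per dimension, so no embedding table is stored.
    """
    vec = [0.0] * dim
    padded = f"#{token}#"
    ngrams = [padded[i:i + 3] for i in range(len(padded) - 2)]
    for ng in ngrams:
        # Stable hash of the n-gram; each bit picks a +/-1 sign per dimension.
        h = int(hashlib.md5(ng.encode()).hexdigest(), 16)
        for d in range(dim):
            vec[d] += 1.0 if (h >> d) & 1 else -1.0
    return [v / len(ngrams) for v in vec]
```

Because the embedding is a pure function of the token string, it is deterministic across runs and requires no per-token parameters, which is what eliminates the embedding layer's contribution to model size.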
Machine Learning as an Accurate Predictor for Percolation Threshold of Diverse Networks
Patwardhan, Siddharth, Majumder, Utso, Sarma, Aditya Das, Pal, Mayukha, Dwivedi, Divyanshi, Panigrahi, Prasanta K.
The percolation threshold is an important measure of the inherent rigidity of large networks. Computing it numerically for large networks is computationally intensive, so there is a need for predictors of the percolation threshold that do not rely on numerical simulation. We demonstrate the efficacy of five machine learning-based regression techniques for accurate prediction of the percolation threshold. The dataset generated to train the machine learning models contains a total of 777 real and synthetic networks, with 5 statistical and structural properties of each network as features and the numerically computed percolation threshold as the output attribute. We establish that the machine learning models outperform three existing empirical estimators of the bond percolation threshold, and extend the experiment to predict site and explosive percolation thresholds. Comparing the models' performance by RMSE, the gradient boosting regressor, multilayer perceptron, and random forest regression models achieve the lowest RMSE among the models considered.
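One family of closed-form empirical baselines the learned models are compared against is degree-distribution estimators. The classic Molloy-Reed style estimate for the bond percolation threshold, p_c ≈ ⟨k⟩ / (⟨k²⟩ − ⟨k⟩), can be computed directly from a network's degree sequence; whether this exact estimator is among the paper's three baselines is an assumption here.

```python
def bond_percolation_estimate(degrees):
    """Degree-based empirical estimate of the bond percolation threshold.

    p_c ~= <k> / (<k^2> - <k>), computed from a degree sequence.
    Illustrative baseline, not one of the paper's learned models.
    """
    n = len(degrees)
    k1 = sum(degrees) / n                 # first moment <k>
    k2 = sum(d * d for d in degrees) / n  # second moment <k^2>
    return k1 / (k2 - k1)
```

For a k-regular network this reduces to 1/(k − 1), e.g. 0.5 for a 3-regular graph, which makes the estimator easy to sanity-check.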
Languages You Know Influence Those You Learn: Impact of Language Characteristics on Multi-Lingual Text-to-Text Transfer
Muller, Benjamin, Gupta, Deepanshu, Patwardhan, Siddharth, Fauconnier, Jean-Philippe, Vandyke, David, Agarwal, Sachin
Multi-lingual language models (LMs), such as mBERT, XLM-R, mT5, and mBART, have been remarkably successful in enabling natural language tasks in low-resource languages through cross-lingual transfer from high-resource ones. In this work, we try to better understand how such models, specifically mT5, transfer *any* linguistic and semantic knowledge across languages, even though no explicit cross-lingual signals are provided during pre-training. Rather, only unannotated texts from each language are presented to the model separately and independently of one another, and the model appears to implicitly learn cross-lingual connections. This raises several questions that motivate our study, such as: Are the cross-lingual connections between every language pair equally strong? What properties of the source and target languages impact the strength of cross-lingual transfer? Can we quantify the impact of those properties on cross-lingual transfer? In our investigation, we analyze a pre-trained mT5 to discover the attributes of cross-lingual connections learned by the model. Through a statistical interpretation framework over 90 language pairs across three tasks, we show that transfer performance can be modeled by a few linguistic and data-derived features. These observations enable us to interpret the cross-lingual understanding of the mT5 model. Through these observations, one can favorably choose the best source language for a task, and can anticipate its training data demands. A key finding of this work is that similarity of syntax, morphology, and phonology are good predictors of cross-lingual transfer, significantly more so than the mere lexical similarity of languages. For a given target language, we are able to predict zero-shot performance, which increases on a logarithmic scale with the number of few-shot target-language data points.
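The final claim, that performance grows logarithmically with the number of target-language examples, corresponds to fitting score ≈ a + b·log(n). A minimal sketch of such a fit via ordinary least squares on synthetic data (the numbers below are illustrative, not the paper's measurements):

```python
import math

def fit_log_scale(num_examples, scores):
    """OLS fit of score = a + b * log(n).

    Sketch of modeling performance as logarithmic in the number of
    few-shot examples; data passed in is assumed synthetic/illustrative.
    """
    xs = [math.log(n) for n in num_examples]
    xbar = sum(xs) / len(xs)
    ybar = sum(scores) / len(scores)
    # Closed-form simple linear regression: slope = cov(x, y) / var(x).
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, scores))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b
```

Once a and b are fit for a language pair, the model predicts the score at any data budget n, which is what allows anticipating training data demands.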
Can Open Domain Question Answering Systems Answer Visual Knowledge Questions?
Zhang, Jiawen, Mishra, Abhijit, S, Avinesh P. V., Patwardhan, Siddharth, Agarwal, Sachin
The task of Outside Knowledge Visual Question Answering (OKVQA) requires an automatic system to answer natural language questions about images using external knowledge. We observe that many visual questions, which contain deictic referential phrases referring to entities in the image, can be rewritten as "non-grounded" questions and answered by existing text-based question answering systems. This allows for the reuse of existing text-based Open Domain Question Answering (QA) systems for visual question answering. In this work, we propose a potentially data-efficient approach that reuses existing systems for (a) image analysis, (b) question rewriting, and (c) text-based question answering to answer such visual questions. Given an image and a question pertaining to that image (a visual question), we first extract the entities present in the image using pre-trained object and scene classifiers. Using these detected entities, the visual questions can be rewritten so as to be answerable by open domain QA systems. We explore two rewriting strategies: (1) an unsupervised method using BERT for masking and rewriting, and (2) a weakly supervised approach that combines adaptive rewriting and reinforcement learning techniques to use the implicit feedback from the QA system. We test our strategies on the publicly available OKVQA dataset and obtain competitive performance with state-of-the-art models while using only 10% of the training data.
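The rewriting step can be illustrated with a toy rule-based stand-in: substitute a detected image entity for a deictic phrase so the question becomes answerable without the image. The phrase list and substitution logic here are illustrative assumptions, not the paper's BERT-based or reinforcement-learning rewriters.

```python
def rewrite_visual_question(question, entities):
    """Rewrite a grounded visual question into a non-grounded one.

    Toy sketch: replace the first deictic phrase found with the top
    detected entity. The phrase inventory is a hypothetical example.
    """
    top = entities[0]  # assume entities are ranked by classifier confidence
    for phrase in ("this object", "this animal", "this",
                   "the object in the image"):
        if phrase in question:
            return question.replace(phrase, f"the {top}", 1)
    return question  # no deictic phrase found; leave unchanged
```

The rewritten question, e.g. "Where does the sushi come from?", can then be sent unchanged to any text-based open domain QA system.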
WatsonPaths: Scenario-Based Question Answering and Inference over Unstructured Information
Lally, Adam (Information Technology and Services) | Bagchi, Sugato (IBM Research) | Barborak, Michael A. (IBM T. J. Watson Research Center) | Buchanan, David W. (IBM T. J. Watson Research Center) | Chu-Carroll, Jennifer (IBM Research) | Ferrucci, David A. (Bridgewater) | Glass, Michael R. (IBM Research) | Kalyanpur, Aditya (IBM T. J. Watson Research Center) | Mueller, Erik T. (Capital One) | Murdock, J. William (IBM T. J. Watson Research Center) | Patwardhan, Siddharth (IBM T. J. Watson Research Center) | Prager, John M. (IBM T. J. Watson Research Center)
We present WatsonPaths, a novel system that can answer scenario-based questions. These include medical questions that present a patient summary and ask for the most likely diagnosis or most appropriate treatment. WatsonPaths builds on the IBM Watson question answering system. WatsonPaths breaks down the input scenario into individual pieces of information, asks relevant subquestions of Watson to conclude new information, and represents these results in a graphical model. Probabilistic inference is performed over the graph to conclude the answer. On a set of medical test preparation questions, WatsonPaths shows a significant improvement in accuracy over multiple baselines.
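Combining the confidences of several supporting subquestion answers into a belief for a graph node can be sketched with a noisy-OR combination. The noisy-OR form and the edge-confidence semantics below are illustrative assumptions, not WatsonPaths' actual probabilistic model.

```python
def combine_evidence(parents):
    """Noisy-OR combination of parent beliefs along weighted edges.

    parents: list of (belief, edge_confidence) pairs for one node.
    Each parent independently fails to support the node with
    probability 1 - belief * edge_confidence; the node holds if
    at least one parent succeeds. Illustrative sketch only.
    """
    p_none = 1.0
    for belief, conf in parents:
        p_none *= 1.0 - belief * conf
    return 1.0 - p_none
```

Applying this node by node in topological order over the assertion graph yields a belief for each candidate answer, from which the highest-belief answer would be selected.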