Lykov, Anton
CHORUS: Foundation Models for Unified Data Discovery and Exploration
Kayali, Moe, Lykov, Anton, Fountalis, Ilias, Vasiloglou, Nikolaos, Olteanu, Dan, Suciu, Dan
We apply foundation models to data discovery and exploration tasks. Foundation models are large language models (LLMs) that show promising performance on a range of diverse tasks unrelated to their training. We show that these models are highly applicable to the data discovery and data exploration domain. When carefully used, they have superior capability on three representative tasks: table-class detection, column-type annotation, and join-column prediction. On all three tasks, we show that a foundation-model-based approach outperforms the task-specific models and hence the state of the art. Further, our approach often surpasses human-expert task performance. We investigate the fundamental characteristics of this approach, including generalizability to several foundation models, the impact of non-determinism on the outputs, and syntactic versus semantic signals. All in all, this suggests a future direction in which disparate data management tasks can be unified under foundation models.
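A minimal sketch of the prompting pattern the abstract describes, shown for the column-type annotation task. The `llm_complete` helper, the candidate label set, and the prompt wording are assumptions for illustration, not the paper's exact prompts or API.

```python
# Sketch: column-type annotation by prompting a foundation model.
# `llm_complete` is a hypothetical stand-in for any LLM completion API;
# the candidate types and prompt are illustrative, not CHORUS's exact setup.

CANDIDATE_TYPES = ["person.name", "location.city", "organization", "date", "other"]

def annotate_column(column_name: str, sample_values: list[str], llm_complete) -> str:
    prompt = (
        "You are labeling a table column with a semantic type.\n"
        f"Column header: {column_name}\n"
        f"Sample values: {', '.join(sample_values[:10])}\n"
        f"Choose exactly one type from: {', '.join(CANDIDATE_TYPES)}\n"
        "Answer with the type only."
    )
    answer = llm_complete(prompt).strip()
    # Fall back to "other" if the model answers outside the label set.
    return answer if answer in CANDIDATE_TYPES else "other"
```

The same prompt-and-constrain pattern extends naturally to table-class detection and join-column prediction by serializing the relevant table context into the prompt, though the prompts used in CHORUS itself may differ.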
On the Tractability of SHAP Explanations
Van den Broeck, Guy, Lykov, Anton, Schleich, Maximilian, Suciu, Dan
SHAP explanations are a popular feature-attribution mechanism for explainable AI. They use game-theoretic notions to measure the influence of individual features on the prediction of a machine learning model. Despite considerable recent interest from both academia and industry, it is not known whether SHAP explanations of common machine learning models can be computed efficiently. In this paper, we establish the complexity of computing the SHAP explanation in three important settings. First, we consider fully-factorized data distributions, and show that the complexity of computing the SHAP explanation is the same as the complexity of computing the expected value of the model. This fully-factorized setting is often used to simplify the SHAP computation, yet our results show that the computation can be intractable for commonly used models such as logistic regression. Going beyond fully-factorized distributions, we show that computing SHAP explanations is already intractable for a very simple setting: computing SHAP explanations of trivial classifiers over naive Bayes distributions. Finally, we show that even computing SHAP over the empirical distribution is #P-hard.
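A brief sketch of the Shapley-value computation the abstract refers to: brute force over feature subsets, with the value function taken as the model's average prediction over the empirical distribution when a subset of features is fixed. The toy model and data are assumptions for illustration; the exponential enumeration below is exactly what makes tractability a nontrivial question.

```python
from itertools import combinations
from math import factorial

def shap_value(model, data, x, i):
    """Brute-force SHAP value of feature i for instance x, where
    v(S) = average of model over `data` with features in S fixed to x's values.
    Exponential in the number of features; for illustration only."""
    n = len(x)
    others = [j for j in range(n) if j != i]

    def v(S):
        # Value function over the empirical distribution: average prediction
        # when features in S are fixed to x and the rest come from the data.
        total = 0.0
        for row in data:
            z = [x[j] if j in S else row[j] for j in range(n)]
            total += model(z)
        return total / len(data)

    phi = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            S = set(subset)
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += weight * (v(S | {i}) - v(S))
    return phi

# Toy usage (hypothetical data and model):
data = [[0, 1, 0], [1, 1, 1], [0, 0, 1], [1, 0, 0]]
model = lambda z: 2 * z[0] + z[1] - z[2]
print(shap_value(model, data, x=[1, 1, 0], i=0))
```

The double loop visits all 2^(n-1) subsets of the remaining features, so the question studied in the paper is when this expectation can be computed without such enumeration.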