Testing Goodness of Fit of Conditional Density Models with Kernels

Jitkrittum, Wittawat, Kanagawa, Heishiro, Schölkopf, Bernhard

arXiv.org Machine Learning 

Conditional distributions provide a versatile tool for capturing the relationship between a target variable and a conditioning variable (or covariate). The last few decades have seen a broad range of modeling applications across multiple disciplines, including econometrics [30, 42] and machine learning [14, 40], among others. In many cases, estimating a conditional density function from the observed data is one of the first crucial steps in the data analysis pipeline. While the task of conditional density estimation has received considerable attention in the literature, fewer works have investigated the equally important task of evaluating the goodness of fit of a given conditional density model. Several approaches to conditional model evaluation take the form of a hypothesis test: given a conditional model and a joint sample containing realizations of both target variables and covariates, test the null hypothesis that the model is correctly specified against the alternative that it is not. The model does not specify the marginal distribution of the covariates. We refer to this task as conditional goodness-of-fit testing. One of the early nonparametric tests is [1], which extended the classic Kolmogorov test to the conditional case.
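To make the testing task concrete, the following is a minimal sketch of a classical conditional goodness-of-fit test in the Kolmogorov style mentioned above. It is an illustrative toy, not the kernel-based test of this paper: it uses the probability integral transform (PIT), i.e., if the conditional model with CDF F(y | x) is correct, then U_i = F(y_i | x_i) are i.i.d. Uniform(0, 1), which can be checked with a one-sample Kolmogorov-Smirnov test. The data-generating process, the function name `conditional_gof_pit`, and the two candidate models are all hypothetical choices for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy joint sample: covariate X ~ N(0, 1), target Y | X = x ~ N(2x, 1).
n = 500
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

def conditional_gof_pit(x, y, model_cdf):
    """PIT-based conditional goodness-of-fit test (hypothetical helper).

    If the model's conditional CDF F(y | x) is correctly specified, the
    transformed values U_i = F(y_i | x_i) are i.i.d. Uniform(0, 1); we
    test that with a one-sample Kolmogorov-Smirnov test. Note that the
    model never needs to specify the marginal distribution of X.
    """
    u = model_cdf(y, x)
    return stats.kstest(u, "uniform")

# Correctly specified model: Y | X = x ~ N(2x, 1).
res_good = conditional_gof_pit(
    x, y, lambda y, x: stats.norm.cdf(y, loc=2.0 * x, scale=1.0)
)
# Misspecified model that ignores the covariate: Y | X = x ~ N(0, 1).
res_bad = conditional_gof_pit(
    x, y, lambda y, x: stats.norm.cdf(y, loc=0.0, scale=1.0)
)

print(f"correct model: p = {res_good.pvalue:.3f}")
print(f"misspecified model: p = {res_bad.pvalue:.3g}")
```

Under the correct model the p-value is typically large (no rejection), while the covariate-blind model is rejected decisively; more flexible tests, such as the kernel-based one developed in this paper, aim to retain this behavior without assuming a tractable conditional CDF.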
