
Density estimation







A Missing Details, A.1: Motivations for working with model latent space

Neural Information Processing Systems

Let us now make this point more rigorous. In our experiments, we use empirical quantiles as thresholds. This is the case for all the kernels that rely on a distance (e.g., the Radial Basis Function kernel and the Matérn kernel). Knowing the category that a suspicious example belongs to, can we improve its prediction? The B&I class is always the lowest among all classes. Table 4 shows that DAUC is not the only choice for identifying OOD examples.
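To make the flagging recipe in this excerpt concrete, here is a minimal sketch of density-based OOD detection in a model's latent space with an empirical-quantile threshold. It assumes scikit-learn's KernelDensity with a Gaussian (distance-based) kernel; the latent arrays, bandwidth, and 5% quantile are illustrative stand-ins, not values from the paper.

```python
# Minimal sketch: flag out-of-distribution (OOD) examples by fitting a kernel
# density estimate on training latents and thresholding at an empirical quantile.
# `train_latents` and `test_latents` are hypothetical (n_samples, d) arrays of
# model latent representations; the 5% quantile threshold is illustrative only.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
train_latents = rng.normal(size=(1000, 16))          # stand-in for encoder outputs
test_latents = rng.normal(size=(50, 16)) + 3.0       # shifted, mostly OOD

# The Gaussian kernel depends only on pairwise distances, like the RBF and
# Matérn kernels mentioned in the excerpt.
kde = KernelDensity(kernel="gaussian", bandwidth=1.0).fit(train_latents)

# Empirical quantile of training log-densities serves as the threshold.
train_scores = kde.score_samples(train_latents)
threshold = np.quantile(train_scores, 0.05)

# Test points whose latent density falls below the threshold are flagged.
test_scores = kde.score_samples(test_latents)
is_ood = test_scores < threshold
print(f"flagged {is_ood.sum()} of {len(is_ood)} test points as OOD")
```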


Kernel smoothing on manifolds

Bae, Eunseong, Polonik, Wolfgang

arXiv.org Machine Learning

Under the assumption that data lie on a compact (unknown) manifold without boundary, we derive finite sample bounds for kernel smoothing and its (first and second) derivatives, and we establish asymptotic normality through Berry-Esseen type bounds. Special cases include kernel density estimation, kernel regression and the heat kernel signature. Connections to the graph Laplacian are also discussed.
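As a concrete instance of one special case named in the abstract, the sketch below performs Nadaraya-Watson kernel regression for data lying on a simple manifold (the unit circle embedded in R^2), using a Gaussian kernel on ambient Euclidean distances. The data, bandwidth, and target function are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch: Nadaraya-Watson kernel regression for data on a manifold
# (the unit circle embedded in R^2), one of the special cases of kernel
# smoothing mentioned in the abstract. Bandwidth and data are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Sample points on the unit circle and a noisy response depending on the angle.
theta = rng.uniform(0.0, 2.0 * np.pi, size=500)
X = np.column_stack([np.cos(theta), np.sin(theta)])   # manifold-valued covariates
y = np.sin(2.0 * theta) + 0.1 * rng.normal(size=theta.shape)

def nadaraya_watson(x_query, X, y, bandwidth=0.3):
    """Kernel regression estimate at x_query using a Gaussian kernel on
    ambient Euclidean (chordal) distances between points on the circle."""
    dists = np.linalg.norm(X - x_query, axis=1)
    weights = np.exp(-0.5 * (dists / bandwidth) ** 2)
    return np.sum(weights * y) / np.sum(weights)

# Estimate the regression function at the angle pi/4 on the circle.
query = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
print("estimate:", nadaraya_watson(query, X, y), "truth:", np.sin(np.pi / 2))
```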


Conditional Normalizing Flows for Forward and Backward Joint State and Parameter Estimation

Lagunowich, Luke S., Tong, Guoxiang Grayson, Schiavazzi, Daniele E.

arXiv.org Machine Learning

Traditional filtering algorithms for state estimation -- such as classical Kalman filtering, unscented Kalman filtering, and particle filters -- show performance degradation when applied to nonlinear systems whose uncertainty follows arbitrary non-Gaussian, potentially multi-modal distributions. This study reviews recent approaches to state estimation via nonlinear filtering based on conditional normalizing flows, where the conditional embedding is generated by standard MLP architectures, transformers, or selective state-space models (such as Mamba-SSM). In addition, we test the effectiveness of an optimal-transport-inspired kinetic loss term in mitigating overparameterization in flows consisting of a large collection of transformations. We investigate the performance of these approaches on applications relevant to autonomous driving and patient population dynamics, paying special attention to how they handle time inversion and chained predictions. Finally, we assess the performance of various conditioning strategies for an application to real-world COVID-19 joint SIR system forecasting and parameter estimation.
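For readers unfamiliar with the building blocks, the following is a minimal sketch of a single conditional affine coupling layer and its negative log-likelihood objective, the core mechanism behind conditional normalizing flows. It assumes PyTorch; the small MLP conditioner, layer sizes, and base distribution are illustrative stand-ins for the MLP/transformer/Mamba-SSM embeddings and flow architectures reviewed in the paper.

```python
# Minimal sketch: one conditional affine coupling layer, the basic building block
# of a conditional normalizing flow for state estimation. The MLP conditioner,
# layer sizes, and standard-normal base distribution are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Affine coupling z = (x1, x2 * exp(s) + t), with s, t predicted from
    (x1, context). The context is an embedding of the observations."""
    def __init__(self, dim, context_dim, hidden=64):
        super().__init__()
        self.split = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.split + context_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.split)),
        )

    def forward(self, x, context):
        x1, x2 = x[:, :self.split], x[:, self.split:]
        s, t = self.net(torch.cat([x1, context], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                       # keep scales bounded
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)                 # log |det Jacobian|
        return torch.cat([x1, z2], dim=-1), log_det

# Training objective: maximize the conditional log-likelihood of states x given
# an observation embedding, i.e. log N(z; 0, I) + log |det Jacobian|.
dim, context_dim = 4, 8
layer = ConditionalAffineCoupling(dim, context_dim)
x = torch.randn(32, dim)                        # hypothetical state samples
context = torch.randn(32, context_dim)          # hypothetical observation embedding
z, log_det = layer(x, context)
base_logprob = -0.5 * (z ** 2 + torch.log(torch.tensor(2 * torch.pi))).sum(dim=-1)
nll = -(base_logprob + log_det).mean()
nll.backward()                                  # a gradient step would follow here
```

A full flow stacks several such layers (with permutations between them); the kinetic loss term discussed in the abstract would be an additional penalty on the transformations, which this sketch omits.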



Conditional Density Estimation with Histogram Trees

Neural Information Processing Systems

Conditional density estimation (CDE) goes beyond regression by modeling the full conditional distribution, providing a richer understanding of the data than the conditional mean alone. This makes CDE particularly useful in critical application domains. However, interpretable CDE methods are understudied. Current methods typically employ kernel-based approaches, using kernel functions directly for kernel density estimation or as basis functions in linear models. In contrast, tree-based methods, despite their conceptual simplicity, suitability for visualization, and arguably greater comprehensibility, have been largely overlooked for CDE tasks. We therefore propose the Conditional Density Tree (CDTree), a fully non-parametric model consisting of a decision tree in which each leaf is formed by a histogram model.
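To illustrate the basic idea of a tree whose leaves are histogram models, here is a minimal sketch that partitions the covariate space with an off-the-shelf scikit-learn regression tree and fits one histogram of the target per leaf, giving a piecewise-constant estimate of p(y | x). The tree-growing criterion, bin choice, and data are illustrative assumptions; the paper's CDTree uses its own fully non-parametric construction.

```python
# Minimal sketch of a conditional density tree: a decision tree partitions the
# covariate space, and each leaf stores a histogram of the target, yielding a
# piecewise-constant estimate of p(y | x). The sklearn regression tree used for
# the partition is an illustrative stand-in for the paper's tree-learning method.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(2000, 1))
y = np.where(X[:, 0] > 0, rng.normal(2.0, 0.5, 2000), rng.normal(-1.0, 0.3, 2000))

# 1) Partition the covariate space with a shallow tree.
tree = DecisionTreeRegressor(max_leaf_nodes=8, min_samples_leaf=50).fit(X, y)
leaf_of = tree.apply(X)                           # leaf index for every training point

# 2) Fit one histogram of y per leaf (shared bin edges for simplicity).
edges = np.histogram_bin_edges(y, bins=20)
leaf_hist = {
    leaf: np.histogram(y[leaf_of == leaf], bins=edges, density=True)[0]
    for leaf in np.unique(leaf_of)
}

def conditional_density(x, y_query):
    """Estimate p(y_query | x): route x to its leaf, read the histogram bin."""
    leaf = tree.apply(np.atleast_2d(x))[0]
    bin_idx = np.clip(np.searchsorted(edges, y_query) - 1, 0, len(edges) - 2)
    return leaf_hist[leaf][bin_idx]

print(conditional_density([1.5], 2.0))   # high density: this branch peaks near 2
print(conditional_density([1.5], -1.0))  # low density for this x
```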