f66340d6f28dae6aab0176892c9065e7-Supplemental-Conference.pdf

Neural Information Processing Systems

Once closed-form expressions for these Jacobians are derived, it remains to substitute those expressions into (16); the identity often termed the "vec" rule is used for this substitution. To depict the spatial topographies of the latent components measured in the EEG and fMRI analyses, the "forward model" […] is computed. The results of the comparison are shown in Fig. S1, where it is clear that the signal fidelity of the Granger components (right panel) significantly exceeds that yielded by PCA (left) and ICA (middle). GCA is only able to recover sources with temporal dependencies (i.e., s…). Both the single electrodes and the Granger components exhibit two pronounced peaks in the spectra, one near 2 Hz ("delta" […]). Fig. S3 shows the corresponding result for the left motor imagery condition of the EEG motor imagery dataset described in the main text. For each technique, the first 6 components are presented.




Contrasting Global and Patient-Specific Regression Models via a Neural Network Representation

Behrens, Max, Stolz, Daiana, Papakonstantinou, Eleni, Nolde, Janis M., Bellerino, Gabriele, Rohde, Angelika, Hess, Moritz, Binder, Harald

arXiv.org Machine Learning

When developing clinical prediction models, it can be challenging to balance global models that are valid for all patients against personalized models tailored to individuals or potentially unknown subgroups. To aid such decisions, we propose a diagnostic tool for contrasting global regression models and patient-specific (local) regression models. The core utility of this tool is to identify where and for whom a global model may be inadequate. We focus on regression models and specifically suggest a localized regression approach that identifies regions in the predictor space where patients are not well represented by the global model. As localization becomes challenging when dealing with many predictors, we propose modeling in a dimension-reduced latent representation obtained from an autoencoder. Using such a neural network architecture for dimension reduction enables learning a latent representation optimized both for good data reconstruction and for revealing local outcome-related associations suitable for robust localized regression. We illustrate the proposed approach with a clinical study involving patients with chronic obstructive pulmonary disease. Our findings indicate that the global model is adequate for most patients but that specific subgroups indeed benefit from personalized models. We also demonstrate how to map these subgroup models back to the original predictors, providing insight into why the global model falls short for these groups. Thus, the principal application and diagnostic yield of our tool is the identification and characterization of patients or subgroups whose outcome associations deviate from the global model.

Introduction

In clinical research, conclusions about potential relationships between patient characteristics and outcomes are often based on regression models. More specifically, there might not just be some random variability in parameters across patients, e.g. as considered in regression modeling with random effects (Pinheiro and Bates, 2000); rather, different regions in the space spanned by the patient characteristics might require different parameters. For example, the relation of some patient characteristics to the outcome might be more pronounced for older patients with high body weight, without there being a corresponding pre-defined subgroup indicator. While sticking to a global model keeps interpretation simple and is beneficial in terms of statistical stability, it would at least be useful to have a diagnostic tool for judging the potential extent of deviations from the global model.
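As a rough illustration of the diagnostic idea in this abstract, the sketch below contrasts a global regression with kernel-localized regressions in a low-dimensional latent space. It is a toy numpy version only: PCA stands in for the paper's autoencoder, and all data, bandwidths, and thresholds are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic patients: a hidden subgroup (high first characteristic) follows
# a different outcome association than the global model captures.
n, p = 300, 10
X = rng.normal(size=(n, p))
X[:, :3] *= 3.0                          # give the first 3 predictors signal
group = X[:, 0] > 3.0                    # hidden subgroup, no indicator given
y = X[:, 1] + np.where(group, 2.0 * X[:, 2], 0.0) + 0.1 * rng.normal(size=n)

# Dimension reduction: PCA as a linear stand-in for the paper's autoencoder.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                        # 3-dimensional latent representation

def fit_wls(Zd, yd, w=None):
    """(Weighted) least-squares fit with intercept in the latent space."""
    A = np.column_stack([np.ones(len(yd)), Zd])
    W = np.eye(len(yd)) if w is None else np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ yd)

coef_global = fit_wls(Z, y)

# Localized regressions: a kernel-weighted fit around each patient. A large
# gap between local and global predictions flags latent regions where the
# global model is inadequate.
h = 2.0
gap = np.empty(n)
for i in range(n):
    w = np.exp(-np.sum((Z - Z[i]) ** 2, axis=1) / (2 * h ** 2))
    coef_local = fit_wls(Z, y, w)
    zi = np.concatenate([[1.0], Z[i]])
    gap[i] = abs(zi @ coef_local - zi @ coef_global)

flagged = gap > np.quantile(gap, 0.9)    # top-decile deviations
print(f"flagged {flagged.sum()} patients; "
      f"{int((flagged & group).sum())} lie in the hidden subgroup")
```

Mapping the flagged latent region back through `Vt` would be the analogue of the paper's step of characterizing subgroups in terms of the original predictors.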


Robust low-rank estimation with multiple binary responses using pairwise AUC loss

Mai, The Tien

arXiv.org Machine Learning

Multiple binary responses arise in many modern data-analytic problems. Although fitting separate logistic regressions for each response is computationally attractive, it ignores shared structure and can be statistically inefficient, especially in high-dimensional and class-imbalanced regimes. Low-rank models offer a natural way to encode latent dependence across tasks, but existing methods for binary data are largely likelihood-based and focus on pointwise classification rather than ranking performance. In this work, we propose a unified framework for learning with multiple binary responses that directly targets discrimination by minimizing a surrogate loss for the area under the ROC curve (AUC). The method aggregates pairwise AUC surrogate losses across responses while imposing a low-rank constraint on the coefficient matrix to exploit shared structure. We develop a scalable projected gradient descent algorithm based on truncated singular value decomposition. Exploiting the fact that the pairwise loss depends only on differences of linear predictors, we simplify both computation and analysis. We establish non-asymptotic convergence guarantees, showing that under suitable regularity conditions the algorithm converges linearly up to the minimax-optimal statistical precision. Extensive simulation studies demonstrate that the proposed method is robust in challenging settings such as label switching and data contamination and consistently outperforms likelihood-based approaches.
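A minimal numpy sketch of the core recipe: aggregate pairwise AUC surrogate losses across responses and project onto low-rank matrices via truncated SVD. A logistic pairwise surrogate is assumed here; the data, step size, and iteration count are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic multi-task data: q binary responses share a rank-r coefficient matrix.
n, p, q, r = 400, 20, 5, 2
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))
X = rng.normal(size=(n, p))
Y = (X @ B_true + 0.5 * rng.normal(size=(n, q)) > 0).astype(float)

def loss_and_grad(B):
    """Logistic surrogate for 1 - AUC, averaged over positive/negative pairs
    and summed over responses; depends only on score differences."""
    S = X @ B
    total, G = 0.0, np.zeros_like(B)
    for k in range(q):
        pos = np.where(Y[:, k] == 1)[0]
        neg = np.where(Y[:, k] == 0)[0]
        d = S[pos, k][:, None] - S[neg, k][None, :]     # pairwise differences
        total += np.logaddexp(0.0, -d).mean()
        w = -0.5 * (1.0 - np.tanh(d / 2.0)) / d.size    # = -sigmoid(-d)/npairs
        G[:, k] = X[pos].T @ w.sum(axis=1) - X[neg].T @ w.sum(axis=0)
    return total, G

def project_rank(B, r):
    """Projection onto rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

B = np.zeros((p, q))
for _ in range(300):                     # projected gradient descent
    _, G = loss_and_grad(B)
    B = project_rank(B - 0.1 * G, r)

final_loss, _ = loss_and_grad(B)
S = X @ B
aucs = []
for k in range(q):
    pos, neg = Y[:, k] == 1, Y[:, k] == 0
    aucs.append(float((S[pos, k][:, None] > S[neg, k][None, :]).mean()))
print(f"surrogate loss: {final_loss:.3f}, per-response AUC: {np.round(aucs, 3)}")
```

The fact that `d` only involves differences of linear predictors is exactly the simplification the abstract mentions: intercepts drop out and the gradient reduces to weighted class sums.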


Minimum Wasserstein distance estimator under covariate shift: closed-form, super-efficiency and irregularity

Lang, Junjun, Zhang, Qiong, Liu, Yukun

arXiv.org Machine Learning

Covariate shift arises when covariate distributions differ between source and target populations while the conditional distribution of the response remains invariant, and it underlies problems in missing data and causal inference. We propose a minimum Wasserstein distance estimation framework for inference under covariate shift that avoids explicit modeling of outcome regressions or importance weights. The resulting W-estimator admits a closed-form expression and is numerically equivalent to the classical 1-nearest neighbor estimator, yielding a new optimal transport interpretation of nearest neighbor methods. We establish root-$n$ asymptotic normality and show that the estimator is not asymptotically linear, leading to super-efficiency relative to the semiparametric efficient estimator under covariate shift in certain regimes, and uniformly in missing data problems. Numerical simulations, along with an analysis of a rainfall dataset, underscore the exceptional performance of our W-estimator.


DeTrack: In-model Latent Denoising Learning for Visual Object Tracking

Neural Information Processing Systems

Previous visual object tracking methods employ image-feature regression models or coordinate autoregression models for bounding box prediction. Image-feature regression methods depend heavily on matching results and do not exploit positional priors, while autoregressive approaches can only be trained on the bounding boxes available in the training set, potentially resulting in suboptimal performance on unseen data at test time. Inspired by diffusion models, in which denoising learning enhances robustness to unseen data, we introduce noise to bounding boxes, generating noisy boxes for training and thus enhancing model robustness on testing data. We propose a new paradigm that formulates visual object tracking as a denoising learning process. However, tracking algorithms are usually required to run in real time; directly applying a diffusion model to object tracking would severely impair tracking speed.
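The noisy-box idea can be sketched in a few lines. The scaling below loosely mimics a diffusion-style noise schedule and is an assumption for this sketch, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

def add_box_noise(box, t, num=8):
    """Generate `num` noisy training boxes from one ground-truth box.

    Boxes are (cx, cy, w, h) normalised to [0, 1]; `t` in (0, 1] sets the
    noise level. The sqrt scaling loosely mimics a diffusion-style schedule
    and is an assumption of this sketch."""
    box = np.asarray(box, dtype=float)
    noisy = np.sqrt(1.0 - t) * box + np.sqrt(t) * 0.1 * rng.normal(size=(num, 4))
    return np.clip(noisy, 0.0, 1.0)   # keep centres and sizes in range

gt = [0.5, 0.5, 0.2, 0.3]
noisy_boxes = add_box_noise(gt, t=0.3)
print(noisy_boxes.shape)   # (8, 4)
```

A denoising head is then trained to map each noisy box back to the ground truth, so the model learns to correct box perturbations it has never seen, rather than memorizing training-set boxes.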


A Bayes-Sard Cubature Method

Neural Information Processing Systems

To date, research effort has largely focussed on the development of Bayesian cubature, whose distributional output provides uncertainty quantification for the integral. However, the point estimators associated with Bayesian cubature can be inaccurate and acutely sensitive to the prior when the domain is high-dimensional. To address these drawbacks we introduce Bayes-Sard cubature, a probabilistic framework that combines the flexibility of Bayesian cubature with the robustness of well-established classical cubature methods. This is achieved by considering a Gaussian process model for the integrand whose mean is a parametric regression model, with an improper prior on each regression coefficient. The features in the regression model consist of test functions which are guaranteed to be integrated exactly, with the remaining degrees of freedom afforded to the non-parametric part. The asymptotic convergence of the Bayes-Sard cubature method is established and the theoretical results are numerically verified. In particular, we report a two-orders-of-magnitude reduction in error compared to Bayesian cubature in the context of a high-dimensional financial integral.


Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing

Neural Information Processing Systems

Existing regression models tend to fall short in both accuracy and uncertainty estimation when the label distribution is imbalanced. In this paper, we propose a probabilistic deep learning model, dubbed variational imbalanced regression (VIR), which not only performs well in imbalanced regression but also naturally produces reasonable uncertainty estimates as a byproduct. Different from typical variational autoencoders, which assume I.I.D. representations (a data point's representation is not directly affected by other data points), our VIR borrows data with similar regression labels to compute the latent representation's variational distribution; furthermore, different from deterministic regression models that produce point estimates, VIR predicts entire normal-inverse-gamma distributions and modulates the associated conjugate distributions to impose probabilistic reweighting on the imbalanced data, thereby providing better uncertainty estimation. Experiments on several real-world datasets show that VIR can outperform state-of-the-art imbalanced regression models in terms of both accuracy and uncertainty estimation.
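The two ingredients named in the abstract, borrowing data with similar labels when forming latent statistics and probabilistic reweighting of rare labels, can be caricatured in numpy. This is a toy analogy, not the VIR architecture: there is no encoder network and no normal-inverse-gamma head here, just kernel smoothing over labels.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy imbalanced regression: labels concentrated near 0, rare large labels.
n, d = 500, 4
y = np.abs(rng.normal(size=n)) ** 2             # right-skewed labels
Z = np.column_stack([y + 0.3 * rng.normal(size=n) for _ in range(d)])  # "encodings"

# Borrowing across similar labels: each point's latent statistics become a
# kernel-weighted average over points with nearby labels, so rare-label
# points pool information instead of relying on one noisy encoding.
h = 0.25
W = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * h ** 2))
W /= W.sum(axis=1, keepdims=True)               # rows sum to 1
Z_smooth = W @ Z                                # smoothed latent means
latent_var = W @ (Z ** 2) - Z_smooth ** 2       # smoothed latent variances

# Probabilistic reweighting: the inverse of a smoothed label density
# upweights rare labels in the training loss.
density = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * h ** 2)).mean(axis=1)
loss_weights = 1.0 / density
loss_weights /= loss_weights.mean()

rare = y > np.quantile(y, 0.9)
print("rare-label points get larger weights:",
      bool(loss_weights[rare].mean() > loss_weights.mean()))
```

In VIR proper, the pooled statistics parameterize a variational distribution and the reweighting acts through the conjugate normal-inverse-gamma distributions, but the smoothing-plus-reweighting structure is the same.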