arXiv.org Machine Learning
Stochastic Poisson Surface Reconstruction with One Solve using Geometric Gaussian Processes
Holalkere, Sidhanth, Bindel, David S., Sellán, Silvia, Terenin, Alexander
Poisson Surface Reconstruction is a widely used algorithm for reconstructing a surface from an oriented point cloud. To facilitate applications where only partial surface information is available, or scanning is performed sequentially, a recent line of work proposes to incorporate uncertainty into the reconstructed surface via Gaussian process models. The resulting algorithms first perform Gaussian process interpolation, then solve a set of volumetric partial differential equations globally in space, resulting in a computationally expensive two-stage procedure. In this work, we apply recently developed techniques from geometric Gaussian processes to combine interpolation and surface reconstruction into a single stage, requiring only one linear solve per sample. The resulting reconstructed surface samples can be queried locally in space, without the use of problem-dependent volumetric meshes or grids. These capabilities enable one to (a) perform probabilistic collision detection locally around the region of interest, (b) perform ray casting without evaluating points not on the ray's trajectory, and (c) perform next-view planning on a per-slice basis. They also improve reconstruction quality, by not requiring one to approximate kernel matrix inverses with diagonal matrices as part of intermediate computations. Results show that our approach provides a cleaner, more principled, and more flexible stochastic surface reconstruction pipeline.
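To make the pipeline the abstract contrasts against concrete, here is a minimal sketch of the classical two-stage stochastic PSR baseline: Gaussian process interpolation of the oriented point cloud's normal field onto a grid, followed by a volumetric Poisson solve for the implicit indicator function. The RBF kernel, grid resolution, and screening term are illustrative assumptions, not the paper's geometric-GP construction, which instead folds both stages into one linear solve per posterior sample.

```python
# Sketch of the two-stage baseline: GP interpolation, then a Poisson solve.
# All modeling choices (RBF kernel, grid, screening) are illustrative.
import numpy as np
from scipy.sparse import identity, kron, diags
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)

# Oriented point cloud: points on the unit circle with outward normals.
t = rng.uniform(0, 2 * np.pi, 40)
P = np.c_[np.cos(t), np.sin(t)]           # positions
N = P.copy()                              # unit normals

def rbf(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Stage 1: GP regression of each normal component onto a regular grid.
n = 64
g = np.linspace(-1.5, 1.5, n)
G = np.stack(np.meshgrid(g, g, indexing="ij"), -1).reshape(-1, 2)
K = rbf(P, P) + 1e-6 * np.eye(len(P))
V = rbf(G, P) @ np.linalg.solve(K, N)     # posterior-mean vector field

# Stage 2: solve the (screened) Poisson equation  div grad chi = div V.
h = g[1] - g[0]
D = diags([1, -2, 1], [-1, 0, 1], (n, n)) / h**2
L = kron(D, identity(n)) + kron(identity(n), D)      # 2-D Laplacian
Vx = V[:, 0].reshape(n, n); Vy = V[:, 1].reshape(n, n)
div = (np.gradient(Vx, h, axis=0) + np.gradient(Vy, h, axis=1)).ravel()
chi = spsolve((L - 1e-3 * identity(n * n)).tocsc(), div)  # screened for stability
print("indicator range:", chi.min(), chi.max())
```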
Centroid Decision Forest
Ali, Amjad, Khan, Zardad, Aldahmani, Saeed
This paper introduces the centroid decision forest (CDF), a novel ensemble learning framework that redefines the splitting strategy and tree construction of ordinary decision trees for high-dimensional classification. The splitting approach in CDF differs from that of traditional decision trees in that the class separability score (CSS) determines the selection of the most discriminative features at each node to construct centroids of the partitions (daughter nodes). The splitting criterion uses Euclidean distances from each class centroid to achieve a splitting mechanism that is more flexible and robust. Centroids are constructed by computing the mean feature values of the selected features for each class, ensuring a class-representative division of the feature space. This centroid-driven approach enables CDF to capture complex class structures while maintaining interpretability and scalability. To evaluate CDF, 23 high-dimensional datasets are used to assess its performance against state-of-the-art classifiers through classification accuracy and Cohen's kappa statistic. The experimental results show that CDF outperforms conventional methods, establishing its effectiveness and flexibility for high-dimensional classification problems.
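A minimal sketch of the centroid-based split described above, assuming a simple between/within variance ratio as a stand-in for the paper's class separability score (CSS); all names and parameter choices are illustrative, not the authors' implementation.

```python
# Sketch: select discriminative features, build per-class centroids on them,
# and route samples to the nearest centroid (one node's split).
import numpy as np

def css(X, y):
    """Between-class / within-class variance ratio per feature (CSS stand-in)."""
    classes = np.unique(y)
    mu = X.mean(0)
    between = sum((X[y == c].mean(0) - mu) ** 2 * (y == c).sum() for c in classes)
    within = sum(((X[y == c] - X[y == c].mean(0)) ** 2).sum(0) for c in classes)
    return between / (within + 1e-12)

def centroid_split(X, y, n_features=5):
    idx = np.argsort(css(X, y))[-n_features:]          # most discriminative features
    classes = np.unique(y)
    centroids = np.stack([X[y == c][:, idx].mean(0) for c in classes])
    d = np.linalg.norm(X[:, idx][:, None, :] - centroids[None], axis=-1)
    return classes[d.argmin(1)], idx, centroids        # child assignment per sample

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1000))                       # high-dimensional toy data
y = rng.integers(0, 3, 200)
X[y == 1, :10] += 2.0; X[y == 2, :10] -= 2.0           # signal in the first 10 features
assign, idx, _ = centroid_split(X, y)
print("split purity:", (assign == y).mean())
```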
Minimum Volume Conformal Sets for Multivariate Regression
Braun, Sacha, Aolaritei, Liviu, Jordan, Michael I., Bach, Francis
Conformal prediction provides a principled framework for constructing predictive sets with finite-sample validity. While much of the focus has been on univariate response variables, existing multivariate methods either impose rigid geometric assumptions or rely on flexible but computationally expensive approaches that do not explicitly optimize prediction set volume. We propose an optimization-driven framework based on a novel loss function that directly learns minimum-volume covering sets while ensuring valid coverage. This formulation naturally induces a new nonconformity score for conformal prediction, which adapts to the residual distribution and covariates. Our approach optimizes over prediction sets defined by arbitrary norm balls, including single and multi-norm formulations. Additionally, by jointly optimizing both the predictive model and predictive uncertainty, we obtain prediction sets that are tight, informative, and computationally efficient, as demonstrated in our experiments on real-world datasets.
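A minimal sketch of the conformal calibration step that such a norm-ball score induces, using s(x, y) = ||y - f(x)|| (taking r(x) = 1 for simplicity): f here is ordinary least squares, and the joint optimization of the predictor and set volume, which is the paper's actual contribution, is not reproduced. All names are illustrative.

```python
# Split-conformal calibration of a Euclidean-ball prediction set for a
# multivariate response; guarantees >= 1 - alpha marginal coverage.
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 600, 5, 2                                      # samples, covariates, response dim
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, m))
Y = X @ W + 0.3 * rng.normal(size=(n, m))

fit, cal = slice(0, 300), slice(300, 600)
beta, *_ = np.linalg.lstsq(X[fit], Y[fit], rcond=None)   # point predictor f(x) = x @ beta

alpha = 0.1
scores = np.linalg.norm(Y[cal] - X[cal] @ beta, axis=1)  # nonconformity scores
k = int(np.ceil((1 - alpha) * (scores.size + 1)))        # finite-sample-valid quantile index
radius = np.sort(scores)[k - 1]

# Prediction set for new x: the ball {y : ||y - x @ beta|| <= radius}.
x_new = rng.normal(size=d)
print("center:", x_new @ beta, "radius:", radius)
```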
Accelerating Langevin Monte Carlo Sampling: A Large Deviations Analysis
Yao, Nian, Ali, Pervez, Tao, Xihua, Zhu, Lingjiong
Langevin algorithms are popular Markov chain Monte Carlo methods that are often used to solve high-dimensional, large-scale sampling problems in machine learning. The classical Langevin Monte Carlo algorithm is based on overdamped Langevin dynamics, but many variants of Langevin dynamics often show superior performance in practice. In this paper, we provide a unified approach to studying the acceleration of such variants of overdamped Langevin dynamics through the lens of large deviations theory. Numerical experiments using both synthetic and real data illustrate the efficiency of these variants.
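For concreteness, a minimal sketch of the overdamped baseline, the unadjusted Langevin algorithm x_{k+1} = x_k - eta * grad U(x_k) + sqrt(2 * eta) * xi_k, which targets exp(-U); the target and step size are illustrative, and the paper's accelerated variants replace this dynamics.

```python
# Unadjusted Langevin algorithm (ULA): discretized overdamped Langevin dynamics.
import numpy as np

def ula(grad_U, x0, eta=1e-2, n_steps=10_000, rng=None):
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        # Gradient step on U plus isotropic Gaussian noise of scale sqrt(2*eta).
        x = x - eta * grad_U(x) + np.sqrt(2 * eta) * rng.normal(size=x.size)
        samples[k] = x
    return samples

# Standard Gaussian target: U(x) = ||x||^2 / 2, so grad U(x) = x.
samples = ula(lambda x: x, x0=np.zeros(2))
print("sample mean:", samples[5000:].mean(0))   # ~ 0
print("sample var: ", samples[5000:].var(0))    # ~ 1
```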
Tractable downfall of basis pursuit in structured sparse optimization
Marmary, Maya V., Grussler, Christian
The problem of finding the sparsest solution to a linear underdetermined system of equations, as it often appears in data analysis, optimal control, and system identification problems, is considered. This non-convex problem is commonly solved by convexification via $\ell_1$-norm minimization, also known as basis pursuit. In this work, a class of structured matrices, representing the system of equations, is introduced for which the basis pursuit approach tractably fails to recover the sparsest solution. In particular, we are able to identify matrix columns that correspond to unrecoverable non-zero entries of the sparsest solution, as well as to conclude the uniqueness of the sparsest solution in polynomial time. These deterministic guarantees contrast with popular probabilistic ones, and as such, provide valuable insights into the a priori design of sparse optimization problems. As our matrix structure appears naturally in optimal control problems, we exemplify our findings by showing that it is possible to verify a priori that basis pursuit may fail in finding fuel-optimal regulators for a class of discrete-time linear time-invariant systems.
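A minimal sketch of basis pursuit itself, recast as a linear program via the split x = u - v with u, v >= 0; the random instance is illustrative (on generic Gaussian matrices recovery typically succeeds), whereas the paper constructs structured matrices on which this relaxation provably fails.

```python
# Basis pursuit: min ||x||_1 s.t. Ax = b, solved as an LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m, n, s = 20, 60, 3
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)   # s-sparse truth
b = A @ x_true

c = np.ones(2 * n)                        # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                 # A u - A v = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```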
Bayesian Semi-Parametric Spatial Dispersed Count Model for Precipitation Analysis
Nadifar, Mahsa, Bekker, Andriette, Arashi, Mohammad, Ramoelo, Abel
The appropriateness of the Poisson model is frequently challenged when examining spatial count data marked by unbalanced distributions, over-dispersion, or under-dispersion. Moreover, traditional parametric models may inadequately capture the relationships among variables when covariates display ambiguous functional forms or when spatial patterns are intricate and indeterminate. To tackle these issues, we propose an innovative Bayesian hierarchical modeling system. This method combines non-parametric techniques with an adapted dispersed count model based on renewal theory, facilitating the effective management of unequal dispersion, non-linear correlations, and complex geographic dependencies in count data. We illustrate the efficacy of our strategy by applying it to lung and bronchus cancer mortality data from Iowa, emphasizing environmental and demographic factors like ozone concentrations, PM2.5, green space, and asthma prevalence. Our analysis demonstrates considerable regional heterogeneity and non-linear relationships, providing important insights into the impact of environmental and health-related factors on cancer death rates. This application highlights the significance of our methodology in public health research, where precise modeling and forecasting are essential for guiding policy and intervention efforts. Additionally, we performed a simulation study to assess the resilience and accuracy of the suggested method, validating its superiority in managing dispersion and capturing intricate spatial patterns relative to conventional methods. The suggested framework presents a flexible and robust instrument for geographical count analysis, offering innovative insights for academics and practitioners in disciplines such as epidemiology, environmental science, and spatial statistics.
Fundamental Safety-Capability Trade-offs in Fine-tuning Large Language Models
Chen, Pin-Yu, Shen, Han, Das, Payel, Chen, Tianyi
Fine-tuning Large Language Models (LLMs) on task-specific datasets has become a primary use of LLMs. However, it has been empirically observed that this approach to enhancing capability inevitably compromises safety, a phenomenon known as the safety-capability trade-off in LLM fine-tuning. This paper presents a theoretical framework for understanding the interplay between safety and capability in two primary safety-aware LLM fine-tuning strategies, providing new insights into the effects of data similarity, context overlap, and the alignment loss landscape. Our theoretical results characterize the fundamental limits of the safety-capability trade-off in LLM fine-tuning and are validated by numerical experiments.
Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems
Lin, Zhidi, Li, Ying, Yin, Feng, Maroñas, Juan, Thiéry, Alexandre H.
Gaussian process state-space models (GPSSMs) have emerged as a powerful framework for modeling dynamical systems, offering interpretable uncertainty quantification and inherent regularization. However, existing GPSSMs face significant challenges in handling high-dimensional, non-stationary systems due to computational inefficiencies, limited scalability, and restrictive stationarity assumptions. In this paper, we propose an efficient transformed Gaussian process state-space model (ETGPSSM) to address these limitations. Our approach leverages a single shared Gaussian process (GP) combined with normalizing flows and Bayesian neural networks, enabling efficient modeling of complex, high-dimensional state transitions while preserving scalability. To address the lack of closed-form expressions for the implicit process in the transformed GP, we follow its generative process and introduce an efficient variational inference algorithm, aided by the ensemble Kalman filter (EnKF), to enable computationally tractable learning and inference. Extensive empirical evaluations on synthetic and real-world datasets demonstrate the superior performance of our ETGPSSM in system dynamics learning, high-dimensional state estimation, and time-series forecasting, outperforming existing GPSSMs and neural network-based methods in both accuracy and computational efficiency.
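A minimal sketch of the EnKF analysis step that makes inference tractable in this kind of setting, with a fixed nonlinear map standing in for the learned GP/flow transition and a linear-Gaussian observation model; all names and dimensions are illustrative, not the paper's ETGPSSM.

```python
# Ensemble Kalman filter: forecast an ensemble through the transition, then
# nudge it toward each observation with a sample-covariance Kalman gain.
import numpy as np

rng = np.random.default_rng(4)
dx, dy, n_ens = 10, 4, 100
H = rng.normal(size=(dy, dx))                     # linear observation model
R = 0.1 * np.eye(dy)                              # observation noise covariance

def transition(X):                                # stand-in for the learned transition
    return np.tanh(X) + 0.05 * rng.normal(size=X.shape)

x_true = rng.normal(size=dx)
X = rng.normal(size=(n_ens, dx))                  # state ensemble
for t in range(50):
    x_true = transition(x_true[None])[0]                         # true state evolves
    y = H @ x_true + rng.multivariate_normal(np.zeros(dy), R)    # noisy observation
    X = transition(X)                                            # forecast ensemble
    Xc = X - X.mean(0)
    P = Xc.T @ Xc / (n_ens - 1)                                  # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)                 # Kalman gain
    Yp = y + rng.multivariate_normal(np.zeros(dy), R, n_ens)     # perturbed observations
    X = X + (Yp - X @ H.T) @ K.T                                 # analysis update
print("state estimation error:", np.linalg.norm(X.mean(0) - x_true))
```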
A New Stochastic Approximation Method for Gradient-based Simulated Parameter Estimation
This paper tackles the challenge of parameter calibration in stochastic models, particularly in scenarios where the likelihood function is unavailable in analytical form. We introduce a gradient-based simulated parameter estimation (GSPE) framework, which employs a multi-time-scale stochastic approximation algorithm. This approach effectively addresses the ratio bias that arises in both maximum likelihood estimation and posterior density estimation problems. The proposed algorithm enhances estimation accuracy and significantly reduces computational costs, as demonstrated through extensive numerical experiments. Our work extends the GSPE framework to handle complex models such as hidden Markov models and variational inference-based problems, offering a robust solution for parameter estimation in challenging stochastic environments.
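A minimal sketch of the multi-time-scale idea on a toy simulated-likelihood problem: when the gradient is a ratio E[N(theta)]/E[D(theta)], running estimates of numerator and denominator are tracked on a fast timescale while theta moves on a slow one, avoiding the plug-in ratio bias. The Gaussian toy model and step-size schedules are illustrative assumptions, not the paper's setting.

```python
# Two-timescale stochastic approximation for a ratio-form gradient.
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(2.0, 1.0, size=500)         # observations, true mean = 2

theta, N_hat, D_hat = 0.0, 0.0, 1.0
for k in range(1, 20_001):
    a_k = 1.0 / k ** 0.6                      # fast step (tracking)
    b_k = 1.0 / k                             # slow step (parameter)
    x = data[rng.integers(len(data))]
    # "Simulated" unnormalized likelihood of N(theta, 1); the gradient of its
    # log-average is the ratio E[w * (x - theta)] / E[w], both simulable.
    w = np.exp(-0.5 * (x - theta) ** 2)
    N_hat += a_k * (w * (x - theta) - N_hat)  # fast: track numerator
    D_hat += a_k * (w - D_hat)                # fast: track denominator
    theta += b_k * N_hat / max(D_hat, 1e-8)   # slow: ascend using the tracked ratio
print("estimated mean:", theta)               # ~ 2
```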
Quantile-Based Randomized Kaczmarz for Corrupted Tensor Linear Systems
Castillo, Alejandra, Haddock, Jamie, Hartsock, Iryna, Hoyos, Paulina, Kassab, Lara, Kryshchenko, Alona, Larripa, Kamila, Needell, Deanna, Suryanarayanan, Shambhavi, Djima, Karamatou Yacoubou
The reconstruction of tensor-valued signals from corrupted measurements, known as tensor regression, has become essential in many multi-modal applications such as hyperspectral image reconstruction and medical imaging. In this work, we address the tensor linear system problem $\mathcal{A} \mathcal{X}=\mathcal{B}$, where $\mathcal{A}$ is a measurement operator, $\mathcal{X}$ is the unknown tensor-valued signal, and $\mathcal{B}$ contains the measurements, possibly corrupted by arbitrary errors. Such corruption is common in large-scale tensor data, where transmission, sensory, or storage errors are rare per instance but likely over the entire dataset and may be arbitrarily large in magnitude. We extend the Kaczmarz method, a popular iterative algorithm for solving large linear systems, to develop a Quantile Tensor Randomized Kaczmarz (QTRK) method robust to large, sparse corruptions in the observations $\mathcal{B}$. This approach combines the tensor Kaczmarz framework with quantile-based statistics, allowing it to mitigate adversarial corruptions and improve convergence reliability. We also propose and discuss the Masked Quantile Randomized Kaczmarz (mQTRK) variant, which selectively applies partial updates to further mitigate corruptions. We present convergence guarantees, discuss the advantages and disadvantages of our approaches, and demonstrate the effectiveness of our methods through experiments, including an application to video deblurring.
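A minimal sketch of the quantile-thresholded Kaczmarz update in the ordinary matrix case, the scalar analogue of QTRK: a sampled row's update is applied only if its residual falls below a quantile of the residuals, so heavily corrupted rows are effectively skipped. Recomputing the full residual each iteration is for clarity only; practical variants estimate the quantile from a subsample.

```python
# Quantile-based randomized Kaczmarz for Ax = b with sparse corruptions in b.
import numpy as np

rng = np.random.default_rng(6)
m, n = 500, 50
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true
corrupt = rng.choice(m, 25, replace=False)
b[corrupt] += rng.normal(0, 100, size=25)      # large, sparse corruptions

x, q = np.zeros(n), 0.8
for it in range(20_000):
    i = rng.integers(m)
    r = A @ x - b                              # full residual (quantile reference)
    thresh = np.quantile(np.abs(r), q)
    if abs(r[i]) <= thresh:                    # skip rows that look corrupted
        x -= r[i] * A[i] / (A[i] @ A[i])       # standard Kaczmarz projection
print("error:", np.linalg.norm(x - x_true))
```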