Contrastive Conformal Sets
Yahya Alkhatib, Wee Peng Tay
Contrastive learning produces coherent semantic feature embeddings by encouraging positive samples to cluster closely while separating negative samples. However, existing contrastive learning methods lack principled guarantees on coverage within the semantic feature space. We extend conformal prediction to this setting by introducing minimum-volume covering sets equipped with learnable generalized multi-norm constraints. We propose a method that constructs conformal sets guaranteeing user-specified coverage of positive samples while maximizing negative sample exclusion. We establish theoretically that volume minimization serves as a proxy for negative exclusion, enabling our approach to operate effectively even when negative pairs are unavailable. The positive inclusion guarantee inherits the distribution-free coverage property of conformal prediction, while negative exclusion is maximized through learned set geometry optimized on a held-out training split. Experiments on simulated and real-world image datasets demonstrate improved inclusion-exclusion trade-offs compared to standard distance-based conformal baselines.
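The positive-inclusion guarantee referenced in the abstract is the standard split-conformal one. For orientation, the sketch below shows the kind of distance-based conformal baseline the paper compares against: a ball in embedding space whose radius is calibrated on held-out positives so that roughly 1 − α of future positives fall inside it. All function names and the Gaussian stand-in data are illustrative assumptions, not the paper's code.

```python
import numpy as np

def conformal_radius(cal_embeddings, center, alpha=0.1):
    """Split conformal calibration: choose a radius so that the ball around
    `center` covers >= 1 - alpha of future samples from the same distribution."""
    scores = np.linalg.norm(cal_embeddings - center, axis=1)  # nonconformity scores
    n = len(scores)
    # finite-sample-corrected conformal quantile, clipped to a valid level
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, q_level, method="higher")

def in_conformal_set(z, center, radius):
    return np.linalg.norm(z - center) <= radius

# usage: calibrate on positive embeddings, then test coverage on fresh draws
rng = np.random.default_rng(0)
center = np.zeros(8)
cal = rng.normal(size=(500, 8))            # stand-in for positive embeddings
r = conformal_radius(cal, center, alpha=0.1)
test = rng.normal(size=(1000, 8))
coverage = np.mean([in_conformal_set(z, center, r) for z in test])
print(f"radius={r:.3f}, empirical coverage≈{coverage:.3f}")  # ≈ 0.9
```

The paper's contribution replaces this fixed Euclidean ball with a learned minimum-volume set; the coverage calibration step, however, is the same distribution-free mechanism.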
Lemma 5. Let S = (Z1, ..., Zn) be a collection of n independent zero-mean random variables, and let Φ be an arbitrary random variable defined on the same probability space.

Given a deterministic algorithm f, we consider the algorithm that adds Gaussian noise to the predictions of f:

fσ(z, x, R) = f(z, x) + ξ,    (151)

where ξ ∼ N(0, σ²I_d).

[Figure: the middle panel repeats the experiment of Figure 1a while making the training algorithm stochastic by randomizing the seed.]

Table 1: The architecture of the 4-layer convolutional neural network used in the MNIST 4 vs. 9 classification task.
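As a small illustration of the randomized predictor in (151), the sketch below adds isotropic Gaussian noise to the d-dimensional output of a deterministic predictor. The names `f_sigma` and `f` are assumptions for illustration, not identifiers from the source.

```python
import numpy as np

def f_sigma(f, z, x, sigma, rng):
    """Noisy version of a deterministic predictor f, as in (151):
    returns f(z, x) + xi with xi ~ N(0, sigma^2 * I_d)."""
    pred = f(z, x)                                   # deterministic prediction in R^d
    xi = rng.normal(loc=0.0, scale=sigma, size=pred.shape)
    return pred + xi
```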
A Proofs of the Main Results
This section describes Stein variational gradient descent (SVGD) by Liu and Wang [19]. The overview is meant as supplementary material for Section 5, where we propose to use SVGD for inferring the DiBS posteriors p(Z | D) and p(Z, Θ | D). In contrast to sampling-based MCMC or optimization-based variational inference methods, SVGD iteratively transports a fixed set of particles to closely match a target distribution, akin to gradient descent in optimization. We refer the reader to Liu and Wang [19] for additional details. Let p(x) with x ∈ X be a differentiable density that we want to sample from, e.g., to estimate an expectation. A minimal sketch of the SVGD update follows below.
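To make the particle-transport idea concrete, here is a minimal NumPy sketch of the generic SVGD update from Liu and Wang [19], using an RBF kernel with the median bandwidth heuristic and a toy 2-D Gaussian target. It illustrates the method only; it is not the DiBS implementation, and all names are ours.

```python
import numpy as np

def rbf_kernel(X, h=None):
    """Pairwise RBF kernel k(x, y) = exp(-||x - y||^2 / h) and its gradient
    with respect to the first argument; h defaults to the median heuristic."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # (n, n) squared distances
    if h is None:
        h = np.median(sq) / np.log(X.shape[0] + 1) + 1e-8
    K = np.exp(-sq / h)
    # grad_K[j, i] = grad_{x_j} k(x_j, x_i) = -(2/h) (x_j - x_i) k(x_j, x_i)
    grad_K = -2.0 / h * K[:, :, None] * (X[:, None, :] - X[None, :, :])
    return K, grad_K

def svgd_step(X, grad_logp, eps=0.05):
    """One SVGD update:
    x_i += eps * (1/n) * sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    K, grad_K = rbf_kernel(X)
    phi = (K.T @ grad_logp(X) + grad_K.sum(axis=0)) / X.shape[0]
    return X + eps * phi

# toy target: standard normal in 2-D, so grad log p(x) = -x
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, size=(100, 2))        # particles initialized far from the target
for _ in range(500):
    X = svgd_step(X, lambda Y: -Y)
print(X.mean(axis=0), X.std(axis=0))          # ≈ [0, 0] and ≈ [1, 1]
```

The first term of the update pulls particles toward high-density regions via the kernel-weighted score, while the second (repulsive) term keeps particles spread out, which is what distinguishes SVGD from running n independent gradient ascents.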