Appendix to: Training Uncertainty-Aware Classifiers with Conformalized Deep Learning

Bat-Sheva Einbinder, Yaniv Romano, Matteo Sesia, Yanfei Zhou
Authors listed in alphabetical order.

A1 Additional methodological details

Figure A1: Schematic of the proposed uncertainty-aware deep classification learning algorithm.

This procedure is summarized in Algorithm A1, which is a more technical version of Algorithm 1.

This section explains the implementation of the hybrid benchmark method applied in Section 4. This benchmark is based on a loss function designed to incentivize the trained model to produce the smallest possible conformal prediction sets with the desired coverage (e.g., 90%).

To facilitate the exposition of our analysis, we begin by introducing some helpful notation. The first part of the proof is standard and proceeds as follows.

A3.1 Details about experiments with synthetic data

The conditional data-generating distribution of Y given X is given by: P[Y | X] =

Our method (resp., the hybrid method) is applied using

The hybrid loss model is trained via stochastic gradient descent for 4000 epochs, with learning rate 0.01 decreased by a factor of 10 halfway through training.
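For concreteness, the prediction sets with a desired coverage level mentioned above can be obtained with the standard split-conformal recipe for classification. The sketch below is a generic illustration under assumed conventions: the conformity score 1 - p_y(x) and the function name `split_conformal_sets` are our choices for exposition, not necessarily the paper's exact implementation.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split-conformal prediction sets for classification (generic sketch).

    Uses the score s(x, y) = 1 - p_y(x): one minus the probability the model
    assigns to class y. cal_probs has shape (n, K), cal_labels shape (n,),
    test_probs shape (m, K). Returns a list of arrays of class indices.
    """
    n = len(cal_labels)
    # Calibration scores of the true labels, sorted in increasing order.
    cal_scores = np.sort(1.0 - cal_probs[np.arange(n), cal_labels])
    # Finite-sample corrected quantile index guaranteeing >= 1 - alpha
    # marginal coverage; clipped so it stays a valid array index.
    k = min(n - 1, int(np.ceil((n + 1) * (1.0 - alpha))) - 1)
    qhat = cal_scores[k]
    # A class enters the prediction set when its score is at or below qhat.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]
```

With alpha = 0.1 this targets 90% marginal coverage. Note that this simple score can produce empty sets for test points where the model is very unconfident, which is one motivation for more refined scores.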
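A loss that rewards small prediction sets can be made differentiable by replacing the hard set-membership indicator with a sigmoid relaxation. The following is a minimal sketch of that idea only; the temperature, the use of the true-class probability as conformity score, and the name `soft_set_size_loss` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_set_size_loss(probs, labels, alpha=0.1, temperature=0.1):
    """Batch-level surrogate for conformal prediction-set size at target
    coverage 1 - alpha (hedged sketch, not the paper's exact hybrid loss).

    probs:  (n, K) array of predicted class probabilities
    labels: (n,) array of true class indices
    """
    n, K = probs.shape
    # Conformity score of the true class: its predicted probability.
    true_scores = probs[np.arange(n), labels]
    # The empirical alpha-quantile of these scores plays the role of the
    # threshold that gives roughly 1 - alpha coverage on this batch.
    tau = np.quantile(true_scores, alpha)
    # Soft indicator of each class entering the prediction set; the sigmoid
    # makes the set-size term differentiable with respect to probs.
    soft_membership = 1.0 / (1.0 + np.exp(-(probs - tau) / temperature))
    # Average soft set size; minimizing it encourages small prediction sets.
    return soft_membership.sum(axis=1).mean()
```

In a hybrid objective, a term of this kind would typically be added to the usual cross-entropy loss with a weighting coefficient, so that accuracy and set size are traded off explicitly.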
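The stated training recipe (4000 epochs, learning rate 0.01 cut by a factor of 10 halfway through) corresponds to a simple step schedule, sketched below; the function name and argument defaults are ours, chosen to match the numbers in the text.

```python
def learning_rate(epoch, total_epochs=4000, base_lr=0.01, drop_factor=10.0):
    """Step learning-rate schedule matching the stated recipe: a constant
    base_lr for the first half of training, then base_lr / drop_factor for
    the second half."""
    return base_lr if epoch < total_epochs // 2 else base_lr / drop_factor
```

In a framework such as PyTorch, the same schedule is usually expressed with a built-in step scheduler rather than written by hand.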
Neural Information Processing Systems
Authors listed in alphabetical order. Figure A1: Schematic of the proposed uncertainty-aware deep classification learning algorithm. This procedure is summarized in Algorithm A1, which is a more technical version of Algorithm 1. (t 1) (t 1) This section explains the implementation of the hybrid benchmark method applied in Section 4. This This benchmark is based on a loss function designed to incentivize the trained model to produce the smallest possible conformal prediction sets with the desired coverage (e.g., 90% if (t 1) (t 1) To facilitate the exposition of our analysis, we begin by introducing some helpful notations. The first part of the proof is standard and proceeds as follows. A3.1 Details about experiments with synthetic data The conditional data-generating distribution of Y given X is given by: P[Y | X ] = null Our method (resp., the hybrid method) is applied using The hybrid loss model is trained via stochastic gradient descent for 4000 epochs with learning rate 0.01 decreased by a factor 10 halfway through training.