Transductive Conformal Inference for Ranking
Fermanian, Jean-Baptiste, Humbert, Pierre, Blanchard, Gilles
We introduce a method based on Conformal Prediction (CP) to quantify the uncertainty of full ranking algorithms. We focus on a specific scenario where $n + m$ items are to be ranked by some "black box" algorithm. It is assumed that the relative (ground truth) ranking of $n$ of them is known. The objective is then to quantify the error made by the algorithm on the ranks of the $m$ new items among the total $n + m$. In such a setting, the true ranks of the $n$ original items among the total $n + m$ depend on the (unknown) true ranks of the $m$ new ones. Consequently, we have no direct access to a calibration set on which to apply a classical CP method. To address this challenge, we propose to construct distribution-free bounds on the unknown conformity scores using recent results on the distribution of conformal p-values. Using these upper bounds on the scores, we provide valid prediction sets for the rank of any item. We also control the false coverage proportion, a crucial quantity when dealing with multiple prediction sets. Finally, we empirically demonstrate the efficiency of our CP method on both synthetic and real data.
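For intuition, the sketch below shows classical split conformal prediction applied to ranks in a toy setting where a calibration set *is* directly available — precisely the assumption that fails in the abstract's transductive setting, where the true ranks of the calibration items among all $n+m$ depend on the unknown new items. All data and the rank-error conformity score are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not the paper's data): n calibration items whose true
# ranks among all n + m items are assumed known, and m new items to rank.
n, m, alpha = 200, 5, 0.1

quality = rng.normal(size=n + m)                 # latent item qualities
noisy = quality + 0.3 * rng.normal(size=n + m)   # "black box" estimates

true_rank = np.argsort(np.argsort(-quality))     # rank 0 = best
pred_rank = np.argsort(np.argsort(-noisy))

# Assumed conformity score: absolute rank error on the calibration items.
cal_scores = np.abs(pred_rank[:n] - true_rank[:n])

# Classical split-CP quantile at level ceil((n + 1)(1 - alpha)) / n.
k = int(np.ceil((n + 1) * (1 - alpha)))
q = np.sort(cal_scores)[k - 1]

# Prediction set for each new item's rank: an interval around its
# predicted rank, clipped to the valid range of ranks.
for j in range(n, n + m):
    lo = max(0, pred_rank[j] - q)
    hi = min(n + m - 1, pred_rank[j] + q)
    print(f"item {j}: predicted rank {pred_rank[j]}, set [{lo}, {hi}]")
```

Note that `cal_scores` here uses the calibration items' true ranks among all $n + m$ items; in the paper's setting those ranks are unknown, which is why the authors instead bound the unknown conformity scores via results on conformal p-values.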
Jan-20-2025