Review for NeurIPS paper: Learning from Positive and Unlabeled Data with Arbitrary Positive Shift
Additional Feedback:

Overall comment: Although I enjoyed reading the paper and it proposes novel ideas for PU learning research, I could not give a high score because: (1) I feel it is hard to compare methods in the experiments due to the use of different models for the proposed method and the baselines, and (2) some of the work in this paper (Sec. ...).

Other comments: The output of a logistic classifier lies between 0 and 1, and in theory it should be an estimate of p(y|x). In practice, the estimate of p(y|x) can be quite noisy, or the classifier may overfit and produce overly peaky \hat{p}(y|x) distributions, as discussed in papers such as "On Calibration of Modern Neural Networks" (ICML 2017). Assuming \hat{\sigma}(x) = p_{tr}(y = -1 | x) therefore seems to be a strong assumption; does this cause any issues in the experiments? A minor suggestion is to investigate confidence calibration and see how sensitive the final PU classifier is to poor calibration.
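To make the calibration suggestion concrete, here is a minimal sketch (not from the paper; all names such as sigma_hat, probs, labels and the synthetic data are assumptions) of how one could quantify the calibration error of a probabilistic estimate like \hat{\sigma}(x) and then perturb it with temperature scaling to probe how sensitive downstream decisions are to mis-calibration:

```python
# Minimal sketch: binary calibration-error check plus temperature scaling
# to simulate better/worse calibrated versions of a probability estimate.
import numpy as np

def calibration_error(probs, labels, n_bins=10):
    """Average |empirical positive rate - mean predicted probability|
    over equal-width probability bins (a binary calibration-curve summary)."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            conf = probs[mask].mean()   # mean predicted probability in the bin
            acc = labels[mask].mean()   # empirical frequency of the positive class
            err += mask.mean() * abs(acc - conf)
    return err

def temperature_scale(probs, T):
    """Sharpen (T < 1) or flatten (T > 1) probabilities by rescaling logits."""
    probs = np.clip(probs, 1e-6, 1 - 1e-6)
    logits = np.log(probs / (1 - probs))
    return 1.0 / (1.0 + np.exp(-logits / T))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic, over-confident probabilities standing in for \hat{\sigma}(x).
    true_p = rng.uniform(0.05, 0.95, size=5000)
    labels = rng.binomial(1, true_p)
    overconfident = temperature_scale(true_p, T=0.5)   # peaky estimates
    for T in (0.5, 1.0, 2.0):
        probs = temperature_scale(overconfident, T)
        print(f"T={T:.1f}  calibration error={calibration_error(probs, labels):.3f}")
```

One could, for example, sweep the temperature applied to \hat{\sigma}(x), retrain or re-evaluate the final PU classifier at each setting, and report how much its performance degrades as calibration worsens.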