$\frac{e^v}{1+e^v} + |S|\, w_q\, \frac{e^v}{1+e^v} = 0$. Solving the equation, we have

Neural Information Processing Systems 

Note that computing the $\hat{R}$ value can be done in constant time if the $W_p$ and $W_n$ values are given. We stress that this result holds for any loss function $\ell$ satisfying $\ell(v,y) > \ell(y,y) \geq 0$ for $v \neq y$.

We performed additional experiments to empirically investigate the difference between the uPU and nnPU risk estimators with regard to overfitting. In Table 11 we report the training risks (measured as PU risk, since the training data is PU) and the testing risks (measured as PN risk, since the test data is PN) using the zero-one loss $\ell_{0/1}(v,y) = (1 - \mathrm{sign}(vy))/2$ on a number of datasets. The results show that the training risk is significantly smaller than the test risk in the uPU setting compared to the nnPU setting, confirming that uPU suffers more from overfitting than nnPU.

Table 11: Training and testing risk of PUET.

Figure 4 shows that the normalized risk reduction importance makes many more pixels more important.
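The comparison above can be illustrated with a minimal NumPy sketch of the standard uPU and nnPU empirical risk estimators evaluated with the zero-one loss. This is an assumption-laden illustration, not the paper's code: the function names are invented here, and the class prior `pi` is assumed known.

```python
import numpy as np

def zero_one_loss(v, y):
    # l_{0/1}(v, y) = (1 - sign(v * y)) / 2, as in the text
    return (1.0 - np.sign(v * y)) / 2.0

def pu_risks(scores_p, scores_u, pi, loss=zero_one_loss):
    """Return (uPU, nnPU) empirical risk estimates.

    scores_p : classifier outputs on labeled-positive examples
    scores_u : classifier outputs on unlabeled examples
    pi       : class prior P(y = +1), assumed known
    """
    r_p_pos = loss(scores_p, +1.0).mean()  # positive-class risk on P data
    r_p_neg = loss(scores_p, -1.0).mean()  # negative-class risk on P data
    r_u_neg = loss(scores_u, -1.0).mean()  # negative-class risk on U data

    # uPU: unbiased estimator; the corrected term can go negative,
    # which is the source of the overfitting discussed above.
    upu = pi * r_p_pos + r_u_neg - pi * r_p_neg
    # nnPU: clip the negative-risk part at zero (non-negative correction).
    nnpu = pi * r_p_pos + max(0.0, r_u_neg - pi * r_p_neg)
    return upu, nnpu
```

When the unlabeled data is fit too well, `r_u_neg - pi * r_p_neg` can become negative; nnPU clips it at zero, so the nnPU estimate is never below the uPU estimate.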