An adaptive nearest neighbor rule for classification

Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund, Shay Moran

Neural Information Processing Systems

Find the smallest k for which the empirical bias exceeds Δ(n, k, δ), where Δ(n, k, δ) = c₁ √((log n + log(1/δ)) / k). Then, with probability at least 1 − δ, the resulting classifier gₙ satisfies the following: for every point x ∈ supp(µ), if n ≥ (C / adv(x)) · max(log(1/adv(x)), log(1/δ)), then gₙ(x) = g(x).
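The rule described above can be sketched in code: for a query point, scan k = 1, 2, … over the nearest neighbors and stop at the smallest k whose label bias clears the threshold Δ(n, k, δ). This is a minimal illustration under assumed conventions (labels in {−1, +1}, Euclidean distance, abstaining when no k qualifies); `c1` and `delta` follow the excerpt, while the rest is an assumption, not the paper's exact procedure.

```python
import numpy as np

def adaptive_knn_predict(X, y, query, delta=0.05, c1=1.0):
    """Sketch of an adaptive nearest neighbor rule (assumed setup):
    choose the smallest k whose neighborhood shows a label bias
    exceeding Delta(n, k, delta) = c1 * sqrt((log n + log(1/delta)) / k).
    Labels are assumed to be +1 / -1; returns 0 to abstain."""
    n = len(y)
    # Order training points by distance to the query.
    order = np.argsort(np.linalg.norm(X - query, axis=1))
    for k in range(1, n + 1):
        bias = abs(np.mean(y[order[:k]]))  # empirical label bias among k neighbors
        threshold = c1 * np.sqrt((np.log(n) + np.log(1.0 / delta)) / k)
        if bias > threshold:  # significant majority found at this k
            return int(np.sign(np.sum(y[order[:k]])))
    return 0  # no k gave a significant majority: abstain
```

The early-stopping scan is what makes the rule adaptive: k is chosen per query rather than fixed globally.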






Variational Inference with Tail-adaptive f-Divergence

Dilin Wang, Hao Liu, Qiang Liu

Neural Information Processing Systems

However, estimating and optimizing α-divergences requires importance sampling, which may have large or infinite variance due to the heavy tails of the importance weights.
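The variance problem mentioned above is easy to reproduce in a toy setting. The sketch below (an assumed two-Gaussian example, not the paper's method) computes importance weights w = p(x)/q(x) and shows that when the proposal q is narrower than the target p, the weights become heavy-tailed and their empirical spread blows up.

```python
import numpy as np

def importance_weights(samples, target_scale, proposal_scale):
    """log w = log p(x) - log q(x) for two zero-mean Gaussians
    (a toy setup chosen for illustration)."""
    log_p = -0.5 * samples**2 / target_scale**2 - np.log(target_scale)
    log_q = -0.5 * samples**2 / proposal_scale**2 - np.log(proposal_scale)
    return np.exp(log_p - log_q)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)          # draws from the proposal q = N(0, 1)
w_matched = importance_weights(x, 1.0, 1.0)  # q matches p: weights are constant
w_heavy = importance_weights(x, 3.0, 1.0)    # p wider than q: heavy-tailed weights
print(w_matched.std(), w_heavy.std())        # the mismatched case is far more variable
```

In the mismatched case E_q[w²] is actually infinite, so any importance-sampling estimate built on these weights has unbounded variance; this is exactly the failure mode that motivates a tail-adaptive choice of divergence.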