A Competitive Algorithm for Agnostic Active Learning
Eric Price
Neural Information Processing Systems
For some hypothesis classes and input distributions, active agnostic learning needs exponentially fewer samples than passive learning; for other classes and distributions, it offers little to no improvement. The most popular algorithms for agnostic active learning express their performance in terms of a parameter called the disagreement coefficient, but it is known that these algorithms are inefficient on some inputs.