
Online Improper Learning with an Approximation Oracle

Hazan, Elad; Hu, Wei; Li, Yuanzhi; Li, Zhiyuan

Neural Information Processing Systems

We study the following question: given an efficient approximation algorithm for an optimization problem, can we learn efficiently in the same setting? We give a formal affirmative answer in the form of a reduction from online learning to offline approximate optimization, using an efficient algorithm that guarantees near-optimal regret. The algorithm is efficient in the number of calls it makes to the given approximation oracle: only logarithmically many per iteration. This resolves an open question by Kalai and Vempala, and by Garber. Furthermore, our result applies to the more general setting of improper learning.
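Reductions of this kind build on the classical follow-the-perturbed-leader (FPL) paradigm of Kalai and Vempala, in which the offline oracle is invoked on a perturbed cumulative loss each round. The sketch below is a minimal toy illustration of that baseline idea, not the paper's algorithm: the decision set, loss vectors, and the exact `min(...)` oracle (standing in for an approximation oracle) are all illustrative assumptions.

```python
import random

def run_fpl(loss_rounds, decisions, eta=5.0, seed=0):
    """Toy Follow-the-Perturbed-Leader (FPL) loop.

    decisions:   finite set of 0/1 decision vectors (tuples)
    loss_rounds: one linear loss vector per round
    eta:         perturbation scale

    The `min(...)` call below is an exact oracle standing in for the
    approximation oracle discussed above; with an alpha-approximate
    oracle, FPL-style analyses yield alpha-regret guarantees instead.
    """
    rng = random.Random(seed)
    d = len(decisions[0])
    cum = [0.0] * d            # cumulative loss vector seen so far
    total = 0.0                # learner's cumulative loss
    for loss in loss_rounds:
        noise = [rng.uniform(0.0, eta) for _ in range(d)]
        # "oracle call": minimize the perturbed cumulative loss
        x = min(decisions,
                key=lambda v: sum((cum[i] - noise[i]) * v[i]
                                  for i in range(d)))
        total += sum(loss[i] * x[i] for i in range(d))
        cum = [c + l for c, l in zip(cum, loss)]
    # loss of the best fixed decision in hindsight, for comparison
    best = min(sum(l[i] * v[i] for l in loss_rounds for i in range(d))
               for v in decisions)
    return total, best
```

On a sequence where one decision is consistently better, the perturbation causes only a bounded number of early mistakes before the learner locks onto the better decision, which is the mechanism behind FPL's regret bound.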