A Trichotomy for Transductive Online Learning
Neural Information Processing Systems
The transductive online learning setting is similar to standard online learning, except that the adversary fixes the sequence of instances x_1,\dots,x_n to be labeled at the start of the game, and this sequence is known to the learner. Qualitatively, we prove a \emph{trichotomy}: as n grows, the minimal number of mistakes made by the learner can take only one of precisely three possible values: n, \Theta\left(\log(n)\right), or \Theta(1). Furthermore, which case holds is determined by a combination of the VC dimension and the Littlestone dimension. Quantitatively, we show a variety of bounds relating the number of mistakes to well-known combinatorial dimensions. In particular, we improve the known lower bound on the constant in the \Theta(1) case from \Omega\left(\sqrt{\log(d)}\right) to \Omega(\log(d)), where d is the Littlestone dimension.
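To make the protocol concrete, the sketch below (not from the paper) simulates one transductive online game: the adversary fixes x_1,\dots,x_n up front, the learner sees the whole instance sequence, predicts labels one round at a time, and mistakes are counted. The learner shown is a simple halving-style rule over a finite hypothesis class; all names (play_game, halving_predict, the threshold class) are hypothetical and chosen only for illustration.

```python
# Illustrative sketch of the transductive online learning protocol.
# The adversary fixes the instance sequence in advance; the learner sees it all,
# but the labels are revealed only after each prediction.
from typing import Callable, Sequence

Hypothesis = Callable[[float], int]  # maps an instance to a label in {0, 1}


def play_game(
    instances: Sequence[float],
    true_labels: Sequence[int],
    predict: Callable[[Sequence[float], Sequence[int], int], int],
) -> int:
    """Run one transductive online game and return the learner's mistake count.

    predict(instances, revealed, t) sees the full instance sequence, the labels
    revealed in rounds 0..t-1, and the current round index t.
    """
    mistakes = 0
    revealed: list[int] = []
    for t, y in enumerate(true_labels):
        y_hat = predict(instances, revealed, t)
        if y_hat != y:
            mistakes += 1
        revealed.append(y)  # the true label is revealed after the prediction
    return mistakes


def halving_predict(hypotheses: Sequence[Hypothesis]):
    """Version-space (halving-style) learner over a finite class.

    Keeping only hypotheses consistent with the revealed labels and predicting
    by majority vote makes at most log2(|H|) mistakes in the realizable case.
    """
    def predict(instances, revealed, t):
        consistent = [
            h for h in hypotheses
            if all(h(instances[s]) == revealed[s] for s in range(t))
        ]
        votes = sum(h(instances[t]) for h in consistent)
        return int(2 * votes >= len(consistent))
    return predict


if __name__ == "__main__":
    # Threshold class on [0, 1]: h_c(x) = 1 iff x >= c (VC dimension 1).
    thresholds = [lambda x, c=c / 10: int(x >= c) for c in range(11)]
    xs = [0.05, 0.15, 0.35, 0.55, 0.75, 0.95]
    ys = [thresholds[5](x) for x in xs]  # labels realized by the threshold at 0.5
    print("mistakes:", play_game(xs, ys, halving_predict(thresholds)))
```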