Computational Learning Theory
On The Structure of Parametric Tournaments with Application to Ranking from Pairwise Comparisons
We consider the classical problem of finding the minimum feedback arc set on tournaments (MFAST). The problem is NP-hard in general, and we study it for important classes of tournaments that arise naturally in the problem of learning to rank from pairwise comparisons. Specifically, we consider tournament classes that arise out of parametric preference matrices that can lead to cyclic preference relations. We investigate their structural properties via forbidden subtournament configurations. Towards this, we introduce Tournament Dimension, a combinatorial parameter that characterizes the size of a forbidden configuration for rank-r tournament classes, i.e., classes that arise out of pairwise preference matrices which lead to rank-r skew-symmetric matrices under a suitable link function.
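To make the MFAST objective concrete, here is a minimal sketch (not the paper's method) that brute-forces the fewest pairwise preferences any ranking must violate on a tiny tournament; the 3-cycle below is the smallest cyclic preference relation.

```python
# Illustrative only: brute-force MFAST on a small tournament.
# The tournament is a set of directed edges (i, j), meaning "i beats j".
from itertools import permutations

def feedback_arcs(ranking, edges):
    """Count edges (i, j) that disagree with the ranking, i.e., j is ranked above i."""
    pos = {v: k for k, v in enumerate(ranking)}
    return sum(1 for (i, j) in edges if pos[i] > pos[j])

def mfast_bruteforce(vertices, edges):
    """Exhaustively search all rankings; only feasible for very small tournaments."""
    return min(feedback_arcs(p, edges) for p in permutations(vertices))

# A 3-cycle (a beats b, b beats c, c beats a): every ranking violates exactly one edge.
edges = {("a", "b"), ("b", "c"), ("c", "a")}
print(mfast_bruteforce(["a", "b", "c"], edges))  # -> 1
```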
An Adaptive Algorithm for Learning with Unknown Distribution Drift
We develop and analyze a general technique for learning with an unknown distribution drift. Given a sequence of independent observations from the last T steps of a drifting distribution, our algorithm agnostically learns a family of functions with respect to the current distribution at time T. Unlike previous work, our technique does not require prior knowledge about the magnitude of the drift.
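The abstract does not spell out the algorithm; the following is a hedged toy sketch of the generic window-doubling idea behind drift-adaptive estimation, shown only to illustrate the trade-off between statistical error and drift, and assuming bounded [0, 1] observations. It is not the paper's procedure.

```python
# Toy sketch: grow a look-back window over the most recent samples and stop once two
# nested windows disagree by more than the statistical error, signalling that older
# data is too stale (drift dominates) to help the estimate at the current time.
import math

def adaptive_mean(samples, delta=0.05):
    """samples[-1] is the most recent observation; returns a mean estimate over an
    adaptively chosen suffix of the data (assumes samples lie in [0, 1])."""
    T = len(samples)
    w = 1
    best = samples[-1]
    while 2 * w <= T:
        mu_recent = sum(samples[-w:]) / w
        mu_wider = sum(samples[-2 * w:]) / (2 * w)
        # Hoeffding-style deviation bound for a window of size w (an assumption).
        eps = math.sqrt(math.log(2 / delta) / (2 * w))
        if abs(mu_recent - mu_wider) > 2 * eps:
            break  # the wider window is contaminated by drift; keep the current estimate
        best = mu_wider
        w *= 2
    return best
```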
An Optimal Elimination Algorithm for Learning a Best Arm
We consider the classic problem of (ε, δ)-PAC learning a best arm where the goal is to identify with confidence 1 − δ an arm whose mean is an ε-approximation to that of the highest mean arm in a multi-armed bandit setting. This problem is one of the most fundamental problems in statistics and learning theory, yet somewhat surprisingly its worst-case sample complexity is not well understood. In this paper we propose a new approach for (ε, δ)-PAC learning a best arm. This approach leads to an algorithm whose sample complexity converges to exactly the optimal sample complexity of (ε, δ)-learning the mean of n arms separately, and we complement this result with a conditional matching lower bound.
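For context, the elimination template this line of work builds on can be sketched as follows; this is the well-known Median Elimination scheme of Even-Dar et al., not the paper's new algorithm, and `pull` is an assumed reward oracle returning values in [0, 1].

```python
# Classical Median Elimination: repeatedly sample all surviving arms, drop the worse
# half, and tighten the accuracy/confidence budget so the total error stays within
# (eps, delta).
import math
import random

def median_elimination(arms, pull, eps, delta):
    """Return an arm whose mean is within eps of the best, with prob. >= 1 - delta."""
    arms = list(arms)
    eps_l, delta_l = eps / 4.0, delta / 2.0
    while len(arms) > 1:
        # Enough pulls per arm for an (eps_l / 2)-accurate empirical mean.
        n_pulls = int(math.ceil((4.0 / eps_l ** 2) * math.log(3.0 / delta_l)))
        means = {a: sum(pull(a) for _ in range(n_pulls)) / n_pulls for a in arms}
        # Keep the better half (arms at or above the empirical median).
        arms = sorted(arms, key=means.get, reverse=True)[: (len(arms) + 1) // 2]
        eps_l, delta_l = 0.75 * eps_l, delta_l / 2.0
    return arms[0]

# Toy usage with Bernoulli arms; the means below are assumed for illustration only.
true_means = [0.3, 0.5, 0.7, 0.65]
pull = lambda a: 1.0 if random.random() < true_means[a] else 0.0
print(median_elimination(range(len(true_means)), pull, eps=0.2, delta=0.1))
```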
Causal Discovery from Event Sequences by Local Cause-Effect Attribution
Sequences of events, such as crashes in the stock market or outages in a network, contain strong temporal dependencies, whose understanding is crucial to react to and influence future events. In this paper, we study the problem of discovering the underlying causal structure from event sequences. To this end, we introduce a new causal model, where individual events of the cause trigger events of the effect with dynamic delays. We show that in contrast to existing methods based on Granger causality, our model is identifiable for both instant and delayed effects. We base our approach on the Algorithmic Markov Condition, by which we identify the true causal network as the one that minimizes the Kolmogorov complexity. As the Kolmogorov complexity is not computable, we instantiate our model using Minimum Description Length and show that the resulting score identifies the causal direction.
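As a hedged toy illustration of the MDL principle invoked here (not the paper's actual score), one can compare causal directions by how cheaply the effect's event times are described via delays from the nearest preceding cause event; the delay code and orphan penalty below are arbitrary illustrative choices.

```python
# Toy MDL-style direction test: the direction whose delays compress better is preferred.
import math

def delay_bits(cause_times, effect_times):
    """Bits to describe each effect event by its delay to the nearest preceding cause event."""
    total = 0.0
    for t in effect_times:
        preceding = [c for c in cause_times if c <= t]
        if not preceding:
            total += 32.0  # fallback cost for an 'orphan' event (arbitrary constant)
        else:
            d = t - max(preceding)
            total += 1.0 + math.log2(d + 1.0)  # cheap for short, regular delays
    return total

def preferred_direction(a_times, b_times):
    """Return the direction with the lower toy description length."""
    return "A -> B" if delay_bits(a_times, b_times) <= delay_bits(b_times, a_times) else "B -> A"

# A triggers B with a delay of roughly two time steps.
a = [1, 10, 20, 31, 40]
b = [3, 12, 22, 33, 42]
print(preferred_direction(a, b))  # expected: "A -> B"
```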