Reviews: On Learning Markov Chains
–Neural Information Processing Systems
Summary: The paper's goal is to study minimax rates for learning problems on Markovian data. The author(s) consider an interesting setting where the observed data sequence follows a Markovian dependency pattern. They consider discrete-state Markov chains with state space [k] = {1, …, k} and study the minimax error rates for the following two tasks. Prediction: given a trajectory X_1 → X_2 → … → X_n from an unknown chain M, predict the probability distribution of the next state X_{n+1}, i.e., output an estimate P̂(· | X_1, …, X_n) whose risk is E[ KL( M(· | X_n) || P̂(· | X_1, …, X_n) ) ], where the expectation is over the trajectory X_1, …, X_n. The paper focuses on KL-divergence as the loss function and presents a conjecture for how the L_1 loss should scale with respect to k and n.
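To make the prediction task concrete, here is a minimal sketch of a next-state predictor for this setting: an add-constant (Laplace-smoothed) estimate of the transition row of the last observed state. This is an illustrative baseline under assumed notation, not necessarily the estimator analyzed in the paper; the function name and the smoothing parameter `alpha` are choices made for this sketch.

```python
import numpy as np

def predict_next_state(trajectory, k, alpha=1.0):
    """Add-alpha smoothed estimate of P(X_{n+1} = . | X_1, ..., X_n).

    Illustrative baseline only (not the paper's estimator): count
    observed transitions, then smooth the row of the last state.
    States are integers in {0, ..., k-1}.
    """
    counts = np.zeros((k, k))
    # Count each observed transition X_t -> X_{t+1} along the trajectory.
    for s, t in zip(trajectory[:-1], trajectory[1:]):
        counts[s, t] += 1
    # Predict from the row of the last observed state, with add-alpha smoothing.
    row = counts[trajectory[-1]] + alpha
    return row / row.sum()
```

For example, on the trajectory 0 → 1 → 0 → 1 → 0 with k = 2, the smoothed prediction from state 0 is (0+1)/4 = 0.25 for state 0 and (2+1)/4 = 0.75 for state 1.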
Oct-8-2024, 05:14:10 GMT