State Aggregation Learning from Markov Transition Data
State aggregation is a popular model reduction method rooted in optimal control. It reduces the complexity of engineering systems by mapping the system's states into a small number of meta-states. The choice of aggregation map often depends on the data analysts' knowledge and is largely ad hoc. In this paper, we propose a tractable algorithm that estimates the probabilistic aggregation map from the system's trajectory. We adopt a soft-aggregation model, where each meta-state has a signature raw state, called an anchor state.
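The soft-aggregation model described above can be sketched numerically. This is a minimal illustration, not the authors' algorithm: it assumes the transition matrix factors as P = UV, where U holds aggregation probabilities and V holds disaggregation probabilities (the names U, V, n, r are illustrative), and shows what an anchor state looks like in this parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 2  # number of raw states and meta-states (illustrative sizes)

# Row-stochastic aggregation map U: U[i, k] = P(meta-state k | raw state i).
U = rng.random((n, r))
U[0] = [1.0, 0.0]  # state 0 is an anchor: it belongs to meta-state 0 w.p. 1
U[1] = [0.0, 1.0]  # state 1 is an anchor for meta-state 1
U /= U.sum(axis=1, keepdims=True)

# Row-stochastic disaggregation map V: V[k, j] = P(next state j | meta-state k).
V = rng.random((r, n))
V /= V.sum(axis=1, keepdims=True)

# The induced transition matrix is row-stochastic and has rank at most r.
P = U @ V
assert np.allclose(P.sum(axis=1), 1.0)
assert np.linalg.matrix_rank(P) <= r
```

In this parameterization, an anchor state pins down the disaggregation distribution of its meta-state, which is what makes the factorization identifiable under the anchor-state assumption.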
Reviews: State Aggregation Learning from Markov Transition Data
This paper studies the problem of learning a soft state aggregation of a Markov model, in which there are r hidden meta-states, each corresponding to a distribution over the observed states of the Markov model. Under the anchor-state assumption, the authors propose an algorithm that provably learns the state-aggregation model from the Markov chain's trajectory. They evaluate their algorithm on a Manhattan taxi-trip dataset, which yields interesting discoveries. There has been a substantial body of work on estimating the low-rank transition matrix itself and on matrix factorization in the topic-modelling setting, and this work seems to connect the two problems. The paper is well presented and easy to follow. I have the following questions regarding the novelty and impact of this paper.
Online Factorization and Partition of Complex Networks From Random Walks
Yang, Lin F., Braverman, Vladimir, Zhao, Tuo, Wang, Mengdi
Finding the reduced-dimensional structure is critical to understanding complex networks. Existing approaches such as spectral clustering are applicable only when the full network is explicitly observed. In this paper, we focus on the online factorization and partition of implicit large-scale networks based on observations from an associated random walk. We formulate this as a nonconvex stochastic factorization problem and propose an efficient and scalable stochastic generalized Hebbian algorithm. The algorithm is able to process dependent state-transition data dynamically generated by the underlying network and learn a low-dimensional representation for each vertex. By applying a diffusion approximation analysis, we show that the continuous-time limiting process of the stochastic algorithm converges globally to the "principal components" of the Markov chain and achieves a nearly optimal sample complexity. Given the learned low-dimensional representations, we further apply clustering techniques to recover the network partition. We show that when the associated Markov process is lumpable, one can recover the partition exactly with high probability. We apply the proposed approach to model the traffic flow of Manhattan as city-wide random walks. By using our algorithm to analyze the taxi-trip data, we discover a latent partition of Manhattan that closely matches the traffic dynamics.
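The online factorization idea above can be sketched with a toy Hebbian-style update on streaming transitions. This is not the authors' exact algorithm: the chain below, the step size, and the update rule are illustrative assumptions, showing only the general shape of learning vertex embeddings from one transition at a time on a lumpable two-community chain.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, eta = 20, 2, 0.05  # states, embedding dimension, step size (illustrative)

# A lumpable toy chain: two communities with dense within-block transitions.
P = np.full((n, n), 0.1 / n)
half = n // 2
P[:half, :half] += 0.9 / half
P[half:, half:] += 0.9 / half
P /= P.sum(axis=1, keepdims=True)

# Online Hebbian-style updates on streaming transitions (s -> s_next).
W = rng.normal(size=(n, r)) / np.sqrt(n)  # one r-dim embedding per vertex
s = 0
for _ in range(5000):
    s_next = rng.choice(n, p=P[s])
    x = np.zeros(n); x[s] = 1.0       # indicator of the current state
    y = np.zeros(n); y[s_next] = 1.0  # indicator of the next state
    h = W.T @ x                       # current embedding of state s
    # The Hebbian term pulls W toward the empirical transition statistics;
    # the second, Oja-style term keeps the embeddings from growing unboundedly.
    W += eta * (np.outer(y, h) - W @ np.outer(h, h))
    s = s_next

assert np.all(np.isfinite(W))
```

As in the abstract, one would then run a clustering step (e.g., k-means) on the rows of W to recover the network partition; when the chain is lumpable, the embeddings of vertices in the same block concentrate together.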