Learning-Based Mean-Payoff Optimization in an Unknown MDP under Omega-Regular Constraints
Křetínský, Jan, Pérez, Guillermo A., Raskin, Jean-François
arXiv.org Artificial Intelligence
We formalize the problem of maximizing the mean-payoff value with high probability while satisfying a parity objective in a Markov decision process (MDP) whose probabilistic transition function and reward function are both unknown. Assuming that the support of the transition function and a lower bound on the minimal transition probability are known in advance, we show that in MDPs consisting of a single end component, two combinations of guarantees on the parity and mean-payoff objectives can be achieved, depending on how much memory one is willing to use. (i) For all $\epsilon$ and $\gamma$, we can construct an online-learning finite-memory strategy that almost surely satisfies the parity objective and achieves an $\epsilon$-optimal mean payoff with probability at least $1 - \gamma$. (ii) Alternatively, for all $\epsilon$ and $\gamma$, there exists an online-learning infinite-memory strategy that satisfies the parity objective surely and achieves an $\epsilon$-optimal mean payoff with probability at least $1 - \gamma$. We extend these results to MDPs consisting of more than one end component in a natural way. Finally, we show that the aforementioned guarantees are tight, i.e., there exist MDPs for which stronger combinations of the guarantees cannot be ensured.
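To make the mean-payoff objective concrete, the following is a minimal sketch, not the paper's algorithm: a hypothetical two-state MDP forming a single end component, with a memoryless strategy whose long-run average reward (mean payoff) is estimated empirically from one long run. All state, action, and reward names here are invented for illustration.

```python
import random

# Hypothetical toy MDP (a single end component): states 0 and 1.
# transitions[state][action] = list of (next_state, probability)
# rewards[state][action] = immediate reward for playing action in state
transitions = {
    0: {"a": [(0, 0.5), (1, 0.5)], "b": [(1, 1.0)]},
    1: {"a": [(0, 1.0)]},
}
rewards = {0: {"a": 1.0, "b": 0.0}, 1: {"a": 2.0}}

def mean_payoff(strategy, steps=200_000, seed=0):
    """Empirical mean payoff of a memoryless strategy over a long run."""
    rng = random.Random(seed)
    state, total = 0, 0.0
    for _ in range(steps):
        action = strategy[state]
        total += rewards[state][action]
        nexts, probs = zip(*transitions[state][action])
        state = rng.choices(nexts, weights=probs)[0]
    return total / steps

# Playing "a" everywhere induces a Markov chain with stationary
# distribution (2/3, 1/3), so the mean payoff converges to
# 2/3 * 1 + 1/3 * 2 = 4/3.
print(mean_payoff({0: "a", 1: "a"}))
```

For comparison, the strategy `{0: "b", 1: "a"}` deterministically cycles 0 → 1 → 0 with rewards 0 and 2, giving mean payoff exactly 1; so "a" everywhere is the better choice in this toy example. In the paper's setting the transition probabilities and rewards are not given up front and must be learned online while the parity objective is maintained.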
Apr-24-2018