Identifiability in inverse reinforcement learning

Neural Information Processing Systems

Inverse reinforcement learning attempts to reconstruct the reward function in a Markov decision problem, using observations of agent actions. As already observed by Russell [1998], the problem is ill-posed, and the reward function is not identifiable, even in the presence of perfect information about optimal behavior. We provide a resolution to this non-identifiability for problems with entropy regularization.
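One way to see the entropy-regularized resolution concretely: in a soft (entropy-regularized) MDP the optimal policy satisfies log π*(a|s) = Q(s,a) − V(s), so given the policy, the dynamics, and the discount, the soft-Bellman relation can be inverted to read the reward back off the policy. The following is a minimal sketch on a random tabular MDP; all sizes and values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 4, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = next-state distribution
r = rng.normal(size=(nS, nA))                  # ground-truth reward

# Soft (entropy-regularized) value iteration: V(s) = log sum_a exp Q(s, a)
V = np.zeros(nS)
for _ in range(500):
    Q = r + gamma * (P @ V)                    # Q[s, a]; P @ V contracts the last axis
    m = Q.max(axis=1)
    V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))
pi = np.exp(Q - V[:, None])                    # soft-optimal policy, log pi = Q - V

# Inversion: the soft-Bellman structure gives the reward back exactly
r_rec = np.log(pi) + V[:, None] - gamma * (P @ V)
# r_rec matches r up to value-iteration convergence error
```

Note that this inversion uses V, which is not observable from the policy alone; without the extra structure supplied by the regularization (and known dynamics and discount), the same log π constrains the reward only up to the usual shaping ambiguity.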


Supplementary material: Inverse Reinforcement Learning in a Continuous State Space with Formal Guarantees. Appendix A: Proofs of lemmas and theorems

Neural Information Processing Systems

We note that the interchange of the integral and infinite summation is justified by Section 3.7 in [5], since the coefficients of Ẑ … Now, define the action sequence (a)_n such that a_1 = a and a_n = a_1 for all n > 1. Then we can use subadditivity of measure to bound the maximum difference across all entries of [kZ]. Therefore, the induced infinity-norm error of Ẑ is less than ε if the element-wise error is less than ε/k. Therefore, α̂ᵀFφ(s) is ρ-Lipschitz if the absolute value of its derivative is bounded by ρ, i.e. … Since F̂ has all zeros beyond the k-th column and row, each infinite matrix F̂ can be treated as a k × k matrix.


384babc3e7faa44cf1ca671b74499c3b-Paper.pdf

Neural Information Processing Systems

The IRL setting is remarkably useful for automated control, in situations where the reward function is difficult to specify manually, or as a means to extract agent preferences.


Inverse Reinforcement Learning with the Average Reward Criterion

Neural Information Processing Systems

We study the problem of Inverse Reinforcement Learning (IRL) with an average-reward criterion. The goal is to recover an unknown policy and a reward function when the learner only has samples of states and actions from an experienced expert agent. Previous IRL methods assume that the expert is trained in a discounted environment and that the discount factor is known.


On the Correctness and Sample Complexity of Inverse Reinforcement Learning

Neural Information Processing Systems

Inverse reinforcement learning (IRL) is the problem of finding a reward function that generates a given optimal policy for a given Markov Decision Process. This paper presents an algorithm-independent geometric analysis of the IRL problem with finite states and actions. An L1-regularized Support Vector Machine formulation of the IRL problem, motivated by the geometric analysis, is then proposed with the basic objective of the inverse reinforcement learning problem in mind: to find a reward function that generates a specified optimal policy. The paper further analyzes the proposed formulation with $n$ states and $k$ actions, and shows a sample complexity of $O(d^2 \log (nk))$ for transition probability matrices with at most $d$ non-zeros per row, for recovering a reward function that generates a policy satisfying Bellman's optimality condition with respect to the true transition probabilities.
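The geometry underlying such analyses traces back to the classical characterization (due to Ng and Russell) that a policy π is optimal for a state-only reward r iff (P_π − P_a)(I − γP_π)⁻¹ r ≥ 0 elementwise for every action a; it is linear constraints of this kind that a margin-based (e.g. SVM-style) formulation separates. Below is a minimal numpy check of that condition on a random MDP, a hedged sketch rather than the paper's formulation; all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n), size=(k, n))   # P[a, s] = next-state distribution
r = rng.normal(size=n)                        # state-only reward, as in the classic analysis

# Value iteration to find an optimal (greedy) policy for (P, r)
V = np.zeros(n)
for _ in range(1000):
    Q = r[:, None] + gamma * (P @ V).T       # Q[s, a]
    V = Q.max(axis=1)
pi = Q.argmax(axis=1)                        # deterministic optimal policy

P_pi = P[pi, np.arange(n)]                   # (n, n): row s = dynamics under pi(s)
V_pi = np.linalg.solve(np.eye(n) - gamma * P_pi, r)   # V^pi = (I - gamma P_pi)^{-1} r

# Ng-Russell optimality condition: (P_pi - P_a) V^pi >= 0 elementwise for all a
margins = np.array([(P_pi - P[a]) @ V_pi for a in range(k)])  # (k, n)
```

The rows of `margins` are exactly the linear functionals of r whose nonnegativity defines the cone of rewards consistent with the demonstrated policy; an L1 penalty then selects a sparse point inside that cone.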




Inverse Reinforcement Learning with Locally Consistent Reward Functions

Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet

Neural Information Processing Systems

Existing inverse reinforcement learning (IRL) algorithms have assumed each expert's demonstrated trajectory to be produced by only a single reward function. This paper presents a novel generalization of the IRL problem that allows each trajectory to be generated by multiple locally consistent reward functions, hence catering to more realistic and complex experts' behaviors. Solving our generalized IRL problem thus involves not only learning these reward functions but also the stochastic transitions between them at any state (including unvisited states). By representing our IRL problem with a probabilistic graphical model, an expectation-maximization (EM) algorithm can be devised to iteratively learn the different reward functions and the stochastic transitions between them in order to jointly improve the likelihood of the expert's demonstrated trajectories. As a result, the most likely partition of a trajectory into segments that are generated from different locally consistent reward functions selected by EM can be derived. Empirical evaluation on synthetic and real-world datasets shows that our IRL algorithm outperforms the state-of-the-art EM clustering with maximum likelihood IRL, which is, interestingly, a reduced variant of our approach.
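The E-step of such an EM algorithm amounts to smoothing over the latent sequence of reward modes that generated the trajectory. The sketch below shows only that smoothing step, under simplifying assumptions not from the paper: fixed Boltzmann action likelihoods per mode and a state-independent sticky mode-switch prior (the paper's model conditions transitions on state and re-estimates the reward functions in the M-step). All numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
T, nA, K = 6, 3, 2                      # trajectory length, actions, reward modes
# Assumed known for this sketch: Boltzmann action likelihoods under each mode
# (in the full algorithm these come from the current reward estimates)
logits = rng.normal(size=(K, nA))
pi = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # pi[k, a]
A_trans = np.array([[0.9, 0.1], [0.1, 0.9]])  # sticky mode-switch prior
actions = rng.integers(0, nA, size=T)          # observed action sequence

# E-step: forward-backward smoothing over the latent reward mode (standard HMM)
alpha = np.zeros((T, K)); beta = np.ones((T, K))
alpha[0] = 0.5 * pi[:, actions[0]]             # uniform initial mode prior
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A_trans) * pi[:, actions[t]]
for t in range(T - 2, -1, -1):
    beta[t] = A_trans @ (beta[t + 1] * pi[:, actions[t + 1]])
resp = alpha * beta
resp /= resp.sum(axis=1, keepdims=True)        # resp[t, k] = P(mode k at step t | trajectory)
```

Thresholding or taking the argmax of `resp` along the mode axis yields the most likely partition of the trajectory into locally consistent segments, which is the quantity the abstract describes recovering.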