A unifying, game-theoretic framework for imitation learning

AIHub 

Imitation learning (IL) is the problem of finding a policy π that is as close as possible to an expert's policy π_E. IL algorithms can be grouped broadly into (a) online, (b) offline, and (c) interactive methods. For each setting, we provide performance bounds for learned policies that apply to all algorithms, provably efficient algorithmic templates for achieving said bounds, and practical realizations that outperform recent work.

From beating the world champion at Go (Silver et al.) to getting cars to drive themselves (Bojarski et al.), we've seen unprecedented successes in learning to make sequential decisions over the last few years. Viewed algorithmically, many of these accomplishments share a common paradigm: imitation learning (IL).
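As a minimal illustration of the offline setting (a sketch, not the method analyzed in this work), behavioral cloning fits a policy directly to expert state-action pairs; the toy demonstrations and discrete-state setup below are assumptions made for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical expert demonstrations: (state, action) pairs logged offline.
demos = [(0, 1), (0, 1), (0, 0), (1, 2), (1, 2), (2, 0)]

# Behavioral cloning for discrete states: in each state, pick the expert's
# most frequent action (the maximum-likelihood estimate of pi_E).
counts = defaultdict(Counter)
for state, action in demos:
    counts[state][action] += 1

policy = {s: c.most_common(1)[0][0] for s, c in counts.items()}
print(policy)  # {0: 1, 1: 2, 2: 0}
```

Note that such a purely offline learner never interacts with the environment, which is exactly why its errors can compound at test time relative to online or interactive methods.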
