Explain My Surprise: Learning Efficient Long-Term Memory by Predicting Uncertain Outcomes

Neural Information Processing Systems

In many sequential tasks, a model needs to remember relevant events from the distant past to make correct predictions. Unfortunately, a straightforward application of gradient-based training requires intermediate computations to be stored for every element of a sequence. This requires storing prohibitively large amounts of intermediate data if a sequence consists of thousands or even millions of elements, and as a result makes learning of very long-term dependencies infeasible.
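The memory blow-up described above can be seen in a toy recurrence. The sketch below (my illustration, not code from the paper) contrasts naive backpropagation-through-time, which caches one activation per timestep, with a truncated variant that keeps only a fixed window; the recurrence `h_t = a*h_{t-1} + x_t` and the `window` parameter are assumptions for illustration only.

```python
# Toy illustration: why naive backprop-through-time needs O(T) memory.
# We run a simple linear recurrence h_t = a*h_{t-1} + x_t and cache every
# intermediate hidden state, since the backward pass would need them all.

def bptt_forward(xs, a=0.9):
    """Forward pass caching every hidden state for the backward pass."""
    h, cache = 0.0, []
    for x in xs:
        h = a * h + x
        cache.append(h)          # one stored value per timestep -> O(T) memory
    return h, cache

def truncated_forward(xs, a=0.9, window=10):
    """Truncated BPTT variant: only the last `window` states are kept."""
    h, cache = 0.0, []
    for x in xs:
        h = a * h + x
        cache.append(h)
        if len(cache) > window:  # discard old activations -> O(window) memory
            cache.pop(0)
    return h, cache

T = 1000
_, full = bptt_forward([1.0] * T)
_, trunc = truncated_forward([1.0] * T, window=10)
print(len(full), len(trunc))     # stored activations: 1000 vs 10
```

Truncation caps memory but silently discards gradients from events older than the window, which is exactly why very long-term dependencies become hard to learn.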


Teaching Inverse Reinforcement Learners via Features and Demonstrations

Luis Haug, Sebastian Tschiatschek, Adish Singla

Neural Information Processing Systems

We introduce a natural quantity, the teaching risk, which measures the potential suboptimality of policies that look optimal to the learner in this setting. We show that bounds on the teaching risk guarantee that the learner is able to find a near-optimal policy using standard algorithms based on inverse reinforcement learning. Based on these findings, we suggest a teaching scheme in which the expert can decrease the teaching risk by updating the learner's worldview, and thus ultimately enable her to find a near-optimal policy.
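The phenomenon behind the teaching risk can be shown with a toy numeric example (my illustration; this is not the paper's formal definition, and the weights and feature expectations below are invented for the sketch): when the learner's worldview omits some reward features, a policy that looks optimal to her can be arbitrarily worse under the true reward.

```python
# Toy illustration: a policy that looks optimal in the learner's restricted
# worldview can be suboptimal under the expert's true reward.

w_true = [1.0, 2.0]          # true reward weights over two features (assumed)
# Feature expectations of three candidate policies: [visible, hidden]
mu = {"A": [1.0, 0.0], "B": [0.9, 1.0], "C": [0.5, 0.5]}

def value(feats, weights):
    return sum(f * w for f, w in zip(feats, weights))

# Learner's worldview: only feature 0 is observed.
learner_value = {p: f[0] * w_true[0] for p, f in mu.items()}
true_value = {p: value(f, w_true) for p, f in mu.items()}

learner_pick = max(learner_value, key=learner_value.get)  # looks best to her
best = max(true_value, key=true_value.get)                # actually best
gap = true_value[best] - true_value[learner_pick]
print(learner_pick, best, round(gap, 2))
```

Here the learner picks policy A (best on the visible feature alone) while B is truly optimal; the resulting value gap is the kind of suboptimality the teaching risk bounds, and revealing the hidden feature to the learner, i.e., updating her worldview, removes it.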