Andersen, Garrett
Artificial Intelligence for Prosthetics - challenge solutions
Kidziński, Łukasz, Ong, Carmichael, Mohanty, Sharada Prasanna, Hicks, Jennifer, Carroll, Sean F., Zhou, Bo, Zeng, Hongsheng, Wang, Fan, Lian, Rongzhong, Tian, Hao, Jaśkowski, Wojciech, Andersen, Garrett, Lykkebø, Odd Rune, Toklu, Nihat Engin, Shyam, Pranav, Srivastava, Rupesh Kumar, Kolesnikov, Sergey, Hrinchuk, Oleksii, Pechenko, Anton, Ljungström, Mattias, Wang, Zhen, Hu, Xu, Hu, Zehong, Qiu, Minghui, Huang, Jun, Shpilman, Aleksei, Sosin, Ivan, Svidchenko, Oleg, Malysheva, Aleksandra, Kudenko, Daniel, Rane, Lance, Bhatt, Aditya, Wang, Zhengfei, Qi, Penghui, Yu, Zeyang, Peng, Peng, Yuan, Quan, Li, Wenxin, Tian, Yunsheng, Yang, Ruihan, Ma, Pingchuan, Khadka, Shauharda, Majumdar, Somdeb, Dwiel, Zach, Liu, Yinyin, Tumer, Evren, Watson, Jeremy, Salathé, Marcel, Levine, Sergey, Delp, Scott
In the NeurIPS 2018 Artificial Intelligence for Prosthetics challenge, participants were tasked with building a controller for a musculoskeletal model with the goal of matching a given time-varying velocity vector. Top participants were invited to describe their algorithms. In this work, we describe the challenge and present thirteen solutions that used deep reinforcement learning approaches. Many solutions used similar relaxations and heuristics, such as reward shaping, frame skipping, discretization of the action space, symmetry, and policy blending. However, each team implemented different modifications of the known algorithms by, for example, dividing the task into subtasks, learning low-level control, incorporating expert knowledge, or using imitation learning.
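
Frame skipping is one of the heuristics the abstract lists; the Python sketch below (my own illustration, not any team's code) shows a generic frame-skip wrapper around a hypothetical environment exposing gym-style reset()/step(action) methods that return (obs, reward, done, info):

# Illustrative sketch only: repeat each chosen action for `skip` simulation
# steps and accumulate the reward, so the policy is queried less often.
class FrameSkip:
    def __init__(self, env, skip=4):
        self.env = env        # assumed gym-style environment
        self.skip = skip

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward, obs, done, info = 0.0, None, False, {}
        for _ in range(self.skip):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info

Repeating each action over several simulation steps shortens the effective decision horizon and reduces the number of policy queries per episode, which is one reason such wrappers are common with expensive physics simulations.
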
Active Exploration for Learning Symbolic Representations
Andersen, Garrett, Konidaris, George
We introduce an online active exploration algorithm for learning an abstract symbolic model of an environment in a data-efficient manner. Our algorithm has two parts: the first quickly generates an intermediate Bayesian symbolic model from the data the agent has collected so far; the second then uses that model to guide future exploration towards regions of the state space about which the model is uncertain. We show that our algorithm outperforms random and greedy exploration policies on two different computer game domains. The first domain is an Asteroids-inspired game with complex dynamics but basic logical structure. The second is the Treasure Game, with simpler dynamics but more complex logical structure.
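
As a concrete illustration of exploration directed by model uncertainty (a minimal sketch under my own assumptions, not the paper's implementation), one could keep a Beta posterior for each region-option pair and always try the pair whose posterior variance is largest:

# Sketch: uncertainty-directed exploration with Beta posteriors.
from collections import defaultdict

class UncertaintyDirectedExplorer:
    def __init__(self):
        # (region, option) -> [successes, failures], i.e. a Beta(1+s, 1+f) posterior
        self.counts = defaultdict(lambda: [0, 0])

    def _variance(self, key):
        s, f = self.counts[key]
        a, b = 1 + s, 1 + f
        return (a * b) / ((a + b) ** 2 * (a + b + 1))  # variance of Beta(a, b)

    def choose(self, candidates):
        # candidates: iterable of (region, option) pairs reachable right now;
        # pick the one the model is most uncertain about.
        return max(candidates, key=self._variance)

    def update(self, region, option, succeeded):
        self.counts[(region, option)][0 if succeeded else 1] += 1
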
Fast Equilibrium Computation for Infinitely Repeated Games
Andersen, Garrett (Duke University), Conitzer, Vincent (Duke University)
It is known that an equilibrium of an infinitely repeated two-player game (with limit average payoffs) can be computed in polynomial time, as follows: according to the folk theorem, we compute minimax strategies for both players to calculate the punishment values, and subsequently find a mixture over outcomes whose payoffs exceed these punishment values. However, for very large games, even computing minimax strategies can be prohibitive. In this paper, we propose an algorithmic framework for computing equilibria of repeated games that does not require linear programming and that does not necessarily need to inspect all payoffs of the game. Such an algorithm necessarily fails to compute an equilibrium in some cases, but we show mathematically that it succeeds quickly most of the time on uniformly random games, and demonstrate this experimentally for other classes of games. This also holds for games with more than two players, for which no efficient general algorithms are known.
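
For reference, the folk-theorem recipe summarized at the start of the abstract can be sketched as follows. This illustration uses pure-strategy minimax punishments for brevity; the true punishment values generally require mixed strategies computed by linear programming, which is exactly the cost the paper's framework seeks to avoid:

# Illustrative sketch of the folk-theorem recipe, not the paper's algorithm.
import numpy as np

def pure_minimax(payoff):
    # Row player's security level when the column player punishes with a pure column.
    return payoff.max(axis=0).min()

def folk_theorem_outcome(A, B):
    # A, B: payoff matrices (rows = player 1's actions, columns = player 2's actions).
    v1 = pure_minimax(A)      # player 1's punishment value (player 2 punishes)
    v2 = pure_minimax(B.T)    # player 2's punishment value (player 1 punishes)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= v1 and B[i, j] >= v2:
                # Play (i, j) forever; revert to punishment after any deviation.
                return (i, j), (v1, v2)
    # May find nothing even when an equilibrium exists, since pure minimax
    # overestimates the punishment values achievable with mixed strategies.
    return None, (v1, v2)

Because pure minimax values upper-bound the true mixed punishment values, any profile found this way can be supported as an equilibrium of the repeated game by threatening permanent minimax punishment after a deviation.
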