How to Abstract Intelligence? (If Verification Is in Order)
Pathak, Shashank (Istituto Italiano di Tecnologia) | Pulina, Luca (Università degli Studi di Sassari) | Metta, Giorgio (Istituto Italiano di Tecnologia) | Tacchella, Armando (Università degli Studi di Genova)
In this paper, we focus on learning intelligent agents through model-free reinforcement learning. Rather than arguing that reinforcement learning is the right abstraction for attaining intelligent behavior, we consider the issue of finding useful abstractions to represent the agent and the environment when verification is in order. Indeed, verifying that the agent's behavior complies with some stated safety property — an "Asimovian" perspective — only adds to the challenge that abstracting intelligence poses in itself. In the paper, we present an example application concerning the verification of abstractions in model-free learning, and we discuss potentially more useful abstractions in the same context.
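To make the combination of model-free learning and Asimovian-style verification concrete, the following is a minimal sketch (not the authors' system): tabular Q-learning on a toy corridor environment, followed by a brute-force check that the learned greedy policy never enters an unsafe state. All names and parameters (N_STATES, UNSAFE, GOAL, the learning rate, etc.) are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: model-free learning (tabular Q-learning) plus a
# post-hoc verification pass over the learned policy's abstract state space.
import random

N_STATES, UNSAFE, GOAL = 6, 0, 5          # states 0..5; state 0 is unsafe (assumed)
ACTIONS = (-1, +1)                        # move left / move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    """Toy deterministic dynamics: clamp to the corridor, terminate at GOAL or UNSAFE."""
    s2 = max(0, min(N_STATES - 1, s + ACTIONS[a]))
    if s2 == GOAL:
        return s2, 1.0, True
    if s2 == UNSAFE:
        return s2, -1.0, True
    return s2, 0.0, False

# Model-free learning phase: epsilon-greedy Q-learning from a fixed start state.
for _ in range(5000):
    s, done = 2, False
    while not done:
        if random.random() < 0.1:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        Q[s][a] += 0.1 * (r + 0.95 * max(Q[s2]) - Q[s][a])
        s = s2

def greedy(s):
    return max((0, 1), key=lambda a: Q[s][a])

# Verification phase: exhaustively simulate the deterministic greedy policy
# from every non-terminal start state and check the safety property
# "the policy never enters the unsafe state".
for start in range(1, N_STATES - 1):
    s, done, t = start, False, 0
    while not done and t < 50:
        s, _, done = step(s, greedy(s))
        assert s != UNSAFE, f"safety property violated from start state {start}"
        t += 1
print("greedy policy verified safe from all start states")
```

Because the environment abstraction here is finite and deterministic, verification reduces to exhaustive simulation; the paper's concern is precisely how to choose abstractions that keep such checks feasible for richer agents and environments.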