A Socially Aware Reinforcement Learning Agent for The Single Track Road Problem
We present the single track road problem. In this problem, two agents face each other at opposite ends of a road that can only allow one agent to pass at a time. We focus on the scenario in which one agent is human, while the other is an autonomous agent. We run experiments with human subjects in a simple grid domain, which simulates the single track road problem. We show that when data is limited, building an accurate human model is very challenging, and that a reinforcement learning agent based on this data does not perform well in practice. However, we show that an agent that tries to maximize a linear combination of the human's utility and its own utility achieves a high score, and significantly outperforms other baselines, including an agent that tries to maximize only its own utility.

While humans can cope with new situations quite easily, even state-of-the-art algorithms struggle with situations that they have not been trained on. Unfortunately, when it comes to autonomous vehicles the results may be devastating. One example of an uncommon, yet important scenario for autonomous vehicles is the single track road problem. In this problem, two vehicles traveling in opposite directions must cross a narrow road that is not wide enough to allow both vehicles to pass at the same time.
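The socially aware objective described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration: the weight `alpha`, the action names, and the utility values are assumptions for demonstration, not details taken from the paper.

```python
# Hypothetical sketch of a socially aware objective: the agent maximizes
# a weighted linear combination of its own utility and the human's utility.
# `alpha`, the action names, and the utility values below are illustrative
# assumptions, not taken from the paper.

def socially_aware_reward(agent_utility: float, human_utility: float,
                          alpha: float = 0.5) -> float:
    """Linear combination of the agent's and the human's utility."""
    return alpha * agent_utility + (1 - alpha) * human_utility

def choose_action(actions, utilities, alpha=0.5):
    # utilities: dict mapping action -> (agent_utility, human_utility)
    return max(actions,
               key=lambda a: socially_aware_reward(*utilities[a], alpha))

# Example: yielding benefits the human a lot at a small cost to the agent,
# so with alpha = 0.5 the socially aware agent prefers to yield, while a
# purely selfish agent (alpha = 1.0) would advance.
utilities = {"yield": (-1.0, 3.0), "advance": (2.0, -3.0)}
print(choose_action(["yield", "advance"], utilities))  # prints "yield"
```

With `alpha = 1.0` the same code reduces to the purely self-interested baseline the paper compares against.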
arXiv.org Artificial Intelligence
Sep-22-2021
- Country:
- Asia
- Japan > Honshū
- Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- Middle East > Israel (0.04)
- North America > United States
- Utah > Salt Lake County > Salt Lake City (0.04)
- Genre:
- Research Report > Experimental Study (0.46)
- Industry:
- Leisure & Entertainment > Games (0.94)
- Transportation > Ground
- Road (0.69)