Optimizing Autonomous Driving for Safety: A Human-Centric Approach with LLM-Enhanced RLHF
Yuan Sun, Navid Salami Pargoo, Peter J. Jin, Jorge Ortiz
–arXiv.org Artificial Intelligence
Reinforcement Learning from Human Feedback (RLHF) has become popular for aligning large language models (LLMs), succeeding where traditional Reinforcement Learning (RL) often falls short. Current autonomous driving methods typically use either human feedback in machine learning (including RL) or LLMs, but rarely both, and most feedback directly guides the car agent's learning process (e.g., by controlling the car). RLHF is usually applied in the fine-tuning step and requires explicit human "preferences," which are not commonly used to optimize autonomous driving models. In this research, we combine RLHF and LLMs to enhance autonomous driving safety. Because training a model with human guidance from scratch is inefficient, our framework starts from a pre-trained autonomous car agent and adds multiple human-controlled agents, such as cars and pedestrians, to simulate real-life road environments; the autonomous car model itself is not directly controlled by humans. We integrate both physical and physiological feedback to fine-tune the model, using LLMs to optimize this process. This multi-agent interactive environment ensures safe, realistic interactions before real-world deployment. Finally, we will validate our model using data gathered from real-life testbeds located in New Jersey and New York City.
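The abstract does not give implementation details, but the RLHF fine-tuning it describes typically rests on learning a reward model from pairwise human preferences. Below is a minimal, self-contained sketch of that idea using the Bradley-Terry preference model common in RLHF; the linear reward form, function names, and hyperparameters are our own illustrative assumptions, not the authors' method:

```python
import math

def fit_reward_weights(features, prefs, lr=0.1, epochs=200):
    """Fit a linear reward r(x) = w . x from pairwise human preferences.

    features : list of feature vectors, one per trajectory
               (e.g., a hypothetical "kept safe distance" indicator).
    prefs    : list of (i, j) pairs meaning the human preferred
               trajectory i over trajectory j.
    Uses the Bradley-Terry model: P(i > j) = sigmoid(r_i - r_j),
    minimized by gradient descent on the negative log-likelihood.
    """
    dim = len(features[0])
    w = [0.0] * dim
    for _ in range(epochs):
        for i, j in prefs:
            r_i = sum(wk * xk for wk, xk in zip(w, features[i]))
            r_j = sum(wk * xk for wk, xk in zip(w, features[j]))
            # Gradient of -log sigmoid(r_i - r_j) w.r.t. (r_i - r_j)
            g = -1.0 / (1.0 + math.exp(r_i - r_j))
            for k in range(dim):
                w[k] -= lr * g * (features[i][k] - features[j][k])
    return w

# Toy usage: one safety feature; humans always prefer the safer trajectory.
features = [[1.0], [0.0]]   # trajectory 0 is "safe", trajectory 1 is not
prefs = [(0, 1)]            # trajectory 0 preferred over trajectory 1
w = fit_reward_weights(features, prefs)
print(w[0] > 0)             # the learned reward favors the safe feature
```

In a full pipeline this learned reward would then drive policy fine-tuning of the pre-trained car agent; here we only show the preference-to-reward step.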
Jun-6-2024