A Framework for Human-Reason-Aligned Trajectory Evaluation in Automated Vehicles
Lucas Elbert Suryana, Saeed Rahmani, Simeon Craig Calvert, Arkady Zgonnikov, Bart van Arem
arXiv.org Artificial Intelligence
One major challenge for the adoption and acceptance of automated vehicles (AVs) is ensuring that they can make sound decisions in everyday situations that involve ethical tension. Much attention has focused on rare, high-stakes dilemmas such as trolley problems. Yet similar conflicts arise in routine driving when human considerations, such as legality, efficiency, and comfort, come into conflict. Current AV planning systems typically rely on rigid rules, which struggle to balance these competing considerations and often lead to behaviour that misaligns with human expectations. This paper introduces a reasons-based trajectory evaluation framework that operationalises the tracking condition of Meaningful Human Control (MHC). The framework represents human agents' reasons (e.g., regulatory compliance) as quantifiable functions and evaluates how well candidate trajectories align with them. It assigns adjustable weights to agent priorities and includes a balance function to discourage excluding any agent. To demonstrate the approach, we use a real-world-inspired overtaking scenario, which highlights tensions between compliance, efficiency, and comfort. Our results show that different trajectories emerge as preferable depending on how agents' reasons are weighted, and that small shifts in priorities can lead to discrete changes in the selected action. This demonstrates that everyday ethical decisions in AV driving are highly sensitive to the weights assigned to the reasons of different human agents.
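The abstract's core mechanism, quantified reason functions, adjustable weights, and a balance term that discourages excluding any agent, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: all reason functions, weights, trajectory features, and the specific form of the balance penalty are hypothetical.

```python
# Illustrative sketch (not the paper's actual method): score candidate
# trajectories by weighted alignment with quantified "reasons", minus a
# balance penalty that grows when the least-served reason falls far below
# the best-served one.

def reason_alignment(trajectory, reasons):
    """Return per-reason alignment scores in [0, 1] for one trajectory."""
    return {name: fn(trajectory) for name, fn in reasons.items()}

def score(trajectory, reasons, weights, balance_coeff=0.5):
    """Weighted sum of reason alignments minus a balance penalty."""
    align = reason_alignment(trajectory, reasons)
    weighted = sum(weights[r] * align[r] for r in reasons)
    # Hypothetical balance term: spread between best- and worst-served reason.
    balance_penalty = balance_coeff * (max(align.values()) - min(align.values()))
    return weighted - balance_penalty

# Hypothetical reason functions for an overtaking scenario; each maps a
# trajectory (here a dict of features) to an alignment score in [0, 1].
reasons = {
    "compliance": lambda t: 1.0 if t["speed"] <= t["limit"] else 0.2,
    "efficiency": lambda t: min(t["speed"] / t["limit"], 1.0),
    "comfort":    lambda t: 1.0 - min(t["lateral_accel"] / 4.0, 1.0),
}

# Two candidate trajectories: stay behind the slow vehicle, or overtake.
candidates = {
    "stay_behind": {"speed": 60, "limit": 80, "lateral_accel": 0.2},
    "overtake":    {"speed": 85, "limit": 80, "lateral_accel": 2.5},
}

weights = {"compliance": 0.5, "efficiency": 0.3, "comfort": 0.2}
best = max(candidates, key=lambda n: score(candidates[n], reasons, weights))
print(best)  # with compliance weighted highest, staying behind wins
```

Shifting weight from compliance toward efficiency can flip which trajectory maximises the score, mirroring the paper's observation that small priority shifts produce discrete changes in the selected action.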
Nov-7-2025
- Country:
- Europe > Netherlands > South Holland > Delft (0.06)
- Genre:
- Research Report > New Finding (0.68)
- Industry:
- Automobiles & Trucks (0.69)
- Government (0.91)
- Law (0.68)
- Transportation > Ground
- Road (1.00)