GABRIL: Gaze-Based Regularization for Mitigating Causal Confusion in Imitation Learning
Amin Banayeeanzade, Fatemeh Bahrani, Yutai Zhou, Erdem Bıyık
arXiv.org Artificial Intelligence
Imitation Learning (IL) is a widely adopted approach that enables agents to learn from human expert demonstrations by framing the task as a supervised learning problem. However, IL often suffers from causal confusion, where agents misinterpret spurious correlations as causal relationships, leading to poor performance in test environments with distribution shift. To address this issue, we introduce GAze-Based Regularization in Imitation Learning (GABRIL), a novel method that leverages human gaze data gathered during the data collection phase to guide representation learning in IL. GABRIL uses a regularization loss that encourages the model to focus on causally relevant features identified through expert gaze, thereby mitigating the effects of confounding variables. We validate our approach in Atari environments and on the Bench2Drive benchmark in CARLA by collecting human gaze datasets and applying our method in both domains. Experimental results show that GABRIL's improvement over behavior cloning is around 179% larger than that of the other baselines in the Atari setup and 76% larger in the CARLA setup. Finally, we show that our method provides extra explainability compared to regular IL agents.
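The abstract does not specify the exact form of the regularization loss. A common way to implement gaze-based regularization in the literature is to align the model's spatial attention with a normalized expert-gaze heatmap via a KL-divergence term added to the behavior-cloning objective. The sketch below illustrates that general idea in PyTorch; the function name, attention definition (channel-averaged activation magnitude), and the weighting scheme are assumptions for illustration, not GABRIL's actual formulation.

```python
import torch

def gaze_regularization_loss(feature_map: torch.Tensor,
                             gaze_heatmap: torch.Tensor,
                             eps: float = 1e-8) -> torch.Tensor:
    """Hypothetical gaze-alignment regularizer (illustrative, not the paper's loss).

    feature_map:  (B, C, H, W) convolutional features from the policy encoder.
    gaze_heatmap: (B, H, W) non-negative expert gaze density at the same resolution.
    Returns a scalar KL(gaze || attention) averaged over the batch.
    """
    # Spatial attention map: mean absolute activation across channels.
    attn = feature_map.abs().mean(dim=1).flatten(1)          # (B, H*W)
    attn = attn / (attn.sum(dim=1, keepdim=True) + eps)      # normalize to a distribution

    gaze = gaze_heatmap.flatten(1)                           # (B, H*W)
    gaze = gaze / (gaze.sum(dim=1, keepdim=True) + eps)

    # KL(gaze || attn): penalizes attention mass placed off gazed regions.
    kl = (gaze * ((gaze + eps).log() - (attn + eps).log())).sum(dim=1)
    return kl.mean()

# Illustrative total objective (lambda_gaze is a tunable hyperparameter):
#   loss = bc_loss + lambda_gaze * gaze_regularization_loss(features, gaze)
```

During training, only demonstrations need accompanying gaze; at test time the regularizer is dropped, so the learned encoder carries the gaze-shaped inductive bias without requiring gaze input.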
Jul-31-2025