Interpretable Deep Tracking
Benjamin Thérien, Krzysztof Czarnecki
–arXiv.org Artificial Intelligence
Imagine experiencing a crash as the passenger of an autonomous vehicle. Wouldn't you want to know why it happened? Current end-to-end optimizable deep neural networks (DNNs) for 3D detection, multi-object tracking, and motion forecasting provide little to no explanation of how they make their decisions. To help bridge this gap, we design an end-to-end optimizable multi-object tracking architecture and training protocol inspired by the recently proposed method of interchange intervention training (IIT). By enumerating different tracking decisions and their associated reasoning procedures, we can train individual networks via IIT to reason about the possible decisions. Each network's decisions can then be explained by the high-level structural causal model (SCM) it is trained in alignment with. Moreover, our proposed model learns to rank these outcomes, leveraging the promise of deep learning through end-to-end training while remaining inherently interpretable.
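To make the IIT idea concrete, below is a toy, self-contained sketch of interchange intervention training. The SCM here (an intermediate variable V = x1 + x2 with output 2V), the tiny linear network, and the choice to align hidden unit 0 with V are all illustrative assumptions, not the paper's tracking architecture: the point is only the training signal, which swaps an aligned hidden activation between a base and a source input and requires the network's counterfactual output to match the SCM's counterfactual output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-level SCM (illustrative only, not the paper's tracking SCM):
# intermediate variable V = x1 + x2, output = 2 * V.
def scm_v(x):
    return x[0] + x[1]

def scm_out(v):
    return 2.0 * v

# Small linear network: hidden h = W1 @ x, output y = w2 @ h.
# Alignment assumption: hidden unit 0 should play the causal role of V.
W1 = 0.1 * rng.standard_normal((2, 2))
w2 = 0.1 * rng.standard_normal(2)
lr = 0.05

for step in range(5000):
    x = rng.uniform(-1, 1, 2)   # base input
    s = rng.uniform(-1, 1, 2)   # source input for the interchange

    # Behavioral loss: match the SCM's ordinary output on the base input.
    h = W1 @ x
    y = w2 @ h
    gy = 2.0 * (y - scm_out(scm_v(x)))
    gw2 = gy * h
    gW1 = gy * np.outer(w2, x)

    # Interchange intervention: replace hidden unit 0 with its value on s,
    # and require the output to match the SCM with V set to V(s).
    h_int = np.array([(W1 @ s)[0], h[1]])
    y_int = w2 @ h_int
    gyi = 2.0 * (y_int - scm_out(scm_v(s)))
    gw2 += gyi * h_int
    gW1[0] += gyi * w2[0] * s   # unit 0 saw the source input
    gW1[1] += gyi * w2[1] * x   # unit 1 still saw the base input

    w2 -= lr * gw2
    W1 -= lr * gW1

# After training, the network's behavior under the swap tracks the SCM's
# counterfactual, so hidden unit 0 can be read as computing V.
x = np.array([0.3, -0.2])
s = np.array([0.5, 0.4])
y_int = w2 @ np.array([(W1 @ s)[0], (W1 @ x)[1]])
```

In the paper's setting, the same mechanism is applied with the enumerated tracking decisions and their reasoning procedures in the role of the SCM, so that each network's decisions inherit the SCM's explanation.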
Oct-3-2022