Hubschneider, Christian
CoCar NextGen: a Multi-Purpose Platform for Connected Autonomous Driving Research
Heinrich, Marc, Zipfl, Maximilian, Uecker, Marc, Ochs, Sven, Gontscharow, Martin, Fleck, Tobias, Doll, Jens, Schörner, Philip, Hubschneider, Christian, Zofka, Marc René, Viehl, Alexander, Zöllner, J. Marius
Abstract-- Real-world testing is of vital importance to the success of automated driving. While many players in the business design purpose-built testing vehicles, we designed and built a modular platform that offers high flexibility for any kind of scenario. CoCar NextGen is equipped with next-generation hardware that addresses all future use cases. Its extensive, redundant sensor setup allows the development of cross-domain, data-driven approaches that manage the transfer to other sensor setups. Together with the possibility of being deployed on public roads, this creates a unique research platform that supports the road to automated driving at SAE Level 5.

I. INTRODUCTION

Figure 1: CoCar NextGen was first presented at the IEEE ITSC.

Autonomous driving test vehicles (AVs) are at the vanguard of innovation in autonomous mobility. These vehicles are equipped with cutting-edge sensors: cameras, LiDAR, and radar, which, together with sophisticated software, enable them to perceive and interpret their surroundings with unmatched precision. They provide an essential bridge between theoretical research and real-world application: a system under test needs a driving platform to be evaluated under real-world conditions. The variety of use cases demands an extensive hardware setup. Moreover, the extensive setup provides a unique opportunity for cross-domain research on multi-modal sensor data.
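To make the cross-domain idea concrete, here is a purely hypothetical Python sketch (not the actual CoCar NextGen software stack; all sensor names and the emulate_setup helper are invented for illustration) of how recordings from a redundant full sensor suite could be filtered down to emulate a leaner target vehicle's configuration:

from dataclasses import dataclass

@dataclass(frozen=True)
class Sensor:
    name: str        # e.g. "lidar_roof" (hypothetical identifier)
    modality: str    # "camera" | "lidar" | "radar"
    mount: str       # mounting position on the vehicle

# Illustrative stand-in for a redundant, multi-modal sensor suite.
FULL_SUITE = [
    Sensor("cam_front", "camera", "windshield"),
    Sensor("cam_rear", "camera", "rear_window"),
    Sensor("lidar_roof", "lidar", "roof_center"),
    Sensor("lidar_front", "lidar", "front_bumper"),
    Sensor("radar_front", "radar", "front_bumper"),
]

def emulate_setup(suite, modalities):
    """Select the subset of sensors matching a target platform."""
    return [s for s in suite if s.modality in modalities]

# Example: evaluate a camera+radar-only stack on data recorded
# once with the full redundant suite.
target = emulate_setup(FULL_SUITE, {"camera", "radar"})
print([s.name for s in target])

Under this assumption, a single recording campaign with the full suite can serve many sensor configurations, which is one way the transfer to other setups mentioned in the abstract could be exercised.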
Sparsely-gated Mixture-of-Expert Layers for CNN Interpretability
Pavlitska, Svetlana, Hubschneider, Christian, Struppek, Lukas, Zöllner, J. Marius
Sparsely-gated Mixture-of-Experts (MoE) layers have recently been applied successfully to scale large transformers, especially for language modeling tasks. An intriguing side effect of sparse MoE layers is that they convey inherent interpretability to a model via natural expert specialization. In this work, we apply sparse MoE layers to CNNs for computer vision tasks and analyze the resulting effect on model interpretability. To stabilize MoE training, we present both soft and hard constraint-based approaches. With hard constraints, the weights of certain experts are allowed to become zero, while soft constraints balance the contribution of experts with an additional auxiliary loss. As a result, soft constraints handle expert utilization better and support the expert specialization process, while hard constraints maintain more generalized experts and increase overall model performance. Our findings demonstrate that experts can implicitly focus on individual sub-domains of the input space. For example, experts trained for CIFAR-100 image classification specialize in recognizing different domains, such as flowers or animals, without prior data clustering. Experiments with RetinaNet and the COCO dataset further indicate that object detection experts can also specialize in detecting objects of distinct sizes.
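A minimal PyTorch sketch of a sparsely-gated MoE layer for a CNN with a soft-constraint auxiliary loss, in the spirit of the abstract but not the paper's exact design: the expert architecture, gating via pooled features, the top-k value, and the coefficient-of-variation balancing loss are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoEConv(nn.Module):
    """Sparsely-gated MoE layer: a gate routes each sample to k experts."""

    def __init__(self, in_ch, out_ch, num_experts=4, k=1):
        super().__init__()
        self.k = k
        self.out_ch = out_ch
        # Each expert is a small convolutional block.
        self.experts = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            for _ in range(num_experts)
        )
        # The gate maps globally pooled features to one logit per expert.
        self.gate = nn.Linear(in_ch, num_experts)

    def forward(self, x):
        b, _, h, w = x.shape
        pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)   # (B, C)
        logits = self.gate(pooled)                        # (B, E)
        topk_vals, topk_idx = logits.topk(self.k, dim=1)
        weights = F.softmax(topk_vals, dim=1)             # (B, k)

        # Sparse dispatch: only the selected experts process each sample.
        out = x.new_zeros(b, self.out_ch, h, w)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e
                if mask.any():
                    w_e = weights[mask, slot].view(-1, 1, 1, 1)
                    out[mask] = out[mask] + w_e * expert(x[mask])

        # Soft constraint: penalize unbalanced expert utilization via the
        # squared coefficient of variation of per-expert gate importance.
        importance = F.softmax(logits, dim=1).sum(dim=0)  # (E,)
        aux_loss = importance.var(unbiased=False) / (importance.mean() ** 2 + 1e-8)
        return out, aux_loss

Usage under these assumptions: add the auxiliary term, scaled by a small coefficient, to the task loss, e.g. y, aux = layer(images); loss = task_loss + 0.01 * aux. The hard-constraint variant described in the abstract, where certain expert weights may be driven to zero, would replace this auxiliary term and is not shown here.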