
Tandem loss





A Proofs of our main results

Neural Information Processing Systems

In this section, we provide proofs for our main results. We first state and prove two lemmas that will be used in the proof of Theorem 1. Noting that $\mathbb{E}[X] = \int_0^\infty P(X \geq x)\,\mathrm{d}x$ concludes the proof. With these two lemmas in hand, we give the proof of Theorem 1, and then derive a lower bound on the second term.
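The formula in this snippet was garbled by extraction; assuming it refers to the standard tail-expectation identity for a nonnegative random variable (the usual reading of a "P(X ≥ x) dx" step that closes a lemma proof), the reconstructed step is:

```latex
% Assumed reconstruction of the garbled "P ( X x) d x" step:
% tail-expectation identity for a nonnegative random variable X.
\mathbb{E}[X] \;=\; \int_0^\infty P(X \geq x)\,\mathrm{d}x
% Sketch of why it holds (Fubini / Tonelli):
% \int_0^\infty P(X \geq x)\,\mathrm{d}x
%   = \int_0^\infty \mathbb{E}\!\left[\mathbf{1}\{X \geq x\}\right]\mathrm{d}x
%   = \mathbb{E}\!\left[\int_0^{X} \mathrm{d}x\right]
%   = \mathbb{E}[X].
```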






We thank all the reviewers for their insightful comments, suggestions, and references

Neural Information Processing Systems

We thank all the reviewers for their insightful comments, suggestions, and references. On the novelty of the tandem loss: it is not new, but we were not aware of the prior work; we thank Reviewer 2 for bringing it up. On the observation that, while most of the computed bounds are non-vacuous, they do not appear to be very tight, and the request for a discussion of potential ways to obtain tighter bound values, or whether there is a fundamental limitation: we provide some discussion in Sections 3.2 and 4.4.
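For readers unfamiliar with the term, the tandem loss in the PAC-Bayes weighted-majority-vote literature is the probability that two hypotheses err on the same example; its posterior-weighted average over pairs is the second-order quantity the bounds control. A minimal sketch under that standard definition (the function names and array layout here are hypothetical, not the authors' code):

```python
import numpy as np

def tandem_loss(preds_h1, preds_h2, y):
    """Empirical tandem loss of two classifiers: the fraction of
    examples on which BOTH classifiers err simultaneously."""
    err1 = preds_h1 != y  # error indicators of the first classifier
    err2 = preds_h2 != y  # error indicators of the second classifier
    return np.mean(err1 & err2)

def expected_tandem_loss(all_preds, y, rho):
    """rho-weighted average tandem loss over all pairs of classifiers.

    all_preds: (m, n) array, predictions of m classifiers on n examples.
    rho:       (m,) posterior weights, nonnegative and summing to 1.
    """
    errs = (all_preds != y[None, :]).astype(float)  # (m, n) error indicators
    pair_losses = errs @ errs.T / errs.shape[1]     # (m, m) pairwise tandem losses
    return rho @ pair_losses @ rho
```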



BEA: Revisiting anchor-based object detection DNN using Budding Ensemble Architecture

Syed Sha Qutub, Neslihan Kose, Rafael Rosales, Michael Paulitsch, Korbinian Hagn, Florian Geissler, Yang Peng, Gereon Hinz, Alois Knoll

arXiv.org Artificial Intelligence

This paper introduces the Budding Ensemble Architecture (BEA), a novel reduced ensemble architecture for anchor-based object detection models. Object detection models are crucial in vision-based tasks, particularly in autonomous systems. They should provide precise bounding-box detections while also calibrating their predicted confidence scores, leading to higher-quality uncertainty estimates. However, current models may make erroneous decisions because false positives receive high scores or true positives are discarded due to low scores. BEA aims to address these issues. The loss functions proposed in BEA improve confidence-score calibration and lower the uncertainty error, which results in a better distinction between true and false positives and, eventually, higher accuracy of the object detection models. Both Base-YOLOv3 and SSD models were enhanced using the BEA method and its proposed loss functions. BEA on Base-YOLOv3 trained on the KITTI dataset yields a 6% and 3.7% increase in mAP and AP50, respectively. Using a well-balanced uncertainty-estimation threshold to discard samples in real time even leads to an AP50 9.6% higher than that of the base model. This is attributed to a 40% increase in the area under the AP50-based retention curve, which is used to measure the quality of confidence-score calibration. Furthermore, BEA-YOLOv3 trained on KITTI provides superior out-of-distribution detection on the Citypersons, BDD100K, and COCO datasets compared to the ensembles and vanilla models of YOLOv3 and Gaussian-YOLOv3.
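The retention-curve metric mentioned in the abstract can be illustrated with a small sketch: samples are ranked by predicted uncertainty, the least-certain fraction is discarded, and a quality metric is recomputed at each retention level; the area under the resulting curve summarizes how well the uncertainty estimates rank errors toward the discarded tail. The sketch below uses a per-sample stand-in score rather than AP50 (a real AP50-based curve would rerun the detection evaluation on each retained subset); all names are hypothetical, not the paper's implementation:

```python
import numpy as np

def retention_curve(scores, uncertainties,
                    fractions=np.linspace(0.1, 1.0, 10)):
    """Keep the most-certain `fraction` of samples and average their
    per-sample quality `scores` at each retention level."""
    order = np.argsort(uncertainties)        # most certain samples first
    sorted_scores = scores[order]
    curve = []
    for frac in fractions:
        k = max(1, int(frac * len(sorted_scores)))
        curve.append(sorted_scores[:k].mean())  # quality of retained subset
    return fractions, np.array(curve)

def area_under_retention_curve(fractions, curve):
    """Trapezoidal area under the retention curve; a higher area means
    discarding uncertain samples preserves or improves quality."""
    return float(np.sum((curve[1:] + curve[:-1]) / 2 * np.diff(fractions)))
```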