Towards Improving Calibration in Object Detection Under Domain Shift
Unfortunately, little to no attention has been paid to the calibration of DNN-based visual object detectors, which occupy a similar space and importance in many decision-making systems as their visual classification counterparts. In this work, we study the calibration of DNN-based object detection models, particularly under domain shift.
Both models consist of 2 layers, and the hidden dimension is fixed to 64. We add a weight decay of 5e-4 for Cora, Citeseer, and Pubmed, and 0 for the rest. The optimizer configuration and the training schedule are the same as in Section A.2. K_h(c − ĉ_i) (7), where i ∈ V denotes the evaluated node and h is the bandwidth of the kernel function. The classwise-ECEs are summarized in Table 3, and the KDE-ECEs are collected in Table 4. We adopt a heuristic which proportionally rescales the non-top-1 output probabilities so that the calibrated probabilistic output sums to one. While the ECEs of CaGCN in its original paper are promising [23], we observe that the ECEs of CaGCN are often unstable and sometimes even worse than those of the uncalibrated model in our experiments.
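The rescaling heuristic above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `rescale_probs` and the handling of the degenerate one-hot case are our own assumptions; the core idea from the text is that after the top-1 confidence is calibrated, the remaining class probabilities are scaled proportionally so the vector sums to one.

```python
import numpy as np

def rescale_probs(probs, calibrated_top1):
    """Proportionally rescale non-top-1 probabilities so that the
    calibrated output still sums to one (hypothetical helper).

    probs: uncalibrated probability vector (sums to 1).
    calibrated_top1: the recalibrated confidence of the top-1 class.
    """
    probs = np.asarray(probs, dtype=float)
    top1 = np.argmax(probs)
    rest_mass = 1.0 - probs[top1]  # total mass of non-top-1 classes
    out = np.empty_like(probs)
    if rest_mass > 0:
        # Scale the non-top-1 entries so they absorb 1 - calibrated_top1.
        out[:] = probs * ((1.0 - calibrated_top1) / rest_mass)
    else:
        # Degenerate one-hot input: spread the leftover mass uniformly
        # (an assumption; the paper does not specify this corner case).
        out[:] = (1.0 - calibrated_top1) / (len(probs) - 1)
    out[top1] = calibrated_top1
    return out
```

For example, if the uncalibrated output is `[0.7, 0.2, 0.1]` and the calibrator lowers the top-1 confidence to 0.6, the remaining 0.4 of mass is split 2:1 between the other classes, preserving their relative order.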