Höfer, Timon
Interactive Explainable Anomaly Detection for Industrial Settings
Gramelt, Daniel, Höfer, Timon, Schmid, Ute
Being able to recognise defects in industrial objects is a key element of quality assurance in production lines. Our research focuses on visual anomaly detection in RGB images. Although Convolutional Neural Networks (CNNs) achieve high accuracies in this task, end users in industrial environments receive the model's decisions without additional explanations. Therefore, it is of interest to enrich the model's outputs with further explanations to increase confidence in the model and speed up anomaly detection. In our work, we focus on (1) CNN-based classification models and (2) the further development of a model-agnostic explanation algorithm for black-box classifiers. Additionally, (3) we demonstrate how we can establish an interactive interface that allows users to further correct the model's output. We present our NearCAIPI Interaction Framework, which improves AI through user interaction, and show how this approach increases the system's trustworthiness. We also illustrate how NearCAIPI can integrate human feedback into an interactive process chain. With this work, we plan to provide a new industry dataset for anomaly detection.
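To make the interaction chain concrete, the following is a minimal sketch of an explanatory interactive learning round in the spirit of CAIPI-style frameworks; it is not the paper's implementation, and `model`, `explain`, and `query_user` are hypothetical placeholder interfaces.

```python
# Minimal sketch of one explanatory interactive learning round (assumed interfaces).
import numpy as np

def interactive_round(model, explain, query_user, image, train_X, train_y):
    """Predict, explain, and let the user correct the label and/or the evidence."""
    pred = model.predict(image[None])[0]              # black-box classifier prediction
    explanation = explain(model, image)               # e.g. a saliency mask for the prediction
    feedback = query_user(image, pred, explanation)   # user accepts or corrects label / mask

    if feedback.corrected_label is not None:
        # The user overruled the prediction: add the image as a counterexample.
        train_X = np.concatenate([train_X, image[None]])
        train_y = np.append(train_y, feedback.corrected_label)

    if feedback.corrected_mask is not None:
        # "Right for the wrong reasons": add a copy with the spurious region masked out,
        # discouraging the model from relying on that evidence.
        masked = image * (1 - feedback.corrected_mask[..., None])
        train_X = np.concatenate([train_X, masked[None]])
        train_y = np.append(train_y, pred)

    model.fit(train_X, train_y)                       # retrain / fine-tune on the feedback
    return model, train_X, train_y
```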
Stable Yaw Estimation of Boats from the Viewpoint of UAVs and USVs
Kiefer, Benjamin, Höfer, Timon, Zell, Andreas
Yaw estimation of boats from the viewpoint of unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) or boats is a crucial task in various applications such as 3D scene rendering, trajectory prediction, and navigation. However, the lack of literature on yaw estimation of objects from the viewpoint of UAVs has motivated us to address this domain. In this paper, we propose a method based on HyperPosePDF for predicting the orientation of boats in 6D space. For that, we use existing datasets such as PASCAL3D+, as well as our own manually annotated datasets, SeaDronesSee-3D and BOArienT. We extend HyperPosePDF to work in video-based scenarios, such that it yields robust orientation predictions across time. Naively applying HyperPosePDF to video data yields single-point predictions per frame, which are often far off or flipped to an incorrect symmetric orientation on unseen or visually different data. To alleviate this issue, we propose aggregating the probability distributions of pose predictions, resulting in significantly improved performance, as shown in our experimental evaluation. Our proposed method could significantly benefit downstream tasks in marine robotics.
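As an illustration of the aggregation idea (not the paper's code), the sketch below fuses per-frame orientation distributions over a sliding temporal window; it assumes each frame yields a probability vector over the same discretized set of candidate orientations (e.g. yaw bins).

```python
# Illustrative sketch: temporal aggregation of per-frame pose distributions.
import numpy as np

def aggregate_pose_distributions(frame_probs: np.ndarray, window: int = 5) -> np.ndarray:
    """Fuse per-frame pose distributions with a sliding product of probabilities.

    frame_probs: (T, K) array, row t is a distribution over K candidate orientations.
    Returns: (T,) array of aggregated orientation indices (argmax per frame).
    """
    log_p = np.log(frame_probs + 1e-12)        # work in log space for numerical stability
    T = frame_probs.shape[0]
    fused = np.empty(T, dtype=int)
    for t in range(T):
        lo = max(0, t - window + 1)
        # Product of distributions over the window == sum of log-probabilities;
        # this suppresses single-frame outliers and symmetric flips.
        fused[t] = np.argmax(log_p[lo:t + 1].sum(axis=0))
    return fused

# Example: three yaw bins, one outlier frame flipped to a wrong bin.
probs = np.array([[0.8, 0.1, 0.1],
                  [0.7, 0.2, 0.1],
                  [0.1, 0.1, 0.8],   # outlier prediction
                  [0.6, 0.3, 0.1]])
print(aggregate_pose_distributions(probs, window=3))  # stays in bin 0 throughout
```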
1st Workshop on Maritime Computer Vision (MaCVi) 2023: Challenge Results
Kiefer, Benjamin, Kristan, Matej, Perš, Janez, Žust, Lojze, Poiesi, Fabio, Andrade, Fabio Augusto de Alcantara, Bernardino, Alexandre, Dawkins, Matthew, Raitoharju, Jenni, Quan, Yitong, Atmaca, Adem, Höfer, Timon, Zhang, Qiming, Xu, Yufei, Zhang, Jing, Tao, Dacheng, Sommer, Lars, Spraul, Raphael, Zhao, Hangyue, Zhang, Hongpu, Zhao, Yanyun, Augustin, Jan Lukas, Jeon, Eui-ik, Lee, Impyeong, Zedda, Luca, Loddo, Andrea, Di Ruberto, Cecilia, Verma, Sagar, Gupta, Siddharth, Muralidhara, Shishir, Hegde, Niharika, Xing, Daitao, Evangeliou, Nikolaos, Tzes, Anthony, Bartl, Vojtěch, Špaňhel, Jakub, Herout, Adam, Bhowmik, Neelanjan, Breckon, Toby P., Kundargi, Shivanand, Anvekar, Tejas, Desai, Chaitra, Tabib, Ramesh Ashok, Mudengudi, Uma, Vats, Arpita, Song, Yang, Liu, Delong, Li, Yonglin, Li, Shuman, Tan, Chenhao, Lan, Long, Somers, Vladimir, De Vleeschouwer, Christophe, Alahi, Alexandre, Huang, Hsiang-Wei, Yang, Cheng-Yen, Hwang, Jenq-Neng, Kim, Pyong-Kun, Kim, Kwangju, Lee, Kyoungoh, Jiang, Shuai, Li, Haiwen, Ziqiang, Zheng, Vu, Tuan-Anh, Nguyen-Truong, Hai, Yeung, Sai-Kit, Jia, Zhuang, Yang, Sophia, Hsu, Chih-Chung, Hou, Xiu-Yu, Jhang, Yu-An, Yang, Simon, Yang, Mau-Tsuen
The 1st Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAVs) and Unmanned Surface Vehicles (USVs), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.