Trinh, Linh
Data selection method for assessment of autonomous vehicles
Trinh, Linh, Anwar, Ali, Mercelis, Siegfried
As the popularity of autonomous vehicles has grown, many standards bodies and regulators, such as ISO, NHTSA, and Euro NCAP, require safety validation to ensure a sufficient level of safety before vehicles are deployed in the real world. Manufacturers gather large amounts of public road data for this purpose. However, the majority of these validation activities are performed manually by humans, and the data used to validate each driving feature may differ. As a result, it is essential to have an efficient data selection method that can be applied flexibly and dynamically for verification and validation while also accelerating the validation process. In this paper, we present a data selection method that is practical, flexible, and efficient for the assessment of autonomous vehicles. Our idea is to optimize the similarity between the metadata distribution of the selected data and a predefined metadata distribution that is expected for validation. Our experiments on the large-scale BDD100K dataset show that our method performs data selection tasks efficiently. These results demonstrate that our method is highly reliable and can be used to select appropriate data for the validation of various safety functions.
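The core idea above — selecting data so that its metadata distribution matches a predefined target distribution — can be sketched as follows. This is a minimal illustrative example, not the paper's actual algorithm: it assumes categorical metadata (e.g. weather tags) and uses greedy selection under KL divergence as one plausible similarity measure; the function names and the choice of divergence are assumptions.

```python
from collections import Counter
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) over the union of metadata categories in p and q."""
    keys = set(p) | set(q)
    return sum(p.get(k, 0.0) * math.log((p.get(k, 0.0) + eps) / (q.get(k, 0.0) + eps))
               for k in keys)

def normalize(counts):
    """Turn a histogram of category counts into a probability distribution."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def greedy_select(samples, metadata, target, budget):
    """Pick `budget` samples whose metadata histogram best matches `target`.

    samples  : iterable of sample ids
    metadata : dict sample_id -> metadata category (e.g. 'rainy', 'night')
    target   : dict category -> desired probability
    """
    selected, counts = [], Counter()
    for _ in range(budget):
        best, best_div = None, float('inf')
        for s in samples:
            if s in selected:
                continue
            # Tentatively add s and score the resulting distribution.
            counts[metadata[s]] += 1
            div = kl_divergence(target, normalize(counts))
            counts[metadata[s]] -= 1
            if div < best_div:
                best, best_div = s, div
        selected.append(best)
        counts[metadata[best]] += 1
    return selected
```

For example, with a 50/50 target over 'rainy' and 'clear' tags and a budget of four, the greedy loop returns two samples of each category.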
TurtleRabbit 2024 SSL Team Description Paper
Trinh, Linh, Anzuman, Alif, Batkhuu, Eric, Chan, Dychen, Graf, Lisa, Gurung, Darpan, Jamal, Tharunimm, Namgyal, Jigme, Ng, Jason, Tsang, Wing Lam, Wang, X. Rosalind, Yilmaz, Eren, Obst, Oliver
TurtleRabbit is a new RoboCup SSL team from Western Sydney University. This team description paper presents our approach to navigating some of the challenges of developing a new SSL team from scratch. SSL is dominated by teams with extensive experience and customised equipment that has been developed over many years. Here, we outline our approach to overcoming some of the complexities associated with replicating advanced open-sourced designs and managing the high costs of custom components. Opting for simplicity and cost-effectiveness, our strategy primarily employs off-the-shelf electronic components and ``hobby'' brushless direct current (BLDC) motors, complemented by 3D printing and CNC milling. This approach helped us streamline the development process and, with our open-sourced hardware design, will hopefully also lower the barrier for other teams to enter RoboCup SSL in the future. The paper details the specific hardware choices, their approximate costs, the integration of electronics and mechanics, and the initial steps taken in software development for our entry into SSL, which aims to be simple yet competitive.
FisheyePP4AV: A privacy-preserving method for autonomous vehicles on fisheye camera images
Trinh, Linh, Ha, Bach, Tran, Tu
In many parts of the world, the use of vast amounts of data collected on public roadways for autonomous driving has increased. As more data is collected, privacy concerns grow, including but not limited to pedestrian faces and the license plates of surrounding vehicles, so there is an urgent need for effective solutions that detect and anonymize pedestrian faces and nearby car license plates in real road-driving scenarios. Normal and fisheye cameras are the two camera types typically mounted on collection vehicles. Owing to complex distortion models, fisheye camera images are deformed compared with regular images, which causes many deep learning models to perform poorly on computer vision tasks. In this work, we focus on preserving privacy while adhering to applicable regulations for fisheye camera images captured by autonomous vehicles. First, we propose a framework for distilling face and license plate detection knowledge from several teacher models. Second, we transform both the images and the labels from regular images into fisheye-like data using a varied and realistic fisheye transformation. Finally, we evaluate our approach on the open-source PP4AV dataset. The experimental results demonstrate that our model outperformed baseline methods when trained on data from autonomous vehicles, even when the data were softly labeled. The implementation code is available on our GitHub: https://github.com/khaclinh/FisheyePP4AV.
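The fisheye transformation mentioned above can be illustrated with a minimal per-pixel mapping. This sketch is an assumption, not the paper's transformation (which is varied and realistic): it uses only the simple equidistant projection model (r = f·θ) to warp rectilinear image coordinates into fisheye-like coordinates, and the function name and parameters are hypothetical.

```python
import math

def rectilinear_to_fisheye(x, y, f, cx, cy):
    """Map a pixel (x, y) of a rectilinear image with focal length f and
    principal point (cx, cy) to equidistant-fisheye coordinates."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)          # radial distance from the image center
    if r == 0:
        return cx, cy               # the center point is unchanged
    theta = math.atan2(r, f)        # incidence angle of the viewing ray
    r_fish = f * theta              # equidistant model: r_fish = f * theta
    scale = r_fish / r
    return cx + dx * scale, cy + dy * scale
```

Applying the same mapping to the corner points of a bounding box is one way to warp labels consistently with the image, in the spirit of transforming "both the image and the label" described above.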
The Second Monocular Depth Estimation Challenge
Spencer, Jaime, Qian, C. Stella, Trescakova, Michaela, Russell, Chris, Hadfield, Simon, Graf, Erich W., Adams, Wendy J., Schofield, Andrew J., Elder, James, Bowden, Richard, Anwar, Ali, Chen, Hao, Chen, Xiaozhi, Cheng, Kai, Dai, Yuchao, Hoa, Huynh Thai, Hossain, Sadat, Huang, Jianmian, Jing, Mohan, Li, Bo, Li, Chao, Li, Baojun, Liu, Zhiwen, Mattoccia, Stefano, Mercelis, Siegfried, Nam, Myungwoo, Poggi, Matteo, Qi, Xiaohua, Ren, Jiahui, Tang, Yang, Tosi, Fabio, Trinh, Linh, Uddin, S. M. Nadim, Umair, Khan Muhammad, Wang, Kaixuan, Wang, Yufei, Wang, Yixing, Xiang, Mochu, Xu, Guangkai, Yin, Wei, Yu, Jun, Zhang, Qi, Zhao, Chaoqiang
This paper discusses the results of the second edition of the Monocular Depth Estimation Challenge (MDEC). This edition was open to methods using any form of supervision, including fully-supervised, self-supervised, multi-task, or proxy depth. The challenge was based on the SYNS-Patches dataset, which features a wide diversity of environments with high-quality dense ground truth. This includes complex natural environments, e.g. forests or fields, which are greatly underrepresented in current benchmarks. The challenge received eight unique submissions that outperformed the provided SotA baseline on at least one of the pointcloud- or image-based metrics. The top supervised submission improved relative F-Score by 27.62%, while the top self-supervised submission improved it by 16.61%. Supervised submissions generally leveraged large collections of datasets to improve data diversity, whereas self-supervised submissions instead updated the network architecture and pretrained backbones. These results represent significant progress in the field, while highlighting avenues for future research, such as reducing interpolation artifacts at depth boundaries, improving self-supervised indoor performance, and increasing overall natural-image accuracy.