Attention Based Feature Fusion For Multi-Agent Collaborative Perception
Ahmed, Ahmed N., Mercelis, Siegfried, Anwar, Ali
In the domain of intelligent transportation systems (ITS), collaborative perception has emerged as a promising approach to overcome the limitations of individual perception by enabling multiple agents to exchange information, thus enhancing their situational awareness. By overcoming the limitations of individual sensors, collaborative perception allows connected agents to perceive environments beyond their line of sight and field of view. However, its reliability depends heavily on the data aggregation strategy and on the communication bandwidth, which is constrained by limited network resources. To improve object detection precision while reducing the load on these limited network resources, we propose an intermediate collaborative perception solution in the form of a graph attention network (GAT). The proposed approach uses an attention-based aggregation strategy to fuse the intermediate representations exchanged among connected agents, adaptively highlighting important regions of the intermediate feature maps at both the channel and spatial levels. We evaluate this attention-based feature fusion scheme quantitatively against other state-of-the-art collaborative perception approaches and validate it on the V2XSim dataset. The results demonstrate the efficacy of the proposed intermediate collaborative perception approach in improving object detection average precision while reducing network resource usage.
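The abstract does not give implementation details, but the channel- and spatial-level reweighting it describes can be illustrated with a minimal sketch. The PyTorch module below is a hypothetical fusion block (the class name, reduction ratio, and the way neighbor feature maps are accumulated into the ego map are assumptions, not the paper's architecture): each shared feature map is reweighted per channel, then per spatial location, before being added to the ego agent's features.

```python
import torch
import torch.nn as nn


class ChannelSpatialFusion(nn.Module):
    """Illustrative fusion of per-agent intermediate feature maps with
    channel and spatial attention (not the paper's exact architecture)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, learn a weight per channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: weight each location from pooled channel statistics.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, ego_feat: torch.Tensor, neighbor_feats: list) -> torch.Tensor:
        fused = ego_feat
        for feat in neighbor_feats:
            # Channel attention weights of shape (B, C, 1, 1).
            c_w = torch.sigmoid(
                self.channel_mlp(feat.mean(dim=(2, 3)))
            ).unsqueeze(-1).unsqueeze(-1)
            feat = feat * c_w
            # Spatial attention weights (B, 1, H, W) from avg- and max-pooled channels.
            s_in = torch.cat(
                [feat.mean(dim=1, keepdim=True), feat.amax(dim=1, keepdim=True)],
                dim=1,
            )
            s_w = torch.sigmoid(self.spatial_conv(s_in))
            # Accumulate the reweighted neighbor map into the ego features.
            fused = fused + feat * s_w
        return fused
```

In this sketch the ego features act as the aggregation target and each neighbor contribution is gated twice before summation; the actual paper fuses representations through a graph attention network, so the block above should be read only as an illustration of channel/spatial reweighting.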
The Second Monocular Depth Estimation Challenge
Spencer, Jaime, Qian, C. Stella, Trescakova, Michaela, Russell, Chris, Hadfield, Simon, Graf, Erich W., Adams, Wendy J., Schofield, Andrew J., Elder, James, Bowden, Richard, Anwar, Ali, Chen, Hao, Chen, Xiaozhi, Cheng, Kai, Dai, Yuchao, Hoa, Huynh Thai, Hossain, Sadat, Huang, Jianmian, Jing, Mohan, Li, Bo, Li, Chao, Li, Baojun, Liu, Zhiwen, Mattoccia, Stefano, Mercelis, Siegfried, Nam, Myungwoo, Poggi, Matteo, Qi, Xiaohua, Ren, Jiahui, Tang, Yang, Tosi, Fabio, Trinh, Linh, Uddin, S. M. Nadim, Umair, Khan Muhammad, Wang, Kaixuan, Wang, Yufei, Wang, Yixing, Xiang, Mochu, Xu, Guangkai, Yin, Wei, Yu, Jun, Zhang, Qi, Zhao, Chaoqiang
This paper discusses the results for the second edition of the Monocular Depth Estimation Challenge (MDEC). This edition was open to methods using any form of supervision, including fully-supervised, self-supervised, multi-task or proxy depth. The challenge was based around the SYNS-Patches dataset, which features a wide diversity of environments with high-quality, dense ground truth. This includes complex natural environments, e.g. forests or fields, which are greatly underrepresented in current benchmarks. The challenge received eight unique submissions that outperformed the provided SotA baseline on at least one of the pointcloud- or image-based metrics. The top supervised submission improved relative F-Score by 27.62%, while the top self-supervised submission improved it by 16.61%. Supervised submissions generally leveraged large collections of datasets to improve data diversity, whereas self-supervised submissions instead updated the network architecture and pretrained backbones. These results represent significant progress in the field, while highlighting avenues for future research, such as reducing interpolation artifacts at depth boundaries, improving self-supervised indoor performance and raising overall natural image accuracy.
Learning to Communicate Using Counterfactual Reasoning
Vanneste, Simon, Vanneste, Astrid, Mercelis, Siegfried, Hellinckx, Peter
This paper introduces a new approach for multi-agent communication learning called multi-agent counterfactual communication (MACC) learning. Many real-world problems are currently tackled using multi-agent techniques. However, in many of these tasks the agents do not observe the full state of the environment, but only a limited observation of it. This lack of knowledge about the full state makes completing the objectives significantly more complex or even impossible. The key to this problem lies in sharing observation information between agents, or in learning how to communicate the essential data. MACC addresses the partial observability problem by letting each agent learn its action policy and its communication policy simultaneously. We focus on decentralized Markov decision processes (Dec-MDPs), where the agents have joint observability, meaning that the full state of the environment can be determined from the combined observations of all agents. MACC uses counterfactual reasoning to train both the action and the communication policy. This allows agents to anticipate how other agents will react to certain messages and how the environment will react to certain actions, enabling them to learn more effective policies. MACC uses an actor-critic architecture with a centralized critic and decentralized actors; the critic is used to calculate an advantage for both the action and the communication policy. We demonstrate our method on the Simple Reference particle environment of OpenAI and on an MNIST game, and compare our results against communication and non-communication baselines. These experiments demonstrate that MACC is able to train agents with effective communication policies for each of these problems.
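As a rough illustration of the counterfactual advantage idea described above, the sketch below assumes a COMA-style centralized critic; the function name, tensor shapes, and the two hypothetical usage lines are illustrative and not taken from the paper. The advantage compares the value of the choice the agent actually made against the expected value under its own policy, with the other agents' actions and messages held fixed.

```python
import torch


def counterfactual_advantage(q_values: torch.Tensor,
                             policy_probs: torch.Tensor,
                             chosen: torch.Tensor) -> torch.Tensor:
    """COMA-style counterfactual advantage for a single agent.

    q_values:     (batch, n_choices) centralized critic values for every
                  alternative action (or message), other agents held fixed.
    policy_probs: (batch, n_choices) the agent's current policy over those choices.
    chosen:       (batch,) index of the action or message actually taken.
    """
    # Critic value of the choice that was actually made.
    q_taken = q_values.gather(1, chosen.unsqueeze(1)).squeeze(1)
    # Counterfactual baseline: expected critic value if the agent had sampled
    # from its own policy instead, with everything else unchanged.
    baseline = (policy_probs * q_values).sum(dim=1)
    return q_taken - baseline


# Hypothetical usage: the same routine could be applied twice, once with the
# critic evaluated over alternative environment actions and once over
# alternative messages, yielding separate advantages for the two policies.
# adv_action  = counterfactual_advantage(q_env,  pi_action,  a_taken)
# adv_message = counterfactual_advantage(q_comm, pi_message, m_sent)
```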