

STONE: A Submodular Optimization Framework for Active 3D Object Detection

Neural Information Processing Systems

A key requirement for training an accurate 3D object detector is the availability of a large amount of LiDAR-based point cloud data. Unfortunately, labeling point cloud data is extremely challenging, as accurate 3D bounding boxes and semantic labels are required for each potential object. This paper proposes a unified active 3D object detection framework that greatly reduces the labeling cost of training 3D object detectors. Our framework is based on a novel formulation of submodular optimization, specifically tailored to the problem of active 3D object detection. In particular, we address two fundamental challenges associated with active 3D object detection: data imbalance and the need to cover the distribution of the data, including LiDAR-based point cloud data of varying difficulty levels. Extensive experiments demonstrate that our method achieves state-of-the-art performance with high computational efficiency compared to existing active learning methods. The code is available at https://github.com/RuiyuM/STONE
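The abstract above describes selecting which point clouds to label via submodular optimization. As a hedged illustration of the general mechanism only (not STONE's actual objective or features), the sketch below greedily maximizes a facility-location coverage function over a toy similarity matrix; the feature vectors and similarity measure are illustrative assumptions.

```python
import numpy as np

def facility_location_gain(sim, selected, candidate):
    """Marginal gain of adding `candidate` under a facility-location objective
    f(S) = sum_i max_{j in S} sim[i, j] -- a classic monotone submodular
    coverage function (STONE's actual objective is more specialized)."""
    if not selected:
        return sim[:, candidate].sum()
    current = sim[:, selected].max(axis=1)
    return np.maximum(current, sim[:, candidate]).sum() - current.sum()

def greedy_select(sim, budget):
    """Greedy submodular maximization: for monotone submodular f this
    enjoys the standard (1 - 1/e) approximation guarantee."""
    selected, remaining = [], set(range(sim.shape[1]))
    for _ in range(budget):
        best = max(remaining, key=lambda c: facility_location_gain(sim, selected, c))
        selected.append(best)
        remaining.remove(best)
    return selected

# toy features standing in for unlabeled point-cloud descriptors
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))
dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
sim = np.exp(-dist)  # nonnegative similarity, as facility location requires
picked = greedy_select(sim, budget=3)
print(picked)
```

The greedy loop recomputes marginal gains each round; practical systems typically speed this up with lazy evaluation, which the submodular structure makes safe.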


PointDAN: A Multi-Scale 3D Domain Adaption Network for Point Cloud Representation

Neural Information Processing Systems

Domain Adaptation (DA) approaches have achieved significant improvements in a wide range of machine learning and computer vision tasks (e.g., classification, detection, and segmentation). However, as far as we are aware, few methods have attempted domain adaptation directly on 3D point cloud data. The unique challenge of point cloud data lies in its abundant spatial geometric information, with the semantics of the whole object contributed by its regional geometric structures. Most general-purpose DA methods, which pursue global feature alignment and ignore local geometric information, are therefore not suitable for 3D domain alignment. In this paper, we propose a novel 3D Domain Adaptation Network for point cloud data (PointDAN).
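The abstract contrasts global feature alignment with the local geometric alignment PointDAN advocates. As a generic, hedged illustration of distribution alignment (Maximum Mean Discrepancy is a common DA measure, not necessarily PointDAN's loss), the sketch below compares two synthetic feature sets; applying the same measure to per-region descriptors instead of whole-object features would be the "local" variant.

```python
import numpy as np

def mmd(x, y, gamma=1.0):
    """Biased squared Maximum Mean Discrepancy with an RBF kernel --
    a standard measure of distance between two feature distributions."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# hypothetical global (whole-object) features from two domains;
# the 0.5 mean shift stands in for a domain gap
rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(64, 16))
tgt = rng.normal(0.5, 1.0, size=(64, 16))
print(round(mmd(src, tgt), 6))
```

Minimizing such a discrepancy on region-level features, rather than only on global ones, is the intuition behind aligning local geometric structures.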


An Indoor Radio Mapping Dataset Combining 3D Point Clouds and RSSI

Milosheski, Ljupcho, Akiyama, Kuon, Bertalanič, Blaž, Hribar, Jernej, Shinkuma, Ryoichi

arXiv.org Artificial Intelligence

The growing number of smart devices supporting bandwidth-intensive and latency-sensitive applications, such as real-time video analytics, smart sensing, and Extended Reality (XR), necessitates reliable wireless connectivity in indoor environments. In such environments, accurate design of Radio Environment Maps (REMs) enables adaptive wireless network planning and optimization of Access Point (AP) placement. However, generating realistic REMs remains difficult due to the variability of indoor environments and the limitations of existing modeling approaches, which often rely on simplified layouts or fully synthetic data. These challenges are further amplified by the adoption of next-generation Wi-Fi standards, which operate at higher frequencies and suffer from limited range and wall penetration. To support the efforts in addressing these challenges, we collected a dataset that combines high-resolution 3D LiDAR scans with Wi-Fi RSSI measurements taken across 20 setups in a multi-room indoor environment. The dataset includes two measurement scenarios, the first without human presence in the environment and the second with human presence, enabling the development and validation of REM estimation models that incorporate physical geometry and environmental dynamics. The described dataset supports research in data-driven wireless modeling and the development of high-capacity indoor communication networks.
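A common baseline against which data-driven REM models are compared is the log-distance path-loss model. The sketch below illustrates that standard formula; the reference RSSI and path-loss exponent are illustrative assumptions, not values taken from the described dataset.

```python
import math

def rssi_log_distance(d, rssi_d0=-40.0, d0=1.0, n=3.0):
    """Log-distance path-loss model: predicted RSSI falls off by
    10*n dB per decade of distance. rssi_d0 is the RSSI (dBm) at the
    reference distance d0 (metres); n is the path-loss exponent
    (indoor multi-room values are often around 2.5-4). All parameter
    values here are illustrative assumptions."""
    return rssi_d0 - 10.0 * n * math.log10(d / d0)

# predicted RSSI at a few hypothetical AP-to-receiver distances
for d in (1.0, 5.0, 10.0):
    print(f"{d:5.1f} m -> {rssi_log_distance(d):6.1f} dBm")
```

Models trained on geometry-aware data such as this dataset aim to capture the wall-penetration and shadowing effects this simple distance-only formula ignores.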









The proposed algorithm is a unique combination of a GCN and a novel rotation-invariant local

Neural Information Processing Systems

We appreciate the positive and constructive comments and address the main concerns raised by the reviewers below. Note that our training procedure takes the original 3D points as input and is consequently free from information loss. The manual feature-extraction steps in RIConv and ClusterNet may incur such loss and lead to performance degradation. Therefore, the accuracy of A-CNN on z/SO(3) is as low as 35.8% in our experiment. G-CNNs [A4] are designed for meshes, and their target tasks are different from ours. In practice, computing PCAs at every level does not affect the overall accuracy at all.
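The response above refers to per-level PCA computation in a rotation-invariant setting. As a hedged sketch of the general idea of a PCA-derived local reference frame (not necessarily the paper's exact construction), expressing a neighbourhood in its own principal axes yields coordinates that are unchanged, up to per-axis sign flips, under any rotation of the input:

```python
import numpy as np

def pca_local_frame(points):
    """Express a point neighbourhood in its PCA frame. The principal
    axes rotate with the data, so the resulting coordinates are
    rotation-invariant up to per-axis sign ambiguities. A generic
    sketch, not the paper's exact construction."""
    centered = points - points.mean(axis=0)
    # right singular vectors of the centered points are the PCA axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T  # coordinates in the local frame

rng = np.random.default_rng(2)
pts = rng.normal(size=(32, 3))

# build a random proper rotation and compare invariant coordinates
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(q) < 0:
    q[:, 0] *= -1.0  # ensure det(q) = +1, i.e. a rotation
a = pca_local_frame(pts)
b = pca_local_frame(pts @ q.T)
# identical up to per-axis sign flips from the SVD sign ambiguity
print(np.allclose(np.abs(a), np.abs(b), atol=1e-6))
```

The residual sign ambiguity is why practical rotation-invariant pipelines add a sign-disambiguation rule on top of the raw PCA frame.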