Ma, Chenyang
Pre-Training LiDAR-Based 3D Object Detectors Through Colorization
Pan, Tai-Yu, Ma, Chenyang, Chen, Tianle, Phoo, Cheng Perng, Luo, Katie Z, You, Yurong, Campbell, Mark, Weinberger, Kilian Q., Hariharan, Bharath, Chao, Wei-Lun
Accurate 3D object detection and understanding for self-driving cars heavily relies on LiDAR point clouds, which in turn requires large amounts of labeled data to train. In this work, we introduce an innovative pre-training approach, Grounded Point Colorization (GPC), to bridge the gap between data and labels by teaching the model to colorize LiDAR point clouds, equipping it with valuable semantic cues. To tackle challenges arising from color variations and selection bias, we incorporate color as "context" by providing ground-truth colors as hints during colorization. Even with limited labeled data, GPC significantly improves fine-tuning performance; notably, on just 20% of the KITTI dataset, GPC outperforms training from scratch with the entire dataset. In sum, we introduce a fresh perspective on pre-training for 3D object detection, aligning the objective with the model's intended role and ultimately advancing the accuracy and efficiency of 3D object detection for autonomous vehicles.

Detecting objects such as vehicles and pedestrians in 3D is crucial for self-driving cars to operate safely. Mainstream 3D object detectors (Shi et al., 2019; 2020b; Zhu et al., 2020; He et al., 2020a) take LiDAR point clouds as input, which provide precise 3D signals of the surrounding environment. However, training a detector requires a large amount of labeled data. The expensive process of curating annotated data has motivated the community to investigate model pre-training using unlabeled data that can be collected easily. Most of the existing pre-training methods are built upon contrastive learning (Yin et al., 2022; Xie et al., 2020; Zhang et al., 2021; Huang et al., 2021; Liang et al., 2021), inspired by its success in 2D recognition (Chen et al., 2020a; He et al., 2020b). The key novelties, however, are often limited to how the positive and negative data pairs are constructed. This paper attempts to go beyond contrastive learning by providing a new perspective on pre-training 3D object detectors.
We rethink the role of pre-training and how it could facilitate downstream fine-tuning with labeled data.
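The colorization-with-hints idea described in the abstract can be illustrated with a minimal numpy sketch. All data, the hint ratio, and the single linear "backbone" below are hypothetical stand-ins, not the paper's actual architecture: a subset of points receives its ground-truth color as an input hint, and the pre-training loss is the colorization error on the remaining points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a LiDAR scan: N points with xyz coordinates and
# ground-truth RGB colors (hypothetical data, not from the paper).
N = 1024
xyz = rng.uniform(-50, 50, size=(N, 3))
rgb = rng.uniform(0, 1, size=(N, 3))

# "Ground-truth colors as hints": reveal the true color for a random
# subset of points; the model must colorize the remaining points.
hint_ratio = 0.3
hint_mask = rng.random(N) < hint_ratio
hints = np.where(hint_mask[:, None], rgb, 0.0)

# Stand-in "backbone": one random linear layer mapping [xyz, hints]
# to predicted colors. A real detector backbone would go here.
W = rng.normal(scale=0.1, size=(6, 3))
features = np.concatenate([xyz, hints], axis=1)
pred = features @ W

# Pre-training objective: colorization error on the points without hints.
loss = np.mean((pred[~hint_mask] - rgb[~hint_mask]) ** 2)
```

Providing hints for some points supplies the color "context" the abstract mentions, so the model is not penalized for colors it could not possibly infer from geometry alone.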
vFedSec: Efficient Secure Aggregation for Vertical Federated Learning via Secure Layer
Qiu, Xinchi, Pan, Heng, Zhao, Wanru, Ma, Chenyang, Gusmao, Pedro P. B., Lane, Nicholas D.
Most work in privacy-preserving federated learning (FL) has been focusing on horizontally partitioned datasets where clients share the same sets of features and can train complete models independently. However, in many interesting problems, individual data points are scattered across different clients/organizations in a vertical setting. Solutions for this type of FL require the exchange of intermediate outputs and gradients between participants, posing a potential risk of privacy leakage when privacy and security concerns are not considered. In this work, we present vFedSec - a novel design with an innovative Secure Layer for training vertical FL securely and efficiently using state-of-the-art security modules in secure aggregation. We theoretically demonstrate that our method does not impact training performance while protecting private data effectively. Empirical results from extensive experiments also show that our design achieves this protection with negligible computation and communication overhead. Moreover, our method obtains a 9.1e2 ~ 3.8e4 speedup compared to the widely adopted homomorphic encryption (HE) method.
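The secure-aggregation building block referenced in this abstract can be sketched with pairwise additive masking, a standard construction (this is a generic illustration, not the vFedSec protocol itself; the seed derivation is hypothetical). Each pair of clients shares a mask; one adds it, the other subtracts it, so the server sees only masked updates while their sum remains exact.

```python
import numpy as np

n_clients, dim = 3, 4
rng = np.random.default_rng(42)
secrets = [rng.integers(0, 10, size=dim) for _ in range(n_clients)]

def pair_mask(i, j, dim):
    # Deterministic mask from the pair's shared seed (hypothetical
    # derivation; real protocols use a key agreement such as DH).
    seed = 1000 * min(i, j) + max(i, j)
    return np.random.default_rng(seed).integers(0, 1 << 16, size=dim)

masked = []
for i in range(n_clients):
    m = secrets[i].copy()
    for j in range(n_clients):
        if j == i:
            continue
        sign = 1 if i < j else -1  # lower index adds, higher subtracts
        m = m + sign * pair_mask(i, j, dim)
    masked.append(m)

# The server only sees masked updates, yet their sum is the true sum:
aggregate = np.sum(masked, axis=0)
true_sum = np.sum(secrets, axis=0)
```

Because every pairwise mask appears once with each sign, all masks cancel in the aggregate, which is what lets such schemes avoid the heavy cost of homomorphic encryption.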
Efficient Vertical Federated Learning with Secure Aggregation
Qiu, Xinchi, Pan, Heng, Zhao, Wanru, Ma, Chenyang, de Gusmão, Pedro Porto Buarque, Lane, Nicholas D.
The majority of work in privacy-preserving federated learning (FL) has been focusing on horizontally partitioned datasets where clients share the same sets of features and can train complete models independently. However, in many interesting problems, such as financial fraud detection and disease detection, individual data points are scattered across different clients/organizations in vertical federated learning. Solutions for this type of FL require the exchange of gradients between participants and rarely consider privacy and security concerns, posing a potential risk of privacy leakage. In this work, we present a novel design for training vertical FL securely and efficiently using state-of-the-art security modules for secure aggregation. We demonstrate empirically that our method does not impact training performance while obtaining a 9.1e2 ~ 3.8e4 speedup compared to homomorphic encryption (HE).
The Optimization of the Constant Flow Parallel Micropump Using RBF Neural Network
Ma, Chenyang, Xu, Boyuan, Liu, Hesheng
The objective of this work is to optimize the performance of a constant flow parallel mechanical displacement micropump, which has parallel pump chambers and incorporates passive check valves. The critical task is to minimize the pressure pulse caused by regurgitation, which negatively impacts the constant flow rate, during the reciprocating motion when the left and right pumps interchange their roles of aspiration and transfusion. Previous works attempt to solve this issue via the mechanical design of passive check valves. In this work, the novel concept of overlap time is proposed, and the issue is solved from the perspective of control theory by implementing an RBF neural network trained by both unsupervised and supervised learning. The experimental results indicate that the pressure pulse is reduced to the range of 0.15 - 0.25 MPa, a significant improvement relative to the maximum pump working pressure of 40 MPa.
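The two-stage RBF training mentioned in this abstract (unsupervised plus supervised) can be sketched on a toy regression task. The data and hyperparameters below are hypothetical, standing in for the pump's control mapping: k-means places the RBF centers (unsupervised), then least squares solves the linear output weights (supervised).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D target function standing in for the control mapping
# (hypothetical data, not the paper's pump measurements).
X = rng.uniform(0, 1, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0])

# Unsupervised stage: place RBF centers with a few k-means iterations.
k = 10
centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):
    d = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    for c in range(k):
        if np.any(labels == c):
            centers[c] = X[labels == c].mean(axis=0)

# Supervised stage: solve the linear output weights by least squares
# on the Gaussian RBF activations.
width = 0.1
Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2
             / (2 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
mse = np.mean((pred - y) ** 2)
```

Splitting the training this way keeps the supervised step a cheap linear solve, which is one reason RBF networks suit real-time control loops.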
Gradient-less Federated Gradient Boosting Trees with Learnable Learning Rates
Ma, Chenyang, Qiu, Xinchi, Beutel, Daniel J., Lane, Nicholas D.
The privacy-sensitive nature of decentralized datasets and the robustness of eXtreme Gradient Boosting (XGBoost) on tabular data raise the need to train XGBoost in the context of federated learning (FL). Existing works on federated XGBoost in the horizontal setting rely on the sharing of gradients, which induces per-node communication and raises serious privacy concerns. To alleviate these problems, we develop an innovative framework for horizontal federated XGBoost which does not depend on the sharing of gradients and simultaneously boosts privacy and communication efficiency by making the learning rates of the aggregated tree ensembles learnable. We conduct extensive evaluations on various classification and regression datasets, showing our approach achieves performance comparable to the state-of-the-art method and effectively improves communication efficiency by lowering both communication rounds and communication overhead by factors ranging from 25x to 700x.
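The gradient-free aggregation with learnable learning rates described above can be illustrated with a simplified numpy sketch. The regression stumps, client data, and server-side calibration set are all hypothetical stand-ins (the paper uses full XGBoost ensembles): each client fits a tree locally, only tree predictions leave the client, and the server learns one rate per aggregated tree instead of using a fixed shrinkage.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    # One-feature regression stump as a stand-in for a local boosted tree.
    best = None
    for t in np.quantile(X[:, 0], [0.25, 0.5, 0.75]):
        left, right = y[X[:, 0] <= t].mean(), y[X[:, 0] > t].mean()
        err = np.mean((np.where(X[:, 0] <= t, left, right) - y) ** 2)
        if best is None or err < best[0]:
            best = (err, t, left, right)
    _, t, left, right = best
    return lambda Z: np.where(Z[:, 0] <= t, left, right)

# Each client trains its tree locally -- no gradients leave the client.
data = []
for _ in range(3):
    X = rng.uniform(0, 1, (100, 1))
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=100)
    data.append((X, y))
trees = [fit_stump(X, y) for X, y in data]

# Server side: learn one rate per aggregated tree by least squares on
# a small calibration set (a simplification of the paper's learnable
# learning rates).
Xc = rng.uniform(0, 1, (200, 1))
yc = np.sin(3 * Xc[:, 0])
P = np.stack([t(Xc) for t in trees], axis=1)
rates, *_ = np.linalg.lstsq(P, yc, rcond=None)
pred = P @ rates
mse = np.mean((pred - yc) ** 2)
```

Since least squares optimizes over all rate vectors, the learned rates can never do worse on the calibration set than uniformly averaging the clients' trees, which hints at why learnable rates help when aggregating heterogeneous local ensembles.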