Chen, Fei
BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration
Liu, Junjia, Sim, Hengyi, Li, Chenzui, Chen, Fei
Human bimanual manipulation can perform more complex tasks than a simple combination of two single arms, which is credited to the spatio-temporal coordination between the arms. However, describing bimanual coordination is still an open topic in robotics, which makes it difficult to formulate an explainable coordination paradigm, let alone apply one to robots. In this work, we divide the main bimanual tasks in human daily activities into two types: leader-follower and synergistic coordination. We then propose a relative parameterization method to learn these types of coordination from human demonstrations. The method represents coordination as Gaussian mixture models learned from bimanual demonstrations, describing probabilistically how the importance of coordination changes throughout the motion. The learned coordination representation can be generalized to new task parameters while preserving spatio-temporal coordination. We demonstrate the method on synthetic motions and human demonstration data, and deploy it on a humanoid robot to perform generalized bimanual coordination motions. We believe this easy-to-use bimanual learning-from-demonstration (LfD) method has the potential to serve as a data augmentation plugin for training large robot manipulation models. The corresponding code is open-sourced at https://github.com/Skylark0924/Rofunc.
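To make the relative-parameterization idea concrete, here is a minimal sketch, assuming coordination is encoded as a GMM over time and the inter-arm displacement and reproduced with Gaussian mixture regression (GMR); this is an illustration, not the BiRP implementation (the Rofunc repository above contains the real one).

```python
# Minimal sketch: encode the relative motion between two arms as a GMM over
# (time, right - left) and reproduce it with Gaussian mixture regression.
# Illustrative only -- see https://github.com/Skylark0924/Rofunc for the
# authors' actual implementation.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_relative_gmm(t, left, right, n_components=5):
    """Fit a joint GMM over [t, right - left] from stacked demonstrations."""
    data = np.hstack([t[:, None], right - left])
    return GaussianMixture(n_components, covariance_type="full").fit(data)

def gmr(gmm, t_query):
    """Condition the joint GMM on time: E[relative displacement | t]."""
    out = np.zeros((len(t_query), gmm.means_.shape[1] - 1))
    for i, t in enumerate(t_query):
        # responsibility of each Gaussian component at this time step
        h = np.array([w * norm.pdf(t, m[0], np.sqrt(c[0, 0]))
                      for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)])
        h /= h.sum()
        for k, (m, c) in enumerate(zip(gmm.means_, gmm.covariances_)):
            # conditional mean of the output dimensions given t
            out[i] += h[k] * (m[1:] + c[1:, 0] / c[0, 0] * (t - m[0]))
    return out

# Toy demonstration: the right arm tracks the left arm with a fixed offset.
t = np.linspace(0, 1, 200)
left = np.stack([t, np.sin(np.pi * t), np.zeros_like(t)], axis=1)
right = left + np.array([0.2, 0.0, 0.1]) + 0.01 * np.random.randn(200, 3)
gmm = fit_relative_gmm(t, left, right)
new_right = left + gmr(gmm, t)  # a new left trajectory would generalize here
```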
Learning Robotic Ultrasound Scanning Skills via Human Demonstrations and Guided Explorations
Deng, Xutian, Chen, Yiting, Chen, Fei, Li, Miao
The goal of our task is to autonomously acquire ultrasound images with the region of interest centered; approaches to autonomous ultrasound imaging have been proposed in [16], [17], [18], [19], [20], [21]. To the best of our knowledge, this is the first unified framework that learns both the representation of robotic ultrasound scanning skills and the corresponding manipulation skills from human demonstrations, covering the modalities of ultrasound image, pose/position of the probe, and contact force.

A. Problem Formulation of Ultrasound Scanning Tasks

For each ultrasound scanning task, the key point is to [...]: given the current state (ultrasound images, position of probe, pose of probe, and contact force), the policy should yield a befitting action. As mentioned above, four different sensory modalities are closely related to the robotic ultrasound scanning skills. As shown in Figure 3, with the target of performing the autonomous ultrasound scanning process, the ultrasound scanning skill is represented as a policy function π(s) → a, which denotes the mapping from the current state s to the predicted action a. The modeling and learning of the policy π for ultrasound scanning skills is described in the following section.

B. Ultrasound Skills Modeling and Learning

We propose to use a deep neural network, as shown in Figure [...].
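As a rough illustration of the policy π(s) → a described above, the sketch below fuses the four sensory modalities into one action prediction; the layer sizes, fusion scheme, and 6-D action are assumptions for clarity, not the paper's exact network.

```python
# Illustrative multimodal policy pi(s) -> a for ultrasound scanning.
# Architecture details here are assumptions, not the paper's exact network.
import torch
import torch.nn as nn

class UltrasoundPolicy(nn.Module):
    def __init__(self, action_dim=6):
        super().__init__()
        # encode the current ultrasound frame
        self.img_enc = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
        )
        # encode probe position (3), pose quaternion (4), contact wrench (6)
        self.state_enc = nn.Sequential(nn.Linear(13, 64), nn.ReLU())
        # fused head predicts the next probe action, e.g. a pose increment
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                  nn.Linear(64, action_dim))

    def forward(self, image, probe_state):
        z = torch.cat([self.img_enc(image), self.state_enc(probe_state)], dim=-1)
        return self.head(z)

policy = UltrasoundPolicy()
action = policy(torch.randn(1, 1, 128, 128), torch.randn(1, 13))  # s -> a
```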
LV-BERT: Exploiting Layer Variety for BERT
Yu, Weihao, Jiang, Zihang, Chen, Fei, Hou, Qibin, Feng, Jiashi
Modern pre-trained language models are mostly built upon backbones stacking self-attention and feed-forward layers in an interleaved order. In this paper, going beyond this stereotyped layer pattern, we aim to improve pre-trained models by exploiting layer variety from two aspects: the layer type set and the layer order. Specifically, besides the original self-attention and feed-forward layers, we introduce convolution into the layer type set, which is experimentally found beneficial to pre-trained models. Furthermore, beyond the original interleaved order, we explore more layer orders to discover more powerful architectures. However, the introduced layer variety leads to an architecture space of billions of candidates, while training even a single candidate model from scratch already requires a huge computation cost, making it unaffordable to search such a space by directly training large numbers of candidate models. To solve this problem, we first pre-train a supernet from which the weights of all candidate models can be inherited, and then adopt an evolutionary algorithm guided by pre-training accuracy to find the optimal architecture. Extensive experiments show that the LV-BERT models obtained by our method outperform BERT and its variants on various downstream tasks. For example, LV-BERT-small achieves 79.8 on the GLUE test set, 1.8 higher than the strong baseline ELECTRA-small.
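The search loop described above can be sketched as follows; `eval_with_inherited_weights` is a hypothetical scorer standing in for evaluating a candidate with supernet-inherited weights on pre-training accuracy, and the operators are generic rather than LV-BERT's exact ones.

```python
# Schematic evolutionary search over layer types and orders, assuming each
# candidate inherits weights from a pre-trained supernet. The scorer and
# operators are placeholders, not the exact LV-BERT procedure.
import random

LAYER_TYPES = ["att", "ffn", "conv"]  # self-attention, feed-forward, convolution

def random_arch(depth=12):
    return [random.choice(LAYER_TYPES) for _ in range(depth)]

def mutate(arch, p=0.2):
    return [random.choice(LAYER_TYPES) if random.random() < p else t for t in arch]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(eval_fn, pop_size=50, generations=20, depth=12):
    pop = [random_arch(depth) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=eval_fn, reverse=True)  # rank by pre-training accuracy
        parents = pop[: pop_size // 4]       # keep the fittest quarter
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=eval_fn)

# best = evolve(eval_with_inherited_weights)  # hypothetical scorer
```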
No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data
Luo, Mi, Chen, Fei, Hu, Dapeng, Zhang, Yifan, Liang, Jian, Feng, Jiashi
A central challenge in training classification models in real-world federated systems is learning with non-IID data. To cope with this, most existing works enforce regularization in local optimization or improve the model aggregation scheme at the server. Other works share public datasets or synthesized samples to supplement the training of under-represented classes, or introduce a certain level of personalization. Though effective, these approaches lack a deep understanding of how data heterogeneity affects each layer of a deep classification model. In this paper, we bridge this gap by performing an experimental analysis of the representations learned by different layers. Our observations are surprising: (1) there exists a greater bias in the classifier than in other layers, and (2) the classification performance can be significantly improved by post-calibrating the classifier after federated training. Motivated by these findings, we propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model. Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10. We hope that our simple yet effective method can shed some light on future research in federated learning with non-IID data.
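A minimal single-machine sketch of the calibration step: fit a per-class Gaussian over classifier-input features, sample virtual representations, and re-fit only the classifier head. In the actual federated setting, the Gaussian statistics would be aggregated from per-client means and covariances rather than computed centrally as here.

```python
# Single-machine sketch of CCVR-style classifier calibration. In federated
# use, mu/cov per class would be aggregated from clients, not computed here.
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_classifier(features, labels, n_virtual=1000, seed=0):
    """features: (N, d) penultimate-layer activations; labels: (N,) ints."""
    rng = np.random.default_rng(seed)
    virtual_x, virtual_y = [], []
    for c in np.unique(labels):
        fc = features[labels == c]
        mu, cov = fc.mean(axis=0), np.cov(fc, rowvar=False)
        # sample "virtual representations" from the class-wise Gaussian
        virtual_x.append(rng.multivariate_normal(mu, cov, size=n_virtual))
        virtual_y.append(np.full(n_virtual, c))
    # re-fit only the (linear) classifier head on the balanced virtual set
    return LogisticRegression(max_iter=1000).fit(np.vstack(virtual_x),
                                                 np.concatenate(virtual_y))
```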
Whole-Body Control on Non-holonomic Mobile Manipulation for Grapevine Winter Pruning Automation
Teng, Tao, Fernandes, Miguel, Gatti, Matteo, Poni, Stefano, Semini, Claudio, Caldwell, Darwin, Chen, Fei
Mobile manipulators, which combine mobility and manipulability, are increasingly being used in various unstructured application scenarios in the field, e.g. vineyards. The coordinated motion of the mobile base and the manipulator is therefore an essential feature of the overall performance. In this paper, we explore a whole-body motion controller for a robot composed of a 2-DoF non-holonomic wheeled mobile base and a 7-DoF manipulator (a non-holonomic wheeled mobile manipulator, NWMM). This robotic platform is designed to efficiently undertake complex grapevine pruning tasks. In the control framework, task-priority coordinated motion of the NWMM is guaranteed: lower-priority tasks are projected into the null space of the top-priority tasks, so that higher-priority tasks are completed without interference from lower-priority ones. The proposed controller was evaluated in a grapevine spur pruning experiment scenario.
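The null-space projection idea can be sketched in a few lines; the formulation below is the standard strict-priority resolution for two velocity-level tasks, with random Jacobians standing in for the NWMM's actual base-plus-arm kinematics.

```python
# Sketch of strict task-priority control via null-space projection: the
# secondary task acts only in the null space of the primary task Jacobian.
# Random Jacobians stand in for the NWMM's actual kinematics.
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Joint velocities meeting task 1 exactly, task 2 as well as possible."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1  # null-space projector of task 1
    dq1 = J1_pinv @ dx1                      # top-priority contribution
    # solve the secondary task restricted to the null space of the primary
    dq2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)
    return dq1 + N1 @ dq2

# Toy check on a 9-DoF system (2-DoF base + 7-DoF arm) with two 3-DoF tasks:
J1, J2 = np.random.randn(3, 9), np.random.randn(3, 9)
dq = prioritized_velocities(J1, np.ones(3), J2, np.zeros(3))
assert np.allclose(J1 @ dq, np.ones(3))  # the primary task is undisturbed
```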
Two stages for visual object tracking
Chen, Fei, Wang, Xiaodong
Siamese-based trackers have achieved promising performance on visual object tracking tasks. Most existing Siamese-based trackers contain two separate branches, a classification branch and a bounding box regression branch. In addition, image segmentation provides an alternative way to obtain a more accurate target region. In this paper, we propose a novel two-stage tracker: detection followed by segmentation. The detection stage locates the target with Siamese networks; the segmentation module then refines the coarse state estimate from the first stage into a more accurate tracking result. We conduct experiments on four benchmarks. Our approach achieves state-of-the-art results, with EAO scores of 52.6$\%$ on VOT2016, 51.3$\%$ on VOT2018, and 39.0$\%$ on VOT2019, respectively.
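A schematic of the two-stage loop, with `siamese_locate` and `segment_region` as hypothetical placeholders for the detection and segmentation modules.

```python
# Schematic two-stage tracking loop: a Siamese detector proposes a coarse
# box, then a segmentation module refines it. `siamese_locate` and
# `segment_region` are placeholders, not the paper's actual modules.
import numpy as np

def mask_to_box(mask):
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

def track(frames, init_box, siamese_locate, segment_region):
    box, results = init_box, []
    for frame in frames:
        coarse = siamese_locate(frame, box)   # stage 1: coarse localization
        mask = segment_region(frame, coarse)  # stage 2: pixel-level refinement
        box = mask_to_box(mask)               # tighter box derived from the mask
        results.append((box, mask))
    return results
```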
Risk Variance Penalization: From Distributional Robustness to Causality
Xie, Chuanlong, Chen, Fei, Liu, Yue, Li, Zhenguo
Learning under multiple environments often requires the ability of out-of-distribution generalization to guarantee worst-environment performance. Some novel algorithms, e.g. Invariant Risk Minimization and Risk Extrapolation, build stable models by extracting invariant (causal) features. However, it remains unclear how these methods learn to remove the environmental features. In this paper, we focus on Risk Extrapolation (REx) and attempt to fill this gap. We first propose a framework, Quasi-Distributional Robustness, that unifies Empirical Risk Minimization (ERM), Robust Optimization (RO), and Risk Extrapolation. Then, under this framework, we show that, compared to ERM and RO, REx has a much larger robust region. Furthermore, based on our analysis, we propose a novel regularization method, Risk Variance Penalization (RVP), which is derived from REx. The proposed method is easy to implement, has a proper degree of penalization, and enjoys an interpretable tuning parameter. Finally, our experiments show that under certain conditions, the regularization strategy that encourages the equality of training risks is able to discover relationships that do not exist in the training data. This provides important evidence that RVP is useful for discovering causal models.
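A sketch of the penalized objective; the standard-deviation form of the risk-spread penalty and the role of the tuning parameter follow our reading of the abstract, so treat the exact shape as an assumption.

```python
# Sketch of a risk-variance penalty across training environments: minimize
# the mean per-environment risk plus a penalty on the spread of those risks
# (requires at least two environments). The exact penalty form is assumed.
import torch

def rvp_loss(model, env_batches, loss_fn, lam=1.0):
    # per-environment empirical risks
    risks = torch.stack([loss_fn(model(x), y) for x, y in env_batches])
    # penalizing the spread encourages equality of the training risks
    return risks.mean() + lam * risks.var().sqrt()
```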
Deep Job Understanding at LinkedIn
Li, Shan, Shi, Baoxu, Yang, Jaewon, Yan, Ji, Wang, Shuai, Chen, Fei, He, Qi
As the world's largest professional network, LinkedIn wants to create economic opportunity for everyone in the global workforce. One of its most critical missions is matching jobs with professionals. Improving job targeting accuracy and hiring efficiency aligns with LinkedIn's Member First motto. To achieve those goals, we need to understand unstructured job postings with noisy information. We applied deep transfer learning to create domain-specific job understanding models, after which jobs are represented by professional entities, including titles, skills, companies, and assessment questions. To continuously improve LinkedIn's job understanding ability, we designed an expert feedback loop in which job understanding models are integrated into LinkedIn's products to collect job posters' feedback. In this demonstration, we present LinkedIn's job posting flow and show how the integrated deep job understanding improves job posters' satisfaction and provides significant metric lifts in LinkedIn's job recommendation system.
Automated scalable segmentation of neurons from multispectral images
Sümbül, Uygar, Roossien, Douglas, Cai, Dawen, Chen, Fei, Barry, Nicholas, Cunningham, John P., Boyden, Edward, Paninski, Liam
Reconstruction of neuroanatomy is a fundamental problem in neuroscience. Stochastic expression of colors in individual cells is a promising tool, although its use in the nervous system has been limited due to various sources of variability in expression. Moreover, the intermingled anatomy of neuronal trees is challenging for existing segmentation algorithms. Here, we propose a method to automate the segmentation of neurons in such (potentially pseudo-colored) images. The method uses spatio-color relations between the voxels, generates supervoxels to reduce the problem size by four orders of magnitude before the final segmentation, and is parallelizable over the supervoxels. To quantify performance and gain insight, we generate simulated images, where the noise level and characteristics, the density of expression, and the number of fluorophore types are variable. We also present segmentations of real Brainbow images of the mouse hippocampus, which reveal many of the dendritic segments.
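An illustrative sketch of the supervoxel step: cluster voxels on joint spatio-color features to shrink the problem size before the final segmentation. The feature weighting and clustering choice here are simplifications of the paper's method.

```python
# Illustrative supervoxel generation: cluster voxels on joint spatial +
# color features. The weighting and clustering are simplifications of the
# paper's method, which then segments over the resulting supervoxels.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def supervoxels(volume, n_super=1000, spatial_weight=0.1):
    """volume: (X, Y, Z, C) multispectral stack -> (X, Y, Z) supervoxel ids."""
    X, Y, Z, C = volume.shape
    coords = np.stack(np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                                  indexing="ij"), axis=-1).reshape(-1, 3)
    feats = np.hstack([spatial_weight * coords, volume.reshape(-1, C)])
    labels = MiniBatchKMeans(n_clusters=n_super, n_init=3).fit_predict(feats)
    return labels.reshape(X, Y, Z)
```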