
Collaborating Authors

 Yang, Xiaopeng


Generalization Ability Analysis of Through-the-Wall Radar Human Activity Recognition

arXiv.org Artificial Intelligence

Through-the-wall radar (TWR) human activity recognition (HAR) is a technology that uses low-frequency ultra-wideband (UWB) signals to detect and analyze indoor human motion. However, existing end-to-end recognition models depend heavily on the distribution of the TWR training data, which makes it difficult for them to generalize across different indoor testers. This paper therefore analyzes the generalization ability of TWR HAR. First, an end-to-end linear neural network method for TWR HAR and its generalization error bound are discussed. Second, a micro-Doppler corner representation method is presented, together with the change in the generalization error before and after dimension reduction. The validity of the theoretical generalization errors is verified through numerical simulations and experiments. The results demonstrate that feature dimension reduction is effective in allowing recognition models to generalize across different indoor testers.
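The abstract only sketches the bound, but the qualitative argument (a lower-dimensional, norm-bounded feature space tightens a norm-based generalization bound) can be illustrated with a generic Rademacher-style bound for linear predictors. The Python sketch below is an assumption-laden illustration, not the paper's derivation; the feature dimensions, weight-norm bound, and random placeholder data are all hypothetical.

```python
import numpy as np

def linear_generalization_gap_bound(X, weight_norm_bound=1.0, delta=0.05):
    """Generic norm-based generalization-gap bound for a linear predictor
    trained on n samples:
        2 * B * max_i ||x_i|| / sqrt(n) + sqrt(ln(1/delta) / (2n)),
    a standard Rademacher-complexity bound for norm-bounded linear classes
    with a 1-Lipschitz loss in [0, 1]; the paper's bound may take a
    different form."""
    n = X.shape[0]
    x_max = np.max(np.linalg.norm(X, axis=1))
    rademacher = weight_norm_bound * x_max / np.sqrt(n)
    return 2.0 * rademacher + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

# Hypothetical feature sets: flattened micro-Doppler spectrograms vs.
# low-dimensional corner features after dimension reduction.
rng = np.random.default_rng(0)
X_spectrogram = rng.normal(size=(200, 64 * 128))
X_corners = rng.normal(size=(200, 32))

print(linear_generalization_gap_bound(X_spectrogram))  # looser bound
print(linear_generalization_gap_bound(X_corners))      # tighter bound
```

With everything else fixed, shrinking the maximum feature norm (as the dimension reduction does here for the random placeholder data) directly tightens the bound, which mirrors the paper's conclusion about corner-based dimension reduction.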


Generalizable Indoor Human Activity Recognition Method Based on Micro-Doppler Corner Point Cloud and Dynamic Graph Learning

arXiv.org Artificial Intelligence

Through-the-wall radar (TWR) human activity recognition can be achieved by fusing micro-Doppler signature extraction with intelligent decision-making algorithms. However, because prior knowledge of the testers in practical indoor scenarios is limited, models trained on one tester often fail to infer well on other testers, which causes poor generalization ability. To solve this problem, this paper proposes a generalizable indoor human activity recognition method based on a micro-Doppler corner point cloud and dynamic graph learning. In the proposed method, DoG-μD-CornerDet is used for micro-Doppler corner extraction on two types of radar profiles. A micro-Doppler corner filtering method based on polynomial fitting smoothing is then proposed to maximize the feature distance under the constraints of a kinematic model. The corners extracted from the two types of radar profiles are concatenated into a three-dimensional point cloud. Finally, the paper proposes a dynamic graph neural network (DGNN)-based recognition method for mapping the data to activity labels. Visualization, comparison, and ablation experiments verify the effectiveness of the proposed method. The results show that the proposed method generalizes strongly across radar data collected from different testers.
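The abstract does not specify the corner detector's output format or the exact filtering criterion, so the sketch below is a simplified reading of the polynomial-fitting smoothing step and the point-cloud concatenation. The (time, Doppler) corner format, the fit degree, and the residual threshold are assumptions for illustration only.

```python
import numpy as np

def filter_corners_polyfit(corners, degree=3, max_residual=2.0):
    """Keep corners whose Doppler value stays close to a smooth polynomial
    trend over time -- a simplified stand-in for the paper's polynomial-fitting
    smoothing filter (degree and threshold here are illustrative guesses).
    corners: array of shape (N, 2) with (time, micro-Doppler) pairs."""
    t, d = corners[:, 0], corners[:, 1]
    coeffs = np.polyfit(t, d, deg=degree)
    residuals = np.abs(np.polyval(coeffs, t) - d)
    return corners[residuals <= max_residual]

def build_point_cloud(corners_range_time, corners_doppler_time):
    """Concatenate filtered corners from the range-time and Doppler-time
    profiles into one 3-D point cloud, using the third coordinate to tag
    which profile each point came from."""
    rt = np.column_stack([corners_range_time, np.zeros(len(corners_range_time))])
    dt = np.column_stack([corners_doppler_time, np.ones(len(corners_doppler_time))])
    return np.vstack([rt, dt])
```

The resulting (N, 3) array is the kind of point set a dynamic graph neural network could consume; the DGNN itself is not sketched here.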


TWR-MCAE: A Data Augmentation Method for Through-the-Wall Radar Human Motion Recognition

arXiv.org Artificial Intelligence

To address the reduced accuracy and prolonged convergence time of through-the-wall radar (TWR) human motion recognition caused by wall attenuation, multipath effects, and system interference, we propose a multilink auto-encoding neural network (TWR-MCAE) data augmentation method. Specifically, the TWR-MCAE algorithm is jointly constructed from a singular value decomposition (SVD)-based data preprocessing module, an improved coordinate attention module, a compressed-sensing learnable iterative shrinkage-thresholding algorithm (LISTA) reconstruction module, and an adaptive weight module. The data preprocessing module separates the wall clutter, human motion feature, and noise subspaces. The improved coordinate attention module suppresses clutter and noise. The LISTA module enhances human motion features. The adaptive weight module learns the weights and fuses the three subspaces. TWR-MCAE thus suppresses the low-rank characteristics of wall clutter while enhancing the sparsity characteristics of human motion. It can be placed before the classification step to improve feature extraction capability without adding other prior knowledge or recollecting more data. Experiments show that the proposed algorithm achieves a higher peak signal-to-noise ratio (PSNR), which increases recognition accuracy and speeds up the training of the back-end classifiers.
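The SVD-based preprocessing step lends itself to a compact illustration: decompose the slow-time × fast-time radar matrix, treat the dominant low-rank component as wall clutter, the next band of singular components as human motion, and the residual as noise. The fixed rank thresholds below are illustrative guesses; the paper's module may select the subspaces differently (e.g., adaptively or with learned weights).

```python
import numpy as np

def svd_subspace_split(data, clutter_rank=1, motion_rank=10):
    """Split a slow-time x fast-time TWR data matrix into wall-clutter,
    human-motion, and noise components via SVD. The hard rank cuts are
    illustrative assumptions, not the paper's exact rule."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)

    def reconstruct(lo, hi):
        # Rebuild the component spanned by singular triplets lo..hi-1.
        return (U[:, lo:hi] * s[lo:hi]) @ Vt[lo:hi, :]

    clutter = reconstruct(0, clutter_rank)                          # dominant low-rank wall return
    motion = reconstruct(clutter_rank, clutter_rank + motion_rank)  # sparser human-motion component
    noise = data - clutter - motion                                 # residual
    return clutter, motion, noise
```

In the full method these three components are not simply discarded or kept; they are re-weighted and fused by the downstream attention, LISTA, and adaptive weight modules.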


Generating Thematic Chinese Poetry using Conditional Variational Autoencoders with Hybrid Decoders

arXiv.org Artificial Intelligence

Computer poetry generation is our first step towards computer writing. Writing must have a theme, yet current approaches that use sequence-to-sequence models with attention often produce non-thematic poems. We present a novel conditional variational autoencoder with a hybrid decoder that adds deconvolutional neural networks to the standard recurrent neural networks, so that topic information is fully learned via latent variables. This approach significantly improves the relevance of the generated poems by representing each line of the poem not only in a context-sensitive manner but also in a holistic way that is highly related to the given keyword and the learned topic. A proposed augmented word2vec model further improves rhythm and symmetry. Tests show that the poems generated by our approach largely satisfy the regulated rules and maintain consistent themes, and 73.42% of them receive an Overall score of no less than 3 (the highest score is 5).
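As a rough illustration of the hybrid-decoder idea (a recurrent path for context plus a deconvolutional path that expands the latent variable into a holistic plan for the whole line), here is a minimal PyTorch sketch. The layer sizes, the fusion-by-concatenation scheme, and the omission of explicit keyword conditioning are simplifying assumptions, not the paper's exact architecture; only the logits and KL term are returned, and the reconstruction loss would be added during training.

```python
import torch
import torch.nn as nn

class HybridCVAE(nn.Module):
    """Minimal sketch: a VAE over poem lines whose decoder mixes a recurrent
    path (context-sensitive) with a deconvolutional path (holistic line plan).
    Keyword conditioning is omitted for brevity."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256,
                 latent_dim=64, line_len=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim, latent_dim)
        self.to_logvar = nn.Linear(hid_dim, latent_dim)
        # Deconvolutional path: expand z into a coarse plan for the whole line.
        self.deconv = nn.ConvTranspose1d(latent_dim, hid_dim, kernel_size=line_len)
        self.decoder = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (B, T) with T <= line_len, i.e. one poem line per sample.
        emb = self.embed(tokens)                       # (B, T, emb_dim)
        _, h = self.encoder(emb)                       # h: (1, B, hid_dim)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        plan = self.deconv(z.unsqueeze(-1)).transpose(1, 2)   # (B, line_len, hid_dim)
        dec_in = torch.cat([emb, plan[:, :emb.size(1)]], dim=-1)
        h_dec, _ = self.decoder(dec_in)
        logits = self.out(h_dec)                       # (B, T, vocab_size)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return logits, kl
```

The design intent this mirrors is that the deconvolutional plan gives every position in the line access to the same latent "topic" summary, while the GRU supplies left-to-right context.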