Lai, Xin
CausalVE: Face Video Privacy Encryption via Causal Video Prediction
Huang, Yubo, Feng, Wenhao, Lai, Xin, Wang, Zixi, Xu, Jingzehua, Zhang, Shuai, He, Hongjie, Chen, Fan
Advanced facial recognition technologies and recommender systems, combined with inadequate privacy technologies and policies for facial interactions, increase concerns about bio-privacy violations. With the proliferation of video and live-streaming websites, public face video distribution and interaction pose greater privacy risks. Existing techniques typically address the risk of sensitive biometric information leakage through various privacy-enhancement methods, but they either corrupt the information that the interaction data is meant to convey or leave certain biometric features intact, allowing an attacker to infer sensitive biometric information from them. To address these shortcomings, we propose a neural network framework, CausalVE. We obtain cover images by adopting a diffusion model to perform face swapping with face guidance, and we use the speech sequence features and spatiotemporal sequence features of the secret video for dynamic video inference and prediction to obtain a cover video with the same number of frames as the secret video. In addition, we hide the secret video inside the cover video using reversible neural networks, so that the published video can also carry the secret data. Extensive experiments show that CausalVE provides strong security for public video dissemination and outperforms state-of-the-art methods from qualitative, quantitative, and visual perspectives.

With the widespread adoption of smart devices and the Internet of Things (IoT), the security of biometric face privacy is becoming an increasingly unavoidable concern. The explosion of public face video distribution, exemplified by YouTube, TikTok, and Instagram, makes it difficult to protect face privacy during video interaction and distribution. Moreover, the autonomy of public face video distribution and interaction on video websites means that a disguised face video must convey the same visual information as the original video while hiding sensitive personal information. Current face privacy measures mainly focus on destroying or hiding facial attributes. In video sequences, face attributes are destroyed by replacing the region containing the person with blank information (Newton et al., 2005; Meden et al., 2018) or by blurring and pixelating the face attributes returned by a detector (Sarwar et al., 2018). These methods directly damage the biometric features in facial videos, destroying the usability of the data for interaction and, in some cases, leaving no useful information for interaction and propagation at all.
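The reversible hiding step can be illustrated with an invertible coupling block: because the transform is invertible by construction, secret frames mixed into the cover video during hiding can be recovered exactly on the receiving side. The sketch below is a minimal, self-contained PyTorch illustration of that idea under assumed shapes; the block structure and the names (`AdditiveCouplingHide`, `phi`, `rho`) are illustrative assumptions, not the CausalVE implementation.

```python
# Minimal sketch of reversible video hiding via an additive coupling block
# (illustrative assumption, not the authors' code): the forward pass mixes a
# secret frame into a cover frame, and the inverse pass recovers both exactly.
import torch
import torch.nn as nn

class AdditiveCouplingHide(nn.Module):
    """Invertibly couple a cover frame x_c and a secret frame x_s."""
    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        # Small CNNs predicting the residuals exchanged between the two branches.
        self.phi = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1))
        self.rho = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1))

    def forward(self, x_c, x_s):
        # Hiding: the "stego" frame y_c carries the secret as a learned residual.
        y_c = x_c + self.phi(x_s)
        y_s = x_s + self.rho(y_c)
        return y_c, y_s

    def inverse(self, y_c, y_s):
        # Revealing: the two additive steps are undone in reverse order.
        x_s = y_s - self.rho(y_c)
        x_c = y_c - self.phi(x_s)
        return x_c, x_s

if __name__ == "__main__":
    block = AdditiveCouplingHide()
    cover, secret = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    y_c, y_s = block(cover, secret)
    rec_c, rec_s = block.inverse(y_c, y_s)
    # Recovery is exact up to floating-point rounding.
    assert torch.allclose(rec_s, secret, atol=1e-4)
    assert torch.allclose(rec_c, cover, atol=1e-4)
```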
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
Chen, Yukang, Qian, Shengju, Tang, Haotian, Lai, Xin, Liu, Zhijian, Han, Song, Jia, Jiaya
We present LongLoRA, an efficient fine-tuning approach that extends the context sizes of pre-trained large language models (LLMs) with limited computational cost. Typically, training LLMs with long context sizes is computationally expensive, requiring extensive training hours and GPU resources. For example, training on a context length of 8192 requires roughly 16x the self-attention computation of training on 2048, since self-attention cost grows quadratically with sequence length. In this paper, we speed up the context extension of LLMs in two aspects. On the one hand, although dense global attention is needed during inference, fine-tuning the model can be done effectively and efficiently with sparse local attention. The proposed shifted sparse attention (S$^2$-Attn) effectively enables context extension, leading to non-trivial computation savings with performance similar to fine-tuning with vanilla attention. In particular, it can be implemented with only two lines of code in training and is optional at inference. On the other hand, we revisit the parameter-efficient fine-tuning regime for context expansion. Notably, we find that LoRA for context extension works well under the premise of trainable embedding and normalization layers. LongLoRA combines this improved LoRA with S$^2$-Attn. LongLoRA demonstrates strong empirical results on various tasks with Llama2 models from 7B/13B to 70B. LongLoRA extends Llama2 7B from 4k context to 100k, and Llama2 70B to 32k, on a single 8x A100 machine. LongLoRA extends models' context while retaining their original architectures and is compatible with most existing techniques, such as Flash-Attention2. In addition, we further conduct supervised fine-tuning with LongLoRA and our long instruction-following LongAlpaca dataset.
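As a rough illustration of the shifted sparse attention idea, the sketch below restricts attention to fixed-size token groups and shifts half of the attention heads by half a group, so information can still flow across group boundaries. It is a minimal PyTorch sketch under an assumed tensor layout, not the LongLoRA implementation; the causal mask and the wrap-around masking used in real training are omitted for brevity.

```python
# Sketch of shifted sparse attention (S^2-Attn-style group-local attention);
# tensor layout and helper name are assumptions, not the authors' code.
import torch

def shifted_sparse_attention(qkv: torch.Tensor, group_size: int) -> torch.Tensor:
    # qkv: (batch, seq_len, 3, num_heads, head_dim); seq_len divisible by group_size.
    B, N, _, H, D = qkv.shape
    qkv = qkv.clone()
    # Shift the second half of the heads by half a group so that their attention
    # groups straddle the boundaries seen by the unshifted heads.
    qkv[:, :, :, H // 2:] = qkv[:, :, :, H // 2:].roll(-group_size // 2, dims=1)
    # Fold each group into the batch dimension and run standard attention per group.
    qkv = qkv.reshape(B * N // group_size, group_size, 3, H, D)
    q, k, v = (t.transpose(1, 2) for t in qkv.unbind(dim=2))  # (B*G, H, group, D)
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
    # Undo the grouping and the shift.
    out = out.transpose(1, 2).reshape(B, N, H, D)
    out[:, :, H // 2:] = out[:, :, H // 2:].roll(group_size // 2, dims=1)
    return out

# Example: 2 sequences of length 1024, 8 heads of dim 64, groups of 256 tokens.
x = torch.randn(2, 1024, 3, 8, 64)
y = shifted_sparse_attention(x, group_size=256)
print(y.shape)  # torch.Size([2, 1024, 8, 64])
```

The training-time change reduces to the pair of `roll` calls wrapped around ordinary per-group attention, which is why the modification stays so small, while standard dense attention can still be used at inference.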
Spherical Transformer for LiDAR-based 3D Recognition
Lai, Xin, Chen, Yukang, Lu, Fanbin, Liu, Jianhui, Jia, Jiaya
LiDAR-based 3D point cloud recognition has benefited various applications. Without specifically accounting for the LiDAR point distribution, most current methods suffer from information disconnection and a limited receptive field, especially for sparse distant points. In this work, we study the varying-sparsity distribution of LiDAR points and present SphereFormer to directly aggregate information from dense close points to sparse distant ones. We design radial window self-attention, which partitions the space into multiple non-overlapping, narrow, and long windows. It overcomes the disconnection issue and enlarges the receptive field smoothly and dramatically, significantly boosting performance on sparse distant points. Moreover, to fit the narrow and long windows, we propose exponential splitting to yield fine-grained position encoding, and dynamic feature selection to increase model representation ability. Notably, our method ranks 1st on both the nuScenes and SemanticKITTI semantic segmentation benchmarks with 81.9% and 74.8% mIoU, respectively. We also achieve 3rd place on the nuScenes object detection benchmark with 72.8% NDS and 68.5% mAP. Code is available at https://github.com/dvlab-research/SphereFormer.git.
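To make the radial-window idea concrete, the sketch below bins points by azimuth and inclination so that each window is narrow in angle but spans the full sensing range, and it assigns an exponentially split radial index of the kind that could drive fine-grained position encoding. The bin counts and binning formulas are illustrative assumptions, not the released SphereFormer code (see the repository linked above for the actual implementation).

```python
# Sketch of radial window partitioning with exponential splitting along the
# range axis; the specific binning scheme here is an assumption for illustration.
import numpy as np

def radial_windows(xyz: np.ndarray,
                   num_theta: int = 64,
                   num_phi: int = 32,
                   num_r_idx: int = 16,
                   r_max: float = 80.0):
    """Return a radial-window id and an exponential radial index per point.

    xyz: (N, 3) LiDAR points in Cartesian coordinates.
    Points sharing a window id attend together; r_idx is a per-point
    fine-grained position index along the window's long (radial) axis.
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arctan2(y, x)                                      # azimuth in [-pi, pi]
    phi = np.arccos(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))  # inclination in [0, pi]

    # Uniform angular bins: each (theta, phi) cell is one narrow, long window
    # stretching from the sensor out to the maximum range.
    t_bin = np.clip(((theta + np.pi) / (2 * np.pi) * num_theta).astype(int), 0, num_theta - 1)
    p_bin = np.clip((phi / np.pi * num_phi).astype(int), 0, num_phi - 1)
    window_id = t_bin * num_phi + p_bin

    # Exponential splitting along range: fine intervals near the sensor where
    # points are dense, exponentially larger ones far away where they are sparse.
    r_norm = np.clip(r / r_max, 0.0, 1.0)
    r_idx = np.clip(np.log2(1.0 + r_norm * (2 ** num_r_idx - 1)).astype(int), 0, num_r_idx - 1)
    return window_id, r_idx

# Example: 10,000 random points around the sensor.
pts = (np.random.rand(10000, 3) - 0.5) * 160.0
wid, ridx = radial_windows(pts)
print(wid.shape, int(wid.max()), int(ridx.max()))
```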