

A new practical and effective source-independent full-waveform inversion with a velocity-distribution supported deep image prior: Applications to two real datasets

arXiv.org Artificial Intelligence

Full-waveform inversion (FWI) is an advanced technique for reconstructing high-resolution subsurface physical parameters by progressively minimizing the discrepancy between observed and predicted seismic data. However, conventional FWI encounters challenges in real data applications, primarily because its conventional objective directly measures the data misfit. Accurate estimation of the source wavelet is essential for effective data fitting, alongside the need for low-frequency data and a reasonable initial model to prevent cycle skipping. Additionally, wave equation solvers often struggle to accurately simulate the amplitude of observed data in real applications. To address these challenges, we introduce a correlation-based source-independent objective function for FWI that mitigates source uncertainty and amplitude dependency, effectively enhancing its practicality for real data applications. We develop a deep-learning framework constrained by this new objective function with a velocity-distribution supported deep image prior, which reparameterizes velocity inversion into trainable parameters within an autoencoder, thereby reducing the nonlinearity of the conventional FWI objective function. We demonstrate the superiority of our proposed method using synthetic data from benchmark velocity models and, more importantly, two real datasets. These examples highlight its effectiveness and practicality even under challenging conditions, such as missing low frequencies, a crude initial velocity model, and an incorrect source wavelet.
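
To make the amplitude insensitivity concrete, the sketch below shows one common form of a normalized zero-lag cross-correlation misfit in PyTorch. The function name and the exact normalization are our assumptions for illustration, not the paper's precise formulation:

```python
import torch

def correlation_misfit(d_obs, d_pred, eps=1e-12):
    """Minimal sketch of a correlation-based, source-independent misfit.

    Each trace is normalized to unit energy, so the objective is invariant
    to trace-wise amplitude scaling (and hence to an unknown source scale);
    we minimize the negative normalized zero-lag cross-correlation.
    d_obs, d_pred: (n_shots, n_receivers, n_time) tensors.
    """
    d_obs = d_obs / (d_obs.norm(dim=-1, keepdim=True) + eps)
    d_pred = d_pred / (d_pred.norm(dim=-1, keepdim=True) + eps)
    return -(d_obs * d_pred).sum(dim=-1).mean()
```

In the deep-image-prior setting described above, the velocity model would itself be the output of the autoencoder, whose weights are the trainable parameters optimized against a misfit of this kind.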


Diffusion Models for Molecules: A Survey of Methods and Tasks

arXiv.org Artificial Intelligence

Generative tasks involving molecules, including but not limited to molecule generation, are crucial for drug discovery and material design, and have consistently attracted significant attention. In recent years, diffusion models have emerged as an impressive class of deep generative models, sparking extensive research and leading to numerous studies on their application to molecular generative tasks. Despite the proliferation of related work, there remains a notable lack of up-to-date and systematic surveys in this area. In particular, because of the diversity of diffusion model formulations, molecular data modalities, and generative task types, the research landscape is difficult to navigate, hindering understanding and limiting the area's growth. To address this, this paper conducts a comprehensive survey of diffusion model-based molecular generative methods. We systematically review the research from the perspectives of methodological formulations, data modalities, and task types, offering a novel taxonomy. This survey aims to facilitate understanding and further development of this area. The relevant papers are summarized at: https://github.com/AzureLeon1/awesome-molecular-diffusion-models.
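
As a reference point for the formulations such surveys categorize, the sketch below shows the standard DDPM forward (noising) process applied to 3D atom coordinates. Treating the data as a coordinate tensor is only one of the molecular modalities discussed, chosen here purely for illustration:

```python
import torch

def ddpm_forward(x0, t, alphas_cumprod):
    """Standard DDPM forward process,
       q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I),
    the denoising-diffusion formulation that many molecular generative
    models build on. x0: (n_atoms, 3) coordinates; t: integer timestep;
    alphas_cumprod: precomputed cumulative products of the noise schedule.
    """
    abar = alphas_cumprod[t]
    noise = torch.randn_like(x0)
    xt = abar.sqrt() * x0 + (1.0 - abar).sqrt() * noise
    return xt, noise  # a denoiser is trained to predict `noise` from (xt, t)
```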


A Framework to Implement 1+N Multi-task Fine-tuning Pattern in LLMs Using the CGC-LORA Algorithm

arXiv.org Artificial Intelligence

With the rapid evolution of large language models (LLMs) in natural language processing (NLP), considerable effort has been devoted to effectively fine-tuning common pre-trained LLMs for a variety of tasks in one or multiple specific domains. In practice, there are two prevailing ways in which this adaptation can be achieved: (i) Multiple Independent Models: the pre-trained LLM is fine-tuned several times independently, using the training samples of each task; (ii) An Integrated Model: samples from all tasks are employed to fine-tune a single pre-trained LLM jointly. The former incurs high computing costs, while the latter suffers from the seesaw issue, where gains on some tasks come at the expense of others. To address both problems simultaneously, we propose a unified framework that implements a 1 + N multi-task fine-tuning pattern in LLMs using a novel Customized Gate Control (CGC) Low-rank Adaptation (LoRA) algorithm. Our work aims to take advantage of both the multi-task learning (MTL, i.e., CGC) and parameter-efficient fine-tuning (PEFT, i.e., LoRA) schemes. For a given cluster of tasks, we design an innovative layer that contains two types of experts as additional trainable parameters to make LoRA compatible with MTL. To comprehensively evaluate the proposed framework, we conduct well-designed experiments on two public datasets. The experimental results demonstrate that the unified framework with CGC-LoRA modules achieves higher evaluation scores than all baselines on both datasets.
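
To make the two-expert structure concrete, here is a hedged PyTorch sketch of what a CGC-style LoRA layer could look like: shared and task-specific low-rank experts whose updates are mixed by a per-task gate. All names, shapes, and gating details are our assumptions; the paper's implementation may differ:

```python
import torch
import torch.nn as nn

class CGCLoRALayer(nn.Module):
    """Sketch of a CGC-style LoRA layer (our assumptions, not the paper's
    exact code): LoRA "experts" are split into a shared group and per-task
    groups, and a task-specific gate mixes their low-rank updates."""

    def __init__(self, d_in, d_out, r, n_shared, n_specific, n_tasks):
        super().__init__()
        def expert():
            # One LoRA expert: down-projection A and up-projection B.
            return nn.ParameterDict({
                "A": nn.Parameter(torch.randn(d_in, r) * 0.01),
                "B": nn.Parameter(torch.zeros(r, d_out)),
            })
        self.shared = nn.ModuleList(expert() for _ in range(n_shared))
        self.specific = nn.ModuleList(
            nn.ModuleList(expert() for _ in range(n_specific))
            for _ in range(n_tasks)
        )
        self.gates = nn.ModuleList(
            nn.Linear(d_in, n_shared + n_specific) for _ in range(n_tasks)
        )

    def forward(self, x, task_id):
        # x: (batch, seq, d_in). Gate weights from a pooled input summary.
        experts = list(self.shared) + list(self.specific[task_id])
        w = torch.softmax(self.gates[task_id](x.mean(dim=1)), dim=-1)  # (batch, E)
        updates = torch.stack([x @ e["A"] @ e["B"] for e in experts], dim=-1)
        # Weighted mixture of expert updates: (batch, seq, d_out).
        return (updates * w[:, None, None, :]).sum(-1)
```

The returned tensor is the low-rank update, which would be added to the output of the frozen base projection, as in standard LoRA.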


AdaptSSR: Pre-training User Model with Augmentation-Adaptive Self-Supervised Ranking

arXiv.org Artificial Intelligence

User modeling, which aims to capture users' characteristics or interests, heavily relies on task-specific labeled data and suffers from the data sparsity issue. Several recent studies tackle this problem by pre-training the user model on massive user behavior sequences with a contrastive learning task. Generally, these methods assume that different views of the same behavior sequence constructed via data augmentation are semantically consistent, i.e., reflect similar characteristics or interests of the user, and thus maximize their agreement in the feature space. However, due to the diverse interests and heavy noise in user behaviors, existing augmentation methods tend to lose certain characteristics of the user or introduce noisy behaviors. Thus, forcing the user model to directly maximize the similarity between the augmented views may result in negative transfer. To this end, we propose to replace the contrastive learning task with a new pretext task: Augmentation-Adaptive Self-Supervised Ranking (AdaptSSR), which relaxes the requirement of semantic consistency between the augmented views while pre-training a discriminative user model. Specifically, we adopt a multiple pairwise ranking loss, which trains the user model to capture the similarity order among the implicitly augmented view, the explicitly augmented view, and views from other users. We further employ an in-batch hard negative sampling strategy to facilitate model training. Moreover, considering the distinct impacts of data augmentation on different behavior sequences, we design an augmentation-adaptive fusion mechanism that automatically adjusts the similarity-order constraint applied to each sample based on the estimated similarity between the augmented views. Extensive experiments on both public and industrial datasets with six downstream tasks verify the effectiveness of AdaptSSR.
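
The similarity-order idea lends itself to a compact sketch. Below is a hedged PyTorch rendering of a multiple pairwise ranking loss over a pair of implicitly augmented views, an explicitly augmented view, and in-batch negatives; the fixed weight `lam` stands in for the paper's augmentation-adaptive fusion, and all names are our assumptions:

```python
import torch
import torch.nn.functional as F

def rank_term(s_high, s_low, eps=1e-12):
    # BPR-style pairwise term encouraging s_high > s_low.
    return -torch.log(torch.sigmoid(s_high - s_low) + eps).mean()

def multiple_pairwise_ranking_loss(z_a, z_b, z_exp, z_neg, lam=0.5):
    """Sketch (our assumptions) of the similarity order
       sim(implicit, implicit) >= sim(implicit, explicit) >= sim(implicit, negative).
    z_a, z_b: two implicitly augmented views, (batch, d);
    z_exp: explicitly augmented view, (batch, d);
    z_neg: in-batch negatives, (batch, n_neg, d)."""
    z_a, z_b, z_exp = (F.normalize(z, dim=-1) for z in (z_a, z_b, z_exp))
    z_neg = F.normalize(z_neg, dim=-1)
    s_ii = (z_a * z_b).sum(-1)                     # implicit vs. implicit
    s_ie = (z_a * z_exp).sum(-1)                   # implicit vs. explicit
    s_in = torch.einsum("bd,bnd->bn", z_a, z_neg)  # implicit vs. negatives
    return lam * rank_term(s_ii, s_ie) + (1 - lam) * rank_term(s_ie.unsqueeze(-1), s_in)
```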


G2PTL: A Pre-trained Model for Delivery Address and its Applications in Logistics System

arXiv.org Artificial Intelligence

Text-based delivery addresses, as the data foundation of logistics systems, contain abundant and crucial location information. Effectively encoding the delivery address is therefore a core task for boosting the performance of downstream tasks in the logistics system. Pre-trained Models (PTMs) designed for Natural Language Processing (NLP) have emerged as the dominant tools for encoding semantic information in text. Though promising, these NLP-based PTMs fall short of encoding the geographic knowledge in delivery addresses, which considerably degrades the performance of delivery-related tasks in logistics systems such as Cainiao. To tackle this problem, we propose a domain-specific pre-trained model named G2PTL, a Geography-Graph Pre-trained model for delivery addresses in the logistics field. G2PTL combines the semantic learning capabilities of text pre-training with the geographical-relationship encoding abilities of graph modeling. Specifically, we first utilize real-world logistics delivery data to construct a large-scale heterogeneous graph of delivery addresses, which contains abundant geographic knowledge and delivery information. Then, G2PTL is pre-trained with subgraphs sampled from this heterogeneous graph. Comprehensive experiments on four downstream tasks in logistics systems with real-world datasets demonstrate the effectiveness of G2PTL. G2PTL has been deployed in production in Cainiao's logistics system, where it significantly improves the performance of delivery-related tasks. The code of G2PTL is available at https://huggingface.co/Cainiao-AI/G2PTL.
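
Since the checkpoint is public, a typical loading path might look like the following. The use of trust_remote_code, the pooling shown, and the sample address are all our assumptions based on common practice for custom Hub architectures; the model card should be treated as authoritative:

```python
from transformers import AutoModel, AutoTokenizer

# Hedged usage sketch for the published checkpoint; consult the model card
# at https://huggingface.co/Cainiao-AI/G2PTL for the exact interface.
tokenizer = AutoTokenizer.from_pretrained("Cainiao-AI/G2PTL", trust_remote_code=True)
model = AutoModel.from_pretrained("Cainiao-AI/G2PTL", trust_remote_code=True)

address = "浙江省杭州市西湖区某某路1号"  # hypothetical delivery address
inputs = tokenizer(address, return_tensors="pt")
outputs = model(**inputs)
# Mean-pooling the final hidden states into one address embedding is an
# assumption; downstream logistics tasks would consume this vector.
embedding = outputs.last_hidden_state.mean(dim=1)
```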


Experimental quantum adversarial learning with programmable superconducting qubits

arXiv.org Artificial Intelligence

Quantum computing promises to enhance machine learning and artificial intelligence [1-3]. Different quantum algorithms have been proposed to improve a wide spectrum of machine learning tasks [4-12]. Yet, recent theoretical works show that, similar to traditional classifiers based on deep classical neural networks, quantum classifiers suffer from a vulnerability problem: adding tiny, carefully crafted perturbations to legitimate original data samples can induce incorrect predictions at a notably high confidence level [13-17]. This poses serious problems for future quantum machine learning applications in safety- and security-critical scenarios [18-20]. Here, we report the first experimental demonstration of quantum adversarial learning with programmable superconducting qubits. We train quantum classifiers, built upon variational quantum circuits consisting of ten transmon qubits featuring average lifetimes of 150 µs and average fidelities of simultaneous single- and two-qubit gates above 99.94% and 99.4% respectively, with both real-life images (e.g., medical magnetic resonance imaging scans) and quantum data. We demonstrate that these well-trained classifiers (with testing accuracy up to 99%) can be practically deceived by small adversarial perturbations, whereas an adversarial training process significantly enhances their robustness to such perturbations. Our results experimentally reveal a crucial vulnerability of quantum learning systems under adversarial scenarios and demonstrate an effective defense strategy against adversarial attacks, providing a valuable guide for quantum artificial intelligence applications with both near-term and future quantum devices. In recent years, artificial intelligence (AI) [21-23] and quantum computing [24-26] have made dramatic progress. Their intersection gives rise to a research frontier called quantum machine learning or, more generally, quantum AI [1-3]. A number of quantum algorithms have been proposed to enhance various AI tasks [4-12]. Defense strategies have likewise been proposed to enhance the robustness of quantum classifiers. However, demonstrating quantum adversarial examples for quantum classifiers experimentally and showing the effectiveness of the proposed countermeasures in practice are challenging and have not previously been demonstrated.
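
For readers who want the generic attack family in code, here is a minimal FGSM-style perturbation sketch. The paper's exact attack on the superconducting processor is not specified here, so treat `model` as any differentiable stand-in, e.g., a classically simulated variational quantum classifier:

```python
import torch

def fgsm_perturb(x, y, model, loss_fn, eps=0.02):
    """Generic FGSM-style adversarial perturbation (a sketch, not the
    paper's exact procedure): step each input feature in the direction
    that most increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Adversarial training, as described in the abstract, folds such perturbed
# samples back into the training set so the classifier learns to resist them.
```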