Wang, Kunpeng
3-D Magnetotelluric Deep Learning Inversion Guided by Pseudo-Physical Information
Jiang, Peifan, Wang, Xuben, Wang, Shuang, Deng, Fei, Wang, Kunpeng, Wang, Bin, Yang, Yuhan, Fadel, Islam
Magnetotelluric (MT) deep learning (DL) inversion methods that are jointly data-driven and physics-driven have become a hot topic in recent years. When mapping observed data (or forward-modeled data) to the resistivity model with neural networks (NNs), adding an error (loss) term on the forward-modeling response of the inverted resistivity, which introduces physical information about electromagnetic field propagation, can significantly enhance inversion accuracy. To efficiently achieve data-physics dual-driven DL inversion for large-scale 3-D MT data, we propose using DL forward-modeling networks to compute this portion of the loss. This approach introduces pseudo-physical information through NN-simulated forward modeling, which further guides the fitting of the inversion network. Specifically, we first pre-train the forward-modeling networks as fixed forward-modeling operators, then transfer and integrate them into the training of the inversion network, and finally optimize the inversion network by minimizing a multi-term loss. Synthetic experiments indicate that, despite some simulation error in DL forward modeling, the introduced pseudo-physical information still enhances inversion accuracy and significantly mitigates overfitting during training. Additionally, we propose a new input mode that masks the data and adds noise to it, simulating the field-data environment of 3-D MT inversion and making the method more flexible and effective in practical applications.
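A minimal PyTorch-style sketch of the training scheme described in the abstract. The network architectures, loss weight, noise level, and masking ratio below are illustrative assumptions, not the paper's actual settings; the key point is that a pre-trained, frozen forward-modeling network supplies the pseudo-physical loss term alongside the data-driven model loss.

```python
import torch
import torch.nn as nn

# Stand-in networks (the paper's actual architectures are not specified here):
# ForwardNet approximates the MT forward operator (resistivity model -> response),
# InversionNet maps responses back to a resistivity model.
class ForwardNet(nn.Module):
    def __init__(self, n_model=512, n_resp=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_model, 512), nn.ReLU(), nn.Linear(512, n_resp))
    def forward(self, m):
        return self.net(m)

class InversionNet(nn.Module):
    def __init__(self, n_resp=256, n_model=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_resp, 512), nn.ReLU(), nn.Linear(512, n_model))
    def forward(self, d):
        return self.net(d)

forward_net = ForwardNet()
# forward_net.load_state_dict(torch.load("forward_pretrained.pt"))  # hypothetical pre-trained checkpoint
forward_net.eval()
for p in forward_net.parameters():
    p.requires_grad = False                      # frozen: acts as a fixed forward-modeling operator

inversion_net = InversionNet()
optimizer = torch.optim.Adam(inversion_net.parameters(), lr=1e-4)
mse = nn.MSELoss()
lambda_phys = 0.1                                # illustrative weight on the pseudo-physical term

responses = torch.randn(8, 256)                  # placeholder batch of MT responses
true_model = torch.randn(8, 512)                 # placeholder label resistivity models

# Simulate field conditions: add noise and mask part of the data (the proposed input mode)
noisy = responses + 0.02 * torch.randn_like(responses)
noisy = noisy * (torch.rand_like(noisy) > 0.1).float()

pred_model = inversion_net(noisy)                # data-driven inversion
pred_resp = forward_net(pred_model)              # NN-simulated (pseudo-physical) forward response

loss = mse(pred_model, true_model) + lambda_phys * mse(pred_resp, responses)
optimizer.zero_grad()
loss.backward()                                  # gradients also flow through the frozen forward net
optimizer.step()
```

In this setup the expensive numerical forward solver is needed only to build the forward network's training set; during inversion training, the frozen network evaluates the forward-response loss cheaply enough to scale to 3-D data.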
RecycleGPT: An Autoregressive Language Model with Recyclable Module
Jiang, Yufan, He, Qiaozhi, Zhuang, Xiaomin, Wu, Zhihua, Wang, Kunpeng, Zhao, Wenlai, Yang, Guangwen
Existing large language models have to run K times to generate a sequence of K tokens. RecycleGPT instead recycles previously generated model states through a lightweight recyclable module, so the whole model need not be run at every decoding step. Our approach relies on the observation that adjacent tokens in a sequence usually have strong correlations, so the next token can often be reasonably guessed or inferred from the preceding ones. Experiments and analysis demonstrate the effectiveness of our approach in lowering inference latency, achieving up to 1.4x speedup while preserving high performance. Large language models (LLMs) (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023; Chowdhery et al., 2022; Biderman et al., 2023; Smith et al., 2022) have revolutionized natural language generation with their ability to produce satisfactory text across a wide range of application domains. This excellent performance benefits greatly from scaling the model size (100B+ parameters), but a single decoding step also becomes slower as the model grows. Beyond the added computation of larger models, a larger memory footprint is another major cause of slow LLM inference (Dao et al., 2022; Pope et al., 2023). This footprint comprises the trained model parameters, the temporary state used during inference, and the KV cache, all of which must be held in memory.
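A toy sketch of the decoding pattern the abstract implies: a full, expensive decoder pass alternates with a cheap recyclable module that reuses the last hidden state to guess the next token. Both modules, their sizes, and the strict alternation schedule are simplifying assumptions for illustration, not the paper's exact architecture or schedule.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, d_model = 1000, 64

# Stand-in for the full LLM: one transformer layer plus an LM head
# (in practice: many layers with a KV cache).
class FullModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.head = nn.Linear(d_model, vocab)
    def forward(self, ids):
        h = self.block(self.embed(ids))
        return self.head(h[:, -1]), h[:, -1]      # logits and last hidden state

# Stand-in for the recyclable module: a small network that maps the
# cached hidden state to logits for the following token.
class RecyclableModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)
        self.head = nn.Linear(d_model, vocab)
    def forward(self, last_hidden):
        return self.head(torch.tanh(self.proj(last_hidden)))

full, recycler = FullModel().eval(), RecyclableModule().eval()

ids = torch.tensor([[1, 2, 3]])                   # placeholder prompt token ids
with torch.no_grad():
    for step in range(8):
        if step % 2 == 0:
            logits, last_hidden = full(ids)       # expensive step: run the whole model
        else:
            logits = recycler(last_hidden)        # cheap step: recycle the cached state
        next_id = logits.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
print(ids)
```

Because roughly every other token is produced without a full forward pass, the number of expensive decoding steps is cut substantially, which is consistent with the reported up-to-1.4x latency speedup when the recyclable module's guesses are accurate.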