
Collaborating Authors

Zhong, Yu


A Decade of Deep Learning: A Survey on The Magnificent Seven

arXiv.org Artificial Intelligence

At the core of this transformation is the development of multi-layered neural network architectures that facilitate automatic feature extraction from raw data, significantly improving performance on machine learning tasks. Given the rapid pace of these advancements, an accessible manual is necessary to distill the key advances of the past decade. With this in mind, we introduce a study that highlights the evolution of deep learning, largely attributed to powerful algorithms. Among the multitude of breakthroughs, certain algorithms, including Residual Networks (ResNets), Transformers, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Graph Neural Networks (GNNs), Contrastive Language-Image Pretraining (CLIP), and Diffusion models, have emerged as the cornerstones and driving forces behind the discipline. We selected these algorithms via a survey targeting a broad spectrum of academics and professionals, with the aim of encapsulating the essence of the most influential algorithms of the past decade. In this work, we detail the selection methodology and place the chosen architectures in the broader context of the history of deep learning. We present an overview of the selected core architectures and their mathematical underpinnings, along with the algorithmic procedures that define their subsequent extensions and variants, their applications, and their challenges and potential future research directions. In addition, we cover practical aspects of these algorithms, such as training and optimization methods, normalization techniques, and learning-rate scheduling strategies, which are essential for their effective implementation. Our manuscript therefore serves as a practical survey for understanding and applying these crucial algorithms, and aims to provide a manual both for experienced researchers transitioning into deep learning from other domains and for beginners seeking to grasp the trending algorithms.
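To make the architectural ideas concrete, here is a minimal sketch of a residual block, the building unit that ResNets introduced. PyTorch, the layer sizes, and the batch-norm/ReLU arrangement are illustrative assumptions on our part, not details taken from the survey.

```python
# Minimal residual block sketch (assumes PyTorch; sizes are illustrative).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection adds the input back to the transformed output,
        # so each block only learns a residual correction to the identity.
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)
```

The skip connection is the point: because the block computes x + F(x), gradients can flow around the convolutional layers, which is what keeps very deep networks trainable.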


Disruption Precursor Onset Time Study Based on Semi-supervised Anomaly Detection

arXiv.org Artificial Intelligence

A full understanding of plasma disruption in tokamaks is currently lacking, and data-driven methods are extensively used for disruption prediction. However, most existing data-driven disruption predictors employ supervised learning techniques, which require labeled training data. Manually labeling disruption precursors is a tedious and challenging task, as some precursors are difficult to identify accurately, limiting the potential of machine learning models. To sidestep this difficulty, commonly used labeling methods assume that the precursor onset occurs at a fixed time before the disruption; this assumption may not hold across different types of disruption, or even within the same type, because plasma instabilities escalate at different speeds. The result is mislabeled samples and suboptimal performance of the supervised learning predictor. In this paper, we present a disruption prediction method based on anomaly detection that overcomes both the imbalance between positive and negative data samples and the inaccurate labeling of disruption precursor samples. We demonstrate the effectiveness of anomaly detection predictors based on different algorithms on J-TEXT and EAST, and use them to evaluate the reliability of the precursor onset times they infer. The inferred onset times reveal that fixed-time labeling has room for improvement, as the onset times of different shots are not necessarily the same. Finally, we optimize the precursor labels using the onset times inferred by the anomaly detection predictor and test the optimized labels on supervised learning disruption predictors. The results on J-TEXT and EAST show that models trained on the optimized labels outperform those trained on fixed onset time labels.
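A minimal sketch of the onset-inference idea described above, assuming an Isolation Forest as the anomaly detector and a quantile threshold of our own choosing (the paper's actual detectors, features, and thresholds may differ): train only on non-disruptive time slices, then take the first instant a shot's anomaly score crosses the threshold as the inferred precursor onset.

```python
# Semi-supervised onset inference sketch: fit an anomaly detector on
# non-disruptive data only, then scan a shot for the first anomalous slice.
# Detector choice, threshold rule, and toy data are our assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy data: rows are time slices, columns are plasma diagnostics.
normal_slices = rng.normal(0.0, 1.0, size=(5000, 8))   # non-disruptive shots
shot = rng.normal(0.0, 1.0, size=(200, 8))             # one shot to analyze
shot[150:] += np.linspace(0.0, 4.0, 50)[:, None]       # drifting precursor

detector = IsolationForest(random_state=0).fit(normal_slices)

scores = -detector.score_samples(shot)                  # higher = more anomalous
threshold = np.quantile(-detector.score_samples(normal_slices), 0.995)

anomalous = np.nonzero(scores > threshold)[0]
onset_index = int(anomalous[0]) if anomalous.size else None
print("inferred precursor onset at time slice:", onset_index)
```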


IDP-PGFE: An Interpretable Disruption Predictor based on Physics-Guided Feature Extraction

arXiv.org Artificial Intelligence

Disruption prediction has made rapid progress in recent years, especially in machine learning (ML)-based methods. For future tokamak disruption predictors, understanding why a predictor makes a certain prediction can be as crucial as the prediction's accuracy. Most disruption predictors aim at accuracy or cross-machine capability; an interpretable disruption prediction model, however, can tell why certain samples are classified as disruption precursors, allowing us to identify the type of incoming disruption and giving insight into the mechanism of disruption. This paper designs a disruption predictor called Interpretable Disruption Predictor based On Physics-Guided Feature Extraction (IDP-PGFE) on J-TEXT. Extracting physics-guided features effectively improves the prediction performance of the model; such a high-performance model is required to ensure the validity of the interpretation results. The interpretability study of IDP-PGFE provides an understanding of J-TEXT disruption that is generally consistent with existing comprehension of disruption. IDP-PGFE has been applied to disruptions caused by continuously increasing density towards the density limit in J-TEXT experiments. The time evolution of the PGFE feature contributions demonstrates that the application of ECRH triggers radiation-caused disruption, which lowers the density at disruption, while the application of RMP indeed raises the density limit in J-TEXT. The interpretability study suggests a physical mechanism for density-limit disruption: RMPs affect not only the MHD instabilities but also the radiation profile, which delays the density-limit disruption.
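A minimal sketch of interpreting a predictor built on physics-guided features, using a gradient-boosting classifier and permutation importance as assumed stand-ins for the paper's actual model and attribution method; the feature names and toy labels are invented for illustration.

```python
# Feature-contribution sketch: train on physics-guided features, then rank
# them by how much shuffling each one degrades held-out accuracy.
# Model, attribution method, features, and data are our assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
feature_names = ["density_over_limit", "radiated_power_frac",
                 "mode_amplitude", "q95"]

# Toy samples: each row is a time slice described by physics-guided features.
X = rng.normal(size=(2000, len(feature_names)))
# Label a slice as a precursor when density and radiation are jointly high.
y = ((X[:, 0] + X[:, 1]) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:22s} {score:.3f}")
```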


Social Biases in NLP Models as Barriers for Persons with Disabilities

arXiv.org Artificial Intelligence

Building equitable and inclusive NLP technologies demands consideration of whether and how social attitudes are represented in ML models. In particular, representations encoded in models often inadvertently perpetuate undesirable social biases from the data on which they are trained. In this paper, we present evidence of such undesirable biases towards mentions of disability in two different English-language NLP models: one for toxicity prediction and one for sentiment analysis. Next, we demonstrate that the neural embeddings that form the critical first step in most NLP pipelines similarly contain undesirable biases towards mentions of disability. We end by highlighting topical biases in the discourse about disability that may contribute to the observed model biases; for instance, gun violence, homelessness, and drug addiction are over-represented in texts discussing mental illness.
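A minimal sketch of one way such biases can be probed, assuming a perturbation-style comparison with NLTK's off-the-shelf VADER sentiment scorer as a stand-in for the models studied in the paper: score template sentences that differ only in how disability is mentioned and compare the results.

```python
# Perturbation probe sketch: does the sentiment score shift when a sentence
# mentions disability? VADER is an assumed stand-in scorer, not the paper's.
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# One-time setup: import nltk; nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()
base = "I am a person{}."
fills = ["", " who is deaf", " with a mental illness",
         " who uses a wheelchair"]

for fill in fills:
    sentence = base.format(fill)
    score = analyzer.polarity_scores(sentence)["compound"]  # -1 .. +1
    print(f"{score:+.3f}  {sentence}")
```

A systematic gap between the neutral template and its perturbed variants is the kind of signal the paper reports.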


Causally Driven Incremental Multi Touch Attribution Using a Recurrent Neural Network

arXiv.org Machine Learning

This paper describes a practical system for Multi-Touch Attribution (MTA) for use by a publisher of digital ads. We developed this system for JD.com, an eCommerce company that is also a publisher of digital ads in China. The approach has two steps. The first step ('response modeling') fits a user-level model of the purchase of a product as a function of the user's exposure to ads. The second ('credit allocation') uses the fitted model to allocate the incremental part of the observed purchase due to advertising to the ads the user was exposed to over the previous T days. To implement step one, we train a Recurrent Neural Network (RNN) on user-level conversion and exposure data. The RNN has the advantage of flexibly handling the sequential dependence in the data in a semi-parametric way. The specific RNN formulation we implement captures the impact of advertising intensity, timing, competition, and user heterogeneity, which are known to be relevant to ad response. To implement step two, we compute Shapley values, which have the advantage of axiomatic foundations and satisfy fairness considerations. The specific formulation of the Shapley value we implement respects incrementality by allocating the overall incremental improvement in conversion to the exposed ads, while handling the dependence of the observed outcomes on the sequence of exposures. The system is in production at JD.com and scales to the high dimensionality of the problem on the platform (attribution of the orders of about 300M users, for roughly 160K brands, across 200+ ad types, served via about 80B ad impressions over a typical 15-day period).
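A minimal sketch of the credit-allocation step, with a toy value function standing in for the fitted RNN response model (the channel names and lift numbers are invented): exact Shapley values distribute the incremental conversion lift across the exposed ad channels.

```python
# Exact Shapley-value credit allocation over a small set of ad channels.
# v() is a toy stand-in for the RNN's incremental-conversion estimate.
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a characteristic function v(frozenset)."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(s | {p}) - v(s))
        phi[p] = total
    return phi

# Illustrative per-channel lifts, with a small overlap penalty when search
# and display ads are shown together (diminishing returns).
LIFT = {"search": 0.06, "display": 0.03, "video": 0.02}

def v(exposed):
    lift = sum(LIFT[ch] for ch in exposed)
    if {"search", "display"} <= exposed:
        lift -= 0.01
    return lift

credits = shapley_values(["search", "display", "video"], v)
print(credits)  # per-channel share of the incremental conversion lift
```

By construction the credits sum to v(all channels) minus v(no ads), which is exactly the incrementality property the paper's allocation respects.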