Feng, Yunlong
MixPro: Simple yet Effective Data Augmentation for Prompt-based Learning
Li, Bohan, Dou, Longxu, Hou, Yutai, Feng, Yunlong, Mu, Honglin, Zhu, Qingfu, Sun, Qinghua, Che, Wanxiang
Prompt-based learning has shown considerable promise in reformulating various downstream tasks as cloze problems by combining the original input with a predetermined template. The approach is especially effective in few-shot learning scenarios, where the model is trained on only a small amount of data. Despite these successes, the limited templates and scarce text available in few-shot prompt-based learning leave significant room for performance improvement. Moreover, existing methods sometimes resort to model ensembles, which, while effective, can hamper efficiency due to increased computational demands. To address these issues, we introduce MixPro, an augmentation method that operates on both the vanilla input text and the templates. We implement it through token-level, sentence-level, and template-level Mixup strategies. Experimental results on five few-shot datasets show that MixPro outperforms other augmentation baselines, improving model performance by an average of 5.08% over the un-augmented model.
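As background for the Mixup strategies mentioned above, the standard Mixup operation (of which the token-, sentence-, and template-level variants are assumed here to be instances, possibly differing in detail) interpolates pairs of training examples:
$$\tilde{x} = \lambda x_i + (1-\lambda)\, x_j, \qquad \tilde{y} = \lambda y_i + (1-\lambda)\, y_j, \qquad \lambda \sim \mathrm{Beta}(\alpha, \alpha),$$
where $x_i$ and $x_j$ denote (representations of) two inputs or templates and $y_i$, $y_j$ their label distributions.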
OpenSLU: A Unified, Modularized, and Extensible Toolkit for Spoken Language Understanding
Qin, Libo, Chen, Qiguang, Xu, Xiao, Feng, Yunlong, Che, Wanxiang
Spoken Language Understanding (SLU) is one of the core components of a task-oriented dialogue system; it aims to extract the semantic meaning of user queries (e.g., intents and slots). In this work, we introduce OpenSLU, an open-source toolkit that provides a unified, modularized, and extensible platform for spoken language understanding. Specifically, OpenSLU unifies 10 SLU models for both single-intent and multi-intent scenarios, supporting non-pretrained and pretrained models alike. Additionally, OpenSLU is highly modularized and extensible: it decomposes the model architecture, inference, and learning process into reusable modules, allowing researchers to quickly set up SLU experiments with highly flexible configurations. OpenSLU is implemented in PyTorch and released at \url{https://github.com/LightChen233/OpenSLU}.
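As generic background (not a description of the toolkit's internals), SLU models of the kind unified here are commonly trained with a joint objective over intents and slots, e.g.
$$\mathcal{L} = -\log p\big(o^{I} \mid \mathbf{x}\big) \;-\; \sum_{t=1}^{T} \log p\big(o^{S}_{t} \mid \mathbf{x}\big),$$
i.e., a sentence-level cross-entropy term for intent detection plus a token-level cross-entropy term for slot filling over an utterance $\mathbf{x}$ of length $T$.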
A Two-Stage Framework with Self-Supervised Distillation For Cross-Domain Text Classification
Feng, Yunlong, Li, Bohan, Qin, Libo, Xu, Xiao, Che, Wanxiang
Cross-domain text classification aims to adapt models to a target domain that lacks labeled data by leveraging rich labeled data from different but related source domain(s) and unlabeled data from the target domain. Previous work focuses on extracting either domain-invariant or task-agnostic features, ignoring domain-aware features that may be present in the target domain and could be useful for the downstream task. In this paper, we propose a two-stage framework for cross-domain text classification. In the first stage, we fine-tune the model with masked language modeling (MLM) and labeled data from the source domain. In the second stage, we further fine-tune the model with self-supervised distillation (SSD) and unlabeled data from the target domain. We evaluate the framework on a public cross-domain text classification benchmark, and the experimental results show that it achieves new state-of-the-art results for both single-source domain adaptation (94.17% $\uparrow$1.03%) and multi-source domain adaptation (95.09% $\uparrow$1.34%).
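As an illustrative sketch of the two stages (generic objectives; the paper's exact SSD formulation may differ), the first stage combines supervised learning on labeled source data with masked language modeling, while the second stage distills the model's own predictions on unlabeled target data:
$$\mathcal{L}_{\text{stage 1}} = \mathcal{L}_{\mathrm{CE}}\big(f(x^{s}), y^{s}\big) + \mathcal{L}_{\mathrm{MLM}}(x^{s}), \qquad \mathcal{L}_{\text{stage 2}} = \mathrm{KL}\big(p_{\text{teacher}}(\cdot \mid x^{t}) \,\|\, p_{\text{student}}(\cdot \mid x^{t})\big),$$
where $(x^{s}, y^{s})$ are labeled source examples and $x^{t}$ are unlabeled target examples.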
A Framework of Learning Through Empirical Gain Maximization
Feng, Yunlong, Wu, Qiang
In this paper we develop a framework of empirical gain maximization (EGM) to address the robust regression problem, in which heavy-tailed noise or outliers may be present in the response variable. The idea of EGM is to approximate the density function of the noise distribution instead of approximating the target function directly, as is usual. Unlike classical maximum likelihood estimation, which assigns equal importance to all observations and can be problematic in the presence of abnormal observations, EGM schemes can be interpreted from a minimum distance estimation viewpoint and allow such observations to be ignored. Furthermore, we show that several well-known robust nonconvex regression paradigms, such as Tukey regression and truncated least squares regression, can be reformulated within this new framework. We then develop a learning theory for EGM, by means of which a unified analysis can be conducted for these well-established but not fully understood regression approaches. The new framework also yields a novel interpretation of existing bounded nonconvex loss functions. In particular, two seemingly unrelated notions, Tukey's well-known biweight loss for robust regression and the triweight kernel for nonparametric smoothing, turn out to be closely related: Tukey's biweight loss can be derived from the triweight kernel. Similarly, other frequently employed bounded nonconvex loss functions in machine learning, such as the truncated square loss, the Geman-McClure loss, and the exponential squared loss, can also be derived from certain smoothing kernels in statistics. In addition, the new framework enables us to devise new bounded nonconvex loss functions for robust learning.
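To make the kernel-loss correspondence concrete: the triweight kernel is proportional to $(1-u^{2})^{3}_{+}$, and Tukey's biweight loss with scale $\sigma$ can be written, up to a multiplicative constant, as
$$\ell_{\sigma}(t) = 1 - \Big(1 - \tfrac{t^{2}}{\sigma^{2}}\Big)^{3}_{+},$$
so minimizing this bounded loss amounts to maximizing a triweight-kernel-induced gain, in line with the EGM viewpoint described above.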
A Statistical Learning Assessment of Huber Regression
Feng, Yunlong, Wu, Qiang
As one of the triumphs and milestones of robust statistics, Huber regression plays an important role in robust inference and estimation, and it has also found a wide variety of applications in machine learning. In a parametric setup it has been extensively studied. However, in the statistical learning context, where a function is typically learned nonparametrically, there is still a lack of theoretical understanding of how Huber regression estimators learn the conditional mean function and why they work in the absence of light-tailed noise assumptions. To address these fundamental questions, we conduct an assessment of Huber regression from a statistical learning viewpoint. First, we show that the usual risk consistency property of Huber regression estimators, which is typically pursued in machine learning, cannot guarantee their learnability in mean regression. Second, we argue that Huber regression should be implemented adaptively to perform mean regression, implying that the scale parameter must be tuned in accordance with the sample size and the moment condition of the noise. Third, with an adaptive choice of the scale parameter, we demonstrate that Huber regression estimators can be asymptotically mean-regression calibrated under $(1+\epsilon)$-moment conditions ($\epsilon>0$). Last but not least, under the same moment conditions, we establish almost sure convergence rates for Huber regression estimators. Note that the $(1+\epsilon)$-moment conditions accommodate the special case where the response variable has infinite variance, so the established convergence rates justify the robustness of Huber regression estimators. In these senses, the present study provides a systematic statistical learning assessment of Huber regression estimators and justifies their merits in terms of robustness from a theoretical viewpoint.
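For completeness, the Huber loss with scale parameter $\delta > 0$ is
$$\ell_{\delta}(t) = \begin{cases} \tfrac{1}{2}\,t^{2}, & |t| \le \delta, \\ \delta |t| - \tfrac{1}{2}\,\delta^{2}, & |t| > \delta, \end{cases}$$
and the adaptive implementation discussed above calibrates $\delta$ jointly with the sample size and the moment condition of the noise.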
New Insights into Learning with Correntropy Based Regression
Feng, Yunlong
Stemming from information-theoretic learning, the correntropy criterion and its applications to machine learning tasks have been extensively explored and studied. Its application to regression problems leads to a robustness-enhanced regression paradigm, namely correntropy based regression. Alongside a great variety of successful real-world applications, its theoretical properties have recently been investigated in a series of studies from a statistical learning viewpoint. The resulting big picture is that correntropy based regression robustly regresses towards the conditional mode function or the conditional mean function under certain conditions. Continuing this trend and going further, in the present study we report some new insights into this problem. First, we show that under the additive noise regression model, such a regression paradigm can be deduced from minimum distance estimation, implying that the resulting estimator is essentially a minimum distance estimator and thus possesses robustness properties. Second, we show that the regression paradigm in fact provides a unified approach to regression problems, in that it approaches the conditional mean, the conditional mode, and the conditional median functions under certain conditions. Third, we present some new results on learning the conditional mean function by developing error bounds and exponential convergence rates under conditional $(1+\epsilon)$-moment assumptions. The saturation effect on the established convergence rates, previously observed under $(1+\epsilon)$-moment assumptions, still occurs, indicating the inherent bias of the regression estimator. These novel insights deepen our understanding of correntropy based regression, help cement the theoretical correntropy framework, and enable us to investigate learning schemes induced by general bounded nonconvex loss functions.
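Concretely, with a Gaussian kernel of scale $\sigma$, maximizing the empirical correntropy criterion is equivalent (up to normalization conventions) to minimizing the correntropy-induced (Welsch) loss
$$\ell_{\sigma}(t) = \sigma^{2}\left(1 - e^{-t^{2}/\sigma^{2}}\right),$$
a bounded nonconvex loss whose scale parameter governs the trade-off between robustness and the bias discussed above.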
Learning with Correntropy-induced Losses for Regression with Mixture of Symmetric Stable Noise
Feng, Yunlong, Ying, Yiming
In recent years, correntropy and its applications in machine learning have been drawing continuous attention owing to its merits in dealing with non-Gaussian noise and outliers. However, theoretical understanding of correntropy, especially in the statistical learning context, is still limited. In this study, within the statistical learning framework, we investigate correntropy based regression in the presence of non-Gaussian noise or outliers. To this end, we first introduce mixture of symmetric stable noise, which includes Gaussian noise, Cauchy noise, and mixtures of Gaussian noise as special cases, to model non-Gaussian noise and outliers. We demonstrate that under the mixture of symmetric stable noise assumption, correntropy based regression can learn the conditional mean function or the conditional median function well without requiring a finite variance assumption on the noise. In particular, we establish learning rates for correntropy based regression estimators that are asymptotically of type $\mathcal{O}(n^{-1})$. We believe that the present study completes our understanding of correntropy based regression from a statistical learning viewpoint, and may also shed some light on robust statistical learning for regression.
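As a reminder of the noise family involved, a symmetric $\alpha$-stable random variable has characteristic function
$$\varphi(t) = \exp\left(-\gamma |t|^{\alpha}\right), \qquad 0 < \alpha \le 2, \ \gamma > 0,$$
with $\alpha = 2$ recovering Gaussian noise and $\alpha = 1$ Cauchy noise; variables with $\alpha < 2$ have infinite variance, which is why the finite variance assumption can be dispensed with above.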
A Statistical Learning Approach to Modal Regression
Feng, Yunlong, Fan, Jun, Suykens, Johan A. K.
This paper studies the nonparametric modal regression problem systematically from a statistical learning viewpoint. Originally motivated by pursuing a theoretical understanding of the maximum correntropy criterion based regression (MCCR), our study reveals that MCCR with a tending-to-zero scale parameter is essentially modal regression. We show that the nonparametric modal regression problem can be approached via classical empirical risk minimization. Some efforts are then made to develop a framework for analyzing and implementing modal regression. For instance, the modal regression function is described, the modal regression risk is defined explicitly and its \textit{Bayes} rule is characterized, and, for the sake of computational tractability, the surrogate modal regression risk, termed the generalization risk in our study, is introduced. On the theoretical side, the excess modal regression risk, the excess generalization risk, the function estimation error, and the relations among these three quantities are studied rigorously. It turns out that, under mild conditions, function estimation consistency and convergence may be pursued in modal regression just as in vanilla regression protocols such as mean regression, median regression, and quantile regression. However, modal regression outperforms these regression models in terms of robustness, as shown in our study from a re-descending M-estimation viewpoint; this coincides with, and in turn explains, the merits of MCCR regarding robustness. On the practical side, implementation issues of modal regression, including the computational algorithm and the selection of tuning parameters, are discussed. Numerical assessments of modal regression are also conducted to verify our findings empirically.
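In symbols, the modal regression function studied above is the conditional mode
$$f^{\star}(x) = \operatorname*{arg\,max}_{t}\; p_{Y\mid X}(t \mid x),$$
and a kernel-smoothed empirical surrogate (a generic form; the generalization risk in the paper may be normalized differently) maximizes $\frac{1}{n}\sum_{i=1}^{n} K_{\sigma}\big(y_{i} - f(x_{i})\big)$ over a hypothesis class, which coincides with MCCR when $K_{\sigma}$ is a Gaussian kernel and approaches modal regression as the scale $\sigma \to 0$.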
Kernel Density Estimation for Dynamical Systems
Hang, Hanyuan, Steinwart, Ingo, Feng, Yunlong, Suykens, Johan A. K.
We study the density estimation problem with observations generated by certain dynamical systems that admit a unique underlying invariant Lebesgue density. Observations drawn from dynamical systems are not independent, and, moreover, usual mixing concepts may not be appropriate for measuring the dependence among them. By employing the $\mathcal{C}$-mixing concept to measure the dependence, we conduct a statistical analysis of the consistency and convergence of the kernel density estimator. Our main results are as follows. First, we show that with a properly chosen bandwidth, the kernel density estimator is universally consistent under the $L_1$-norm. Second, we establish convergence rates for the estimator with respect to several classes of dynamical systems under the $L_1$-norm. In the analysis, the density function $f$ is only assumed to be H\"{o}lder continuous, which is a weak assumption in the literature on nonparametric density estimation and is also more realistic in the dynamical system context. Last but not least, we prove that the same convergence rates can be achieved for the estimator under the $L_\infty$-norm and the $L_1$-norm when the density function is H\"{o}lder continuous, compactly supported, and bounded. The bandwidth selection problem of the kernel density estimator for dynamical systems is also discussed in our study via numerical simulations.
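The estimator analyzed above is the standard kernel density estimator: given observations $X_{1}, \dots, X_{n}$ in $\mathbb{R}^{d}$ generated by the dynamical system, a kernel $K$, and a bandwidth $h > 0$,
$$\hat{f}_{n}(x) = \frac{1}{n h^{d}} \sum_{i=1}^{n} K\!\left(\frac{x - X_{i}}{h}\right),$$
with the dependence among the $X_{i}$ controlled through the $\mathcal{C}$-mixing assumption rather than independence.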
Learning theory estimates with observations from general stationary stochastic processes
Hang, Hanyuan, Feng, Yunlong, Steinwart, Ingo, Suykens, Johan A. K.
This paper investigates the supervised learning problem with observations drawn from certain general stationary stochastic processes. Here, by \emph{general} we mean that many stationary stochastic processes can be included. We show that when the stochastic processes satisfy a generalized Bernstein-type inequality, a unified treatment of learning schemes with various mixing processes can be conducted and a sharp oracle inequality for generic regularized empirical risk minimization schemes can be established. The obtained oracle inequality is then applied to derive convergence rates for several learning schemes, such as empirical risk minimization (ERM), least squares support vector machines (LS-SVMs) using given generic kernels, and SVMs using Gaussian kernels for both least squares and quantile regression. It turns out that for i.i.d.~processes, our learning rates for ERM recover the optimal rates. On the other hand, for non-i.i.d.~processes, including geometrically $\alpha$-mixing Markov processes, geometrically $\alpha$-mixing processes with restricted decay, $\phi$-mixing processes, and (time-reversed) geometrically $\mathcal{C}$-mixing processes, our learning rates for SVMs with Gaussian kernels match, up to an arbitrarily small extra term in the exponent, the optimal rates. For the remaining cases, our rates are at least close to the optimal rates. As a by-product, the assumed generalized Bernstein-type inequality also provides an interpretation of the so-called "effective number of observations" for various mixing processes.
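For orientation, the classical Bernstein inequality for i.i.d.\ observations $Z_{1}, \dots, Z_{n}$ and a centered function $h$ with $\mathbb{E}h = 0$, $\|h\|_{\infty} \le M$, and $\mathbb{E}h^{2} \le \sigma^{2}$ states that, for every $\varepsilon > 0$,
$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n} h(Z_{i}) \ge \varepsilon\right) \le \exp\!\left(-\frac{n\,\varepsilon^{2}}{2\sigma^{2} + \tfrac{2}{3} M \varepsilon}\right);$$
roughly speaking, the generalized Bernstein-type inequality assumed above takes the same form with $n$ replaced by an effective number of observations reflecting the mixing behavior of the process.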