Nothing deep about deep learning

#artificialintelligence

At Chicago, I recall undergraduate students gushing about deep learning to Professor Lafferty after class. I recall the hesitation in Professor Lafferty's voice at the time; it felt as though he was discussing a controversial, politically sensitive issue. Back then, we knew only a fraction of what we know now, and many of us were still wondering how deep learning could be anything more than non-linear regression. I had no motivation or curiosity to understand the subject, and even the trio at Stanford--the ones who gave us the best-selling ML book of all time--devoted only a few paragraphs to it in the first edition of their textbook, saying just that.



Low-rank features based double transformation matrices learning for image classification

arXiv.org Artificial Intelligence

Linear regression is a supervised method that has been widely used in classification tasks. To apply linear regression to classification, a technique for relaxing the regression targets was proposed. However, methods based on this technique ignore the pressure placed on a single transformation matrix by the complex information contained in the data. A single transformation matrix in this case is too rigid to provide a flexible projection, so it is necessary to relax the transformation matrix as well. This paper proposes a double-transformation-matrices learning method based on latent low-rank feature extraction. The core idea is to use two transformation matrices for relaxation, jointly projecting the learned principal and salient features from two directions into the label space, which shares the burden otherwise carried by a single transformation matrix. First, the low-rank features are learned by the latent low-rank representation (LatLRR) method, which processes the original data from two directions. In this process, sparse noise is also separated out, which alleviates its interference with projection learning to some extent. Then, two transformation matrices are introduced to process the two features separately and extract the information useful for classification. Finally, the two transformation matrices can be obtained easily by alternating optimization. Through such processing, even when samples contain a large amount of redundant information, our method can still obtain projection results that are easy to classify. Experiments on multiple data sets demonstrate the effectiveness of our approach for classification, especially in complex scenarios.
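To make the double-projection idea concrete, here is a minimal NumPy sketch of the alternating step, assuming a ridge-regularized least-squares objective and substituting a crude SVD-based split of the data for the LatLRR decomposition; it illustrates the mechanism rather than the paper's exact formulation.

import numpy as np

def fit_double_transform(F1, F2, Y, lam=1e-2, n_iter=50):
    # Alternating ridge updates for two transformation matrices W1, W2 that
    # jointly project two feature views into the label space:
    #   min_{W1,W2} ||Y - F1 W1 - F2 W2||_F^2 + lam*(||W1||_F^2 + ||W2||_F^2)
    d1, d2, c = F1.shape[1], F2.shape[1], Y.shape[1]
    W1, W2 = np.zeros((d1, c)), np.zeros((d2, c))
    for _ in range(n_iter):
        # Update W1 with W2 fixed (closed-form ridge solution), then W2 with W1 fixed.
        W1 = np.linalg.solve(F1.T @ F1 + lam * np.eye(d1), F1.T @ (Y - F2 @ W2))
        W2 = np.linalg.solve(F2.T @ F2 + lam * np.eye(d2), F2.T @ (Y - F1 @ W1))
    return W1, W2

# Toy usage: a crude SVD split of X stands in for the two LatLRR feature views.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
labels = rng.integers(0, 3, size=200)
Y = np.eye(3)[labels]                       # one-hot label matrix
_, _, Vt = np.linalg.svd(X, full_matrices=False)
F1 = X @ Vt[:5].T @ Vt[:5]                  # "principal" part (top right-singular subspace)
F2 = X - F1                                 # residual "salient" part
W1, W2 = fit_double_transform(F1, F2, Y)
pred = np.argmax(F1 @ W1 + F2 @ W2, axis=1)
print("train accuracy:", np.mean(pred == labels))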


Glossary of Machine Learning Terminology: A Beginner's Guide

#artificialintelligence

Machine learning algorithms, models, strategies, and other influential techniques are helping us unlock a wide range of applications. These computer systems are capable of self-learning and making business decisions, as well as assisting research and improving technology. As machine learning finds new applications across various sectors, the demand for professionals in the field is growing. According to the US Bureau of Labor Statistics, employment of computer and information research scientists is projected to grow 22 percent through 2030. Whichever area of machine learning interests you most, you must first familiarize yourself with machine learning terminology.


Relaxing the Feature Covariance Assumption: Time-Variant Bounds for Benign Overfitting in Linear Regression

arXiv.org Machine Learning

Benign overfitting demonstrates that overparameterized models can perform well on test data while fitting noisy training data. However, existing analyses consider only the final min-norm solution in linear regression, ignoring the algorithm and the corresponding training procedure. In this paper, we generalize the idea of benign overfitting from the min-norm solution to the whole training trajectory and derive a time-variant bound based on trajectory analysis. Starting from this bound, we further derive a time interval that suffices to guarantee a consistent generalization error for a given feature covariance. Unlike existing approaches, the proposed generalization bound is characterized by a time-variant effective dimension of the feature covariance. By introducing the time factor, we relax the strict assumption on the feature covariance matrix required by previous benign-overfitting analyses of overparameterized linear regression with gradient descent. This paper extends the scope of benign overfitting, and experimental results indicate that the proposed bound accords better with empirical evidence.
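As a rough illustration of the trajectory view (not the paper's bound), the following sketch runs plain gradient descent on an overparameterized linear regression problem with a decaying feature covariance and reports train and test error along the way; the spectrum, step size, and noise level are arbitrary choices.

import numpy as np

# Plain gradient descent on an overparameterized linear regression problem,
# reporting error along the whole trajectory rather than only at the
# min-norm interpolator.
rng = np.random.default_rng(1)
n, d = 100, 2000                                   # n << d: overparameterized regime
eigs = 1.0 / np.arange(1, d + 1) ** 1.2            # decaying feature covariance
theta_star = rng.normal(size=d) * np.sqrt(eigs)
X = rng.normal(size=(n, d)) * np.sqrt(eigs)        # training features
y = X @ theta_star + 0.5 * rng.normal(size=n)      # noisy labels
X_test = rng.normal(size=(1000, d)) * np.sqrt(eigs)
y_test = X_test @ theta_star                       # noiseless test targets

theta = np.zeros(d)
step = 1.0 / np.linalg.norm(X, 2) ** 2             # stable step size
for t in range(1, 2001):
    theta -= step * X.T @ (X @ theta - y)          # full-batch gradient step
    if t % 500 == 0:
        print(f"iter {t:4d}  train MSE {np.mean((X @ theta - y) ** 2):.4f}"
              f"  test MSE {np.mean((X_test @ theta - y_test) ** 2):.4f}")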


Fast and Robust Sparsity Learning over Networks: A Decentralized Surrogate Median Regression Approach

arXiv.org Machine Learning

In recent years, decentralized machine learning (ML) has received growing research interest due to its advantages in system stability, data privacy, and computation efficiency [24, 28]. In contrast to the traditional centralized distributed architecture coordinated by a master machine, decentralized ML works with peer-to-peer networked systems, where workers perform local computation and pass messages over the network links. The goal of decentralized ML is to learn a global model by having workers optimize their own models and share local model information with their neighbors. So far, decentralized ML has achieved significant success in many scientific and engineering areas, including distributed sensing in wireless sensor networks [25, 29, 33, 50], multi-agent robotic systems [4, 31, 53], and smart grids [13, 17]. However, despite its increasing adoption in applications, the performance of most decentralized ML methods is not robust and is vulnerable in the following three respects: 1) Data Heterogeneity. Lacking the global information aggregated by a central master, workers in decentralized network systems learn the model relying heavily on their local data and neighboring information.
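For context on the decentralized setup, here is a generic sketch of decentralized median (least-absolute-deviation) regression over a ring network, where each worker takes a local subgradient step and gossip-averages with its neighbors; this is a baseline illustration, not the surrogate median regression algorithm proposed in the paper.

import numpy as np

rng = np.random.default_rng(2)
n_workers, n_local, d = 8, 50, 10
beta_true = rng.normal(size=d)

# Heterogeneous local data: each worker gets heavy-tailed noise at its own scale.
data = []
for k in range(n_workers):
    Xk = rng.normal(size=(n_local, d))
    yk = Xk @ beta_true + rng.standard_t(df=2, size=n_local) * (0.2 + 0.1 * k)
    data.append((Xk, yk))

betas = np.zeros((n_workers, d))
step = 0.05
for t in range(1, 501):
    new_betas = np.empty_like(betas)
    for k, (Xk, yk) in enumerate(data):
        # Subgradient of the local median (L1) regression loss.
        g = -Xk.T @ np.sign(yk - Xk @ betas[k]) / n_local
        local = betas[k] - (step / np.sqrt(t)) * g
        # Gossip averaging with the two ring neighbors (uniform weights).
        left, right = betas[(k - 1) % n_workers], betas[(k + 1) % n_workers]
        new_betas[k] = (local + left + right) / 3.0
    betas = new_betas

print("mean estimation error:", np.mean(np.linalg.norm(betas - beta_true, axis=1)))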


Benign-Overfitting in Conditional Average Treatment Effect Prediction with Linear Regression

arXiv.org Machine Learning

We study benign overfitting theory in the prediction of the conditional average treatment effect (CATE) with linear regression models. With the development of machine learning for causal inference, a wide range of large-scale models for causality are gaining attention. One concern is that large-scale models are prone to overfitting to observations affected by sample selection, and hence may not be suitable for causal prediction. In this study, to address this concern, we investigate the validity of causal inference methods for overparameterized models by applying the recent theory of benign overfitting (Bartlett et al., 2020). Specifically, we consider samples whose distribution switches depending on an assignment rule, and study the prediction of CATE with linear models whose dimension diverges to infinity. We focus on two methods: the T-learner, which is based on the difference between estimators constructed separately for each treatment group, and the inverse probability weighting (IPW)-learner, which solves another regression problem approximated by a propensity score. In both methods, the estimator consists of interpolators that fit the samples perfectly. As a result, we show that the T-learner fails to achieve consistency except under random assignment, while the IPW-learner's risk converges to zero if the propensity score is known. This difference stems from the fact that the T-learner is unable to preserve the eigenspaces of the covariances, which is necessary for benign overfitting in the overparameterized setting. Our result provides new insights into the use of causal inference methods in the overparameterized setting, in particular for doubly robust estimators.
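A toy simulation of the two learners with min-norm (ridgeless) interpolators is sketched below; the dimensions, assignment rule, and noise level are illustrative assumptions rather than the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 1000
beta1, beta0 = rng.normal(size=d) / np.sqrt(d), rng.normal(size=d) / np.sqrt(d)
tau = beta1 - beta0                              # CATE coefficients

X = rng.normal(size=(n, d))
propensity = 1.0 / (1.0 + np.exp(-X[:, 0]))      # known, covariate-dependent assignment
A = (rng.uniform(size=n) < propensity).astype(float)
y = A * (X @ beta1) + (1 - A) * (X @ beta0) + 0.1 * rng.normal(size=n)

def min_norm_fit(Xs, ys):
    # Minimum-norm interpolator: the limit of ridge regression as lambda -> 0.
    return np.linalg.pinv(Xs) @ ys

# T-learner: fit each treatment arm separately, take the difference.
tau_T = min_norm_fit(X[A == 1], y[A == 1]) - min_norm_fit(X[A == 0], y[A == 0])

# IPW-learner: regress the inverse-probability-weighted pseudo-outcome on X.
pseudo = y * (A / propensity - (1 - A) / (1 - propensity))
tau_IPW = min_norm_fit(X, pseudo)

X_test = rng.normal(size=(2000, d))
for name, est in [("T-learner", tau_T), ("IPW-learner", tau_IPW)]:
    risk = np.mean((X_test @ (est - tau)) ** 2)
    print(f"{name:12s} CATE excess risk: {risk:.4f}")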


Transfer-Learning Across Datasets with Different Input Dimensions: An Algorithm and Analysis for the Linear Regression Case

arXiv.org Machine Learning

With the development of new sensors and monitoring devices, more sources of data become available as inputs for machine learning models. On the one hand, these can help improve the accuracy of a model. On the other hand, combining these new inputs with historical data remains a challenge that has not yet been studied in enough detail. In this work, we propose a transfer-learning algorithm that combines the new and the historical data and is especially beneficial when the new data is scarce. We focus on the linear regression case, which allows us to conduct a rigorous theoretical study of the benefits of the approach. We show that our approach is robust against negative transfer learning, and we confirm this result empirically with real and simulated data.
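The paper's algorithm is not reproduced here, but the problem setup can be illustrated with a simple two-stage linear baseline: fit on the abundant historical data over the shared features, then fit only the new-feature coefficients on the scarce new data. All dimensions and the ridge penalty below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)
d_old, d_new = 8, 3                   # shared features vs. newly added features
w_old, w_new = rng.normal(size=d_old), rng.normal(size=d_new)

# Historical data: many samples, old features only.
X_hist = rng.normal(size=(500, d_old))
y_hist = X_hist @ w_old + 0.1 * rng.normal(size=500)

# New data: few samples, old + new features.
X_tgt = rng.normal(size=(20, d_old + d_new))
y_tgt = X_tgt[:, :d_old] @ w_old + X_tgt[:, d_old:] @ w_new + 0.1 * rng.normal(size=20)

# Stage 1: ridge fit on historical data over the shared features.
lam = 1e-2
w_hat_old = np.linalg.solve(X_hist.T @ X_hist + lam * np.eye(d_old), X_hist.T @ y_hist)

# Stage 2: on the scarce new data, fit only the new-feature coefficients
# against the residual left by the historical model.
resid = y_tgt - X_tgt[:, :d_old] @ w_hat_old
Z = X_tgt[:, d_old:]
w_hat_new = np.linalg.solve(Z.T @ Z + lam * np.eye(d_new), Z.T @ resid)

w_full = np.concatenate([w_hat_old, w_hat_new])
print("coefficient error:", np.linalg.norm(w_full - np.concatenate([w_old, w_new])))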


Posterior Consistency for Bayesian Relevance Vector Machines

arXiv.org Machine Learning

Statistical modeling and inference problems with sample sizes substantially smaller than the number of available covariates are challenging. Chakraborty et al. (2012) carried out a full hierarchical Bayesian analysis of nonlinear regression in such situations using relevance vector machines based on a reproducing kernel Hilbert space (RKHS), but they did not provide any theoretical properties associated with their procedure. The present paper revisits their problem, introduces a new class of global-local priors different from theirs, and provides results on posterior consistency as well as posterior contraction rates.
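For readers unfamiliar with the relevance vector machine setup, the sketch below fits a sparse Bayesian regression on an RBF kernel design matrix, using scikit-learn's ARDRegression as a stand-in for the RVM-style priors; the paper's global-local priors and consistency results are not reproduced.

import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(5)
n = 120
x = np.sort(rng.uniform(-3, 3, size=n)).reshape(-1, 1)
y = np.sinc(x).ravel() + 0.05 * rng.normal(size=n)   # nonlinear signal plus noise

K = rbf_kernel(x, x, gamma=2.0)         # n x n kernel design matrix (RKHS features)
model = ARDRegression()                  # ARD priors prune irrelevant basis functions
model.fit(K, y)

x_test = np.linspace(-3, 3, 200).reshape(-1, 1)
K_test = rbf_kernel(x_test, x, gamma=2.0)
y_pred, y_std = model.predict(K_test, return_std=True)
n_relevant = np.sum(np.abs(model.coef_) > 1e-3)
print(f"retained {n_relevant} of {n} kernel basis functions")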


Random Forests Weighted Local Fr\'echet Regression with Theoretical Guarantee

arXiv.org Machine Learning

Statistical analysis is increasingly confronted with complex data from general metric spaces, such as symmetric positive-definite matrix-valued data and probability distribution functions. [47] and [17] establish a general paradigm of Fréchet regression with complex metric-space-valued responses and Euclidean predictors. However, their proposed local Fréchet regression approach involves nonparametric kernel smoothing and suffers from the curse of dimensionality. To address this issue, in this paper we propose a novel random forests weighted local Fréchet regression paradigm. The main mechanism of our approach relies on the adaptive kernels generated by random forests. Our first method uses these weights as a local average to solve for the Fréchet mean, while the second method performs local linear Fréchet regression, making both methods locally adaptive. Our proposals significantly improve existing Fréchet regression methods. Based on the theory of infinite-order U-processes and infinite-order Mmn-estimators, we establish the consistency, rate of convergence, and asymptotic normality of our proposed random forests weighted Fréchet regression estimator, which covers the current large-sample theory of random forests with Euclidean responses as a special case. Numerical studies show the superiority of our two proposed methods for Fréchet regression with several commonly encountered types of responses, such as probability distribution functions, symmetric positive-definite matrices, and sphere data. The practical merits of our proposals are also demonstrated through an application to human mortality distribution data.
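To illustrate the adaptive-kernel mechanism, the sketch below extracts random-forest leaf-co-occurrence weights and uses them to form a weighted Wasserstein-2 Fréchet mean of distributional responses (represented by quantile functions). Fitting the forest on a scalar summary of each response is a simplification for illustration, not the paper's construction, and the local linear variant is omitted.

import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n, d, n_q = 300, 5, 101
X = rng.normal(size=(n, d))
grid = np.linspace(0.01, 0.99, n_q)
# Each response is a Gaussian distribution whose mean depends on the predictors.
mu = np.sin(X[:, 0]) + 0.5 * X[:, 1]
Q = norm.ppf(grid[None, :], loc=mu[:, None], scale=1.0)   # quantile functions, n x n_q

forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
forest.fit(X, mu)                      # surrogate scalar target used to grow the trees

def rf_weights(x0):
    # Adaptive kernel: average, over trees, of the uniform weights on the
    # training points sharing x0's leaf.
    train_leaves = forest.apply(X)                     # (n, n_trees)
    query_leaves = forest.apply(x0.reshape(1, -1))[0]  # (n_trees,)
    w = np.zeros(n)
    for t in range(train_leaves.shape[1]):
        same = train_leaves[:, t] == query_leaves[t]
        w[same] += 1.0 / same.sum()
    return w / train_leaves.shape[1]

x0 = np.zeros(d)
w = rf_weights(x0)
frechet_mean_quantiles = w @ Q          # weighted quantile average = W2 barycenter
print("estimated mean of the barycenter:", frechet_mean_quantiles.mean())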