Lendasse, Amaury
Extreme AutoML: Analysis of Classification, Regression, and NLP Performance
Ratner, Edward, Farmer, Elliot, Warner, Brandon, Douglas, Christopher, Lendasse, Amaury
Utilizing machine learning techniques has always required choosing hyperparameters. This is true whether one uses a classical technique such as k-nearest neighbors (KNN) or very modern neural networks such as Deep Learning. Although in many applications hyperparameters are still chosen by hand, automated methods have become increasingly common. These automated methods are collectively known as automated machine learning, or AutoML. Several automated selection algorithms have shown similar or improved performance over state-of-the-art methods. This breakthrough has led to the development of cloud-based services like Google AutoML, which is based on Deep Learning and is widely considered to be the industry leader in AutoML services. Extreme Learning Machines (ELMs) use a fundamentally different type of neural architecture, producing better results at a significantly lower computational cost. We benchmark the Extreme AutoML technology against Google's AutoML using several popular classification data sets from the University of California at Irvine's (UCI) repository, along with several other data sets, and observe significant advantages for Extreme AutoML in accuracy, Jaccard Indices, the variance of Jaccard Indices across classes (i.e. class variance), and training times.
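The low training cost of ELMs mentioned above comes from their architecture: hidden-layer weights are drawn at random and never trained, so only the output weights are fit, by a single least-squares solve. A minimal sketch of this idea (function names and hyperparameters here are illustrative, not the Extreme AutoML implementation):

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Basic ELM: random, untrained hidden layer; output weights by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only the output weights are fit
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# fit a 1-D regression target with a single linear solve, no iterative training
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = train_elm(X, y)
pred = predict_elm(X, W, b, beta)
```

Because the only "training" is one `lstsq` call, fitting is orders of magnitude cheaper than backpropagation, which is the computational advantage the abstract refers to.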
Incremental ELMVIS for unsupervised learning
Akusok, Anton, Eirola, Emil, Miche, Yoan, Oliver, Ian, Björk, Kaj-Mikael, Gritsenko, Andrey, Baek, Stephen, Lendasse, Amaury
The ELMVIS method [5] is an interesting Machine Learning method that optimizes a cost function by changing the assignment between two sets of samples or, equivalently, by changing the order of samples in one set. The cost function is learned by an Extreme Learning Machine (ELM) [13, 12, 10], a fast method for training feed-forward neural networks with convenient mathematical properties [11, 14]. Such optimization problems arise in various applications such as the open-loop Traveling Salesman Problem [7] or clustering [4] (mapping between samples and clusters), but not typically in neural networks. ELMVIS is unique in that it combines the optimal assignment task with a neural network optimization problem; the latter is optimized at each step of ELMVIS. A recent advance in the ELMVIS method [2] made its runtime comparable to or faster than that of other state-of-the-art visualization methods.
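The assignment-plus-ELM optimization described above can be sketched as a greedy swap search: propose a swap of two samples' positions, re-evaluate the ELM reconstruction error, and keep the swap only if the error drops. This is a simplified sketch of the idea, not the incremental-update algorithm of the paper (all names and sizes here are illustrative):

```python
import numpy as np

def elmvis_sketch(V, X, n_hidden=20, iters=300, seed=0):
    """Greedy assignment search in the spirit of ELMVIS (simplified):
    reorder data samples X so an ELM mapping fixed visualization points V
    to X reconstructs X as well as possible."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(V.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(V @ W + b)              # hidden layer over the fixed points V
    P = H @ np.linalg.pinv(H)           # projection onto the ELM's output span

    def cost(perm):
        Xp = X[perm]
        return float(np.sum((Xp - P @ Xp) ** 2))  # ELM reconstruction error

    perm = rng.permutation(len(X))
    best = cost(perm)
    for _ in range(iters):
        i, j = rng.integers(len(X), size=2)
        perm[i], perm[j] = perm[j], perm[i]       # propose a swap
        c = cost(perm)
        if c < best:
            best = c                              # keep an improving swap
        else:
            perm[i], perm[j] = perm[j], perm[i]   # otherwise revert it
    return perm, best

rng = np.random.default_rng(1)
V = rng.normal(size=(30, 2))   # fixed 2-D visualization coordinates
X = rng.normal(size=(30, 5))   # data samples to assign to those coordinates
perm, err = elmvis_sketch(V, X)
```

Note that the ELM itself is "retrained" implicitly at every step: because the output weights have a closed-form least-squares solution, the reconstruction error under the optimal weights is just a projection, which is what makes re-evaluating each candidate swap cheap.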
LARSEN-ELM: Selective Ensemble of Extreme Learning Machines using LARS for Blended Data
Han, Bo, He, Bo, Nian, Rui, Ma, Mengmeng, Zhang, Shujing, Li, Minghui, Lendasse, Amaury
Extreme learning machine (ELM) is a neural network algorithm with well-known strengths, such as fast training speed and a simple structure, but the original ELM also suffers from weak robustness on blended data. We present a new machine learning framework called LARSEN-ELM to overcome this problem. LARSEN-ELM consists of two key steps. In the first step, preprocessing, we select the input variables most strongly related to the output using least angle regression (LARS). In the second step, training, we employ a Genetic Algorithm (GA)-based selective ensemble together with the original ELM. In the experiments, we use a synthetic sum of two sines and four datasets from the UCI repository to verify the robustness of our approach. The experimental results show that, compared with the original ELM and other methods such as OP-ELM, GASEN-ELM, and LSBoost, LARSEN-ELM significantly improves robustness while maintaining a relatively high speed.
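The preprocessing step above ranks input variables by their relationship to the output. The core mechanism LARS relies on is repeatedly finding the variable most correlated with the current residual; the sketch below uses plain greedy forward selection by residual correlation as a stand-in for full LARS (the function and data are illustrative, not the paper's implementation):

```python
import numpy as np

def select_inputs(X, y, k=2):
    """Greedy forward selection by correlation with the residual --
    the selection principle behind LARS, not the full LARS path."""
    X = (X - X.mean(0)) / X.std(0)     # standardize inputs
    yc = y - y.mean()
    r = yc.copy()
    selected = []
    for _ in range(k):
        corr = np.abs(X.T @ r)
        corr[selected] = -np.inf       # skip already-chosen variables
        selected.append(int(np.argmax(corr)))
        # refit on the chosen columns and update the residual
        A = X[:, selected]
        coef, *_ = np.linalg.lstsq(A, yc, rcond=None)
        r = yc - A @ coef
    return selected

# toy data: the output depends only on input columns 0 and 2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=200)
chosen = select_inputs(X, y, k=2)
```

Only the `k` selected columns would then be passed to the GA-based ELM ensemble in the training step, so irrelevant or noisy inputs never reach the networks.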