lssvm
High-Dimensional Analysis of Bootstrap Ensemble Classifiers
Cherkaoui, Hamza, Tiomoko, Malik, Seddik, Mohamed El Amine, Louart, Cosme, Schnoor, Ekkehard, Kegl, Balazs
Bootstrap methods have long been a cornerstone of ensemble learning in machine learning. This paper presents a theoretical analysis of bootstrap techniques applied to the Least Squares Support Vector Machine (LSSVM) ensemble in the regime where sample size and feature dimensionality grow large together. Leveraging tools from Random Matrix Theory, we investigate the performance of this classifier, which aggregates the decision functions of multiple weak classifiers, each trained on a different subset of the data. We provide insights into the use of bootstrap methods in high-dimensional settings, enhancing our understanding of their impact. Based on these findings, we propose strategies for selecting the number of subsets and the regularization parameter that maximize the performance of the LSSVM. Empirical experiments on synthetic and real-world datasets validate our theoretical results.
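For readers who want to experiment with the setting analyzed above, here is a minimal sketch of a bagged LSSVM classifier: each weak learner solves the standard LSSVM dual linear system on a bootstrap resample, and decision scores are averaged. The Gaussian kernel, uniform resampling, score averaging, and all names and parameters are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_lssvm(X, y, reg=1.0, gamma=0.5):
    # LSSVM dual: solve the linear system
    #   [ 0      1^T    ] [b]       [0]
    #   [ 1   K + I/reg ] [alpha] = [y]
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / reg
    sol = np.linalg.solve(A, np.concatenate(([0.0], y.astype(float))))
    b, alpha = sol[0], sol[1:]
    return lambda Z: rbf_kernel(Z, X, gamma) @ alpha + b

def bagged_lssvm(X, y, n_subsets=5, reg=1.0, gamma=0.5, seed=0):
    # Train one LSSVM per bootstrap resample and average decision scores.
    rng = np.random.default_rng(seed)
    models = [fit_lssvm(X[idx], y[idx], reg, gamma)
              for idx in (rng.choice(len(y), size=len(y), replace=True)
                          for _ in range(n_subsets))]
    return lambda Z: np.mean([f(Z) for f in models], axis=0)

# Toy binary problem: two Gaussian blobs labeled -1 / +1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 0.5, (40, 2)), rng.normal(1.0, 0.5, (40, 2))])
y = np.array([-1.0] * 40 + [1.0] * 40)
f = bagged_lssvm(X, y, n_subsets=5)
acc = float(np.mean(np.sign(f(X)) == y))
```

The number of subsets and the regularization parameter `reg` are exactly the quantities the paper's theory proposes to tune.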
Least Squares Maximum and Weighted Generalization-Memorization Machines
Wang, Shuai, Wang, Zhen, Shao, Yuan-Hai
In this paper, we propose a new way of remembering by introducing a memory influence mechanism for the least squares support vector machine (LSSVM). Without changing the equality constraints of the original LSSVM, this mechanism allows an accurate partitioning of the training set without overfitting. The maximum memory impact model (MIMM) and the weighted impact memory model (WIMM) are then proposed. It is demonstrated that these models reduce to the LSSVM. Furthermore, we propose several different memory impact functions for the MIMM and WIMM. The experimental results show that our MIMM and WIMM have better generalization performance than the LSSVM and a significant advantage in time cost over other memory models.
Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs
Liu, Jiani, Tao, Qinghua, Zhu, Ce, Liu, Yipeng, Huang, Xiaolin, Suykens, Johan A. K.
Multitask learning (MTL) leverages task-relatedness to enhance performance. With the emergence of multimodal data, tasks can now be referenced by multiple indices. In this paper, we employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices and preserve their structural relations. Based on this representation, we propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least squares support vector machines (LSSVMs), where the CP factorization is deployed over the coefficient tensor. Our approach allows the task relations to be modeled through a linear combination of shared factors weighted by task-specific factors, and it generalizes to both classification and regression problems. Through the alternating optimization scheme and the Lagrangian function, each subproblem is transformed into a convex problem, formulated as a quadratic program or a linear system in the dual form. In contrast to previous MTL frameworks, our decision function in the dual induces a weighted kernel function with a task-coupling term characterized by the similarities of the task-specific factors, better revealing the explicit relations across tasks in MTL. Experimental results validate the effectiveness and superiority of our proposed methods compared to existing state-of-the-art approaches in MTL. The code of the implementation will be available at https://github.com/liujiani0216/TSVM-MTL.
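The core modeling idea above, per-task weights expressed as shared factors weighted by task-specific coefficients, can be illustrated in a stripped-down form. The sketch below is an assumption-laden simplification (a single task index, plain ridge losses, alternating least squares) rather than the paper's tensorized SVM/LSSVM formulation; all names are illustrative.

```python
import numpy as np

def fit_shared_factor_mtl(Xs, ys, rank=2, lam=0.1, n_iter=30, seed=0):
    # Each task's weight vector is w_t = U @ s_t: U holds factors shared
    # by all tasks, s_t holds the task-specific combination weights.
    rng = np.random.default_rng(seed)
    d, T = Xs[0].shape[1], len(Xs)
    U = rng.normal(size=(d, rank))
    S = [rng.normal(size=rank) for _ in range(T)]
    for _ in range(n_iter):
        # Task-specific step: ridge regression inside the shared subspace.
        for t in range(T):
            Z = Xs[t] @ U
            S[t] = np.linalg.solve(Z.T @ Z + lam * np.eye(rank), Z.T @ ys[t])
        # Shared step: ridge regression over vec(U), using the identity
        # X_t U s_t = (s_t^T kron X_t) vec(U) with column-major vec.
        A = np.vstack([np.kron(S[t][None, :], Xs[t]) for t in range(T)])
        b = np.concatenate(ys)
        vecU = np.linalg.solve(A.T @ A + lam * np.eye(d * rank), A.T @ b)
        U = vecU.reshape(d, rank, order="F")
    return U, S

# Noiseless synthetic tasks whose true weights share a rank-2 factor.
rng = np.random.default_rng(1)
U0 = rng.normal(size=(5, 2))
S0 = [rng.normal(size=2) for _ in range(3)]
Xs = [rng.normal(size=(50, 5)) for _ in range(3)]
ys = [X @ (U0 @ s) for X, s in zip(Xs, S0)]
U, S = fit_shared_factor_mtl(Xs, ys)
rel_err = max(np.linalg.norm(Xs[t] @ (U @ S[t]) - ys[t]) / np.linalg.norm(ys[t])
              for t in range(3))
```

The alternating scheme mirrors the paper's strategy of cycling over factor blocks, with each subproblem reduced to a linear system.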
Tensorized LSSVMs for Multitask Regression
Liu, Jiani, Tao, Qinghua, Zhu, Ce, Liu, Yipeng, Suykens, Johan A. K.
Multitask learning (MTL) can utilize the relatedness between multiple tasks for performance improvement. The advent of multimodal data allows tasks to be referenced by multiple indices. High-order tensors are capable of providing efficient representations for such tasks, while preserving structural task-relations. In this paper, a new MTL method is proposed by leveraging low-rank tensor analysis and constructing tensorized Least Squares Support Vector Machines, namely the tLSSVM-MTL, where multilinear modelling and its nonlinear extensions can be flexibly exerted. We employ a high-order tensor for all the weights, with each mode relating to an index, and factorize it with CP decomposition, assigning a shared factor for all tasks and retaining task-specific latent factors along each index. An alternating algorithm is then derived for the nonconvex optimization, where each resulting subproblem is solved by a linear system. Experimental results demonstrate the promising performance of our tLSSVM-MTL.
Sparse Least Squares Low Rank Kernel Machines
Fang, Manjing, Xu, Di, Hong, Xia, Gao, Junbin
A general framework of least squares support vector machines with low-rank kernels, referred to as LR-LSSVM, is introduced in this paper. The special structure of low-rank kernels with a controlled model size brings sparsity as well as computational efficiency to the proposed model. Meanwhile, a two-step optimization algorithm with three different criteria is proposed, and various experiments are carried out using the so-called robust RBF kernel as an example to validate the model. The experimental results show that the performance of the proposed algorithm is comparable or superior to that of several existing kernel machines. With the proliferation of big data in scientific and business research, practical nonlinear modeling calls for sparse models and more efficient algorithms. Kernel machines (KMs) have attracted great attention since the support vector machine (SVM), a well-known linear binary classification model built on the principle of risk minimization, was introduced in the early 1990s [1]. In fact, KMs extend the SVM by applying its linear machinery in a so-called high-dimensional feature space, under a feature mapping implicitly determined by a Mercer kernel function.
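To illustrate how a low-rank kernel yields a small, sparse model, here is a sketch using a Nyström-style low-rank approximation of a plain RBF kernel and a primal ridge solve. The landmark choice, the plain RBF kernel (rather than the paper's robust RBF kernel), and all names are assumptions for illustration, not the LR-LSSVM algorithm itself.

```python
import numpy as np

def rbf_kernel(A, B, g=0.5):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    return np.exp(-g * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def lowrank_lssvm(X, y, landmarks, reg=1.0, g=0.5):
    # Nystroem features: K ~ Phi Phi^T with Phi = K_nm Kmm^{-1/2}.
    # Solving a ridge problem on Phi gives a model supported on just
    # the m landmark points, i.e. a sparse kernel machine.
    Kmm = rbf_kernel(landmarks, landmarks, g)
    evals, V = np.linalg.eigh(Kmm)
    inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-10))) @ V.T
    feats = lambda Z: np.hstack([rbf_kernel(Z, landmarks, g) @ inv_sqrt,
                                 np.ones((len(Z), 1))])  # append bias column
    P = feats(X)
    w = np.linalg.solve(P.T @ P + np.eye(P.shape[1]) / reg, P.T @ y)
    return lambda Z: feats(Z) @ w

# Toy binary problem with only 8 landmarks out of 80 training points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, (40, 2)), rng.normal(1.0, 0.5, (40, 2))])
y = np.array([-1.0] * 40 + [1.0] * 40)
f = lowrank_lssvm(X, y, landmarks=X[::10])
acc = float(np.mean(np.sign(f(X)) == y))
```

The model size is controlled entirely by the number of landmarks, which is the sparsity mechanism the low-rank kernel structure provides.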
Sparse Algorithm for Robust LSSVM in Primal Space
Li Chen a,b, Shuisheng Zhou a
a School of Mathematics and Statistics, Xidian University, 266 Xinglong Section, Xifeng Road, Xi'an, China
b Department of Basic Science, College of Information and Business, Zhongyuan Technology University, 41 Zhongyuan Middle Road, Zhengzhou, China
Enjoying a closed-form solution, the least squares support vector machine (LSSVM) has been widely used for classification and regression problems, with performance comparable to other types of SVMs. However, LSSVM has two drawbacks: it is sensitive to outliers and it lacks sparseness. Robust LSSVM (R-LSSVM) partly overcomes the first via a nonconvex truncated loss function, but the current algorithms for R-LSSVM, which produce dense solutions, still face the second drawback and are inefficient for training large-scale problems. In this paper, we interpret the robustness of R-LSSVM from a re-weighted viewpoint and derive a primal R-LSSVM via the representer theorem. The new model may have a sparse solution if the corresponding kernel matrix has low rank. By approximating the kernel matrix with a low-rank matrix and smoothing the loss function with an entropy penalty function, we propose a convergent sparse R-LSSVM (SR-LSSVM) algorithm that achieves the sparse solution of the primal R-LSSVM, overcoming both drawbacks of LSSVM simultaneously. The proposed algorithm has lower complexity than existing algorithms and is very efficient for training large-scale problems. Many experimental results illustrate that SR-LSSVM achieves better or comparable performance with less training time than related algorithms, especially on large-scale problems.
Keywords: Primal LSSVM, Sparse solution, Re-weighted LSSVM, Low-rank approximation, Outliers
2010 MSC: 00-01, 99-00
1. Introduction
Least squares support vector machine (LSSVM) was introduced by Suykens [1] and has been a powerful learning technique for classification and regression. It has been successfully used in many real-world pattern recognition problems, such as disease diagnosis [2], fault detection [3], image classification [4], solving partial differential equations [5], and visual tracking [6]. LSSVM minimizes the least squares error on the training samples.
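The re-weighted viewpoint mentioned above can be illustrated on a linear toy problem: the truncated squared loss min(r^2, c^2) is flat beyond |r| = c, so it can be minimized by repeatedly solving a weighted ridge problem in which samples whose residuals exceed c receive zero weight. This is a hedged sketch of the general idea only, not the SR-LSSVM algorithm (no kernel, no low-rank approximation, no entropy smoothing); all names are illustrative.

```python
import numpy as np

def robust_lssvm_linear(X, y, reg=10.0, c=1.5, n_iter=10):
    # Iteratively re-weighted ridge regression: at each step, samples
    # whose current residual exceeds c get weight 0 (outlier rejection),
    # then the weighted normal equations are re-solved.
    n, d = X.shape
    X1 = np.hstack([X, np.ones((n, 1))])  # append bias column
    v = np.ones(n)
    w = np.zeros(d + 1)
    for _ in range(n_iter):
        Xw = X1 * v[:, None]
        w = np.linalg.solve(Xw.T @ X1 + np.eye(d + 1) / reg, Xw.T @ y)
        v = (np.abs(X1 @ w - y) <= c).astype(float)
    return w

# Linear data with 10% gross outliers; the re-weighted fit recovers w0.
rng = np.random.default_rng(0)
w0 = np.array([2.0, -1.0, 0.5])  # last entry is the bias
X = rng.normal(size=(100, 2))
y = np.hstack([X, np.ones((100, 1))]) @ w0 + 0.1 * rng.normal(size=100)
y[:10] += 10.0  # corrupt the first 10 targets
w = robust_lssvm_linear(X, y)
```

A plain ridge fit on the same data is pulled toward the outliers, which is precisely the sensitivity the truncated loss is designed to remove.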
Marginal Structured SVM with Hidden Variables
Ping, Wei, Liu, Qiang, Ihler, Alexander
In this work, we propose the marginal structured SVM (MSSVM) for structured prediction with hidden variables. MSSVM properly accounts for the uncertainty of hidden variables, and can significantly outperform the previously proposed latent structured SVM (LSSVM; Yu & Joachims (2009)) and other state-of-the-art methods, especially when that uncertainty is large. Our method also results in a smoother objective function, making gradient-based optimization of MSSVMs converge significantly faster than for LSSVMs. We also show that our method consistently outperforms hidden conditional random fields (HCRFs; Quattoni et al. (2007)) on both simulated and real-world datasets. Furthermore, we propose a unified framework that includes both our and several other existing methods as special cases, and provides insights into the comparison of different models in practice.
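The key distinction between the LSSVM and MSSVM scoring rules, maximizing over hidden variables versus marginalizing them out, can be seen in one line each. The sketch below is a generic illustration of why log-sum-exp yields a smoother score than max; it is not the full structured model, and the variable names are illustrative.

```python
import numpy as np

def score_latent_max(scores):
    # LSSVM-style: commit to the single best hidden assignment.
    return float(np.max(scores))

def score_latent_marginal(scores):
    # MSSVM-style: log-sum-exp over hidden assignments, a smooth
    # upper bound on the max that accounts for their uncertainty.
    m = np.max(scores)
    return float(m + np.log(np.sum(np.exp(scores - m))))

# When one assignment dominates, the two scores nearly coincide;
# when several assignments compete, marginalization adds mass for each.
peaked = np.array([5.0, -50.0, -50.0])
spread = np.array([5.0, 4.9, 4.8])
```

Because log-sum-exp is everywhere differentiable while max has kinks wherever the argmax changes, gradient-based training of the marginalized objective behaves better, matching the convergence claim above.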
Efficient Structured Prediction with Latent Variables for General Graphical Models
Schwing, Alexander, Hazan, Tamir, Pollefeys, Marc, Urtasun, Raquel
In this paper we propose a unified framework for structured prediction with latent variables which includes hidden conditional random fields and latent structured support vector machines as special cases. We describe a local entropy approximation for this general formulation using duality, and derive an efficient message passing algorithm that is guaranteed to converge. We demonstrate its effectiveness in the tasks of image segmentation as well as 3D indoor scene understanding from single images, showing that our approach is superior to latent structured support vector machines and hidden conditional random fields.
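As background for the message-passing component, the sketch below implements exact sum-product (forward-backward) on a chain MRF and checks it against brute-force enumeration. This is a generic illustration only: the paper's convergent algorithm handles general graphs via local entropy approximations and duality, which is considerably more involved.

```python
import numpy as np
from itertools import product

def chain_marginals(unary, pairwise):
    # Exact sum-product on a chain MRF with
    # p(x) proportional to prod_i unary[i, x_i] * prod_i pairwise[x_i, x_{i+1}].
    n, k = unary.shape
    fwd = np.zeros((n, k))
    bwd = np.ones((n, k))
    fwd[0] = unary[0]
    for i in range(1, n):           # forward messages
        fwd[i] = unary[i] * (fwd[i - 1] @ pairwise)
    for i in range(n - 2, -1, -1):  # backward messages
        bwd[i] = pairwise @ (unary[i + 1] * bwd[i + 1])
    marg = fwd * bwd
    return marg / marg.sum(axis=1, keepdims=True)

# Verify against brute-force enumeration on a small chain.
rng = np.random.default_rng(0)
n, k = 4, 3
unary = rng.uniform(0.1, 1.0, size=(n, k))
pairwise = rng.uniform(0.1, 1.0, size=(k, k))
marg = chain_marginals(unary, pairwise)
brute = np.zeros((n, k))
for xs in product(range(k), repeat=n):
    p = np.prod([unary[i, xs[i]] for i in range(n)])
    p *= np.prod([pairwise[xs[i], xs[i + 1]] for i in range(n - 1)])
    for i in range(n):
        brute[i, xs[i]] += p
brute /= brute.sum(axis=1, keepdims=True)
```

On trees, message passing of this form is exact; on loopy graphs it becomes approximate, which is where the entropy approximation and convergence guarantees discussed above matter.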