A graphical heuristic for reduction and partitioning of large datasets for scalable supervised training

Yadav, Sumedh, Bode, Mathis

arXiv.org Machine Learning 

METHODOLOGY

Sumedh Yadav (1*) and Mathis Bode (2)

* Correspondence: sumedhyadav.iitkgp@gmail.com
1 Gstech Technology Pvt. Ltd., 415, 2nd Floor, 16th Cross Road, 17th Main Road, HSR Layout Sector 4, 560102, Bengaluru, India
Full list of author information is available at the end of the article

Abstract

A scalable graphical method is presented for selecting and partitioning datasets for the training phase of a classification task. The heuristic requires a clustering algorithm to keep its computational cost in reasonable proportion to the task itself. This step is followed by the construction of an information graph of the underlying classification patterns using approximate nearest neighbor methods. The presented method consists of two approaches: one for reducing a given training set, and another for partitioning the selected/reduced set. The heuristic targets large datasets, since the primary goal is a significant reduction in training run-time without compromising prediction accuracy. Test results show that both approaches significantly speed up the training task when compared against the state-of-the-art shrinking heuristic available in LIBSVM, while closely matching or even outperforming it in prediction accuracy. A network design is also presented for the partitioning-based distributed training formulation; it yields additional speedup in training run-time over the serial implementation of the approaches.

Keywords: training set selection; machine learning; large datasets; distributed machine learning; classification; graph coarsening objective; network architecture design

Introduction

Two decades ago, some of the most seminal work in machine learning was done on training set selection [1, 2] under the banner of relevance reasoning.
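The reduction idea sketched in the abstract — build a neighborhood graph over the data and keep only the points that carry classification-boundary information — can be illustrated with a small toy. This is a minimal sketch, not the authors' implementation: the paper uses approximate nearest neighbor methods and a clustering preprocessing step for scalability, whereas this toy uses brute-force neighbors on raw points, and the function names `build_knn_graph` and `boundary_reduce` are hypothetical.

```python
import numpy as np

def build_knn_graph(X, k):
    """Indices of the k nearest neighbors of each row of X.

    Brute-force O(n^2) distances for illustration only; at the scale the
    paper targets, an approximate nearest neighbor method would be used.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    return np.argsort(d2, axis=1)[:, :k]

def boundary_reduce(X, y, k=3):
    """Keep only points with at least one neighbor of a different class.

    Such points sit near the decision boundary and carry most of the
    information needed to train a classifier; interior points are dropped.
    """
    nbrs = build_knn_graph(X, k)
    keep = np.array([np.any(y[nbrs[i]] != y[i]) for i in range(len(X))])
    return X[keep], y[keep]

# Two 1-D classes that nearly touch at x = 2 / 2.5: only the two
# facing points survive the reduction.
X = np.array([[0.0], [1.0], [2.0], [2.5], [3.5], [4.5]])
y = np.array([0, 0, 0, 1, 1, 1])
Xr, yr = boundary_reduce(X, y, k=1)
```

A real SVM trained on the reduced set would see mostly would-be support vectors, which is what makes this kind of selection attractive for cutting training run-time.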
However, the better part of recent work has focused almost exclusively on feature selection [3, 4]. With increased processing power, training run-time is feasible even for datasets erstwhile considered large. Additionally, dimensionality (d) dominates dataset size (n) in the algorithmic complexities of learning algorithms. In the training phase, fewer data points mean weaker generalization guarantees; however, as we move into the era of big data, even the fastest classification algorithms take infeasible amounts of time to train models. When data sources are abundant, it is fitting to separate data based on relevance to the learning task. This has led to renewed interest in the once prominent problem of relevance reasoning [5, 6]. Reasoning about relevance to improve the scalability of classification algorithms is currently being explored on graphical/network data [7] and on learned models [8]. One research area where training set selection has received attention is support vector machines (SVM).
