Distributed Learning of Mixtures of Experts

Faïcel Chamroukhi, Nhat Thien Pham

arXiv.org Machine Learning 

In modern machine learning one often has to deal with datasets that are not centralized. This may be inherent to the application context, in which the data are by nature available at different locations and not accessible in a centralized mode, or a deliberate choice made for computational reasons when the amount of data is large. Indeed, even when the dataset is fully available in a centralized mode, implementing reasonable learning algorithms may be computationally demanding for a large number of examples. The construction of distributed techniques in a Federated Learning setting (Yang et al., 2019), in which the model is trained collaboratively under the orchestration of a central server while the data remain decentralized, is a growing area of research. The most attractive strategy is to perform standard inference on local machines to obtain local estimators, then transmit them to a central machine where they are aggregated to produce an overall estimator, while attempting to satisfy some statistical guarantees. There have been many successful attempts in this direction of parallelizing existing learning algorithms and statistical methods. These include, among others, parallelized stochastic gradient descent (Zinkevich et al., 2010), multiple linear regression (Mingxian et al., 1991), parallel K-means clustering based on MapReduce (Zhao et al., 2009), distributed learning for heterogeneous data via model integration (Merugu and Ghosh, 2005), and split-and-conquer approaches for penalized regression (Chen and Xie, 2014), logistic regression (Shofiyah and Sofro, 2018), and k-clustering with heavy noise (Li and Guo, 2018). It is only very recently that a distributed learning approach has been proposed for mixture distributions, specifically for finite Gaussian mixtures (Zhang and Chen, 2022a).
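As a toy illustration of this split-then-aggregate strategy, the sketch below fits ordinary least squares on local data shards and lets a "central server" average the local estimators. The shard count, data-generating model, and unweighted averaging are all illustrative assumptions here; the aggregation schemes studied for mixture models are more involved than a plain average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated centralized data: y = X @ beta + noise (illustrative model).
n, d, M = 3000, 4, 6  # M = number of local machines (assumed)
beta = np.array([1.0, -2.0, 0.5, 3.0])
X = rng.normal(size=(n, d))
y = X @ beta + 0.1 * rng.normal(size=n)

def local_ols(Xk, yk):
    # Standard inference on one machine's shard: least-squares estimator.
    return np.linalg.lstsq(Xk, yk, rcond=None)[0]

# Each machine estimates locally; only the d-dimensional estimators
# (not the raw data) are transmitted to the central machine.
local_estimators = [local_ols(Xk, yk)
                    for Xk, yk in zip(np.array_split(X, M),
                                      np.array_split(y, M))]
aggregated = np.mean(local_estimators, axis=0)  # simple aggregation

centralized = local_ols(X, y)  # what full-data inference would give
print(np.max(np.abs(aggregated - centralized)))
```

For well-behaved models such as this one, the aggregated estimator is close to the centralized one; the statistical challenge addressed in the distributed-mixtures literature is preserving such guarantees when the local estimates are mixture parameters subject to label switching.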
In this paper we focus on mixtures of experts (MoE) models (Jacobs et al., 1991; Jordan and Xu, 1995), which extend the standard unconditional mixture distributions typically used for clustering purposes to model complex non-linear relationships of a response Y conditionally on some predictors X, for prediction purposes, while enjoying denseness results, e.g.
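To make the conditional structure of an MoE concrete, here is a minimal sketch of its mean prediction: a softmax gating network, conditioned on x, mixes the outputs of K linear experts. All parameter values and names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def moe_predict(X, gate_W, expert_B):
    """Mean prediction E[Y | x] of a mixture of K linear experts.

    gate_W:   (d, K) gating-network weights
    expert_B: (d, K) regression coefficients, one column per expert
    """
    logits = X @ gate_W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    gates = np.exp(logits)
    gates /= gates.sum(axis=1, keepdims=True)     # softmax gating probs
    expert_means = X @ expert_B                   # each expert's prediction
    return (gates * expert_means).sum(axis=1)     # gate-weighted mixture

# Two experts with different slopes; the gate switches on the sign of x.
X = np.column_stack([np.linspace(-2, 2, 5), np.ones(5)])  # feature + intercept
gate_W = np.array([[5.0, -5.0], [0.0, 0.0]])    # expert 1 wins for x > 0
expert_B = np.array([[2.0, -1.0], [0.0, 0.0]])  # slopes 2 and -1
print(moe_predict(X, gate_W, expert_B))
```

Because the gating probabilities depend on x, the model captures a non-linear (here, piecewise-linear) regression function even though each expert is linear, which is the intuition behind the denseness results mentioned above.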
