Quasi-Newton Updating for Large-Scale Distributed Learning
Shuyuan Wu, Danyang Huang, Hansheng Wang
arXiv.org Artificial Intelligence
Modern statistical analysis often involves massive datasets (Gopal and Yang, 2013). In many cases, such datasets are too large to be handled efficiently by a single computer. Instead, they must be divided and then processed on a distributed computing system, which consists of a large number of computers (Zhang et al., 2012). Among these computers, one typically serves as the central computer, while the rest serve as worker computers. In this architecture, the central computer is connected to every worker computer, together forming the distributed computing system. Consequently, methods for efficient statistical learning on such distributed systems have received considerable interest from the research community (McDonald et al., 2009; Jordan et al., 2019; Tang et al., 2020; Hector and Song, 2020, 2021). Here, we consider a standard statistical learning problem with a total of N observations, where N is assumed to be very large.
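To make the central/worker pattern concrete, below is a minimal Python sketch, assuming a least-squares loss and workers simulated in a single process; the shard count, learning rate, and plain averaged-gradient step are illustrative assumptions, not the paper's method (its quasi-Newton updating would replace the step direction, but those details are not given in this excerpt).

```python
# A minimal sketch of the central/worker architecture described above.
# Assumptions (not from the paper): least-squares loss, equal-size shards,
# plain averaged-gradient updates with a fixed learning rate.
import numpy as np

rng = np.random.default_rng(0)

# Simulate N observations and divide them across K worker computers.
N, p, K = 10_000, 5, 4
X = rng.normal(size=(N, p))
beta_true = rng.normal(size=p)
y = X @ beta_true + 0.1 * rng.normal(size=N)
worker_data = list(zip(np.array_split(X, K), np.array_split(y, K)))

def local_gradient(Xk, yk, beta):
    """Gradient of the local least-squares loss on one worker's shard."""
    return Xk.T @ (Xk @ beta - yk) / len(yk)

# Central computer: broadcast beta, collect one gradient from each worker,
# average them, and take a descent step.
beta = np.zeros(p)
lr = 0.1
for _ in range(200):
    grads = [local_gradient(Xk, yk, beta) for Xk, yk in worker_data]
    beta -= lr * np.mean(grads, axis=0)

print("estimation error:", np.linalg.norm(beta - beta_true))
```

In a real deployment, the list comprehension over worker_data would be a communication round (e.g., via MPI or Spark) in which the central computer broadcasts the current parameter and each worker returns its local gradient.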
Jun-11-2023