Linearized Optimal Transport pyLOT Library: A Toolkit for Machine Learning on Point Clouds

Jun Linwu, Varun Khurana, Nicholas Karris, Alexander Cloninger

arXiv.org Machine Learning 

Instead, point clouds or continuous probability measures are the appropriate data structures. These data arise naturally in fields such as computer vision, image processing, shape analysis, and generative modeling, where representing complex objects as probability distributions provides a richer and more flexible framework for analysis. Real-world examples include text documents under bag-of-words models, where word counts form a histogram for each document [35]; imaging data, where pixel intensity is interpreted as mass [26], yielding 2D discrete probability measures over the image grid; and gene expression data interpreted as a distribution over a gene network [8, 15].

Optimal transport (OT) theory [30] has recently emerged as a powerful tool for comparing probability measures: it defines a distance between two measures by minimizing, over all transport plans, the work needed to move one distribution onto the other. It has gained significant popularity in applications [4, 26, 27] involving point clouds and probability distributions. Despite its theoretical elegance and its ability to capture the geometric properties of distributions, vanilla OT is computationally expensive and does not integrate directly into existing machine learning pipelines. For this reason, OT has seen somewhat limited practical use, particularly in settings that demand scalable and efficient algorithms for tasks such as classification, dimension reduction, and generation.
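To make the OT minimization concrete, here is a minimal sketch (not the pyLOT API) of the squared 2-Wasserstein distance between two equal-size, uniformly weighted point clouds. In this special case the optimal transport plan is a permutation of the points, so the linear program reduces to a minimum-cost assignment, which SciPy can solve directly; the function name `w2_squared` is illustrative only.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_squared(xs, xt):
    """Squared 2-Wasserstein distance between two equal-size point
    clouds with uniform weights. For uniform discrete measures the
    optimal plan is a permutation, so exact OT reduces to a
    minimum-cost assignment problem."""
    # Pairwise squared Euclidean transport costs
    C = ((xs[:, None, :] - xt[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(C)  # optimal matching
    return C[rows, cols].mean()            # average cost per unit mass

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=(64, 2))
xt = xs + np.array([3.0, 0.0])  # the same cloud translated by (3, 0)

# For a pure translation, OT recovers the shift cost exactly: ||(3,0)||^2 = 9
print(w2_squared(xs, xt))  # → 9.0 (up to floating-point error)
```

General OT between clouds of different sizes or non-uniform weights requires the full linear program (or entropic regularization for speed), but the assignment view above captures the geometry the paragraph describes: the distance is the least total work over all ways of pairing mass.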