Laplace-Beltrami operator
Kernel smoothing on manifolds
Bae, Eunseong, Polonik, Wolfgang
Under the assumption that data lie on a compact (unknown) manifold without boundary, we derive finite sample bounds for kernel smoothing and its (first and second) derivatives, and we establish asymptotic normality through Berry-Esseen type bounds. Special cases include kernel density estimation, kernel regression and the heat kernel signature. Connections to the graph Laplacian are also discussed.
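To make the setting concrete, here is a minimal sketch (not the authors' estimator) of kernel density estimation on points sampled from a compact manifold without boundary: the unit circle embedded in the plane. The Gaussian kernel, bandwidth `h = 0.2`, and sample size are illustrative choices only.

```python
import numpy as np

def gaussian_kde(x, data, h):
    """Kernel density estimate at query points x from samples `data`,
    using an isotropic Gaussian kernel with bandwidth h in ambient R^d."""
    n, d = data.shape
    # Pairwise squared distances between query points and samples.
    diffs = x[:, None, :] - data[None, :, :]
    sq = np.sum(diffs**2, axis=-1)
    kern = np.exp(-sq / (2 * h**2)) / ((2 * np.pi * h**2) ** (d / 2))
    return kern.mean(axis=1)

rng = np.random.default_rng(0)
# Sample uniformly from the unit circle, a compact 1-manifold without boundary.
theta = rng.uniform(0, 2 * np.pi, size=2000)
data = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# One query point on the manifold, one off it (the circle's center).
query = np.array([[1.0, 0.0], [0.0, 0.0]])
dens = gaussian_kde(query, data, h=0.2)
# The estimate concentrates near the manifold: dens[0] >> dens[1].
```

Derivatives of this estimate (obtained by differentiating the kernel) are the objects for which the paper derives finite-sample bounds.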
Manifold limit for the training of shallow graph convolutional neural networks
Tengler, Johanna, Brune, Christoph, Iglesias, José A.
We study the discrete-to-continuum consistency of the training of shallow graph convolutional neural networks (GCNNs) on proximity graphs of sampled point clouds under a manifold assumption. Graph convolution is defined spectrally via the graph Laplacian, whose low-frequency spectrum approximates that of the Laplace-Beltrami operator of the underlying smooth manifold, and shallow GCNNs of possibly infinite width are linear functionals on the space of measures on the parameter space. From this functional-analytic perspective, graph signals are seen as spatial discretizations of functions on the manifold, which leads to a natural notion of training data consistent across graph resolutions. To enable convergence results, the continuum parameter space is chosen as a weakly compact product of unit balls, with Sobolev regularity imposed on the output weight and bias, but not on the convolutional parameter. The discrete parameter spaces inherit this spectral decay, and are additionally restricted by a frequency cutoff adapted to the informative spectral window of the graph Laplacians. Under these assumptions, we prove $\Gamma$-convergence of regularized empirical risk minimization functionals and corresponding convergence of their global minimizers, in the sense of weak convergence of the parameter measures and uniform convergence of the functions over compact sets. This provides a formalization of mesh and sample independence for the training of such networks.
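The spectral definition of graph convolution used above can be sketched as follows (a minimal illustration, not the paper's construction): build a proximity graph on points sampled from a circle, take the graph Laplacian's eigendecomposition, and filter a signal through the low-frequency eigenvectors below a cutoff `K`. The proximity radius `eps` and cutoff are illustrative.

```python
import numpy as np

def graph_laplacian(points, eps):
    """Unnormalized graph Laplacian L = D - W of an eps-proximity graph."""
    d2 = np.sum((points[:, None] - points[None, :]) ** 2, axis=-1)
    W = (d2 < eps**2).astype(float)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def spectral_conv(L, signal, filter_coeffs, K):
    """Spectral graph convolution restricted to the K lowest frequencies:
    project onto the low-frequency eigenvectors, scale, and map back."""
    _, evecs = np.linalg.eigh(L)   # eigenpairs, eigenvalues ascending
    U = evecs[:, :K]               # frequency cutoff: low-frequency window
    return U @ (filter_coeffs * (U.T @ signal))

rng = np.random.default_rng(1)
theta = np.sort(rng.uniform(0, 2 * np.pi, 200))
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
L = graph_laplacian(pts, eps=0.3)

# A smooth signal on the manifold plus sampling noise.
signal = np.cos(theta) + 0.5 * rng.normal(size=200)
out = spectral_conv(L, signal, filter_coeffs=np.ones(5), K=5)
# `out` keeps only the informative low-frequency content of the signal,
# the part whose spectrum approximates that of the Laplace-Beltrami operator.
```

The cutoff `K` plays the role of the frequency restriction to the informative spectral window described in the abstract.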
Data driven estimation of Laplace-Beltrami operator
Approximations of Laplace-Beltrami operators on manifolds through graph Laplacians have become popular tools in data analysis and machine learning. These discretized operators usually depend on bandwidth parameters whose tuning remains a theoretical and practical problem. In this paper, we address this problem for the unnormalized graph Laplacian by establishing an oracle inequality that opens the door to a well-founded data-driven procedure for bandwidth selection. Our approach relies on recent results by Lacour and Massart (2015) on the so-called Lepski's method.
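The object being tuned can be sketched as follows (a minimal illustration; the oracle inequality and Lepski-type selection procedure from the paper are not reproduced here): the unnormalized graph Laplacian with Gaussian kernel weights, whose entries, and hence spectrum, depend strongly on the bandwidth `h`. The point cloud and bandwidth grid are illustrative.

```python
import numpy as np

def unnormalized_graph_laplacian(points, h):
    """Unnormalized graph Laplacian L = D - W with Gaussian kernel weights
    W_ij = exp(-||x_i - x_j||^2 / h^2); h is the bandwidth to be tuned."""
    d2 = np.sum((points[:, None] - points[None, :]) ** 2, axis=-1)
    W = np.exp(-d2 / h**2)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 100)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# The discretized operator changes with the bandwidth; data-driven
# selection must pick one h from such a grid of candidates.
for h in (0.05, 0.2, 0.8):
    L = unnormalized_graph_laplacian(pts, h)
    # L is symmetric positive semidefinite with zero row sums for every h.
```

For any bandwidth, `L` retains the structural properties of a graph Laplacian (symmetry, zero row sums, positive semidefiniteness); what the bandwidth controls is how well its action approximates the Laplace-Beltrami operator.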
Laplace Learning in Wasserstein Space
Oliver, Mary Chriselda Antony, Roberts, Michael, Schönlieb, Carola-Bibiane, Thorpe, Matthew
The curation of large-scale, fully annotated training datasets remains a major bottleneck due to the high cost and expertise required for manual labelling. For example, in biomedical imaging applications such as flow cytometry [12, 67], gene expression microarrays [23, 24], and proteomic assays [18], modern technologies generate high-dimensional data far faster than it can be annotated. As a result, only a small fraction of samples receive reliable labels, despite their routine use in classification tasks. This motivates graph-based semi-supervised methods, which exploit the geometric structure of the data to improve predictions with limited supervision. In this paper, we focus on a special class of graph-based semi-supervised methods, namely Laplace Learning [68], to study classification in high-dimensional settings. This method exploits the geometric structure inherent in large quantities of unlabelled data to improve label predictions. However, leveraging the underlying geometry in high-dimensional datasets presents substantial challenges, including the well-known curse of dimensionality [22, 44] and poor generalization capacity [18]. In theory, a well-established trend in statistics suggests that high-dimensional data often possess an intrinsic low-dimensional structure, a concept formalized by the manifold hypothesis [25]. This hypothesis asserts that data are supported (or nearly supported) on a low-dimensional manifold with a small intrinsic dimension.
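Laplace learning itself has a short description: extend the known labels harmonically over the graph, i.e. solve $Lu = 0$ on the unlabeled nodes with $u$ fixed on the labeled ones. The following is a minimal Euclidean sketch (not the paper's Wasserstein-space setting) on two point clusters with one labeled sample each; the cluster geometry and kernel scale are illustrative.

```python
import numpy as np

def laplace_learning(W, labeled_idx, labels):
    """Harmonic label propagation (Laplace learning): solve L u = 0 on the
    unlabeled nodes, with u fixed to the given labels on labeled nodes."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    u = np.zeros(n)
    u[labeled_idx] = labels
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    # Block linear system: L_uu u_u = -L_ul u_l.
    L_uu = L[np.ix_(unlabeled, unlabeled)]
    L_ul = L[np.ix_(unlabeled, labeled_idx)]
    u[unlabeled] = np.linalg.solve(L_uu, -L_ul @ u[labeled_idx])
    return u

# Two well-separated Gaussian clusters; one labeled point per cluster.
rng = np.random.default_rng(3)
a = rng.normal([0.0, 0.0], 0.3, size=(30, 2))
b = rng.normal([3.0, 0.0], 0.3, size=(30, 2))
X = np.vstack([a, b])
d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
W = np.exp(-d2 / 0.5)
np.fill_diagonal(W, 0.0)

u = laplace_learning(W, labeled_idx=np.array([0, 30]),
                     labels=np.array([-1.0, 1.0]))
pred = np.sign(u)
# The two labels propagate along the graph: cluster a -> -1, cluster b -> +1.
```

With only two labels, the geometry of the unlabeled data does all the work: the weight matrix couples each cluster tightly to its one labeled point and only weakly to the other cluster, which is exactly the mechanism the abstract appeals to.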