Differentially private synthesis of Spatial Point Processes

Kim, Dangchan, Lim, Chae Young

arXiv.org Machine Learning

This paper proposes a method to generate synthetic data for spatial point patterns within the differential privacy (DP) framework. Specifically, we define a differentially private Poisson point synthesizer (PPS) and Cox point synthesizer (CPS) that generate synthetic point patterns using the concept of the $\alpha$-neighborhood, which relaxes the original definition of DP. We present three example models for constructing a differentially private PPS and CPS, providing sufficient conditions on their parameters that ensure DP for a specified privacy budget. In addition, we demonstrate that the synthesizers can be applied to point patterns on a linear network. Simulation experiments demonstrate that the proposed approaches effectively maintain both the privacy and the utility of the synthetic data.
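The paper's PPS and CPS constructions are not reproduced here, but the general idea of DP point-pattern synthesis can be sketched with a generic Laplace mechanism: privatize per-cell counts of the observed pattern (adding or removing one point changes each count by at most 1), then resample a synthetic pattern uniformly within each cell. The function name `dp_point_synth`, the grid resolution, and the resampling scheme are illustrative assumptions, not the paper's method.

```python
import numpy as np

def dp_point_synth(points, eps, bins=10, rng=None):
    """Sketch of DP point-pattern synthesis on the unit square:
    Laplace-noise the per-cell counts (sensitivity 1), then draw
    points uniformly within each cell according to the noisy counts."""
    rng = np.random.default_rng() if rng is None else rng
    counts, xe, ye = np.histogram2d(points[:, 0], points[:, 1],
                                    bins=bins, range=[[0, 1], [0, 1]])
    # Laplace(0, 1/eps) noise gives eps-DP for counting queries
    noisy = np.maximum(0, np.round(counts + rng.laplace(0, 1 / eps, counts.shape)))
    synth = []
    for i in range(bins):
        for j in range(bins):
            k = int(noisy[i, j])
            xs = rng.uniform(xe[i], xe[i + 1], k)
            ys = rng.uniform(ye[j], ye[j + 1], k)
            synth.append(np.column_stack([xs, ys]))
    return np.concatenate(synth)
```

Post-processing (rounding, clipping at zero, resampling) does not consume additional privacy budget, which is why only the count perturbation needs to be calibrated to $\epsilon$.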


Statistical investigations into the geometry and homology of random programs

Sporring, Jon, Larsen, Ken Friis

arXiv.org Artificial Intelligence

AI-supported programming has taken giant leaps with tools such as Meta's Llama and OpenAI's ChatGPT. These are examples of stochastic sources of programs and have already greatly influenced how we produce code and teach programming. If we treat the input to such models as a stochastic source, a natural question is: what is the relation between the input and output distributions, between the ChatGPT prompt and the resulting program? In this paper, we show how the relations among random Python programs generated by ChatGPT can be described geometrically and topologically using tree-edit distances between the programs' syntax trees, without explicit modeling of the underlying space. A popular approach to studying high-dimensional samples in a metric space is low-dimensional embedding using, e.g., multidimensional scaling. Such methods introduce errors that depend on the data and on the dimension of the embedding space. In this article, we propose to restrict such projection methods to visualization purposes only and instead use geometric summary statistics, methods from spatial point statistics, and topological data analysis to characterize the configurations of random programs without relying on embedding approximations. To demonstrate their usefulness, we compare two publicly available models, ChatGPT-4 and TinyLlama, on a simple problem related to image processing. Application areas include understanding how questions should be asked to obtain useful programs, measuring how consistently a given large language model answers, and comparing different large language models as programming assistants. Finally, we speculate that our approach may in the future give new insights into the structure of programming languages.
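As a rough illustration of the kind of metric the paper builds on, the sketch below computes a simplified top-down tree-edit distance between Python syntax trees, where node labels are AST node-type names. The helpers `to_tree` and `tree_dist` are hypothetical, and this recursion is a simplification, not the full Zhang-Shasha algorithm that "tree-edit distance" usually denotes.

```python
import ast

def to_tree(node):
    """Convert a Python AST node into a (label, children) tuple."""
    return (type(node).__name__, [to_tree(c) for c in ast.iter_child_nodes(node)])

def tree_size(t):
    return 1 + sum(tree_size(c) for c in t[1])

def tree_dist(a, b):
    """Simplified top-down tree-edit distance: relabel cost at the root,
    plus a sequence edit distance over the two child forests in which
    substitution cost is the recursive tree distance."""
    cost = 0 if a[0] == b[0] else 1
    ca, cb = a[1], b[1]
    m, n = len(ca), len(cb)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + tree_size(ca[i - 1])   # delete whole subtree
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + tree_size(cb[j - 1])   # insert whole subtree
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + tree_size(ca[i - 1]),            # delete
                d[i][j - 1] + tree_size(cb[j - 1]),            # insert
                d[i - 1][j - 1] + tree_dist(ca[i - 1], cb[j - 1]),  # match
            )
    return cost + d[m][n]
```

Because the distance is computed directly on pairs of trees, the resulting pairwise-distance matrix can feed spatial point statistics and topological data analysis without ever embedding the programs in a Euclidean space.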


The Manifold Density Function: An Intrinsic Method for the Validation of Manifold Learning

Holmgren, Benjamin, Quist, Eli, Schupbach, Jordan, Fasy, Brittany Terese, Rieck, Bastian

arXiv.org Artificial Intelligence

We introduce the manifold density function, an intrinsic method for validating manifold learning techniques. Our approach adapts and extends Ripley's $K$-function and quantifies, in an unsupervised setting, the extent to which the output of a manifold learning algorithm captures the structure of a latent manifold. Our manifold density function generalizes to broad classes of Riemannian manifolds. In particular, we extend the manifold density function to general two-manifolds using the Gauss-Bonnet theorem, and demonstrate that the manifold density function for hypersurfaces is well approximated using the first Laplacian eigenvalue. We prove desirable convergence and robustness properties.
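For reference, Ripley's $K$-function that this paper adapts can be estimated naively for a planar pattern of $n$ points in a window $A$ as $\hat K(r) = \frac{|A|}{n(n-1)} \sum_{i \ne j} \mathbf{1}\{d_{ij} \le r\}$; under complete spatial randomness $K(r) = \pi r^2$. A minimal sketch follows, where the function name and the omission of edge correction are assumptions for illustration, not the paper's manifold density function.

```python
import numpy as np

def ripley_k(points, r, area=1.0):
    """Naive Ripley's K estimate on an (n, 2) array of points:
    count ordered pairs within distance r, normalized by intensity.
    No edge correction, so the estimate is biased low near the boundary."""
    n = len(points)
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)   # exclude self-pairs
    return area * (d <= r).sum() / (n * (n - 1))
```

Comparing $\hat K(r)$ against the CSR benchmark $\pi r^2$ reveals clustering (above) or regularity (below); the manifold density function plays an analogous benchmarking role for samples on a latent manifold.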