Cox Process





Fast Bayesian Estimation of Point Process Intensity as Function of Covariates

Neural Information Processing Systems

In this paper, we tackle the Bayesian estimation of point process intensity as a function of covariates. We propose a novel augmentation of the permanental process, called the augmented permanental process: a doubly stochastic point process that uses a Gaussian process on the covariate space to describe the Bayesian a priori uncertainty in the square root of the intensity. We derive a fast Bayesian estimation algorithm that scales linearly with data size without relying on either domain discretization or Markov chain Monte Carlo computation. The proposed algorithm is based on a non-trivial finding that the representer theorem, one of the most desirable mathematical properties in machine learning, holds for the augmented permanental process, which provides many significant computational advantages. We evaluate our algorithm on synthetic and real-world data, and show that it outperforms state-of-the-art methods in predictive accuracy while being substantially faster than a conventional Bayesian method.
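The squared-GP construction behind permanental processes can be illustrated with a small simulation. This is a toy sketch, not the paper's estimation algorithm: it draws one Gaussian process sample path on a 1-D grid, squares it to obtain a nonnegative intensity, and samples events by Lewis-Shedler thinning. The kernel, lengthscale, and grid resolution are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x, y, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between 1-D input grids."""
    d = x[:, None] - y[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Draw one GP sample path f on a grid over [0, 1]; a permanental-style
# intensity is its square, lambda(x) = f(x)^2, nonnegative by construction.
grid = np.linspace(0.0, 1.0, 200)
K = rbf_kernel(grid, grid) + 1e-8 * np.eye(grid.size)
f = rng.multivariate_normal(np.zeros(grid.size), K)
lam = f ** 2

# Lewis-Shedler thinning: simulate a homogeneous process at rate lam_max,
# then keep each candidate point with probability lambda(x) / lam_max.
lam_max = lam.max()
n_candidates = rng.poisson(lam_max * 1.0)  # domain length is 1
candidates = rng.uniform(0.0, 1.0, size=n_candidates)
keep = rng.uniform(0.0, lam_max, size=n_candidates) < np.interp(candidates, grid, lam)
events = np.sort(candidates[keep])
```

Thinning is exact for a fixed intensity sample, which makes it a convenient baseline for checking approximate inference schemes on simulated data.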



Differentially private synthesis of Spatial Point Processes

Kim, Dangchan, Lim, Chae Young

arXiv.org Machine Learning

This paper proposes a method to generate synthetic data for spatial point patterns within the differential privacy (DP) framework. Specifically, we define a differentially private Poisson point synthesizer (PPS) and Cox point synthesizer (CPS) to generate synthetic point patterns, using the concept of the $\alpha$-neighborhood that relaxes the original definition of DP. We present three example models to construct a differentially private PPS and CPS, providing sufficient conditions on their parameters to ensure DP for a specified privacy budget. In addition, we demonstrate that the synthesizers can be applied to point patterns on a linear network. Simulation experiments demonstrate that the proposed approaches effectively maintain the privacy and utility of the synthetic data.
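The general shape of a private point synthesizer can be sketched in one dimension. This is a toy illustration, not the paper's PPS/CPS construction or its $\alpha$-neighborhood relaxation: it bins an observed pattern, perturbs the bin counts with Laplace noise (sensitivity 1 under add/remove of one point), and draws a fresh Poisson pattern from the noisy piecewise-constant intensity. The bin count and budget are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def synthesize_poisson_pattern(points, n_bins=20, epsilon=1.0):
    """Toy differentially private Poisson point synthesizer on [0, 1]:
    bin the observed pattern, add Laplace(1/epsilon) noise to the counts,
    then sample a new Poisson pattern from the noisy binned intensity."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    counts, _ = np.histogram(points, bins=edges)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=n_bins)
    noisy = np.clip(noisy, 0.0, None)  # intensities must be nonnegative
    # Draw a Poisson number of points per bin, placed uniformly within it.
    new_counts = rng.poisson(noisy)
    synthetic = np.concatenate([
        rng.uniform(edges[i], edges[i + 1], size=new_counts[i])
        for i in range(n_bins)
    ])
    return np.sort(synthetic)

observed = rng.uniform(0.0, 1.0, size=100)
synthetic = synthesize_poisson_pattern(observed)
```

Only the noisy counts depend on the confidential data here, so the released pattern is a post-processing of a standard Laplace-mechanism output.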


Exact Bayesian Gaussian Cox Processes Using Random Integral

Tang, Bingjing, Palacios, Julia

arXiv.org Machine Learning

A Gaussian Cox process is a popular model for point process data, in which the intensity function is a transformation of a Gaussian process. Posterior inference of this intensity function involves an intractable integral (i.e., the cumulative intensity function) in the likelihood, resulting in a doubly intractable posterior distribution. Here, we propose a nonparametric Bayesian approach for estimating the intensity function of an inhomogeneous Poisson process without reliance on large data augmentation or approximations of the likelihood function. We propose to jointly model the intensity and the cumulative intensity function as a transformed Gaussian process, allowing us to directly bypass the need to approximate the cumulative intensity function in the likelihood. We propose an exact MCMC sampler for posterior inference and evaluate its performance on simulated data. We demonstrate the utility of our method in three real-world scenarios including temporal and spatial event data, as well as aggregated time count data collected at multiple resolutions. Finally, we discuss extensions of our proposed method to other point processes.
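For context, the intractable term in question is the cumulative intensity $\Lambda(T) = \int_T \lambda(t)\,dt$ in the inhomogeneous Poisson likelihood $\log L = \sum_i \log \lambda(t_i) - \Lambda(T)$. A minimal sketch below approximates that integral with trapezoidal quadrature, which is exactly the kind of likelihood approximation the paper's joint transformed-GP model is designed to avoid; the function names and unit domain are illustrative.

```python
import numpy as np

def poisson_process_loglik(events, intensity, domain=(0.0, 1.0), n_quad=1000):
    """Inhomogeneous Poisson log-likelihood: sum_i log lambda(t_i) minus the
    cumulative intensity Lambda(T) = integral of lambda over the domain,
    approximated here by trapezoidal quadrature."""
    grid = np.linspace(domain[0], domain[1], n_quad)
    vals = intensity(grid)
    cumulative = np.sum((vals[:-1] + vals[1:]) * np.diff(grid)) / 2.0
    return np.sum(np.log(intensity(np.asarray(events)))) - cumulative

# Sanity check: a constant intensity recovers the homogeneous case,
# n * log(rate) - rate * |T|.
ll = poisson_process_loglik([0.2, 0.5, 0.9], lambda t: np.full_like(t, 2.0))
```

The quadrature error in `cumulative` is what propagates into the posterior when the integral is approximated numerically; modeling $\Lambda$ jointly with $\lambda$, as the abstract describes, removes that term from the approximation budget.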


Heterogeneous Multi-Task Gaussian Cox Processes

Zhou, Feng, Kong, Quyu, Deng, Zhijie, He, Fengxiang, Cui, Peng, Zhu, Jun

arXiv.org Machine Learning

Inhomogeneous Poisson process data defined on a continuous spatio-temporal domain has attracted immense attention recently in a wide variety of applications, including reliability analysis in manufacturing systems (Soleimani et al., 2017), event capture in sensing regions (Mutny and Krause, 2021), crime prediction in urban areas (Shirota and Gelfand, 2017), and disease diagnosis based on medical records (Lasko, 2014). Reliable training of an inhomogeneous Poisson process model critically relies on a large amount of data to avoid overfitting, especially when modeling high-dimensional point processes. However, one challenge is that the available training data is routinely sparse or even partially missing in specific applications. Taking manufacturing failure and healthcare analysis as motivating examples: modern manufacturing machines are reliable and fail only sparsely, and individuals with a healthy constitution rarely visit the hospital. Missing-data problems also arise; for example, event location capture in sensing systems is intermittent because of weather or other barriers.


RTB Formulation Using Point Process

Lee, Seong Jin, Kim, Bumsik

arXiv.org Artificial Intelligence

With the rapid growth of the digital advertisement industry, programmatic advertisement has become a crucial part of the industry. A key component of programmatic display advertisement is Real Time Bidding (RTB), where the supply-side platform (SSP) puts an ad inventory on auction and the demand-side platforms (DSPs) compute the potential value of the inventory and submit bids according to the estimated value from the buyer's perspective to win the advertising opportunity. Many studies have been conducted to propose optimal strategies for each participant in this ecosystem. Some approaches use classical auction theory from a game-theoretical view [1], [2], [21], [22], considering the strategies of the SSP and the game between the DSPs and the SSP. In this paper, we focus on the perspective of the DSP, where the player participates as a buyer in the auction.


Toward optimal placement of spatial sensors

Kim, Mingyu, Yetkin, Harun, Stilwell, Daniel J., Jimenez, Jorge, Shrestha, Saurav, Stark, Nina

arXiv.org Artificial Intelligence

This paper addresses the challenges of optimally placing a finite number of sensors to detect Poisson-distributed targets in a bounded domain. We seek to rigorously account for uncertainty in the target arrival model throughout the problem. Sensor locations are selected to maximize the probability that no targets are missed. While this objective function is well-suited to applications where failure to detect targets is highly undesirable, it does not lead to a computationally efficient optimization problem. We propose an approximation of the objective function that is non-negative, submodular, and monotone and for which greedy selection of sensor locations works well. We also characterize the gap between the desired objective function and our approximation. For numerical illustrations, we consider the case of the detection of ship traffic using sensors mounted on the seafloor.
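The greedy scheme the abstract describes can be sketched in a 1-D toy version: the expected number of Poisson-distributed targets covered by at least one sensor is monotone and submodular in the sensor set, so greedy selection enjoys the classical $(1 - 1/e)$ approximation guarantee. The disc-coverage objective, detection radius, and candidate grid below are illustrative assumptions, not the paper's miss-probability formulation.

```python
import numpy as np

def greedy_placement(candidates, targets_intensity, n_sensors, radius=0.15):
    """Greedy selection for a monotone submodular detection objective:
    each sensor covers targets within `radius` on [0, 1]; we maximize the
    expected number of Poisson targets covered by at least one sensor."""
    grid = np.linspace(0.0, 1.0, targets_intensity.size)
    covered = np.zeros(targets_intensity.size, dtype=bool)
    chosen = []
    for _ in range(n_sensors):
        best_gain, best = -1.0, None
        for s in candidates:
            if s in chosen:
                continue
            # Marginal gain: newly covered intensity mass if s is added.
            new_cover = covered | (np.abs(grid - s) <= radius)
            gain = targets_intensity[new_cover].sum() - targets_intensity[covered].sum()
            if gain > best_gain:
                best_gain, best = gain, s
        chosen.append(best)
        covered |= np.abs(grid - best) <= radius
    return chosen

# Targets cluster around x = 0.3; the first greedy pick should land nearby.
intensity = np.exp(-((np.linspace(0, 1, 100) - 0.3) ** 2) / 0.02)
sensors = greedy_placement(list(np.linspace(0, 1, 21)), intensity, n_sensors=2)
```

The exact miss-probability objective need not be submodular, which is why a submodular, monotone surrogate of the kind the abstract proposes is what makes greedy selection come with a guarantee.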