A Gibbs Sampling for bi-conv-PGDS
It is a non-trivial task to develop Gibbs sampling update equations for the bi-conv-PGDS model, mainly due to the difficulty of sampling the gamma shape parameters from their conditional posteriors. By exploiting the variable augmentation and marginalization techniques of Zhou et al. [11] and their generalizations to inference for gamma Markov chains [43, 51, 60], we propose a bidirectional Gibbs sampler that makes the conditional posteriors of the model parameters simple to compute. We repeatedly exploit the following three properties, as summarized in [43], to perform inference.

Property 3 (P3): If $x \sim \mathrm{NB}(a, g(\zeta))$ and $l \sim \mathrm{CRT}(x, a)$ is a Chinese restaurant table (CRT) distributed random variable, then $x$ and $l$ are equivalently jointly distributed as $x \sim \mathrm{SumLog}(l, g(\zeta))$ and $l \sim \mathrm{Poisson}(a\zeta)$ [11]. The sum-logarithmic (SumLog) distribution is defined as the sum of $l$ independent and identically logarithmic-distributed random variables, i.e., $x = \sum_{i=1}^{l} u_i$ with $u_i \overset{iid}{\sim} \mathrm{Logarithmic}(g(\zeta))$. (A small simulation illustrating this equivalence is sketched at the end of this section.)

A.3 Inference

Similar to Wang et al. [20], to avoid directly processing the sparse document matrix, which would incur unnecessary computational and storage costs, we apply variable augmentation under the Poisson likelihood [7, 13] to upward propagate latent count matrices M; a minimal sketch of this augmentation step is given below. While the computation of the Gibbs sampler can be accelerated within each iteration, the sampler must process all documents in every iteration and hence has limited scalability.
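As a concrete illustration of the augmentation step above, the following is a minimal sketch (not the paper's implementation) of the standard Poisson-multinomial augmentation: each nonzero count of a sparse document matrix is split into latent per-topic counts via a multinomial draw, so only nonzero entries are ever touched. The dimensions `V, D, K` and the factor matrices `Phi` and `Theta` are placeholder assumptions; the bi-conv-PGDS model uses convolutional factors, which this sketch does not reproduce.

```python
import numpy as np
from scipy.sparse import coo_matrix

rng = np.random.default_rng(0)

# Placeholder sizes and factors (V words, D documents, K topics are
# assumptions for illustration only).
V, D, K = 50, 20, 5
Phi = rng.dirichlet(np.ones(V), size=K).T    # V x K loadings, columns sum to 1
Theta = rng.gamma(1.0, 1.0, size=(K, D))     # K x D gamma-distributed weights
X = coo_matrix(rng.poisson(Phi @ Theta))     # sparse V x D count matrix

# Upward-propagated latent count statistics used by the Gibbs updates.
M_vk = np.zeros((V, K))
M_kd = np.zeros((K, D))

# Poisson-multinomial augmentation: if x_vd ~ Poisson(sum_k phi_vk theta_kd),
# then (m_vd1, ..., m_vdK) | x_vd ~ Multinomial(x_vd, rates / sum(rates)).
# Only the nonzero entries of the sparse matrix are processed.
for v, d, x in zip(X.row, X.col, X.data):
    rates = Phi[v] * Theta[:, d]
    m = rng.multinomial(x, rates / rates.sum())
    M_vk[v] += m
    M_kd[:, d] += m
```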
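As a sanity check of property P3, the simulation sketched below draws $(x, l)$ both ways (NB-then-CRT versus Poisson-then-SumLog) and compares their moments. It assumes $g(\zeta) = 1 - e^{-\zeta}$, the choice that makes $l \sim \mathrm{Poisson}(a\zeta)$ consistent with the NB marginal; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
a, zeta = 2.0, 0.7                 # arbitrary illustrative parameters
g = 1.0 - np.exp(-zeta)            # assumes g(zeta) = 1 - exp(-zeta)
n = 100_000

def sample_crt(x, a, rng):
    # l ~ CRT(x, a): a sum of x Bernoullis with probabilities a / (a + i - 1).
    return int((rng.random(x) < a / (a + np.arange(x))).sum())

# Forward: x ~ NB(a, g(zeta)), then l | x ~ CRT(x, a).
# NumPy's negative_binomial(n, p) uses the complementary convention for p,
# so NB(a, g) in the text's convention is negative_binomial(a, 1 - g).
x_f = rng.negative_binomial(a, 1.0 - g, size=n)
l_f = np.array([sample_crt(x, a, rng) for x in x_f])

# Reverse: l ~ Poisson(a * zeta), then x | l ~ SumLog(l, g(zeta)),
# i.e. the sum of l iid Logarithmic(g(zeta)) draws.
l_r = rng.poisson(a * zeta, size=n)
x_r = np.array([rng.logseries(g, size=l).sum() if l > 0 else 0 for l in l_r])

# The two joint samplers should have matching moments for (x, l).
print(x_f.mean(), x_r.mean())      # both approx a * (exp(zeta) - 1)
print(l_f.mean(), l_r.mean())      # both approx a * zeta
```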