Entropy-based Training Methods for Scalable Neural Implicit Sampler
Neural Information Processing Systems
Efficiently sampling from unnormalized target distributions is a fundamental problem in scientific computing and machine learning. Traditional approaches such as Markov Chain Monte Carlo (MCMC) produce asymptotically unbiased samples from such distributions but are computationally inefficient, particularly for high-dimensional targets, because they require many iterations to generate a batch of samples. In this paper, we introduce an efficient and scalable neural implicit sampler that overcomes these limitations. The implicit sampler generates large batches of samples at low computational cost by using a neural transformation that maps easily sampled latent vectors directly to target samples, with no iterative procedure. To train neural implicit samplers, we introduce two novel methods: the KL training method and the Fisher training method.
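The core idea of an implicit sampler can be illustrated with a minimal sketch: a feed-forward network maps latent Gaussian vectors to target-space samples in a single forward pass, so the batch size is limited only by memory rather than by chain length. The network architecture, dimensions, and weights below are illustrative placeholders, not the paper's actual model, and the sketch omits the KL/Fisher training objectives, which require the target's score function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer MLP sampler g_theta: latent z -> sample x.
# All sizes and weights here are illustrative, not from the paper.
d_latent, d_hidden, d_target = 8, 32, 2
W1 = rng.normal(scale=0.1, size=(d_latent, d_hidden))
b1 = np.zeros(d_hidden)
W2 = rng.normal(scale=0.1, size=(d_hidden, d_target))
b2 = np.zeros(d_target)

def sampler(z):
    """Map easily sampled latent vectors to target-space samples."""
    h = np.tanh(z @ W1 + b1)
    return h @ W2 + b2

# One forward pass yields an arbitrarily large batch -- no Markov chain,
# no burn-in, no sequential dependence between samples.
z = rng.standard_normal((10_000, d_latent))
x = sampler(z)
print(x.shape)  # (10000, 2)
```

By contrast, an MCMC sampler would need thousands of sequential transition steps to produce a comparably sized, decorrelated batch.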