Learning discrete distributions with infinite support
Neural Information Processing Systems
We present a novel approach to estimating discrete distributions with (potentially) infinite support in the total variation metric. In a departure from the established paradigm, we make no structural assumptions whatsoever on the sampling distribution. In such a setting, distribution-free risk bounds are impossible, and the best one could hope for is a fully empirical data-dependent bound. We derive precisely such bounds, and demonstrate that these are, in a well-defined sense, the best possible. Our main discovery is that the half-norm of the empirical distribution provides tight upper and lower estimates on the empirical risk.
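To make the abstract's central quantity concrete, here is a minimal Python sketch (not from the paper) of the half-norm of an empirical distribution, ||p̂||_{1/2} = (Σ_i √p̂(i))², together with a plug-in risk estimate of order √(||p̂||_{1/2} / n), which is one natural reading of how this quantity controls the empirical risk. The exact bound, constants, and tightness guarantees are those established in the paper; treat this as illustrative only.

```python
import numpy as np

def empirical_distribution(samples):
    """Empirical frequencies p_hat over the observed support."""
    _, counts = np.unique(samples, return_counts=True)
    return counts / counts.sum()

def half_norm(p_hat):
    """Half-norm ||p||_{1/2} = (sum_i sqrt(p_i))^2."""
    return np.sqrt(p_hat).sum() ** 2

def empirical_risk_estimate(samples):
    """Plug-in quantity sqrt(||p_hat||_{1/2} / n).

    Illustrative only: the paper's bound may differ in constants
    and correction terms.
    """
    p_hat = empirical_distribution(samples)
    n = len(samples)
    return np.sqrt(half_norm(p_hat) / n)

# Example: a geometric distribution has (countably) infinite support,
# matching the setting of the paper.
rng = np.random.default_rng(0)
samples = rng.geometric(p=0.3, size=10_000)
print(f"empirical half-norm: {half_norm(empirical_distribution(samples)):.3f}")
print(f"plug-in risk estimate: {empirical_risk_estimate(samples):.4f}")
```

Note that even when the true support is infinite, the empirical distribution is supported on finitely many observed symbols, so its half-norm is always finite and computable from the sample alone, which is what makes a fully empirical bound feasible.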