Modelling Sampling Distributions of Test Statistics with Autograd
Ali Al Kadhim, Harrison B. Prosper
Automatic differentiation (see, for example, Ref. [1]) has revolutionized machine learning, permitting the routine application of gradient descent algorithms to fit models of essentially unlimited complexity to data. The same technology can be used to differentiate these models with respect to their inputs without the need to calculate the derivatives explicitly [2]. A potentially useful application of this capability is approximating the probability density function (pdf), f(x | θ), given an accurate neural network model of the associated conditional cumulative distribution function (cdf), F(x | θ), using the fact that

f(x | θ) = ∂F(x | θ) / ∂x,  (1)

where θ are the parameters of the data-generation mechanism, which we distinguish from the parameters w of the neural network model. This paper explores this possibility in the context of simulation-based frequentist inference [3-7]. Equation (1) furnishes an approximation of the pdf f(x | θ) whether x is a function of the underlying observations D only or a test statistic x = λ(D; θ) that depends on D as well as on the parameters θ. Moreover, computing the derivative of the cdf with autograd to obtain the pdf is exact: autograd does not use finite-difference approximations.
May 3, 2024
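As an illustration of Eq. (1), the sketch below shows how a pdf can be recovered from a cdf model via autograd in PyTorch. It is a minimal sketch, not the authors' code: the network CDFNet and the helper pdf_from_cdf are hypothetical names, and the sigmoid output layer is only one simple way to constrain a network's output to (0, 1).

import torch
import torch.nn as nn

# Hypothetical stand-in for a trained model of the conditional cdf
# F(x | theta). A proper cdf model must also be monotone in x; how
# that is enforced or learned is outside the scope of this sketch.
class CDFNet(nn.Module):
    def __init__(self, n_theta: int, n_hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + n_theta, n_hidden),
            nn.Tanh(),
            nn.Linear(n_hidden, 1),
            nn.Sigmoid(),  # squash output into (0, 1)
        )

    def forward(self, x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, theta], dim=-1))

def pdf_from_cdf(model: nn.Module, x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Approximate f(x | theta) = dF(x | theta)/dx, as in Eq. (1)."""
    x = x.detach().requires_grad_(True)  # differentiate w.r.t. the input x, not the weights w
    F = model(x, theta)
    # Each row of F depends only on the same row of x, so the gradient
    # of the sum gives the per-row derivative dF_i/dx_i. The derivative
    # is computed analytically through the graph; no finite differences.
    (f,) = torch.autograd.grad(F.sum(), x)
    return f

# Usage: evaluate the implied density on a grid of x for fixed theta.
model = CDFNet(n_theta=2)  # untrained here; in practice fit F(x | theta) first
x = torch.linspace(-3.0, 3.0, 101).unsqueeze(-1)
theta = torch.tensor([[0.5, 1.0]]).expand(101, 2)
f = pdf_from_cdf(model, x, theta)

Because torch.autograd.grad differentiates the computational graph analytically, the returned f is the exact derivative of the network model of the cdf, consistent with the paper's observation that autograd involves no finite-difference approximation.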