Generative neural networks for characteristic functions
The characteristic function is one of the fundamental objects in probability theory, since it uniquely determines the distribution of a real-valued random vector in a concise way. Its properties often simplify theoretical derivations, especially when sums of independent random variables are investigated. Moreover, certain properties of the underlying random vector, such as its moments, can be derived from it directly. A disadvantage of working with characteristic functions is that simulating the corresponding random vector is not straightforward when no further information about it is available. As Devroye remarks on simulation from a (univariate) characteristic function in [12]: "If the characteristic function is known in black-box format, very little can be done in a universal manner". This poses major challenges in applications, since simulation from the corresponding random vector is often essential to assess quantities of interest. Several approaches to simulating from a random vector with a given characteristic function naturally come to mind. There are various ways of "inverting" the characteristic function to obtain the corresponding (Lebesgue) density or distribution function, such as the Fourier inversion formula, Lévy's inversion theorem, and several other variants thereof.
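To make the inversion idea concrete, the following is a minimal numerical sketch (not from the paper) of the Fourier inversion formula f(x) = (1/2π) ∫ e^{-itx} φ(t) dt, applied to the standard normal characteristic function φ(t) = exp(-t²/2); the truncation bound `t_max` and grid size `n` are illustrative choices, and the integral is approximated by a simple Riemann sum.

```python
import numpy as np

def density_from_cf(phi, x, t_max=50.0, n=4001):
    """Approximate the density at x by truncated Fourier inversion:
    f(x) ~ (1/(2*pi)) * sum over a grid of e^{-i t x} * phi(t) * dt."""
    t = np.linspace(-t_max, t_max, n)
    dt = t[1] - t[0]
    integrand = np.exp(-1j * t * x) * phi(t)
    # The imaginary part vanishes (up to numerical error) for a real density.
    return (integrand.sum() * dt).real / (2 * np.pi)

# Standard normal characteristic function: exp(-t^2 / 2).
phi_normal = lambda t: np.exp(-t**2 / 2)

# Recovered density at 0; the exact value is 1/sqrt(2*pi) ~ 0.3989.
f0 = density_from_cf(phi_normal, 0.0)
```

This recovers the density pointwise, but, as the text notes, it does not by itself yield a sampling scheme: one would still need, e.g., rejection or inverse-transform steps on top of the recovered density.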