Recurrent Neural Networks with more flexible memory: better predictions than rough volatility
Damien Challet, Vincent Ragel
–arXiv.org Artificial Intelligence
Some time series in Nature have a very long memory (Robinson, 2003): fluid turbulence (Resagk et al., 2006), asset price volatility (Cont, 2001), and tick-by-tick events in financial markets (Challet and Stinchcombe, 2001; Lillo and Farmer, 2004). From a modelling point of view, this means that the current value of an observable of interest depends on its past values through a convolution with a long-tailed kernel. Deep learning tackles past dependence in time series with recurrent neural networks (RNNs). These networks are, in essence, moving averages of nonlinear functions of their inputs, and they learn the parameters of both the averages and the functions. Provided that they are sufficiently large, such networks can approximate long-tailed kernels satisfactorily, and they can of course account for problems more complex than a simple linear convolution.
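A minimal sketch (not the authors' code) of the contrast the abstract draws: a standard recurrent update h_t = (1 - α) h_{t-1} + α f(x_t) is an exponential moving average of a nonlinear function of the input, so its memory kernel decays exponentially, whereas long-memory series call for a long-tailed (e.g. power-law) kernel. The input series, the choice of tanh as the nonlinearity, and the values of alpha and beta are all illustrative assumptions, not taken from the paper.

```python
# Sketch: exponential (RNN-style) memory vs. long-tailed kernel convolution.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)           # toy input series

# 1) RNN-style memory: the recurrent update below is an exponential moving
#    average of tanh(x), so the weight on lag k decays like (1 - alpha)^k.
alpha = 0.05
h = np.zeros_like(x)
for t in range(1, len(x)):
    h[t] = (1 - alpha) * h[t - 1] + alpha * np.tanh(x[t])

# 2) Long memory: causal convolution with a long-tailed kernel K(k) ~ k^(-beta),
#    which puts far more weight on the distant past than any single EMA.
beta = 0.6
lags = np.arange(1, 200)
kernel = lags.astype(float) ** -beta
kernel /= kernel.sum()                  # normalize to unit mass
y = np.convolve(np.tanh(x), kernel, mode="full")[: len(x)]

print(h[-5:])
print(y[-5:])
```

Since a single recurrent unit forgets lag k at rate roughly exp(-alpha * k) while the power-law kernel forgets only at rate k^(-beta), approximating a long-tailed kernel with standard RNNs amounts to mixing many units with different time scales, which is why sufficiently large networks are needed.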
Aug-4-2023