
Deep Smoothing of the Implied Volatility Surface

arXiv.org Machine Learning

We present an artificial neural network (ANN) approach to value financial derivatives. Unlike in standard ANN applications, practitioners use option pricing models equally to validate market prices and to infer unobserved prices. Importantly, models need to generate realistic arbitrage-free prices, meaning that no option portfolio can lead to risk-free profits. The absence of arbitrage opportunities is guaranteed by penalizing the loss with soft constraints on an extended grid of input values. ANNs can be pre-trained by first calibrating a standard option pricing model and then training the ANN on a larger synthetic dataset generated from the calibrated model. The parameter transfer, as well as the no-arbitrage constraints, appears particularly useful when only sparse or erroneous data are available. We also explore how deeper ANNs improve over shallower ones, along with other properties of the network architecture. We benchmark our method against standard option pricing models, such as Heston with and without jumps. We validate our method on both training and testing sets, highlighting its capacity both to reproduce observed prices and to predict new ones.
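
To make the soft-constraint idea concrete, here is a minimal PyTorch sketch (our illustration, not the authors' code, and written in terms of call prices rather than the implied volatilities the paper actually smooths): static no-arbitrage conditions, convexity in strike and monotonicity in maturity, are penalized on an extended synthetic grid alongside the usual pricing loss. The network shape, the penalty weight `lam`, and the helper `arbitrage_penalty` are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation): a pricing network
# C(k, t) in strike k and maturity t, trained with soft no-arbitrage
# penalties evaluated on an extended synthetic grid of inputs.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Softplus(),
                    nn.Linear(64, 64), nn.Softplus(),
                    nn.Linear(64, 1))

def arbitrage_penalty(k_grid, t_grid):
    """Penalize butterfly (d2C/dk2 < 0) and calendar (dC/dt < 0) arbitrage."""
    k = k_grid.clone().requires_grad_(True)
    t = t_grid.clone().requires_grad_(True)
    c = net(torch.stack([k, t], dim=-1)).squeeze(-1)
    dc_dk, dc_dt = torch.autograd.grad(c.sum(), (k, t), create_graph=True)
    d2c_dk2 = torch.autograd.grad(dc_dk.sum(), k, create_graph=True)[0]
    butterfly = torch.relu(-d2c_dk2).mean()  # convexity in strike
    calendar = torch.relu(-dc_dt).mean()     # monotonicity in maturity
    return butterfly + calendar

def loss_fn(k_obs, t_obs, c_obs, k_grid, t_grid, lam=1.0):
    pred = net(torch.stack([k_obs, t_obs], dim=-1)).squeeze(-1)
    return ((pred - c_obs) ** 2).mean() + lam * arbitrage_penalty(k_grid, t_grid)
```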


Neural networks for option pricing and hedging: a literature review

arXiv.org Machine Learning

This work reviews the literature on the use of neural networks for option pricing and hedging. The motivation for this summary arose from our companion paper Ruf and Wang [2019]. There we continue the discussions of this note; in particular, of potentially problematic data leakage when training ANNs on historic financial data. This paper is organised in the following way. Section 2 features Table 1, a summary of the literature that concerns the use of ANNs for nonparametric pricing (and hedging) of options. Section 3 provides a list of recommended papers from Table 1. Section 4 provides an overview of related work where ANNs are applied in the context of option pricing and hedging, but not necessarily as nonparametric estimation tools. Section 5 briefly discusses various regularisation techniques used in the reviewed literature.


Sequential Tracking in Pricing Financial Options using Model Based and Neural Network Approaches

Neural Information Processing Systems

This paper shows how the prices of option contracts traded in financial markets can be tracked sequentially by means of the Extended Kalman Filter (EKF) algorithm. I consider call and put option pairs with identical strike price and time of maturity as a two-output nonlinear system. The Black-Scholes approach, popular in the finance literature, and the Radial Basis Functions neural network are used in modelling the nonlinear system generating these observations. I show how both these systems may be identified recursively using the EKF algorithm. I present results of simulations on some FTSE 100 Index options data and discuss the implications of viewing the pricing problem in this sequential manner.

1 INTRODUCTION

Data from the financial markets has recently been of much interest to the neural computing community. The complexity of the underlying macroeconomic system, and how traders react to the flow of information, leads to highly nonlinear relationships between observations.
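
As a rough sketch of this sequential viewpoint, the following Python code (ours, not the paper's; the paper recursively identifies Black-Scholes and RBF model parameters) runs one EKF update with a one-dimensional state, an implied volatility following a random walk, and the call/put pair as the two-dimensional observation. The noise levels `Q` and `R` are placeholder assumptions.

```python
# Minimal EKF sketch (ours): track a single implied-volatility state so that
# Black-Scholes call/put prices for one strike/maturity pair match the
# observed two-output price vector z = [call, put].
import numpy as np
from scipy.stats import norm

def bs_call_put(S, K, r, tau, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    call = S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
    put = call - S + K * np.exp(-r * tau)  # put-call parity
    return np.array([call, put])

def ekf_step(sigma, P, z, S, K, r, tau, Q=1e-6, R=0.25):
    """One EKF update: scalar state sigma, observation z = [call, put]."""
    P = P + Q                                # random-walk state prediction
    h = bs_call_put(S, K, r, tau, sigma)
    eps = 1e-5                               # numerical measurement Jacobian
    H = (bs_call_put(S, K, r, tau, sigma + eps) - h) / eps  # shape (2,)
    S_cov = H[:, None] @ H[None, :] * P + R * np.eye(2)
    K_gain = P * H @ np.linalg.inv(S_cov)    # Kalman gain, shape (2,)
    sigma = sigma + K_gain @ (z - h)
    P = (1.0 - K_gain @ H) * P
    return sigma, P
```

Iterating `ekf_step` over a time series of observed call/put price pairs yields a sequentially updated volatility estimate together with its uncertainty `P`.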


Incorporating Second-Order Functional Knowledge for Better Option Pricing

Neural Information Processing Systems

Incorporating prior knowledge of a particular task into the architecture of a learning algorithm can greatly improve generalization performance. We study here a case where we know that the function to be learned is non-decreasing in two of its arguments and convex in one of them. For this purpose we propose a class of functions similar to multi-layer neural networks that (1) has those properties and (2) is a universal approximator of continuous functions with these and other properties. We apply this new class of functions to the task of modeling the price of call options. Experiments show improvements in regressing the price of call options when using the new function classes that incorporate the a priori constraints.

1 Introduction

Incorporating a priori knowledge of a particular task into a learning algorithm helps reduce the necessary complexity of the learner and generally improves performance, if the incorporated knowledge is relevant to the task and really corresponds to the generating process of the data. In this paper we consider prior knowledge on the positivity of some first and second derivatives of the function to be learned. In particular, such constraints have applications to modeling the price of European stock options. Based on the Black-Scholes formula, the price of a call stock option is monotonically increasing in both the "moneyness" and time to maturity of the option, and it is convex in the "moneyness". Section 3 better explains these terms and stock options.
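
One simple way to realize such shape constraints, sketched below in PyTorch as our own simplified variant rather than the paper's exact function class, is to exponentiate weights so the map is non-decreasing in both inputs and to use a convex, non-decreasing activation (softplus) so the output is convex in the moneyness input. Note that this simplification is convex in both arguments, which is stronger than the paper requires for the maturity input.

```python
# Our simplified sketch of a shape-constrained network, not necessarily the
# paper's exact function class: exponentiated weights make the map
# non-decreasing in both inputs m (moneyness) and t (time to maturity), and
# the convex, non-decreasing softplus activation makes it convex in m.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneConvexNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.w_m = nn.Parameter(torch.randn(hidden))  # moneyness weights
        self.w_t = nn.Parameter(torch.randn(hidden))  # maturity weights
        self.b = nn.Parameter(torch.zeros(hidden))    # hidden biases
        self.a = nn.Parameter(torch.randn(hidden))    # output weights

    def forward(self, m, t):
        # exp(w) >= 0 keeps each pre-activation non-decreasing in m and t;
        # softplus is convex and non-decreasing, and exp(a) >= 0 preserves
        # both properties under the final weighted sum.
        z = (torch.exp(self.w_m) * m.unsqueeze(-1)
             + torch.exp(self.w_t) * t.unsqueeze(-1) + self.b)
        return (torch.exp(self.a) * F.softplus(z)).sum(-1)
```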


The option pricing model based on time values: an application of the universal approximation theory on unbounded domains

arXiv.org Artificial Intelligence

Hutchinson, Lo and Poggio raised the question of whether learning networks can learn the Black-Scholes formula, and they proposed a network mapping the ratio of underlying price to strike $S_t/K$ and the time to maturity $\tau$ directly into the ratio of option price to strike $C_t/K$. In this paper we propose a novel decision function and study the network mapping $S_t/K$ and $\tau$ into the ratio of time value to strike $V_t/K$. The use of time values in artificial intelligence mirrors traders' natural intelligence. Empirical experiments demonstrate that this significantly improves Hutchinson-Lo-Poggio's original model through faster learning and better generalization performance. In order to take a conceptual viewpoint, and to prove that $V_t/K$ but not $C_t/K$ can be approximated by superpositions of logistic functions on its domain of definition, we work on the theory of universal approximation on unbounded domains. We prove some general results which imply that an artificial neural network with a single hidden layer and sigmoid activation represents no function in $L^{p}(\mathbb{R}^2 \times [0, 1]^{n})$ unless it is constant zero, and that an artificial neural network with a single hidden layer and logistic activation is a universal approximator of $L^{2}(\mathbb{R} \times [0, 1]^{n})$. Our work partially generalizes Cybenko's fundamental universal approximation theorem on the unit hypercube $[0, 1]^{n}$.
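
The target transformation itself is easy to state; the sketch below is ours, with discounting ignored so that the intrinsic value is $\max(S_t - K, 0)$ (with interest rates one would typically use $\max(S_t - K e^{-r\tau}, 0)$ instead). It converts observed call prices into the normalized time value $V_t/K$ for training and recovers prices from network predictions.

```python
# Sketch (ours) of the target transformation: learn the normalized time
# value V/K instead of C/K, then add the intrinsic value back. Discounting
# is ignored here (r = 0), so intrinsic value is max(S - K, 0); with rates
# one would typically use max(S - K * exp(-r * tau), 0) instead.
import numpy as np

def to_time_value(call_prices, S, K):
    """Training target: time value per unit of strike, V/K."""
    intrinsic = np.maximum(S - K, 0.0)
    return (call_prices - intrinsic) / K

def from_time_value(v_over_k, S, K):
    """Recover the call price C from a predicted V/K."""
    return v_over_k * K + np.maximum(S - K, 0.0)

# As in Hutchinson-Lo-Poggio, the network inputs remain x = (S/K, tau);
# only the regression target changes from C/K to V/K.
```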