Deep Fictitious Play for Stochastic Differential Games

Hu, Ruimeng

arXiv.org Machine Learning 

In stochastic differential games, a Nash equilibrium refers to a set of strategies from which no player has an incentive to deviate. Finding a Nash equilibrium is one of the core problems in noncooperative game theory; however, due to the notorious intractability of N-player games, computing the Nash equilibrium has been shown to be extremely time-consuming and memory-demanding, especially for large N [16]. On the other hand, a rich literature in game theory has been developed to study the consequences of strategies on interactions among a large group of rational "agents", e.g., systemic risk caused by inter-bank borrowing and lending, price impacts imposed by agents' optimal liquidation, and market prices under monopolistic competition. This makes it crucial to develop efficient theory and fast algorithms for computing the Nash equilibrium of N-player stochastic differential games. Deep neural networks with many layers have recently been shown to perform remarkably well in artificial intelligence (e.g., [2, 39]). The idea behind them is to use compositions of simple functions to approximate complicated ones, and there are approximation theorems showing that a wide class of functions on compact subsets can be approximated by a single-hidden-layer neural network (e.g., [53]). This opens up the possibility of solving high-dimensional systems using deep neural networks, and in fact, these techniques have been successfully applied to solve stochastic control problems [20, 29, 1]. In this paper, we propose to build deep neural networks by using strategies of fictitious play, and develop deep learning algorithms for computing the Nash equilibrium of asymmetric N-player non-zero-sum stochastic differential games.
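The fictitious-play iteration underlying the proposed algorithms is classical: each player repeatedly best-responds to the empirical strategies of the others. The paper lifts this idea to the stochastic differential setting with neural-network best responses; as a minimal, purely illustrative sketch (not the paper's method), the following shows fictitious play converging to the mixed Nash equilibrium of a simple 2-player zero-sum matrix game. All names and the game itself are illustrative choices, not taken from the paper.

```python
import numpy as np

# Matching pennies: payoff matrix for player 1; player 2 receives -A.
# Its unique Nash equilibrium is the mixed strategy (1/2, 1/2) for both.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

counts1 = np.ones(2)  # empirical counts of player 1's past actions
counts2 = np.ones(2)  # empirical counts of player 2's past actions

for _ in range(20000):
    # Each player forms a belief from the opponent's empirical frequencies
    # and plays a (pure) best response to that belief.
    belief2 = counts2 / counts2.sum()   # player 1's belief about player 2
    belief1 = counts1 / counts1.sum()   # player 2's belief about player 1
    a1 = np.argmax(A @ belief2)         # best response of player 1
    a2 = np.argmax(-(A.T) @ belief1)    # best response of player 2
    counts1[a1] += 1
    counts2[a2] += 1

# Empirical frequencies approach the mixed Nash equilibrium (1/2, 1/2).
freq1 = counts1 / counts1.sum()
freq2 = counts2 / counts2.sum()
print(freq1, freq2)
```

In the paper's setting, the discrete best response above is replaced by solving each player's stochastic control problem (via a deep neural network) against the other players' strategies from the previous stage.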
