Shirai, Tomoyuki
Dynamic Determinantal Point Processes
Osogami, Takayuki (IBM Research AI) | Raymond, Rudy (IBM Research AI) | Goel, Akshay (Graduate School of Mathematics, Kyushu University) | Shirai, Tomoyuki (Institute of Mathematics for Industry, Kyushu University) | Maehara, Takanori (RIKEN Center for Advanced Intelligence Project)
The determinantal point process (DPP) has been receiving increasing attention in machine learning as a generative model of subsets consisting of relevant and diverse items. Recently, there has been significant progress in developing efficient algorithms for learning the kernel matrix that characterizes a DPP. Here, we propose a dynamic DPP, which is a DPP whose kernel can change over time, and develop efficient learning algorithms for the dynamic DPP. In the dynamic DPP, the kernel depends on the subsets selected in the past, but we assume a particular structure in this dependency to allow efficient learning. We also assume that the kernel has low rank and exploit a recently proposed learning algorithm for the DPP with low-rank factorization, and show that its bottleneck computation can be reduced from O(M²K) time to O(MK²) time, where M is the number of items under consideration and K is the rank of the kernel, which can be set smaller than M by orders of magnitude.
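The O(MK²) normalizer for a low-rank L-ensemble follows from the standard identity det(L + I_M) = det(VᵀV + I_K) when L = VVᵀ with V of size M×K. The sketch below illustrates that identity for computing a DPP log-likelihood; the variable names (V, subset), the NumPy implementation, and the omission of the dynamic (time-varying) kernel structure are assumptions made for illustration, not the authors' code.

    # A minimal sketch of DPP log-likelihood with a low-rank kernel L = V @ V.T,
    # where V is M x K. Illustrative only; not the paper's implementation.
    import numpy as np

    def dpp_log_likelihood(V, subset):
        """log P(subset) = log det(L_subset) - log det(L + I_M),
        with L = V @ V.T and L_subset the principal submatrix on `subset`."""
        V_S = V[subset]                          # |S| x K rows of selected items
        L_S = V_S @ V_S.T                        # |S| x |S| principal submatrix
        _, log_det_LS = np.linalg.slogdet(L_S)
        # Normalizer via det(L + I_M) = det(V.T @ V + I_K), costing O(M K^2)
        # instead of O(M^2 K) or O(M^3).
        K = V.shape[1]
        _, log_det_norm = np.linalg.slogdet(V.T @ V + np.eye(K))
        return log_det_LS - log_det_norm

    # Toy usage with M = 100 items and rank K = 5
    rng = np.random.default_rng(0)
    V = rng.normal(size=(100, 5))
    print(dpp_log_likelihood(V, [3, 17, 42]))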
Mixing-Time Regularized Policy Gradient
Morimura, Tetsuro (IBM Research - Tokyo) | Osogami, Takayuki (IBM Research - Tokyo) | Shirai, Tomoyuki (Kyushu University)
Policy gradient reinforcement learning (PGRL) has been receiving substantial attention as a means of seeking stochastic policies that maximize cumulative reward. However, the learning speed of PGRL is known to decrease substantially when it explores policies whose Markov chains have long mixing times. We study a new approach that regularizes how PGRL explores policies by using the hitting time of the Markov chains. The hitting time gives an upper bound on the mixing time, and the proposed approach improves learning efficiency by keeping the mixing time of the Markov chains short. In particular, we propose a temporal-difference learning method for estimating the gradient of the hitting time. Numerical experiments show that the proposed method outperforms conventional methods of PGRL.
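As an illustration of the regularization idea, the sketch below combines an externally supplied reward gradient with a penalty on the estimated expected hitting time to a reference state, for a tabular softmax policy. The environment step function, all parameter names, and the score-function (REINFORCE) estimator of the hitting-time gradient are assumptions introduced for this sketch; the paper instead proposes a temporal-difference method for estimating that gradient.

    # A minimal sketch of a hitting-time-regularized policy gradient update.
    # Illustrative stand-in only; not the authors' TD-based estimator.
    import numpy as np

    def softmax_policy(theta, s):
        z = theta[s] - theta[s].max()
        p = np.exp(z)
        return p / p.sum()

    def rollout_hitting_time(theta, step, start, target, max_steps=1000):
        """Sample a trajectory from `start` until `target` is hit; return the
        hitting time and the accumulated score function (sum of grad log pi)."""
        s, grad_log = start, np.zeros_like(theta)
        for t in range(1, max_steps + 1):
            p = softmax_policy(theta, s)
            a = np.random.choice(len(p), p=p)
            grad_log[s] -= p            # grad_theta[s] log pi(a|s) for softmax
            grad_log[s, a] += 1.0       # ... equals one-hot(a) - pi(.|s)
            s = step(s, a)              # environment transition (user-supplied)
            if s == target:
                return t, grad_log
        return max_steps, grad_log

    def regularized_update(theta, reward_grad, step, start, target,
                           lam=0.1, lr=0.01, n_samples=16):
        """theta <- theta + lr * (grad J - lam * grad E[hitting time])."""
        g_hit = np.zeros_like(theta)
        for _ in range(n_samples):
            t, score = rollout_hitting_time(theta, step, start, target)
            g_hit += t * score          # score-function estimate of grad E[T_hit]
        g_hit /= n_samples
        return theta + lr * (reward_grad - lam * g_hit)

Here `reward_grad` stands for the usual policy-gradient estimate of the cumulative reward objective, obtained separately; the penalty term pushes the policy toward Markov chains with short hitting times, and hence short mixing times.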