Reproducibility Challenge NeurIPS 2019 Report on "Competitive Gradient Descent"
The authors present their method as a natural generalization of gradient descent to the two-player setting, in which the update is given by the Nash equilibrium of a regularized bilinear local approximation of the underlying game. The method avoids the oscillatory and divergent behaviors seen in alternating gradient descent. The paper proposes several experiments to establish the robustness of the method; this project aims to replicate those results. The paper also provides a detailed comparison to methods based on optimism and consensus, examining the convergence and stability properties of the discussed methods through numerical experiments and rigorous analysis. To clarify these terms, the comparison, and the proposed method before examining the experimental results, the next section gives the necessary background from the original paper. A minimal numerical sketch of the two-player contrast follows the background.

2 Background

Traditional optimization is concerned with a single agent trying to minimize a cost function, $\min_{x \in \mathbb{R}^m} f(x)$. The agent has a clear objective: to find a ("good local") minimum of $f$. Gradient descent (and its variants) is the reliable algorithmic baseline for this purpose.
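To make the contrast concrete, here is a minimal sketch comparing naive simultaneous gradient descent with the CGD update on the bilinear zero-sum game $f(x, y) = xy$. The toy game, step size, and the closed-form CGD update (derived for this special case, where $D^2_{xy} f = 1$) are illustrative assumptions, not the paper's general implementation.

```python
# Minimal sketch (assumed toy setup, not from the paper): naive two-player
# gradient descent vs. the CGD update on the bilinear zero-sum game
# f(x, y) = x*y, where the x-player minimizes f and the y-player maximizes f.
import numpy as np

eta = 0.2  # step size, chosen for illustration

def simgd_step(x, y):
    # Naive extension of gradient descent: both players follow their own
    # gradients simultaneously; on this game the iterates spiral outward.
    return x - eta * y, y + eta * x

def cgd_step(x, y):
    # CGD update: the Nash equilibrium of a regularized bilinear local
    # approximation. For f = x*y it reduces to the closed form below,
    # whose iteration matrix has spectral radius 1/sqrt(1 + eta^2) < 1.
    denom = 1.0 + eta ** 2
    return (x - eta * y) / denom, (y + eta * x) / denom

for name, step in [("simultaneous GD", simgd_step), ("CGD", cgd_step)]:
    x, y = 1.0, 1.0
    for _ in range(200):
        x, y = step(x, y)
    print(f"{name}: |(x, y)| after 200 steps = {np.hypot(x, y):.3e}")
# Simultaneous GD's norm grows without bound; CGD contracts to (0, 0),
# the unique Nash equilibrium of this game.
```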
JR-GAN: Jacobian Regularization for Generative Adversarial Networks
Generative adversarial networks (GANs) are notoriously difficult to train, and the reasons for their (non-)convergence behaviors are still not completely understood. Using a simple GAN example, we mathematically analyze the local convergence behavior of its training dynamics in a non-asymptotic way. We find that, to ensure a good convergence rate, two factors of the Jacobian must be simultaneously avoided: (1) Phase Factor: the Jacobian has complex eigenvalues with a large imaginary-to-real ratio; (2) Conditioning Factor: the Jacobian is ill-conditioned. Previous methods of regularizing the Jacobian can only alleviate one of these two factors, while making the other more severe. Building on this theoretical analysis, we propose Jacobian Regularized GANs (JR-GANs), which ensure that both factors are alleviated by construction. With extensive experiments on several popular datasets, we show that JR-GAN training is highly stable and achieves near state-of-the-art results both qualitatively and quantitatively.
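As a concrete illustration of the two factors, the following sketch computes both diagnostics for a Dirac-GAN-style toy Jacobian at equilibrium. The toy dynamics and the diagnostics shown are assumptions for illustration, not the JR-GAN construction itself.

```python
# Minimal sketch (illustrative, not the paper's implementation): measuring
# the Phase Factor and Conditioning Factor of a training-dynamics Jacobian.
# The toy Jacobian is that of the Dirac-GAN gradient field v = (-phi, theta).
import numpy as np

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # Jacobian of the toy gradient field at (0, 0)

eigvals = np.linalg.eigvals(J)
# Phase Factor: a large imaginary-to-real eigenvalue ratio means the
# dynamics are rotation-dominated, producing oscillations.
phase = np.max(np.abs(eigvals.imag) / np.maximum(np.abs(eigvals.real), 1e-12))
# Conditioning Factor: a large condition number means ill-conditioning,
# so some directions converge far more slowly than others.
cond = np.linalg.cond(J)
print(f"imag-to-real ratio: {phase:.2e}, condition number: {cond:.2e}")
# Here the eigenvalues are purely imaginary (ratio effectively infinite),
# so the Phase Factor dominates even though J is perfectly conditioned;
# a fix that damps the rotation must avoid degrading the conditioning.
```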