Appendix

Neural Information Processing Systems 

Potential games are appealing as they provide guarantees about the existence of Nash equilibria without requiring optimization over a compact set.

The learning rate is set to 6 × 10⁻⁴ at initialization and is sequentially lowered during training by a factor of 0.35 every 80 training steps, in the same way for all experiments. Similar to [52], we normalize the initial dictionary D0 by its largest singular value, as explained in Section 3.4 of the main paper. Divergence is monitored by computing the loss on the training set every 10 epochs.
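The training setup above can be sketched as follows; this is a minimal illustration, not the paper's code, and the function names (`step_decay_lr`, `normalize_dictionary`) are hypothetical:

```python
import numpy as np

def step_decay_lr(step, base_lr=6e-4, factor=0.35, interval=80):
    # Learning rate starts at 6e-4 and is lowered by a factor
    # of 0.35 every 80 training steps (step decay schedule).
    return base_lr * factor ** (step // interval)

def normalize_dictionary(D0):
    # Scale the initial dictionary D0 by its largest singular
    # value, so its spectral norm equals 1 (cf. Section 3.4).
    # For a 2-D array, ord=2 gives the largest singular value.
    return D0 / np.linalg.norm(D0, ord=2)
```

For example, `step_decay_lr(0)` returns the initial rate 6e-4, and after 160 steps the rate has been lowered twice, to 6e-4 × 0.35².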
