Appendix
Neural Information Processing Systems
Potential games are appealing as they provide guarantees about the existence of Nash equilibria without requiring optimization over a compact set.

The learning rate is set to 6×10⁻⁴ at initialization and is sequentially lowered during training by a factor of 0.35 every 80 training steps, identically for all experiments. Similar to [52], we normalize the initial dictionary D0 by its largest singular value, as explained in Section 3.4 of the main paper. Divergence is monitored by computing the loss on the training set every 10 epochs.
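The schedule and normalization above can be sketched as follows; the function names and the NumPy-based formulation are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def lr_at_step(step, lr0=6e-4, factor=0.35, every=80):
    """Step-decay schedule: start at lr0 and multiply by `factor`
    every `every` training steps (values taken from the text)."""
    return lr0 * factor ** (step // every)

def normalize_dictionary(D0):
    """Scale the initial dictionary D0 by its largest singular value,
    so its spectral norm becomes 1 (as in [52])."""
    sigma_max = np.linalg.svd(D0, compute_uv=False)[0]
    return D0 / sigma_max
```

For example, `lr_at_step(0)` returns 6e-4, and after 80 steps the rate drops to 0.35 × 6e-4.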