Appendix

Neural Information Processing Systems

We experiment with 8 implementations of MoCaD, i.e., two different calibrators combined with four different ensembling strategies, the same as in the previous experiments. For Learned-Mixin, the entropy term weight is set to the value suggested by [1]. We run each experiment five times and report the mean scores and the standard deviations. For the Dirichlet calibrator, we use the same configuration as in FEVER. Experimental Results. Table 2 shows the experimental results on image classification.
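The abstract does not reproduce the calibrator itself, but a Dirichlet calibrator in the standard sense (Kull et al., 2019) applies a linear map to the log-probabilities followed by a softmax. Below is a minimal NumPy sketch under that assumption; `dirichlet_calibrate`, `W`, and `b` are illustrative names, and in practice `W` and `b` would be fit by minimizing NLL on held-out validation data:

```python
import numpy as np

def dirichlet_calibrate(probs, W, b):
    """Dirichlet-style calibration sketch: linear map on log-probabilities,
    then softmax. probs: (n, k) class probabilities; W: (k, k); b: (k,)."""
    logits = np.log(np.clip(probs, 1e-12, 1.0)) @ W.T + b
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)

# With identity parameters the calibrator is a no-op on valid distributions.
p = np.array([[0.7, 0.2, 0.1]])
print(dirichlet_calibrate(p, np.eye(3), np.zeros(3)))  # ~[[0.7 0.2 0.1]]
```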




Supplementary Materials for " Multi-Agent Meta-Reinforcement Learning " AT echnical Lemmas

Neural Information Processing Systems

From the three-points identity of the Bregman divergence (Lemma 3.1 of [9]),

$$\mathrm{KL}(x \,\|\, y) - \mathrm{KL}(\hat{x} \,\|\, y) = \mathrm{KL}(x \,\|\, \hat{x}) + \langle \ln \hat{x} - \ln y,\; x - \hat{x} \rangle. \tag{12}$$

The first term in (12) can be bounded by expanding the definition of $\mathrm{KL}(x \,\|\, \hat{x})$. By Hölder's inequality, the second term in (12) is bounded as

$$\langle \ln \hat{x} - \ln y,\; x - \hat{x} \rangle \le \|\ln \hat{x} - \ln y\|_\infty \, \|x - \hat{x}\|_1.$$

Lemma 5 considers a block diagonal matrix; we prove the lemma via induction on $N$, which completes the induction proof. For Lemma 6, we introduce one more piece of notation before presenting the proof. This leads us to the initialization-dependent convergence rate of Algorithm 1, which we re-state and prove. In addition, we consider the case where the players' policies are initialized to be uniform policies; the rest of the proof follows by putting all the aforementioned results together.
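As a quick numerical sanity check (not part of the original lemma), identity (12) and the Hölder bound can be verified on random interior points of the probability simplex; the `kl` helper below is introduced for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl(p, q):
    """KL divergence between two points in the interior of the simplex."""
    return np.sum(p * (np.log(p) - np.log(q)))

x, x_hat, y = (rng.dirichlet(np.ones(5)) for _ in range(3))

# Three-points identity (12): KL(x||y) - KL(x_hat||y)
#   = KL(x||x_hat) + <ln x_hat - ln y, x - x_hat>.
lhs = kl(x, y) - kl(x_hat, y)
rhs = kl(x, x_hat) + np.dot(np.log(x_hat) - np.log(y), x - x_hat)
assert np.isclose(lhs, rhs)

# Hölder bound on the inner-product term: <a, b> <= ||a||_inf * ||b||_1.
inner = np.dot(np.log(x_hat) - np.log(y), x - x_hat)
bound = np.max(np.abs(np.log(x_hat) - np.log(y))) * np.sum(np.abs(x - x_hat))
assert inner <= bound
print("identity (12) and the Hölder bound hold numerically")
```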