
Impure Simplicial Complex and Term-Modal Logic with Assignment Operators

Yang, Yuanzhe

arXiv.org Artificial Intelligence

Impure simplicial complexes are a powerful tool for modeling multi-agent epistemic situations where agents may die, but it is difficult to define a satisfactory semantics for the ordinary propositional modal language on such models, since many conceptually dubious expressions involving dead agents can be expressed in this language. In this paper, we introduce a term-modal language with assignment operators, in which such conceptually dubious expressions are syntactically excluded. We define both simplicial semantics and first-order Kripke semantics for this language, characterize their respective expressivity through notions of bisimulation, and show that the two semantics are equivalent when we restrict attention to a special class of first-order Kripke models called local epistemic models. We also offer a complete axiomatization for the epistemic logic based on this language, and show that our language admits a notion of assignment normal form. Finally, we discuss the behavior of a kind of intensional distributed knowledge that can be naturally expressed in our language.
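The simplicial semantics sketched in the abstract can be illustrated with a toy model. In a hedged sketch (the tiny model, the function names, and the tie to the paper's exact definitions are my own illustration, not the paper's formalism), a world is a facet, i.e. a maximal set of vertices `(agent, local_value)`; an agent is alive in a world iff it colours a vertex there, and two worlds are indistinguishable to an agent iff they share one of its vertices:

```python
# Toy simplicial model: a world is a facet (a frozenset of (agent, value)
# vertices). Facet Y is "impure": agent c has no vertex there, i.e. c is dead.
# All names and the two-facet model are illustrative assumptions.
X = frozenset({("a", 0), ("b", 0), ("c", 0)})   # all three agents alive
Y = frozenset({("a", 0), ("b", 1)})             # agent c is dead here
COMPLEX = [X, Y]

def alive(agent, facet):
    """An agent is alive in a world iff it colours some vertex of the facet."""
    return any(v[0] == agent for v in facet)

def indistinguishable(agent, f1, f2):
    """Two worlds look alike to an agent iff they share one of its vertices."""
    return any(v in f2 for v in f1 if v[0] == agent)

def knows(agent, prop, facet):
    """K_agent(prop) at a world: prop holds at every world the agent cannot
    distinguish from it. Only meaningful when the agent is alive there."""
    return alive(agent, facet) and all(
        prop(f) for f in COMPLEX if indistinguishable(agent, facet, f)
    )
```

At `X`, agent `a` cannot rule out `Y` (both contain `("a", 0)`), so `a` knows that `b` is alive but does not know that `c` is alive. This is exactly the kind of situation where a careless propositional language would let one also write statements *about dead agents' knowledge*, which the paper's term-modal language excludes syntactically.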





Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games

Neural Information Processing Systems

Real-world applications such as economics and policy making often involve solving multi-agent games with two distinctive features: (1) the agents are inherently asymmetric and partitioned into leaders and followers; (2) the agents have different reward functions, so the game is general-sum. The majority of existing results in this field focus on either symmetric solution concepts (e.g.
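The leader-follower asymmetry described here can be made concrete with a toy bimatrix game: the leader commits to an action first, and the follower best-responds. A minimal enumeration sketch for pure-strategy strong Stackelberg equilibria (the payoff matrices and function name are my own illustration, not from the paper):

```python
# L[i][j], F[i][j]: leader / follower payoffs when the leader plays row i and
# the follower plays column j. Illustrative numbers, not from the paper.
L = [[2, 4],
     [1, 3]]
F = [[1, 0],
     [0, 1]]

def pure_stackelberg(L, F):
    best = None
    for i in range(len(L)):
        # The follower best-responds to the leader's commitment to row i,
        # breaking ties in the leader's favour (the "strong" convention).
        br = max(F[i])
        j = max((k for k in range(len(F[i])) if F[i][k] == br),
                key=lambda k: L[i][k])
        if best is None or L[i][j] > best[2]:
            best = (i, j, L[i][j])
    return best

i, j, v = pure_stackelberg(L, F)  # leader commits to row 1 and earns 3
```

Note the value of commitment in this example: row 1 is dominated for the leader in simultaneous play, yet committing to it steers the follower into column 1 and yields payoff 3 rather than 2. The sample-efficiency question the paper studies arises when `L` and `F` must be estimated from noisy bandit feedback instead of being known.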




Appendix Table of Contents

Neural Information Processing Systems

C.2 Proof of item 2 for constrained Markov games

Here we construct a counterexample to prove item 2. Consider a constrained Markov game with


Sharper Generalization Bounds for Pairwise Learning: Supplementary Material A Proof of Theorem 1

Neural Information Processing Systems

To prove Theorem 1, we first need to introduce some lemmas; with these lemmas, we can give the proof of Theorem 1 on high-probability bounds for the generalization gap. The concentration inequality established in Lemma A.1 applies to a summation of random functions, and according to Lemma A.3, all the assumptions of Lemma A.1 hold for the random functions under consideration, so we can apply Lemma A.1 to derive the stated bound. We then prove Lemma 2 on the norm of the output model and plug the resulting inequality back into (B.1). To prove Theorem 3, we introduce further lemmas, assuming that (4.3) holds for all z, z'.
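For context, the generalization gap discussed above is, in the standard pairwise-learning setting (a hedged reconstruction using generic notation; the paper's exact symbols may differ), the difference between the population and empirical pairwise risks of the model A(S) returned by the algorithm:

```latex
R(w) = \mathbb{E}_{z, z'}\bigl[\ell(w; z, z')\bigr], \qquad
R_S(w) = \frac{1}{n(n-1)} \sum_{i \neq j} \ell(w; z_i, z_j),
```

and high-probability bounds of the kind proved in Theorem 1 control the deviation |R(A(S)) - R_S(A(S))| uniformly over the draw of the sample S.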