Scalable Primal-Dual Actor-Critic Method for Safe Multi-Agent RL with General Utilities
IEOR Department, UC Berkeley
Neural Information Processing Systems
We investigate safe multi-agent reinforcement learning, where agents seek to collectively maximize an aggregate sum of local objectives while satisfying their own safety constraints. The objective and constraints are described by general utilities, i.e., nonlinear functions of the long-term state-action occupancy measure, which encompass broader decision-making goals such as risk, exploration, or imitation. The exponential growth of the state-action space with the number of agents presents challenges to global observability, further exacerbated by the global coupling arising from the agents' safety constraints. To tackle this issue, we propose a primal-dual method utilizing shadow rewards and κ-hop neighbor truncation under a form of correlation decay property, where κ is the communication radius.
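To illustrate the primal-dual structure described above, the following is a minimal single-agent sketch in Python. The callables (`occupancy`, `grad_objective`, `grad_constraint`, `constraint_fn`, `policy_gradient_step`) and the learning rate are illustrative assumptions, not the paper's implementation, and the κ-hop neighbor truncation of the multi-agent setting is omitted for brevity.

```python
# Minimal sketch of one primal-dual iteration with a shadow reward.
# Assumed (hypothetical) callables, not the paper's API:
#   occupancy(theta)            -> estimated state-action occupancy measure d
#   grad_objective(d)           -> gradient of the objective utility f at d
#   grad_constraint(d)          -> gradient of the constraint utility g at d
#   constraint_fn(d)            -> value of g(d); the safety constraint is g(d) >= 0
#   policy_gradient_step(theta, reward) -> actor update using `reward` as per-step reward
# In the multi-agent setting each agent would form these quantities from
# kappa-hop neighbor information only; that truncation is omitted here.

def primal_dual_step(theta, lam, occupancy, grad_objective, grad_constraint,
                     constraint_fn, policy_gradient_step, lr_dual=0.01):
    """One primal-dual iteration: actor ascent on the Lagrangian, dual update on lambda."""
    d = occupancy(theta)
    # Shadow reward: gradient of the Lagrangian utility f(d) + lam * g(d) with respect
    # to d, which lets a standard policy-gradient step handle the nonlinear utilities.
    shadow_reward = grad_objective(d) + lam * grad_constraint(d)
    theta = policy_gradient_step(theta, shadow_reward)    # primal (actor) step
    lam = max(0.0, lam - lr_dual * constraint_fn(d))      # dual step; lam grows when g(d) < 0
    return theta, lam
```

The dual variable acts as an adaptive penalty: it increases while the safety constraint is violated and decays toward zero once it is satisfied, so the shadow reward automatically trades off the objective and constraint utilities.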