Scalable Multi-Objective Reinforcement Learning with Fairness Guarantees using Lorenz Dominance
Michailidis, Dimitris, Röpke, Willem, Roijers, Diederik M., Ghebreab, Sennay, Santos, Fernando P.
arXiv.org Artificial Intelligence
Multi-Objective Reinforcement Learning (MORL) aims to learn a set of policies that optimize trade-offs between multiple, often conflicting objectives. MORL is computationally more complex than single-objective RL, particularly as the number of objectives increases. Additionally, when objectives involve the preferences of agents or groups, ensuring fairness is socially desirable. This paper introduces a principled algorithm that incorporates fairness into MORL while improving scalability to many-objective problems. We propose using Lorenz dominance to identify policies with equitable reward distributions and introduce λ-Lorenz dominance to enable flexible fairness preferences. We release a new, large-scale real-world transport planning environment and demonstrate that our method encourages the discovery of fair policies, showing improved scalability in two large cities (Xi'an and Amsterdam). Our methods outperform common multi-objective approaches, particularly in high-dimensional objective spaces.
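The Lorenz dominance relation mentioned in the abstract compares reward vectors by their cumulative sums after sorting in ascending order, so that more equitable distributions are preferred among vectors of equal total reward. A minimal sketch of this comparison (function names are illustrative, not taken from the paper's code; the λ-Lorenz variant, which interpolates toward standard Pareto dominance, is not shown):

```python
def lorenz_curve(rewards):
    """Cumulative sums of the reward vector sorted ascending."""
    curve, total = [], 0.0
    for r in sorted(rewards):
        total += r
        curve.append(total)
    return curve

def lorenz_dominates(u, v):
    """True if u Lorenz-dominates v: the Lorenz curve of u is
    pointwise >= that of v, with strict inequality somewhere."""
    cu, cv = lorenz_curve(u), lorenz_curve(v)
    return (all(a >= b for a, b in zip(cu, cv))
            and any(a > b for a, b in zip(cu, cv)))

# Among equal-sum reward vectors, the more equitable one dominates:
print(lorenz_dominates([0.5, 0.5], [0.9, 0.1]))  # True
print(lorenz_dominates([0.9, 0.1], [0.5, 0.5]))  # False
```

Filtering a policy set with this relation instead of Pareto dominance retains only policies whose reward distributions across objectives (or groups) are equitable, which is what enables the scalability gains the paper reports in high-dimensional objective spaces.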
Nov-27-2024
- Country:
- Asia > China
- Shaanxi Province > Xi'an (0.26)
- Europe > Netherlands
- North Holland > Amsterdam (0.26)
- Genre:
- Research Report (1.00)
- Industry:
- Transportation > Infrastructure & Services (0.46)