Relative Entropy Regularized Reinforcement Learning for Efficient Encrypted Policy Synthesis
Jihoon Suh, Yeongjun Jang, Kaoru Teranishi, Takashi Tanaka
–arXiv.org Artificial Intelligence
We propose an efficient encrypted policy synthesis method for privacy-preserving model-based reinforcement learning. We first show that the relative-entropy-regularized reinforcement learning (RERL) framework admits a computationally convenient linear, "min-free" structure for value iteration, enabling direct and efficient integration of fully homomorphic encryption (FHE) with bootstrapping into policy synthesis. We then analyze convergence and error bounds of the encrypted policy synthesis as it propagates encryption-induced errors, including quantization and bootstrapping noise. Numerical simulations validate the theoretical analysis and demonstrate the effectiveness of the RERL framework for FHE-based encrypted policy synthesis.
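The "min-free" structure mentioned in the abstract can be illustrated with a minimal plaintext sketch. Under relative-entropy regularization (as in linearly solvable MDPs in the style of Todorov), the Bellman backup on the desirability function z = exp(-V) becomes a linear map, so each iteration uses only additions and multiplications, which are the operations FHE supports natively. The function name, normalization choice, and problem setup below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def min_free_value_iteration(q, P, iters=200):
    """Linear ("min-free") value iteration on the desirability z = exp(-V).

    q : per-state cost vector, shape (n,)
    P : passive (uncontrolled) transition matrix, shape (n, n), rows sum to 1

    The backup z <- diag(exp(-q)) @ P @ z involves no min/argmin, only
    matrix-vector products, which is what makes it amenable to homomorphic
    evaluation. Normalizing by the max each step is a power-iteration-style
    convention (first-exit formulations would instead use absorbing states).
    """
    G = np.diag(np.exp(-q)) @ P          # linear backup operator
    z = np.ones(len(q))                  # initialize with V = 0, i.e. z = 1
    for _ in range(iters):
        z = G @ z
        z /= z.max()                     # keep z in a bounded range
    V = -np.log(z)
    # Optimal controlled transitions reweight the passive dynamics by z:
    #     u*(x'|x) proportional to p(x'|x) * z(x')
    policy = P * z[None, :]
    policy /= policy.sum(axis=1, keepdims=True)
    return V, policy
```

In an encrypted deployment, the loop body would be evaluated homomorphically on ciphertexts of z, with bootstrapping refreshing the noise between iterations; the quantization and bootstrapping errors analyzed in the paper enter exactly through that loop.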
Jun-17-2025
- Country:
- Asia
- Japan (0.04)
- South Korea > Seoul
- Seoul (0.04)
- North America > United States
- California > Santa Clara County
- Stanford (0.04)
- Indiana > Tippecanoe County
- Lafayette (0.04)
- West Lafayette (0.04)
- Massachusetts > Middlesex County
- Cambridge (0.04)
- New Jersey > Hudson County
- Hoboken (0.04)
- Genre:
- Research Report (0.84)
- Industry:
- Information Technology > Security & Privacy (0.94)