Use the Online Network If You Can: Towards Fast and Stable Reinforcement Learning

Ahmed Hendawy, Henrik Metternich, Théo Vincent, Mahdi Kallel, Jan Peters, Carlo D'Eramo

arXiv.org Artificial Intelligence 

The use of target networks is a popular approach for estimating value functions in deep Reinforcement Learning (RL). While effective, the target network remains a compromise solution that preserves stability at the cost of slowly moving targets, thus delaying learning. Conversely, using the online network as a bootstrapped target is intuitively appealing, albeit well-known to lead to unstable learning. In this work, we aim to obtain the best of both worlds by introducing a novel update rule that computes the target using the MINimum estimate between the Target and Online network, giving rise to our method, MINTO. Through this simple, yet effective modification, we show that MINTO enables faster and more stable value function learning by mitigating the potential overestimation bias of using the online network for bootstrapping. Notably, MINTO can be seamlessly integrated into a wide range of value-based and actor-critic algorithms at negligible cost. We evaluate MINTO extensively across diverse benchmarks, spanning online and offline RL, as well as discrete and continuous action spaces. Across all benchmarks, MINTO consistently improves performance, demonstrating its broad applicability and effectiveness.

Reinforcement Learning (RL) has demonstrated exceptional performance and achieved major breakthroughs across a diverse spectrum of decision-making challenges. Noteworthy applications include learning complex locomotion skills (Haarnoja et al., 2018b; Rudin et al., 2022) and enabling sophisticated, real-world capabilities such as robotic manipulation (Andrychowicz et al., 2020; Lu et al., 2025). The foundation of this success lies primarily in Deep RL, initiated by the introduction of the Deep Q-Network (DQN) (Mnih et al., 2013), which marked the first successful application of deep neural networks in RL. To make this possible, Mnih et al. (2013) introduced several techniques aimed primarily at mitigating the deadly triad issue (Van Hasselt et al., 2018), which arises from the combination of function approximation, off-policy data, and bootstrapping of targets.
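To make the update rule described in the abstract concrete, the following is a minimal sketch of a MINTO-style TD target in a DQN-like setting, written in PyTorch. The function and argument names are illustrative assumptions, not the authors' implementation; the key point is that the bootstrap value is the element-wise minimum of the target-network and online-network estimates at the next state.

```python
import torch

def minto_td_target(q_online, q_target, reward, next_obs, done, gamma=0.99):
    """Sketch of a MINTO-style TD target for a DQN-like agent (illustrative)."""
    with torch.no_grad():
        # Greedy bootstrap value from each network at the next state.
        next_q_online = q_online(next_obs).max(dim=1).values
        next_q_target = q_target(next_obs).max(dim=1).values
        # MINTO: take the minimum of the two estimates to curb the
        # overestimation that plain online-network bootstrapping can cause.
        bootstrap = torch.minimum(next_q_online, next_q_target)
        # Standard TD target with the MINTO bootstrap value.
        return reward + gamma * (1.0 - done) * bootstrap
```

The same idea carries over to actor-critic methods: wherever the critic's target would normally query only the target network, the bootstrap value is replaced by the minimum of the target- and online-network estimates.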