

Residual-as-Teacher: Mitigating Bias Propagation in Student-Teacher Estimation

Yamamoto, Kakei, Wainwright, Martin J.

arXiv.org Machine Learning

We study statistical estimation in a student-teacher setting, where predictions from a pre-trained teacher are used to guide a student model. A standard approach is to train the student to directly match the teacher's outputs, which we refer to as student soft matching (SM). This approach directly propagates any systematic bias or mis-specification present in the teacher, thereby degrading the student's predictions. We propose and analyze an alternative scheme, which we call residual-as-teacher (RaT), in which the teacher is used to estimate residuals in the student's predictions. Our analysis shows how the student can thereby emulate a proximal gradient scheme for solving an oracle optimization problem, which provably reduces the effect of teacher bias. For general student-teacher pairs, we establish non-asymptotic excess risk bounds for any RaT fixed point, along with convergence guarantees for the student-teacher iterative scheme. For kernel-based student-teacher pairs, we prove a sharp separation: the RaT method achieves the minimax-optimal rate, while the SM method incurs constant prediction error for any sample size. Experiments on both synthetic data and ImageNette classification under covariate shift corroborate our theoretical findings.
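To make the contrast concrete, the sketch below implements one plausible reading of the two target constructions under strong simplifying assumptions: a ridge-regression student, a teacher whose fits are systematically shrunk toward zero (one possible bias model), and a plain damped iteration standing in for the proximal scheme. None of this code is from the paper; `teacher_fit`, the step size, and the iteration count are all illustrative.

```python
# Minimal sketch of SM vs. a RaT-style iteration on synthetic data.
# Assumptions (not from the paper): ridge student, shrinkage-biased teacher.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)

def ridge(X, targets, lam=1e-2):
    # Closed-form ridge fit: argmin_w ||X w - targets||^2 + lam ||w||^2.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ targets)

def teacher_fit(targets):
    # Biased teacher: its fit to any target is systematically shrunk toward 0.
    return 0.5 * ridge(X, targets)

# SM (soft matching): the student regresses on the teacher's outputs,
# so it inherits the teacher's shrinkage bias in full.
w_sm = ridge(X, X @ teacher_fit(y))

# RaT-style iteration: the teacher only estimates the residual y - f_t(x).
# At a fixed point the teacher sees zero residual, so (under this bias
# model) the teacher's bias no longer contaminates the student.
w_rat = np.zeros(d)
for _ in range(100):
    w_rat = w_rat + teacher_fit(y - X @ w_rat)

for name, w in [("SM", w_sm), ("RaT", w_rat)]:
    print(f"{name}: parameter error = {np.linalg.norm(w - w_star):.3f}")
```

On this toy problem the SM student stays a constant distance from the truth while the RaT iterate converges to the ridge solution, mirroring the separation the abstract describes for kernel pairs.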




10 Appendix
10.1 Pseudo-code for DQN Pro

Neural Information Processing Systems

Below, we present the pseudo-code for DQN Pro. Notice that the difference between DQN and DQN Pro is minimal (highlighted in gray).

Hyper-parameters (shared):
Sticky actions: True
Optimizer: Adam (Kingma & Ba, 2015)
Network architecture: Nature DQN network (Mnih et al., 2015)
Random seeds: {0, 1, 2, 3, 4}

Rainbow hyper-parameters (shared):
Batch size: 64
Other: config file rainbow_aaai.gin

Theorem 2. Consider the PMPI algorithm specified by: ... We make two assumptions: 1. we assume an error in the policy evaluation step, as already stated in equation (4).

All results are averaged over 5 independent seeds.
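The excerpt stresses that DQN Pro differs only minimally from DQN. One plausible form such a minimal, proximal-style change could take is to augment the usual TD loss with a term that pulls the online network toward the target network. The PyTorch-style sketch below is a hypothetical rendering of that idea, not the paper's code; `dqn_pro_style_loss` and `prox_coef` are assumed names and values.

```python
# Hypothetical sketch: DQN loss plus a proximal term toward the target
# network. Illustrative only; not the paper's exact implementation.
import torch
import torch.nn.functional as F

def dqn_pro_style_loss(online_net, target_net, batch, gamma=0.99, prox_coef=0.1):
    # batch: (states, actions [LongTensor], rewards, next_states, done flags)
    s, a, r, s_next, done = batch

    # Standard DQN TD target, computed from the frozen target network.
    with torch.no_grad():
        target_q = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values

    q = online_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    td_loss = F.smooth_l1_loss(q, target_q)

    # Proximal term: keep the online parameters close to the target
    # parameters (the "minimal difference" from vanilla DQN).
    prox = sum(
        ((p - p_tgt.detach()) ** 2).sum()
        for p, p_tgt in zip(online_net.parameters(), target_net.parameters())
    )
    return td_loss + prox_coef * prox
```

With `prox_coef=0` this reduces to the vanilla DQN loss, which is one way a gradient-based proximal update can differ from DQN by only a few lines.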


Faster Deep Reinforcement Learning with Slower Online Network

Neural Information Processing Systems

Deep reinforcement learning algorithms often use two networks for value function optimization: an online network, and a target network that tracks the online network with some delay. Using two separate networks enables the agent to hedge against issues that arise when performing bootstrapping.
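For concreteness, here is a minimal sketch of the two standard ways a target network can track the online network with some delay: a periodic hard copy, or a Polyak (exponential moving average) update. The sync interval and `tau` value are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of delayed target-network tracking; assumed hyper-parameters.
import copy
import torch

online_net = torch.nn.Linear(4, 2)      # stand-in for a Q-network
target_net = copy.deepcopy(online_net)  # target starts as an exact copy

def hard_update(target_net, online_net):
    # Periodic hard sync: copy the online weights every K gradient steps.
    target_net.load_state_dict(online_net.state_dict())

def polyak_update(target_net, online_net, tau=0.005):
    # Soft sync: the target is an exponential moving average of the online
    # network, so it tracks the online network with a smooth delay.
    with torch.no_grad():
        for p_tgt, p in zip(target_net.parameters(), online_net.parameters()):
            p_tgt.mul_(1.0 - tau).add_(tau * p)

polyak_update(target_net, online_net)   # e.g., call every gradient step
hard_update(target_net, online_net)     # or instead: copy every K steps
```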