hessian trace
Scalable Multi-Objective and Meta Reinforcement Learning via Gradient Estimation

Zhang, Zhenshuo, Duan, Minxuan, Ye, Youran, Zhang, Hongyang R.

arXiv.org Artificial Intelligence

We study the problem of efficiently estimating policies that simultaneously optimize multiple objectives in reinforcement learning (RL). Given $n$ objectives (or tasks), we seek the optimal partition of these objectives into $k \ll n$ groups, where each group comprises related objectives that can be trained together. This problem arises in applications such as robotics, control, and preference optimization in language models, where learning a single policy for all $n$ objectives becomes suboptimal as $n$ grows. We introduce a two-stage procedure -- meta-training followed by fine-tuning -- to address this problem. We first learn a meta-policy for all objectives using multitask learning. Then, we adapt the meta-policy to multiple randomly sampled subsets of objectives. The adaptation step leverages a first-order approximation property of well-trained policy networks, which is empirically verified to be accurate within a 2% error margin across various RL environments. The resulting algorithm, PolicyGradEx, efficiently estimates an aggregate task-affinity score matrix given a policy evaluation algorithm. Based on the estimated affinity score matrix, we cluster the $n$ objectives into $k$ groups by maximizing the intra-cluster affinity scores. Experiments on three robotic control benchmarks and the Meta-World benchmark demonstrate that our approach outperforms state-of-the-art baselines by 16% on average, while delivering a speedup of up to $26\times$ relative to performing full training to obtain the clusters. Ablation studies validate each component of our approach. For instance, compared with random grouping and gradient-similarity-based grouping, our loss-based clustering yields an improvement of 19%. Finally, we analyze the generalization error of policy networks by measuring the Hessian trace of the loss surface, which gives non-vacuous measures relative to the observed generalization errors.
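The first-order approximation the abstract relies on can be illustrated on a toy example. This is a minimal sketch, not the paper's method: a hypothetical quadratic loss stands in for a policy network's objective, and we check that a gradient-based linear estimate of the loss after a small fine-tuning step is accurate, in the spirit of the 2% error margin reported above.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10

# Hypothetical stand-in for a policy loss: a smooth, positive-definite
# quadratic f(theta) = 0.5 * theta^T Q theta (not the paper's objective).
A = rng.standard_normal((d, d))
Q = A @ A.T + d * np.eye(d)

def f(theta):
    return 0.5 * theta @ Q @ theta

def grad_f(theta):
    return Q @ theta

theta = rng.standard_normal(d)
delta = 1e-3 * rng.standard_normal(d)  # a small fine-tuning update

# First-order (Taylor) estimate: f(theta + delta) ~ f(theta) + g . delta
exact = f(theta + delta)
linear = f(theta) + grad_f(theta) @ delta

rel_err = abs(exact - linear) / abs(exact)
```

Because the error of the linear estimate is second order in the step size, `rel_err` is tiny for small `delta`; this is what makes gradient-based extrapolation across task subsets cheap compared with re-training.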




Review for NeurIPS paper: HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks

Neural Information Processing Systems

Summary and Contributions: This paper suggests that the Hessian trace can be a good metric for automating the choice of the number of quantization bits for each layer, unlike previous attempts that use the top Hessian eigenvalue. Mathematical analysis supporting the claim that the Hessian trace is better than the top Hessian eigenvalue is provided, and memory footprint and model accuracy are compared on several models using the ImageNet database. This paper also shows that Hessian trace computations can be made tractable using Hutchinson's algorithm. Strengths: - Hessian-related metrics have been widely adopted to characterize the differing sensitivities of layers. This paper compares a few different Hessian-related approaches and provides mathematical analysis to explain why the Hessian trace can be considered a good metric for choosing the number of quantization bits.
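Hutchinson's algorithm, which the review refers to, estimates the trace of a matrix from matrix-vector products alone, so the Hessian never needs to be formed explicitly. A minimal NumPy sketch (using an explicit random symmetric matrix in place of a real network Hessian, which in practice would be accessed through Hessian-vector products):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50

# Hypothetical example: a fixed symmetric matrix standing in for the
# Hessian of a layer's loss; only products H @ z are actually needed.
A = rng.standard_normal((d, d))
H = (A + A.T) / 2

def hutchinson_trace(H, num_samples=4000, rng=rng):
    """Estimate trace(H) as the average of z^T H z over Rademacher z,
    using E[z z^T] = I so that E[z^T H z] = trace(H)."""
    est = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=H.shape[0])
        est += z @ H @ z
    return est / num_samples

exact = np.trace(H)
approx = hutchinson_trace(H)
```

Averaging over a few thousand Rademacher probes brings the estimate within a small absolute error of the exact trace; in deep networks the same loop runs with autodiff Hessian-vector products instead of a stored matrix.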


HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks

Neural Information Processing Systems

Furthermore, we present results for object detection on Microsoft COCO, where we achieve 2.6 higher mAP than direct uniform quantization and 1.6 higher mAP than the recently proposed method of


A Experimental Setup in Detail

Neural Information Processing Systems

We implement our attack framework using Python 3.7.3 and PyTorch 1.7.1. For all our attacks in 4.1, 4.2, 4.3, and 4.5, we use symmetric quantization. In 4.4, where we examine the transferability of our attacks, we use the same [Banner et al., 2019] while re-training clean models. Prior work showed that a model that is less sensitive to perturbations of its parameters or activations will have less accuracy degradation after quantization. Alizadeh et al. [2020] look into the decision boundary of a model to examine. In Eqn. 2, we use label smoothing to reduce the confidence of a model's predictions. Clean is a pre-trained model. Table 6 shows our results. We experiment with an AlexNet model trained on CIFAR10.
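The symmetric quantization mentioned in this setup can be sketched generically as follows. This is a minimal illustration, not the paper's exact configuration: weights are mapped to signed integers with a single zero-centered scale per tensor, where `num_bits` is an assumed parameter.

```python
import numpy as np

def symmetric_quantize(w, num_bits=8):
    """Symmetric (zero-centered) uniform quantization: map weights to
    integers in [-(2^(b-1) - 1), 2^(b-1) - 1] with one shared scale."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.round(w / scale).astype(np.int32)
    return q, scale

w = np.array([-1.0, -0.5, 0.0, 0.25, 1.0])
q, scale = symmetric_quantize(w, num_bits=8)
w_hat = q * scale  # dequantized approximation of the original weights
```

Because zero maps exactly to the integer 0 and the grid is symmetric, the round-trip error per weight is bounded by half the scale, which is the sensitivity that Hessian-based metrics try to weight per layer.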