
Appendix for Bayesian Active Causal Discovery with Multi-Fidelity Experiments

Neural Information Processing Systems

We then compute the constraint term. The algorithm of the Licence method for the single-target intervention scenario is shown in Algorithm 1. The experimental baselines are detailed as follows. AIT [11] is an active learning method that utilizes an f-score to select intervention queries. REAL fidelity means the model always chooses the highest fidelity to conduct experiments.
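The query-selection and fidelity-selection baselines above can be illustrated with a minimal sketch. All names here (`select_intervention`, `real_fidelity_baseline`, the toy score table) are hypothetical placeholders, not the paper's implementation; the only assumptions are that each candidate intervention has an acquisition score and that REAL always picks the top fidelity.

```python
def select_intervention(candidates, score_fn):
    """Active-learning step: pick the candidate intervention target
    with the highest acquisition score (e.g., an f-score in AIT)."""
    return max(candidates, key=score_fn)

def real_fidelity_baseline(fidelities):
    """REAL baseline: always run the experiment at the highest fidelity."""
    return max(fidelities)

# Toy usage with a hypothetical score table.
scores = {"X1": 0.2, "X2": 0.9, "X3": 0.5}
target = select_intervention(scores, lambda t: scores[t])
fidelity = real_fidelity_baseline([1, 2, 3])
```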


How to Fine-tune the Model: Unified Model Shift and Model Bias Policy Optimization

Neural Information Processing Systems

Designing and deriving effective model-based reinforcement learning (MBRL) algorithms with a performance improvement guarantee is challenging, mainly due to the high coupling between model learning and policy optimization. Many prior methods that rely on return discrepancy to guide model learning ignore the impact of model shift, which can lead to performance deterioration due to excessive model updates. Other methods use a performance difference bound to explicitly consider model shift. However, these methods rely on a fixed threshold to constrain model shift, resulting in a heavy dependence on the threshold and a lack of adaptability during the training process. In this paper, we theoretically derive an optimization objective that unifies model shift and model bias and then formulate a fine-tuning process. This process adaptively adjusts the model updates to obtain a performance improvement guarantee while avoiding model over-fitting.
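The idea of adaptively gating model updates can be sketched as follows. This is not the paper's derivation; it is a hypothetical decision rule (the name `accept_model_update` and the scalar `shift_penalty` are assumptions) that only illustrates the trade-off the abstract describes: an update is worthwhile when the reduction in model bias outweighs the cost of the model shift, so no fixed shift threshold is needed.

```python
def accept_model_update(loss_before, loss_after, shift_penalty):
    """Hypothetical adaptive rule: accept a model update only if the
    reduction in model bias (validation loss) exceeds the penalty
    attributed to the induced model shift, keeping the implied
    performance-improvement bound positive."""
    bias_reduction = loss_before - loss_after
    return bias_reduction > shift_penalty

# A large bias reduction justifies the shift; a marginal one does not.
ok = accept_model_update(1.0, 0.5, shift_penalty=0.3)
skip = accept_model_update(1.0, 0.9, shift_penalty=0.3)
```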



Efficient Meta Neural Heuristic for Multi-Objective Combinatorial Optimization (Appendix)

A Model architecture. The architecture of the base model in meta-learning is the same as POMO [26].

Neural Information Processing Systems

Each sublayer adds a skip connection (ADD) and batch normalization (BN). The decoder sequentially chooses a node according to a probability distribution produced from the node embeddings to construct a solution. The scaled symmetric sampling method is shown in Algorithm 2. The scaling factor and the uniform division of the weight space are illustrated as follows. Thus, its approximate Pareto-optimal solutions are commonly pursued. Vehicles must serve all the customers and finally return to the depot.
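Uniformly dividing the weight space of an m-objective problem is commonly done with a simplex-lattice (Das-Dennis style) construction: every weight vector has components that are multiples of 1/h and sum to 1. The sketch below is an assumption about what "uniform division" means here, not the paper's code; the function name `uniform_weights` is hypothetical.

```python
from itertools import combinations

def uniform_weights(m, h):
    """Enumerate all m-dimensional weight vectors whose components are
    non-negative multiples of 1/h summing to 1, via a stars-and-bars
    placement of m-1 dividers among h+m-1 slots."""
    vectors = []
    for dividers in combinations(range(h + m - 1), m - 1):
        prev = -1
        w = []
        for d in dividers:
            w.append((d - prev - 1) / h)  # stars between consecutive dividers
            prev = d
        w.append((h + m - 1 - prev - 1) / h)  # stars after the last divider
        vectors.append(w)
    return vectors

# For two objectives with h = 2 this yields [0,1], [0.5,0.5], [1,0].
bi_objective = uniform_weights(2, 2)
```

The number of vectors is C(h+m-1, m-1), so the granularity h trades off coverage of the Pareto front against the number of subproblems.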



KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training

Neural Information Processing Systems

This paper proposes a method for hiding the least-important samples during the training of deep neural networks to increase efficiency, i.e., to reduce the cost of training.
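The core selection step (keep the important samples, hide the rest for an epoch) can be sketched with a simple loss-ranking rule. This is a minimal illustration under the assumption that per-sample loss is the importance signal; the function name `select_visible_samples` and the `hide_fraction` parameter are hypothetical, not KAKURENBO's actual interface.

```python
def select_visible_samples(losses, hide_fraction):
    """Rank samples by per-sample loss and hide the lowest-loss
    (least important) fraction; return the indices kept for training."""
    n_hide = int(len(losses) * hide_fraction)
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(order[n_hide:])  # indices of the samples that remain visible

# With losses [0.1, 0.9, 0.5, 0.2] and half hidden, samples 1 and 2 remain.
visible = select_visible_samples([0.1, 0.9, 0.5, 0.2], hide_fraction=0.5)
```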