
Neural Information Processing Systems

To be consistent with the accuracy definition, we denote the correctness of s_tj for instance t as sim(s_tj, r_t) = (√2 − distance(s_tj, r_t)) / √2, where sim(s_tj, r_t) lies in the range [0, 1] and distance(s_tj, r_t) lies in the range [0, √2]; √2 is the largest Euclidean distance in the probability simplex. Given a test dataset I of n instances, the correctness of a learner SL_j on I is corr_SLj = (1/n) Σ_{t=1}^{n} sim(s_tj, r_t). In this section, we define multiple metrics for consistency, accuracy, and correct-consistency in detail. Figure 1 shows how the metrics are computed in our experiments. We have created a git repository for this work, which will be posted upon the acceptance and publication of this work.
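The correctness metric above can be sketched in a few lines of NumPy; the function and variable names here are illustrative, not taken from the paper's code:

```python
import numpy as np

SQRT2 = np.sqrt(2.0)  # largest Euclidean distance between two points in the probability simplex

def sim(s, r):
    """Correctness of prediction s against reference distribution r, in [0, 1]."""
    return (SQRT2 - np.linalg.norm(s - r)) / SQRT2

def correctness(S, R):
    """Average correctness of a learner over a test set (corr_SLj)."""
    return float(np.mean([sim(s, r) for s, r in zip(S, R)]))

# Example: two 3-class probability vectors against one-hot references
S = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.8, 0.1])]
R = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
score = correctness(S, R)
```

A perfect prediction gives sim = 1, and the farthest-apart pair of simplex vertices gives sim = 0, matching the stated [0, 1] range.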


Enhancing Knowledge Transfer for Task Incremental Learning with Data-free Subnetwork
Qiang Gao

Neural Information Processing Systems

DSN primarily transfers knowledge from previously learned tasks to the newly arriving task by selecting the affiliated weights of a small set of neurons to activate, including neurons reused from prior tasks, via neuron-wise masks. It also transfers potentially valuable knowledge back to earlier tasks via data-free replay.
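The neuron-wise masking idea can be illustrated with a minimal sketch: each task owns a binary mask over a layer's output neurons, and the new task's mask may reuse neurons activated by prior tasks. The masks, layer sizes, and names below are hypothetical, not DSN's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense layer: 4 inputs, 8 output neurons.
W = rng.normal(size=(4, 8))

# Binary neuron-wise masks: which output neurons each task activates.
mask_task1 = np.array([1, 1, 0, 0, 1, 0, 0, 0])  # neurons learned for task 1
mask_task2 = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # new task reuses neurons 0 and 4

def forward(x, W, mask):
    # Only the affiliated weights of masked-in neurons contribute to the output.
    return (x @ W) * mask

x = rng.normal(size=(4,))
y = forward(x, W, mask_task2)  # masked-out positions are exactly zero
```

Reused neurons (here, indices 0 and 4) let the new task build on prior-task knowledge while the rest of its mask selects fresh capacity.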


Model-Free Active Exploration in Reinforcement Learning

Neural Information Processing Systems

We study the problem of exploration in Reinforcement Learning and present a novel model-free solution. We adopt an information-theoretical viewpoint and start from the instance-specific lower bound of the number of samples that have to be collected to identify a nearly-optimal policy.


Latent Template Induction with Gumbel-CRFs: Appendix

Neural Information Processing Systems

Papandreou and Yuille [4] proposed the Perturb-and-MAP Random Field, an efficient sampling method for general Markov Random Fields. We compare the detailed structure of the gradients of each estimator. All gradients are formed as a summation over the steps. The Gumbel-CRF and PM-MRF estimators can be decomposed with a pathwise term, where we take the gradient of f w.r.t. Since the official test set is not publicly available, we use the same training/validation/test split as Fu et al. [1].
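The core Perturb-and-MAP idea can be sketched briefly: add i.i.d. Gumbel noise to the potentials, then take the MAP of the perturbed model to draw a sample. The sketch below uses independent unary potentials only, where per-variable argmax is the exact MAP and this reduces to the Gumbel-max trick; for a general MRF with pairwise terms, the MAP step requires a combinatorial solver and low-order perturbations yield only approximate samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_and_map(unary_potentials):
    """Draw one approximate sample: perturb potentials with i.i.d. Gumbel
    noise, then take the MAP (independent per-variable argmax here)."""
    gumbel = -np.log(-np.log(rng.uniform(size=unary_potentials.shape)))
    return np.argmax(unary_potentials + gumbel, axis=-1)

# Toy model: 5 variables, 3 states each
theta = rng.normal(size=(5, 3))
sample = perturb_and_map(theta)  # one state index per variable
```

Repeated calls yield different samples whose distribution tracks the softmax of the potentials, which is what makes the construction useful as a sampling-based gradient estimator.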