
SafeDICE: Offline Safe Imitation Learning with Non-Preferred Demonstrations

Neural Information Processing Systems

In this paper, we present SafeDICE, a hyperparameter-free offline safe imitation learning (IL) algorithm that learns a safe policy by leveraging non-preferred demonstrations in the space of stationary distributions. Our algorithm directly estimates the stationary distribution corrections of a policy that imitates the demonstrations while excluding the non-preferred behavior.
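To make the stationary-distribution-correction idea concrete, here is a toy PyTorch sketch. The network, the loss, and all names (CorrectionNet, dice_style_loss) are hypothetical illustrations of a DICE-style re-weighting scheme, not SafeDICE's actual objective.

```python
import torch
import torch.nn as nn

class CorrectionNet(nn.Module):
    """Estimates log stationary-distribution corrections log w(s, a)."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def dice_style_loss(corr_net, obs_all, act_all, obs_bad, act_bad):
    """Toy DICE-style objective (hypothetical): keep the correction
    weights w = exp(log w) roughly normalized over the mixed
    demonstrations while suppressing weight mass on (s, a) pairs
    from the non-preferred demonstrations. SafeDICE itself solves a
    principled distribution-matching problem; this only illustrates
    the re-weighting idea."""
    w_all = corr_net(obs_all, act_all).exp()   # weights on mixed demos
    w_bad = corr_net(obs_bad, act_bad).exp()   # weights on non-preferred
    return (w_all.mean() - 1.0).pow(2) + w_bad.mean()
```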


Appendix Figure A.1: Input spikes (panel A: the input spikes, x).

Neural Information Processing Systems

The inputs are 300 Poisson neurons: the first 100 encode the whisker stimulus, the next 100 encode the auditory cue, and the last 100 act as an extra noise source for our model. Of the 300 input neurons, 60 are inhibitory (red). The input neurons project unrestrictedly to the whole RSNN. The baseline firing rate of all input neurons is 5 Hz. The whisker stimulus and the auditory cue are each encoded by an increase in firing rate lasting 10 ms, starting 4 ms after the onset of the actual stimulus.
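This input encoding is straightforward to simulate. Below is a minimal NumPy sketch, assuming a 1 ms time step, a 40 Hz elevated rate during the encoding window, and stimulus onsets at 50 ms and 100 ms (none of which are specified in the excerpt); only the population layout, the 5 Hz baseline, the 10 ms window, and the 4 ms delay come from the text.

```python
import numpy as np

def poisson_inputs(T_ms=200, dt_ms=1.0, base_hz=5.0, stim_hz=40.0,
                   whisker_on=50, auditory_on=100, seed=0):
    """300 Poisson input neurons: units 0-99 encode the whisker
    stimulus, 100-199 the auditory cue, and 200-299 are pure noise.
    Rates rise from the 5 Hz baseline for 10 ms, starting 4 ms after
    stimulus onset. stim_hz, dt_ms, and the onsets are assumptions."""
    rng = np.random.default_rng(seed)
    steps = int(T_ms / dt_ms)
    rates = np.full((steps, 300), base_hz)
    for onset, grp in ((whisker_on, slice(0, 100)),
                       (auditory_on, slice(100, 200))):
        t0 = int((onset + 4) / dt_ms)      # 4 ms encoding delay
        t1 = t0 + int(10 / dt_ms)          # 10 ms elevated window
        rates[t0:t1, grp] = stim_hz
    # Bernoulli approximation to a Poisson process per time bin.
    spikes = rng.random((steps, 300)) < rates * dt_ms / 1000.0
    # 60 of the 300 units are inhibitory; the excerpt does not say
    # which, so no sign assignment is made here.
    return spikes
```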





Domain Re-Modulation for Few-Shot Generative Domain Adaptation
Yi Wu, Ziqiang Li (University of Science and Technology of China); Chaoyue Wang, Heliang Zheng, Shanshan Zhao (JD Explore Academy); Bin Li

Neural Information Processing Systems

In this study, we delve into the task of few-shot Generative Domain Adaptation (GDA), which involves transferring a pre-trained generator from one domain to a new domain using only a few reference images. Inspired by the way human brains acquire knowledge in new domains, we present an innovative generator structure called Domain Re-Modulation (DoRM).
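The abstract does not spell out DoRM's mechanism, so the following PyTorch sketch shows only the general few-shot GDA recipe it builds on: freeze the pre-trained source generator and train a small added module on the few reference images. The ReModulation module, the StyleGAN-like mapping/synthesis split, and adapt_step are all assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReModulation(nn.Module):
    """Hypothetical re-modulation head: the pre-trained source
    generator stays frozen; only this small module, which produces
    an additive shift to the style code, is trained on the few
    target-domain reference images."""
    def __init__(self, style_dim: int = 512):
        super().__init__()
        self.shift = nn.Sequential(
            nn.Linear(style_dim, style_dim), nn.ReLU(),
            nn.Linear(style_dim, style_dim),
        )

    def forward(self, w):
        # New-domain style = source style + learned domain shift.
        return w + self.shift(w)

def adapt_step(G_src, remod, D, z, opt_g):
    """One generator-side adaptation step (sketch): non-saturating
    GAN loss against a discriminator D fine-tuned on the reference
    images. G_src.mapping / G_src.synthesis follow a StyleGAN-like
    split and are assumptions about the interface."""
    with torch.no_grad():
        w = G_src.mapping(z)              # frozen source mapping
    fake = G_src.synthesis(remod(w))      # re-modulated styles
    loss = F.softplus(-D(fake)).mean()    # generator wants D(fake) high
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```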


A Constrained sampling via post-processed denoiser

In this section, we provide more details on the apparatus necessary to perform a posteriori conditional sampling.

Neural Information Processing Systems

Eq. (6) suggests that the SDE drift corresponding to the score may be broken down into three steps: 1. …

However, in practice this modification creates a "discontinuity" between the constrained and unconstrained components, leading to erroneous correlations between them in the generated samples. The post-processing uses a "learning rate" that is determined empirically, such that the loss value is driven adequately close to zero; it therefore needs to be tuned empirically. The correction in Eq. (16) is equivalent to imposing a Gaussian likelihood on … (Remark 2). The post-processing presented in this section is similar to […].

In this section, we present the most relevant components for completeness and better reproducibility.

B.2 Sampling

The reverse SDE in Eq. (5) used for sampling may be rewritten in terms of the denoiser D. As stated in Section 4.1 of the main text, … The energy-based metrics are already defined in Eq. (12) and Eq. ….
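The surviving fragments describe a general recipe: rewrite the reverse SDE in terms of the denoiser D, then impose the constraint by post-processing the denoiser output with a few empirically tuned gradient steps (the Gaussian-likelihood view of Eq. (16)). A minimal NumPy sketch of that recipe follows; the interfaces of D and constraint, the step size eta, and the DDIM-style deterministic update are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def postprocess_denoiser(D, x, sigma, constraint, eta=0.1, n_steps=10):
    """Post-process the denoised estimate x0 = D(x, sigma) with a few
    gradient steps on a quadratic constraint penalty 0.5 * c(x0)^2,
    which acts like imposing a Gaussian likelihood on the constraint.
    `eta` is the empirically tuned "learning rate" mentioned in the
    text; `constraint` returning (c(x0), grad c(x0)) is an assumed
    interface."""
    x0 = D(x, sigma)
    for _ in range(n_steps):
        val, grad = constraint(x0)
        x0 = x0 - eta * val * grad     # descend 0.5 * c(x0)^2
    return x0

def sample(D, constraint, sigmas, shape, seed=0):
    """Deterministic sampler in the denoiser parameterization
    (score(x, sigma) = (D(x, sigma) - x) / sigma^2), applying the
    post-processed denoiser at every noise level. `sigmas` is a
    decreasing noise schedule; the update is a DDIM-style step,
    assumed here rather than taken from the paper."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape) * sigmas[0]
    for s, s_next in zip(sigmas[:-1], sigmas[1:]):
        x0 = postprocess_denoiser(D, x, s, constraint)
        x = x0 + (s_next / s) * (x - x0)   # shrink noise toward x0
    return x
```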