Supplementary to "DSelect-k: Differentiable Selection in the Mixture of Experts with Applications to Multi-Task Learning"

Neural Information Processing Systems

MTL: In MTL, deep learning-based architectures that perform soft-parameter sharing, i.e., share model parameters partially, are proving to be effective at exploiting both the commonalities and differences among tasks [6]. Our work is also related to [5], who introduced "routers" (similar to gates) that can choose which layers, or components of layers, to activate per task. The routers in the latter work are not differentiable and require reinforcement learning. To construct α, there are two cases to consider: (i) s = k and (ii) s < k. If s = k, then set α_i = log(w_i) for i ∈ [k]. Our base case is for t = 1.
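The s = k case above amounts to inverting the softmax: if the target gate weights w lie on the simplex, choosing logits α_i = log(w_i) makes softmax(α) reproduce w exactly. A minimal numpy sketch of this identity (the weight vector `w` here is an illustrative example, not taken from the paper):

```python
import numpy as np

def softmax(a):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical gate weights; for the s = k case they must sum to 1.
w = np.array([0.5, 0.3, 0.2])

# Setting alpha_i = log(w_i) inverts the softmax, since
# softmax(log w)_i = w_i / sum_j w_j = w_i when w lies on the simplex.
alpha = np.log(w)

assert np.allclose(softmax(alpha), w)
```

The s < k case needs a different construction (not reproduced in this snippet), since fewer logits than weights cannot invert the softmax directly.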


Supplementary to "Instance-dependent Label-noise Learning under a Structural Causal Model"

Neural Information Processing Systems

Let S be the noisy training set, and d be the dimension of an instance x. Let y1 and z1 be the estimated clean label and latent representation for the instance x, respectively, produced by the first branch. As mentioned in our main paper (see Section 3.2), the negative ELBO loss is to minimize: 1) a reconstruction loss between each instance x and p̂_{θ1}(x, y1); 2) … For the co-teaching loss, we directly follow Han et al. [1].
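The reconstruction term above is the standard first component of a negative ELBO. As a generic illustration (not the authors' implementation; the function names, the squared-error reconstruction, and the standard-normal prior are all assumptions for the sketch), a VAE-style negative ELBO pairs a reconstruction loss with a KL regularizer on the latent posterior:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), the usual VAE latent regularizer.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def negative_elbo(x, x_hat, mu, logvar):
    # Reconstruction term between the instance x and its reconstruction x_hat
    # (standing in for the decoder output), plus the KL term.
    recon = np.sum((x - x_hat) ** 2)
    return recon + gaussian_kl(mu, logvar)

x = np.array([0.2, 0.8, 0.5])
# With a perfect reconstruction and a standard-normal posterior, both terms vanish.
loss = negative_elbo(x, x_hat=x, mu=np.zeros(2), logvar=np.zeros(2))
```

Minimizing this quantity simultaneously improves the reconstruction and keeps the latent representation close to the prior.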