Supplementary Materials for: "Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?"
Furthermore, let φ: X → Z be an encoder. Let B ⊆ Z be a subset of the invariant (latent) space, and suppose that we have marginal invariance in the latent space: P_S(φ(X) ∈ B) = P_T(φ(X) ∈ B) for all B. Define the pre-image of B as A = {a ∈ X : φ(a) ∈ B}. We followed the procedure in [2] and used a mixture kernel function of q RBF kernels, κ(z_1, z_2) = Σ_{i=1}^q η_i exp{−||z_1 − z_2||² / σ_i²}, where σ_i² is the kernel width of the i-th kernel and η_i is a mixing weight, which we set to 1/q.
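The mixture-of-RBF kernel above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the convention that `widths` holds the squared kernel widths σ_i² are our assumptions.

```python
import numpy as np

def mixture_rbf_kernel(z1, z2, widths, weights=None):
    """Mixture of q RBF kernels: sum_i eta_i * exp(-||z1 - z2||^2 / sigma_i^2).

    widths  : sequence of squared kernel widths sigma_i^2 (assumed convention)
    weights : mixing weights eta_i; defaults to the uniform 1/q used in the text
    """
    widths = np.asarray(widths, dtype=float)
    q = len(widths)
    if weights is None:
        weights = np.full(q, 1.0 / q)  # eta_i = 1/q
    sq_dist = np.sum((np.asarray(z1, float) - np.asarray(z2, float)) ** 2)
    return float(np.sum(weights * np.exp(-sq_dist / widths)))
```

With uniform weights the kernel evaluates to 1 for identical inputs (the exponentials are all 1 and the weights sum to 1), and decays with squared distance at a rate controlled by each σ_i².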
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- Oceania > Australia > New South Wales > Sydney (0.04)
Supplementary Materials for "Private Set Generation with Discriminative Information"
To compute the privacy cost of our approach, we numerically compute D_α(M(D) ‖ M(D′)) in Definition A.1 for a range of orders α [9, 14] in each training step that requires access to the real gradient g_θ^D. In comparison to normal non-private training, the major part of the additional memory and computation cost is introduced by the DP-SGD [1] step (for the per-sample gradient computation) that sanitizes the parameter gradient on real data, while the other steps (including the update on S and the updates of F(·; θ) on S) are equivalent to multiple calls of the normal non-private forward and backward passes (whose costs have lower magnitude than the DP-SGD step).

GS-WGAN [3]: We adopt the default configuration provided by the official implementation (ε = 10): subsampling rate = 1/1000, DP noise scale σ = 1.07, batch size = 32. Following [3], we pretrain (warm-start) the model for 2K iterations and subsequently train for 20K iterations.

The experiments presented in Section 5.2 of the main paper correspond to the class-incremental learning setting [10], where the data partition at each stage contains data from disjoint subsets of label classes.
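The DP-SGD sanitization step referenced above (clip each per-sample gradient, average, add Gaussian noise) can be sketched as below. This is a generic illustration of the standard mechanism, not the paper's code; the function name, the `rng` parameter, and the NumPy setting are our assumptions.

```python
import numpy as np

def sanitize_gradients(per_sample_grads, clip_norm, noise_scale, rng=None):
    """DP-SGD gradient sanitization sketch.

    per_sample_grads : array of shape (batch, dim), one gradient per example
    clip_norm        : L2 clipping bound C
    noise_scale      : noise multiplier sigma (std of noise is sigma * C)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.asarray(per_sample_grads, dtype=float)
    # Clip each per-sample gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = g * factors
    # Add Gaussian noise to the summed gradient, then average over the batch.
    noise = rng.normal(0.0, noise_scale * clip_norm, size=g.shape[1])
    return (clipped.sum(axis=0) + noise) / g.shape[0]
```

Per-sample gradients (rather than the usual batch-averaged gradient) are what makes this step the dominant memory and compute overhead relative to non-private training, as noted in the text.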
Supplementary Materials for "POLY-HOOT: Monte-Carlo Planning in Continuous Space MDPs with Non-Asymptotic Analysis"

A Algorithm Details
Finally, define X_ε ≜ {x ∈ X : f(x) ≥ f* − ε} to be the set of arms that are ε-close to optimal. Note that with the depth limitation H, it is possible that the nodes at depth H might be played more than once at different rounds. Let T_1 be the set of nodes above depth H that are descendants of nodes in I_H. In the following, we analyze each of the four parts individually. To proceed further, we first need to state several definitions that are useful throughout.
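For a finite set of arms, the set X_ε can be computed directly. The snippet below is a toy illustration only; the reading of the garbled definition as f(x) ≥ f* − ε (where f* is the optimal value) is our assumption, and the function name is invented for the example.

```python
def eps_close_arms(arms, f, eps):
    """Return X_eps = {x in arms : f(x) >= f_star - eps}, with f_star = max f."""
    f_star = max(f(x) for x in arms)
    return [x for x in arms if f(x) >= f_star - eps]
```

For instance, with arms {0, 1, 2, 3} and f(x) = −|x − 2|, the optimum is f* = 0 at x = 2, so the 1-close set is {1, 2, 3}.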