Supplementary Material for DDF-HO: Hand-Held Object Reconstruction via Conditional Directed Distance Field

Neural Information Processing Systems

1 Network Architecture

The 2D ray sampling process is depicted in the main manuscript. The total number of parameters of our network is 25M. In Section 3.4 of the main manuscript, we introduce the 3D intersection-aware hand feature; this process enables the extraction of global information from the hand joints. Our training process involves five distinct types of data samples; in this section, we provide the corresponding table.
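To make the queried quantity concrete, here is a minimal sketch of a conditional directed-distance-field (DDF) query; the layer sizes and the two-value output head are our illustrative assumptions, not the paper's 25M-parameter architecture:

    # Minimal sketch of a conditional DDF query (NOT the paper's network):
    # given a 3D point, a unit ray direction, and conditioning features,
    # an MLP predicts whether the ray hits the object and the hit distance.
    import torch
    import torch.nn as nn

    class ConditionalDDF(nn.Module):
        def __init__(self, feat_dim=256, hidden=512):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3 + 3 + feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 2),  # (hit logit, distance)
            )

        def forward(self, xyz, direction, cond):
            out = self.mlp(torch.cat([xyz, direction, cond], dim=-1))
            hit_prob = torch.sigmoid(out[..., 0])  # does the ray intersect?
            distance = torch.relu(out[..., 1])     # non-negative distance
            return hit_prob, distance

    ddf = ConditionalDDF()
    hit, dist = ddf(torch.rand(8, 3), torch.rand(8, 3), torch.rand(8, 256))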


A Injective Change-of-Variable Formula and Stacking Injective Flows

We first derive (5) from (3). By the chain rule, for a composition of two injective flows g_φ = g_2 ∘ g_1, we have J[g_φ](z) = J[g_2](g_1(z)) J[g_1](z).
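For reference, the change-of-variable formula the derivation targets is the standard one for an injective map onto a d-dimensional manifold (a restatement of the textbook result, with symbols matching the snippet above, not copied from the paper):

    % Injective change of variable for x = g_phi(z), g_phi: R^d -> R^D, d < D:
    p_X(x) = p_Z(z) \, \det\!\bigl( J[g_\phi](z)^\top J[g_\phi](z) \bigr)^{-1/2}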

Neural Information Processing Systems

We summarize our methods for computing/estimating the gradient of the log determinant arising in maximum likelihood training of rectangular flows. Algorithm 2 shows the exact method, where jvp(f, z, e_i) denotes computing J[f](z) e_i using forward-mode AD, and e_i ∈ R^d is the i-th standard basis vector, i.e. a one-hot vector with a 1 on its i-th coordinate. Note that ∂/∂θ log det A_θ is computed using backpropagation. The for loop is easily parallelized in practice.
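A minimal sketch of the exact method follows; the function name exact_half_logdet and the toy map f are ours for illustration, not Algorithm 2 verbatim. It builds the Jacobian column by column with forward-mode JVPs, then takes half the log determinant of the Gram matrix:

    # Minimal sketch of the exact log-det-Jacobian computation for an
    # injective (rectangular) map f: R^d -> R^D. Names are ours, not the
    # paper's; the loop over basis vectors is easily parallelized.
    import torch
    from torch.autograd.functional import jvp

    def exact_half_logdet(f, z):
        d = z.shape[0]
        basis = torch.eye(d)
        # jvp(f, z, e_i) yields J[f](z) e_i, the i-th Jacobian column
        cols = [jvp(f, (z,), (basis[i],))[1] for i in range(d)]
        J = torch.stack(cols, dim=1)            # D x d Jacobian
        return 0.5 * torch.logdet(J.T @ J)      # half log det of d x d Gram matrix

    # Toy injective map from R^2 into R^3
    W = torch.tensor([[1.0, 0.5], [0.2, 1.5], [0.3, 0.7]])
    f = lambda z: torch.tanh(W @ z)
    print(exact_half_logdet(f, torch.ones(2)))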




Supplementary Information 10: Relation between low-pass filter and

Neural Information Processing Systems

Eqn. 3 represents the solution for a stationary energy with respect to the prospective voltage ŭ. Here we consider a generalization of the energy function from the main manuscript that includes arbitrary "connectivity functions" f with parameters θ. Pseudo-code for our vanilla implementation can be found in Algorithm 1 (pseudo-code for the multi-layer implementation of Latent Equilibrium (LE)). Figure 5 shows learning to mimic a teacher microcircuit with LE; for the interneurons, the somatic membrane potentials of the pyramidal neurons in the layer above serve as targets. First, the output rate of the neurons must depend on the prospective voltage: ϕ(u) → ϕ(ŭ). Note that this also includes the rates in the calculation of dendritic membrane potentials. Learning is split into two stages: first, the learning of the so-called self-predicting state, and afterwards the learning of the actual task.
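To make the role of the prospective voltage concrete, here is a minimal sketch (our illustration, not the paper's Algorithm 1) of a single leaky unit whose transmitted rate is computed on ŭ = u + τ·du/dt rather than on u, which exactly compensates the membrane's low-pass lag:

    # Minimal sketch (not the paper's Algorithm 1): a leaky unit transmitting
    # the rate of its prospective voltage u_prosp = u + tau * du/dt.
    import numpy as np

    tau, dt = 10.0, 0.1          # membrane time constant, integration step
    phi = np.tanh                # rate nonlinearity

    u = 0.0
    for step in range(1000):
        drive = np.sin(0.05 * step * dt)   # time-varying dendritic input
        dudt = (drive - u) / tau           # leaky-integrator dynamics
        u += dt * dudt
        u_prosp = u + tau * dudt           # prospective voltage: tracks `drive`
        rate = phi(u_prosp)                # rates depend on u_prosp, not u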



Supplementary for UDH: Universal Deep Hiding for Steganography, Watermarking, and Light Field Messaging

Neural Information Processing Systems

This supplementary content is mainly organized in the order in which it is referenced in the main manuscript. The architectures of the R networks are shown in Table 3, and the training curve is shown in Figure 1. B.1 Where is the secret image encoded? Is every channel equally important?
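As a reminder of the pipeline being probed, here is a minimal sketch of UDH's cover-agnostic encoding; the two-layer networks standing in for H and R are our toy simplification, not the architectures of Table 3:

    # Minimal sketch of UDH's universal encoding (toy networks): the hiding
    # network H never sees the cover; its output perturbation is added to
    # any cover image, and R reveals the secret from the container alone.
    import torch
    import torch.nn as nn

    H = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 3, 3, padding=1))   # hiding network (toy)
    R = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 3, 3, padding=1))   # reveal network (toy)

    secret = torch.rand(1, 3, 64, 64)
    cover = torch.rand(1, 3, 64, 64)

    container = cover + H(secret)    # cover-agnostic: H(secret) fits any cover
    recovered = R(container)
    loss = nn.functional.mse_loss(container, cover) \
         + nn.functional.mse_loss(recovered, secret)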


On Measuring Fairness in Generative Models Supplementary Material

Neural Information Processing Systems

These results were not included in the main paper due to space limitations. In Sec. 4.1 of the main paper, we proposed a statistical model for the sensitive attribute classifier. Generators are not completely biased: given that a generator is trained on a reliable dataset containing all classes of a given sensitive attribute, coupled with advances in generator architectures, it is a fair assumption that the generator learns some representation of each class of the sensitive attribute and is not completely biased toward a single class. Here, we provide more information on the necessary assumptions and the expanded forms of the equations. In Sec. A.2, we similarly provide more information on the MLE value of the population mean. In Sec. A.1, we equate the sample mean µ to the expanded theoretical model, given the classifier's accuracy. Fairness in generative models is defined as equal representation, meaning that the generator is supposed to generate an equal number of samples for each element of an attribute, e.g., an equal number of samples per class. In Sec. 3 of the main paper, we discussed that there could be considerable error in the fairness measurement, which we study further in our extended experiments.
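For a binary attribute and a classifier with accuracy a, the expected sample mean is µ̂ = p·a + (1 − p)(1 − a), which can be inverted to recover the true proportion p. A minimal sketch follows; the symmetric-accuracy assumption and all names are ours, not necessarily the paper's exact model:

    # Minimal sketch: de-biasing a fairness estimate for classifier error.
    # Assumes a binary sensitive attribute and a classifier with the same
    # accuracy `acc` on both classes (our simplifying assumption).
    import numpy as np

    def corrected_mean(sample_mean, acc):
        # Invert E[sample mean] = p*acc + (1 - p)*(1 - acc) for p.
        return (sample_mean + acc - 1.0) / (2.0 * acc - 1.0)

    rng = np.random.default_rng(0)
    p_true, acc, n = 0.7, 0.9, 100_000
    labels = rng.random(n) < p_true            # ground-truth attribute
    flips = rng.random(n) > acc                # classifier errors
    preds = np.where(flips, ~labels, labels)
    print(preds.mean())                        # biased raw estimate (~0.66)
    print(corrected_mean(preds.mean(), acc))   # ~0.70 after correction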