
1 Supplementary

Neural Information Processing Systems

Code and data to replicate our experiments can be found at https://github.com/ppope/rho-learn.

1.1 DFT Relaxations

We use the PBE exchange-correlation functional for all relaxations. In particular, we used a much smaller model than the one behind the state-of-the-art SCN results. An SCF run may be initialized with a custom density, e.g., one generated from a machine-learning model.
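The point about custom SCF initialization can be illustrated with a toy fixed-point iteration. This is not the paper's actual DFT code; the map `f` and all names below are hypothetical stand-ins for the Kohn-Sham self-consistency loop, chosen only to show that a starting guess near the converged density (e.g., one predicted by a machine-learning model) needs fewer SCF steps.

```python
# Toy illustration (hypothetical, not the paper's code): a one-variable
# self-consistent problem n = f(n). Real DFT iterates the electron
# density to self-consistency in the same way; a good initial density
# reduces the number of SCF iterations needed.

def toy_scf(n_init, tol=1e-10, max_iter=1000):
    """Iterate n <- f(n) until |change| < tol; return (n, iterations)."""
    def f(n):
        # Contractive update standing in for the Kohn-Sham map;
        # its fixed point is n = 1 (root of n^3 + n - 2 = 0).
        return 0.5 * n + 1.0 / (1.0 + n * n)

    n = n_init
    for k in range(1, max_iter + 1):
        n_new = f(n)
        if abs(n_new - n) < tol:
            return n_new, k
        n = n_new
    return n, max_iter

# Uninformed initial guess, far from the fixed point:
n_default, iters_default = toy_scf(5.0)
# "ML-predicted" guess already near the converged density:
n_ml, iters_ml = toy_scf(n_default)
print(iters_default, iters_ml)
```

Starting from the near-converged guess, the loop terminates in far fewer iterations, which is exactly the benefit of seeding an SCF run with a learned density.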




Supplementary for UniTSFace

Neural Information Processing Systems

We have derived three sample-to-sample based losses in the manuscript, i.e., the USS loss and the sample-to-sample based softmax and BCE losses. The experimental evaluations of these marginal losses are included in Sec. In our work, we choose the cosine function to represent the similarity of two features, i.e., g(x, x') = cos(x, x'). The learning rate starts at 0.1 and is reduced by a factor of 10 at the milestone epochs. All models in the ablation and parameter studies were trained on CASIA-WebFace. For Glint360K, we train the models (ResNet-100) for 20 epochs with a batch size of 1024. The UniTSFace under the 'Large' protocol of MegaFace Challenge 1 (as shown in Table 4) was trained on Glint360K.
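The similarity g(x, x') = cos(x, x') used above can be sketched in a few lines. This is a generic cosine-similarity implementation, not the authors' training code; the vectors stand in for face-embedding features.

```python
import math

def cosine_similarity(x, y):
    """g(x, x') = <x, x'> / (||x|| * ||x'||): the cosine of the angle
    between two feature vectors, used to compare face embeddings."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

# Toy feature vectors: identical directions give 1.0, orthogonal give 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 1.0]))  # ~0.7071
```

Because the cosine depends only on direction, the embedding magnitudes are normalized away, which is why margin-based losses over this similarity operate on a bounded range of [-1, 1].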



Supplementary: Subsidiary Prototype Alignment for Universal Domain Adaptation

Neural Information Processing Systems

The proposed approach may be unsuitable for datasets with very few classes. When the number of classes is low, our Insight 3 (main paper) may not hold, making the pretext task very difficult to learn. This might make the technology more accessible to organizations and individuals with limited resources. It can also aid applications where data is protected by privacy regulations and is hence difficult to collect. The negative consequences might include making these systems more available to organizations or individuals who try to utilize them for illegal purposes.