Generative Probabilistic Novelty Detection with Adversarial Autoencoders

Stanislav Pidhorskyi, Ranya Almohsen, Gianfranco Doretto

Neural Information Processing Systems

We assume that training data is available to describe only the inlier distribution. Recent approaches primarily leverage deep encoder-decoder network architectures to compute a reconstruction error that is used to either compute a novelty score or to train a one-class classifier.
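The reconstruction-error idea in this abstract can be illustrated with a minimal sketch: fit a linear "autoencoder" (here just PCA, not the paper's adversarial deep model) on inlier data only, then score a sample by how poorly it reconstructs. All names and dimensions below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hedged sketch: a linear encoder/decoder fit on inliers only; novelty is
# measured by reconstruction error. The paper's generative adversarial
# autoencoder is far richer; this only shows the scoring principle.
rng = np.random.default_rng(0)

# Inliers lie near a 2-D subspace of a 10-D space.
basis = rng.normal(size=(2, 10))
inliers = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 10))

# "Encoder/decoder": the top-k principal directions of the inlier data.
mean = inliers.mean(axis=0)
_, _, vt = np.linalg.svd(inliers - mean, full_matrices=False)
components = vt[:2]  # k = 2 latent dimensions

def novelty_score(x):
    """Reconstruction error ||x - decode(encode(x))||; higher = more novel."""
    z = (x - mean) @ components.T   # encode
    x_hat = z @ components + mean   # decode
    return np.linalg.norm(x - x_hat, axis=-1)

inlier_scores = novelty_score(inliers)
outlier = rng.normal(size=(1, 10)) * 5.0  # far from the inlier subspace
print(novelty_score(outlier)[0] > inlier_scores.max())  # True: the outlier's error dwarfs the inliers'
```

A threshold on this score yields a one-class classifier, which is the second use of the reconstruction error mentioned in the abstract.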


Singular Value Fine-tuning: Few-shot Segmentation requires Few-parameters Fine-tuning - Supplementary Material

Neural Information Processing Systems

Different fine-tuning strategies: In Figure 1, we visualize the mIoU curves of different fine-tuning strategies. It can be seen that both layer-based and convolution-based fine-tuning methods lead to over-fitting. This result shows that traditional fine-tuning methods are not suitable for few-shot segmentation tasks: directly fine-tuning the parameters of the backbone in few-shot learning harms the robustness of FSS models. Therefore, we propose a novel fine-tuning strategy, namely SVF (Singular Value Fine-tuning).
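The core idea behind SVF can be sketched in a few lines: decompose a pretrained weight as W = U diag(S) Vᵀ, freeze the singular vectors U and Vᵀ, and treat only the singular values S as trainable, which drastically shrinks the number of tuned parameters. The shapes and the placeholder "update" below are my own illustrative assumptions, not the paper's training procedure.

```python
import numpy as np

# Hedged sketch of Singular Value Fine-tuning: only the singular values
# of a pretrained weight are trainable; the singular vectors stay frozen.
rng = np.random.default_rng(0)

# A pretrained 3x3 conv kernel with 64 in/out channels, flattened to 2-D.
w = rng.normal(size=(64, 64 * 3 * 3))
u, s, vt = np.linalg.svd(w, full_matrices=False)

trainable = s.size         # only the 64 singular values are tuned
frozen = u.size + vt.size  # u: 64x64, vt: 64x576 stay fixed
print(trainable, frozen)   # 64 trainable vs. 40960 frozen parameters

# A gradient step would update s alone; the weight is then rebuilt as:
s_tuned = s * 1.0                  # placeholder for the fine-tuned values
w_tuned = (u * s_tuned) @ vt       # U @ diag(S) @ Vt
assert np.allclose(w_tuned, w)     # unchanged values reconstruct W exactly
```

Tuning ~64 scalars instead of ~37k weights per kernel is why this form of fine-tuning is much less prone to the over-fitting seen with layer-based and convolution-based strategies in Figure 1.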









8804f94e16ba5b680e239a554a08f7d2-AuthorFeedback.pdf

Neural Information Processing Systems

We train the autoencoder and the classifier on the training set, which is diverse and contains texts of varying degrees of attributes, reflected by the different confidence values given by the classifier. Different from most previous work that only provides binary control over attributes, one advantage of our model is the ability to give control over the degree of attribute transfer desired. Particularly, 'Acc' is used to evaluate the attribute accuracy.
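One common way to realize degree-controlled (rather than binary) attribute transfer is to move an autoencoder's latent code along an attribute direction by a scalar amount. The sketch below is a generic illustration under that assumption; the direction `v`, the scalar `alpha`, and all names here are hypothetical and not taken from the paper.

```python
import numpy as np

# Hedged sketch: continuous attribute transfer in latent space.
# v = difference between the mean latent codes of the two attribute
# classes; alpha in [0, 1] sets the degree of transfer.
rng = np.random.default_rng(0)

pos_latents = rng.normal(loc=1.0, size=(100, 64))   # e.g. positive-attribute codes
neg_latents = rng.normal(loc=-1.0, size=(100, 64))  # e.g. negative-attribute codes
v = pos_latents.mean(axis=0) - neg_latents.mean(axis=0)  # attribute direction

def transfer(z, alpha):
    """Move latent code z along the attribute direction by degree alpha."""
    return z + alpha * v

z = neg_latents[0]
half = transfer(z, 0.5)  # partial transfer
full = transfer(z, 1.0)  # full transfer

# Larger alpha moves the code closer to the positive-class mean.
d = lambda a, b: np.linalg.norm(a - b)
target = pos_latents.mean(axis=0)
assert d(full, target) < d(half, target) < d(z, target)
```

Decoding `transfer(z, alpha)` for intermediate values of `alpha` is what gives graded outputs, whose attribute strength can then be scored by the classifier ('Acc').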