Few-Shot Unsupervised Image-to-Image Translation
Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, Jan Kautz
Unsupervised image-to-image translation methods learn to map images in a given class to an analogous image in a different class, drawing on unstructured (non-registered) datasets of images. While remarkably successful, current methods require access to many images in both source and destination classes at training time. We argue this greatly limits their use. Drawing inspiration from the human capability of picking up the essence of a novel object from a small number of examples and generalizing from there, we seek a few-shot, unsupervised image-to-image translation algorithm that works on previously unseen target classes that are specified, at test time, only by a few example images. Our model achieves this few-shot generation capability by coupling an adversarial training scheme with a novel network design.

We propose the Few-shot UNsupervised Image-to-image Translation (FUNIT) framework, aiming at learning an image-to-image translation model for mapping an image of a source class to an analogous image of a target class by leveraging a few images of the target class given at test time. The model is never shown images of the target class during training but is asked to generate some of them at test time. To proceed, we first hypothesize that the few-shot generation capability of humans develops from their past visual experiences: a person can better imagine views of a new object if the person has seen many more different object classes in the past. Based on this hypothesis, we train our FUNIT model using a dataset containing images of many different object classes to simulate past visual experiences. Specifically, we train the model to translate images from one class to another by leveraging a few example images of the target class.
May 5, 2019
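To make the few-shot test-time interface concrete, the sketch below shows one way such a translator can be organized: a content encoder for the source image, a class encoder that averages latent codes over the K example images of the unseen target class, and a decoder that fuses the two. This is a minimal sketch under stated assumptions: the module names, layer sizes, and concatenation-based decoder are illustrative placeholders, not the paper's actual architecture, and the adversarial training loop is omitted.

```python
import torch
import torch.nn as nn

class FewShotTranslator(nn.Module):
    """Minimal sketch of a few-shot image translator interface
    (illustrative placeholder, not the paper's architecture)."""

    def __init__(self, content_dim=64, class_dim=64):
        super().__init__()
        # Content encoder: extracts class-invariant structure from the source image.
        self.content_encoder = nn.Sequential(
            nn.Conv2d(3, content_dim, 7, padding=3), nn.ReLU(),
            nn.Conv2d(content_dim, content_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Class encoder: maps one target-class example to a latent class code.
        self.class_encoder = nn.Sequential(
            nn.Conv2d(3, class_dim, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(class_dim, class_dim),
        )
        # Decoder: fuses content features with the class code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(content_dim + class_dim, 64, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh(),
        )

    def forward(self, content_img, class_imgs):
        # content_img: (B, 3, H, W) source image to translate.
        # class_imgs:  (B, K, 3, H, W) the K few-shot target-class examples.
        b, k = class_imgs.shape[:2]
        content = self.content_encoder(content_img)
        # Encode each of the K examples and average the class codes, so the
        # translator conditions on the whole few-shot example set at once.
        codes = self.class_encoder(class_imgs.flatten(0, 1)).view(b, k, -1).mean(dim=1)
        # Broadcast the class code over the spatial grid and decode.
        codes = codes[:, :, None, None].expand(-1, -1, *content.shape[2:])
        return self.decoder(torch.cat([content, codes], dim=1))

# At test time the target class is specified only by K example images of a
# class never seen during training (here K = 5, with random tensors standing
# in for real images).
model = FewShotTranslator()
source = torch.randn(2, 3, 64, 64)        # images of a source class
examples = torch.randn(2, 5, 3, 64, 64)   # few-shot target-class examples
translated = model(source, examples)      # (2, 3, 64, 64)
```

The actual FUNIT generator injects the class code through adaptive instance normalization rather than channel concatenation, and is trained jointly with a multi-task adversarial discriminator; the sketch above only illustrates the few-shot conditioning interface.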