Learning Gaze-aware Compositional GAN
Nerea Aranjuelo, Siyu Huang, Ignacio Arganda-Carreras, Luis Unzueta, Oihana Otaegui, Hanspeter Pfister, Donglai Wei
–arXiv.org Artificial Intelligence
Gaze-annotated facial data is crucial for training deep neural networks (DNNs) for gaze estimation. However, obtaining these data is labor-intensive and requires specialized equipment due to the challenge of accurately annotating the gaze direction of a subject. In this work, we present a generative framework to create annotated gaze data by leveraging the benefits of labeled and unlabeled data sources. We propose a Gaze-aware Compositional GAN that learns to generate annotated facial images from a limited labeled dataset. Then we transfer this model to an unlabeled data domain to take advantage of the diversity it provides. Experiments demonstrate our approach's effectiveness in generating within-domain image augmentations in the ETH-XGaze dataset and cross-domain augmentations in the CelebAMask-HQ dataset domain for gaze estimation DNN training. We also show additional applications of our work, which include facial image editing and gaze redirection.
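The two-stage idea in the abstract — first train a gaze-conditioned GAN adversarially on a small labeled set, then transfer it to an unlabeled domain by continuing adversarial training without gaze supervision — can be sketched as below. This is a hedged illustration, not the authors' architecture: the tiny MLP generator/discriminator, the flattened image dimension, and the sampled gaze vectors in stage 2 are all placeholder assumptions.

```python
# Illustrative sketch (NOT the paper's model): gaze-conditioned GAN trained on
# labeled data, then transferred to an unlabeled domain. Networks and shapes
# are toy-sized stand-ins for the real compositional generator.
import torch
import torch.nn as nn

IMG_DIM, Z_DIM, GAZE_DIM = 64, 16, 2  # flattened image, noise, (pitch, yaw)

G = nn.Sequential(nn.Linear(Z_DIM + GAZE_DIM, 32), nn.ReLU(),
                  nn.Linear(32, IMG_DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def adversarial_step(real_imgs, gaze):
    """One discriminator + one generator update; gaze conditions the generator."""
    b = real_imgs.size(0)
    z = torch.randn(b, Z_DIM)
    fake = G(torch.cat([z, gaze], dim=1))
    # Discriminator: separate real images from generated ones.
    d_loss = bce(D(real_imgs), torch.ones(b, 1)) + \
             bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: produce images the discriminator scores as real.
    g_loss = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Stage 1: labeled source domain (images with gaze annotations, e.g. ETH-XGaze).
labeled_imgs = torch.randn(8, IMG_DIM)          # placeholder batch
gaze_labels = torch.rand(8, GAZE_DIM) * 2 - 1   # annotated gaze directions
adversarial_step(labeled_imgs, gaze_labels)

# Stage 2: transfer to an unlabeled target domain (e.g. CelebAMask-HQ);
# gaze inputs are *sampled*, since the target set has no annotations, so
# every generated image still carries a known gaze label.
unlabeled_imgs = torch.randn(8, IMG_DIM)
sampled_gaze = torch.rand(8, GAZE_DIM) * 2 - 1
adversarial_step(unlabeled_imgs, sampled_gaze)
```

The payoff of this setup is that every synthesized image is paired with the gaze vector fed to the generator, so both stages yield annotated training data for a downstream gaze-estimation DNN.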
May-31-2024