Diverse Image Captioning with Context-Object Split Latent Spaces
Neural Information Processing Systems
Diverse image captioning models aim to learn the one-to-many mappings that are innate to cross-domain datasets, such as those of images and text. Current methods for this task are based on generative latent variable models, e.g., VAEs with structured latent spaces. Yet, the amount of multimodality captured by prior work is limited to that of the paired training data; the true diversity of the underlying generative process is not fully captured. To address this limitation, we leverage the contextual descriptions in the dataset that explain similar contexts in different visual scenes. To this end, we introduce a novel factorization of the latent space, termed context-object split, to model diversity in contextual descriptions across images and texts within the dataset.
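A minimal sketch of the idea behind a context-object split latent space, assuming the latent vector is partitioned into a context factor (shared across visually different scenes with similar contexts) and an object factor (image-specific content). The dimensions, function names, and the simple concatenation scheme below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical illustration (not the paper's implementation): a latent
# sample z is factored into a "context" part and an "object" part.
rng = np.random.default_rng(0)

LATENT_DIM = 16
CONTEXT_DIM = 10          # assumed number of dims modeling scene context
OBJECT_DIM = LATENT_DIM - CONTEXT_DIM

def split_latent(z):
    """Split a latent sample into (context, object) factors."""
    return z[:CONTEXT_DIM], z[CONTEXT_DIM:]

def recombine(context, obj):
    """Pair one image's context factor with another image's object
    factor, mimicking how a shared context can be described across
    different visual scenes."""
    return np.concatenate([context, obj])

z_a = rng.standard_normal(LATENT_DIM)   # latent for image A
z_b = rng.standard_normal(LATENT_DIM)   # latent for image B

ctx_a, obj_a = split_latent(z_a)
_, obj_b = split_latent(z_b)

# Same context, different object factor -> a new latent sample, from
# which a decoder could generate a caption beyond the paired data.
z_new = recombine(ctx_a, obj_b)
assert z_new.shape == (LATENT_DIM,)
assert np.allclose(z_new[:CONTEXT_DIM], ctx_a)
```

In this toy view, sampling different object factors under a fixed context factor is what would let a caption decoder produce descriptions more diverse than those paired with any single image.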
Oct-9-2024, 18:58:12 GMT