Diverse Image Captioning with Context-Object Split Latent Spaces
Diverse image captioning models aim to learn the one-to-many mappings that are innate to cross-domain datasets, such as datasets of images and texts. Current methods for this task are based on generative latent variable models, e.g., VAEs with structured latent spaces. Yet, the amount of multimodality captured by prior work is limited to that of the paired training data; the true diversity of the underlying generative process is not fully captured. To address this limitation, we leverage contextual descriptions in the dataset that describe similar contexts across different visual scenes. To this end, we introduce a novel factorization of the latent space, termed context-object split, to model diversity in contextual descriptions across images and texts within the dataset. Our framework not only enables diverse captioning through context-based pseudo supervision, but also extends to images with novel objects for which no paired captions exist in the training data. We evaluate our COS-CVAE approach on the standard COCO dataset and on the held-out COCO dataset consisting of images with novel objects, showing significant gains in accuracy and diversity.
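As a rough illustration of the core idea (this is a minimal sketch, not the authors' actual architecture; all module names, layer sizes, and the choice of diagonal-Gaussian posteriors are assumptions), a conditional VAE encoder with a context-object split might maintain separate posterior heads for the two latent factors:

```python
import torch
import torch.nn as nn

class COSEncoder(nn.Module):
    """Illustrative encoder with a context-object split latent space.

    The latent code is factorized into z_context (contextual description,
    intended to transfer across visual scenes) and z_object (image-specific
    object content). Layer sizes, module names, and the diagonal-Gaussian
    posteriors are assumptions for this sketch, not the paper's design.
    """

    def __init__(self, feat_dim=2048, ctx_dim=128, obj_dim=128):
        super().__init__()
        # Separate posterior heads for the two latent factors.
        self.ctx_head = nn.Linear(feat_dim, 2 * ctx_dim)  # -> (mu, logvar) of z_context
        self.obj_head = nn.Linear(feat_dim, 2 * obj_dim)  # -> (mu, logvar) of z_object

    @staticmethod
    def reparameterize(mu, logvar):
        # Standard VAE reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, image_feats):
        mu_c, logvar_c = self.ctx_head(image_feats).chunk(2, dim=-1)
        mu_o, logvar_o = self.obj_head(image_feats).chunk(2, dim=-1)
        z_context = self.reparameterize(mu_c, logvar_c)
        z_object = self.reparameterize(mu_o, logvar_o)
        # A caption decoder would condition on both factors; resampling
        # z_context with z_object fixed varies the contextual description
        # while keeping the depicted objects grounded.
        return torch.cat([z_context, z_object], dim=-1)
```

Under such a split, the context-based pseudo supervision mentioned in the abstract can be read as transferring contextual descriptions between images that share similar context.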
Review for NeurIPS paper: Diverse Image Captioning with Context-Object Split Latent Spaces
Weaknesses: My main issues are with some of the evaluations in the paper: 1. Oracle accuracy is a bit of a cheat, as it scores all proposed sentences and selects the top-scoring one. I notice that consensus re-ranking is also reported in the supplemental. The results there compare well against prior work, so I am not sure why this is not mentioned in the paper (or whether the results could be squeezed in by rearranging Table 3). However, even the consensus-based re-ranking is a bit odd, since it relies on finding nearest-neighbor training images (how are the nearest neighbors found? Stronger networks will do a better job, won't they?).
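For readers unfamiliar with the protocol the reviewer questions, the following is a minimal sketch of consensus re-ranking as commonly used in diverse captioning work. The similarity metric sim_fn (typically CIDEr against the pooled references), the visual feature space used for the neighbor search, and the neighborhood size k are assumptions here, and they are exactly the unstated choices the reviewer is asking about:

```python
import numpy as np

def consensus_rerank(candidates, query_feat, train_feats, train_captions,
                     sim_fn, k=60):
    """Minimal sketch of consensus re-ranking for image captioning.

    Candidate captions for a query image are ranked by their similarity
    (e.g., CIDEr) to the pooled reference captions of the query's k
    nearest-neighbor training images. The feature space for the neighbor
    search and the value of k are design choices; stronger feature
    extractors retrieve better neighbors, which is the reviewer's concern.
    """
    # Cosine similarity between the query image and all training images.
    q = query_feat / np.linalg.norm(query_feat)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    nn_idx = np.argsort(-(t @ q))[:k]

    # Pool the reference captions of the nearest neighbors.
    references = [cap for i in nn_idx for cap in train_captions[i]]

    # Rank candidates by their consensus score, best first.
    scores = [sim_fn(c, references) for c in candidates]
    order = np.argsort(scores)[::-1]
    return [candidates[i] for i in order]
```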
Review for NeurIPS paper: Diverse Image Captioning with Context-Object Split Latent Spaces
All reviewers recommend acceptance (one indicated it only in the discussion but did not update their score). The reviewers appreciate the author response and value the paper for its contributions, including:
- the problem addressed
- the idea and method to split context and objects
- the extensive evaluation
I agree with this assessment and accept; however, I expect the authors to include the clarifications and improvements suggested by the reviewers and made in the author response. I also encourage the authors to include the results on nocaps, as suggested by R4.