Variational Structured Semantic Inference for Diverse Image Captioning
Fuhai Chen, Rongrong Ji, Jiayi Ji, Xiaoshuai Sun, Baochang Zhang, Xuri Ge, Yongjian Wu, Feiyue Huang, Yan Wang
–Neural Information Processing Systems
Despite the exciting progress in image captioning, generating diverse captions for a given image remains an open problem. Existing methods typically apply generative models such as the Variational Auto-Encoder to diversify captions, yet they neglect two key factors of diverse expression, namely lexical diversity and syntactic diversity. To model these two inherent diversities in image captioning, we propose a Variational Structured Semantic Inferring model (termed VSSI-cap) executed in a novel structured encoder-inferer-decoder schema. The core innovation of VSSI-cap is a new structure, the Variational Multi-modal Inferring tree (termed VarMI-tree). In particular, conditioned on the visual-textual features from the encoder, the VarMI-tree models the lexical and syntactic diversities by inferring their latent variables through approximate posterior inference guided by a visual semantic prior.
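To make the "approximate posterior inference guided by a visual semantic prior" concrete, the sketch below shows a generic conditional-VAE-style latent inference step: a posterior conditioned on fused visual-textual features is regularized toward a prior conditioned on visual features alone. This is a minimal illustration under stated assumptions, not the authors' implementation; the module names, dimensions, and the single Gaussian latent are placeholders, whereas VSSI-cap infers separate lexical and syntactic latent variables inside the VarMI-tree.

```python
# Minimal sketch (illustrative, not the authors' code): conditional-VAE-style
# latent inference with a visual-conditioned prior.
import torch
import torch.nn as nn


class LatentInference(nn.Module):
    def __init__(self, feat_dim: int = 512, latent_dim: int = 128):
        super().__init__()
        # Approximate posterior q(z | visual, textual)
        self.post_mu = nn.Linear(2 * feat_dim, latent_dim)
        self.post_logvar = nn.Linear(2 * feat_dim, latent_dim)
        # Visual semantic prior p(z | visual)
        self.prior_mu = nn.Linear(feat_dim, latent_dim)
        self.prior_logvar = nn.Linear(feat_dim, latent_dim)

    def forward(self, visual: torch.Tensor, textual: torch.Tensor):
        fused = torch.cat([visual, textual], dim=-1)
        mu_q, logvar_q = self.post_mu(fused), self.post_logvar(fused)
        mu_p, logvar_p = self.prior_mu(visual), self.prior_logvar(visual)
        # Reparameterization trick: sample z from the approximate posterior
        z = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logvar_q)
        # Closed-form KL(q || p) between two diagonal Gaussians
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1.0).sum(dim=-1)
        return z, kl


if __name__ == "__main__":
    model = LatentInference()
    visual = torch.randn(4, 512)   # e.g., pooled image features (assumed dim)
    textual = torch.randn(4, 512)  # e.g., encoded reference caption (assumed dim)
    z, kl = model(visual, textual)
    print(z.shape, kl.shape)       # torch.Size([4, 128]) torch.Size([4])
```

At test time, sampling z from the visual-conditioned prior (rather than the posterior) and feeding it to the decoder is what yields multiple distinct captions for the same image.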
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (0.86)
- Vision (1.00)