Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations

Fenglin Liu, Yuanxin Liu, Xuancheng Ren, Xiaodong He, Xu Sun

Neural Information Processing Systems 

In vision-and-language grounding problems, fine-grained representations of the image are considered to be of paramount importance. Most current systems incorporate visual features and textual concepts as a sketch of an image. However, such plainly inferred representations are usually undesirable in that they consist of separate components whose mutual relations remain elusive. In this work, we aim to represent an image with a set of integrated visual regions and corresponding textual concepts that reflect certain semantics. To this end, we build the Mutual Iterative Attention (MIA) module, which integrates the correlated visual features and textual concepts by aligning the two modalities.
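
The abstract describes MIA only at a high level. The following is a minimal, hypothetical PyTorch sketch of one way to realize mutual iterative attention between visual regions and textual concepts: each modality attends over the other with scaled dot-product cross-attention, and the two refinements alternate for a fixed number of iterations. The class names, the residual-plus-LayerNorm update, and the `num_iters` parameter are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class CrossAttention(nn.Module):
    """Single-head scaled dot-product cross-attention with a residual connection."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)
        self.scale = dim ** -0.5

    def forward(self, query, context):
        # query: (B, Nq, D); context: (B, Nc, D)
        scores = self.q(query) @ self.k(context).transpose(1, 2) * self.scale
        attended = torch.softmax(scores, dim=-1) @ self.v(context)
        # Residual keeps each component's own features alongside the aligned ones.
        return self.norm(query + attended)


class MutualIterativeAttention(nn.Module):
    """Hypothetical sketch of MIA: visual regions and textual concepts
    repeatedly attend to each other, so each modality's features are
    refined by their aligned counterparts in the other modality."""

    def __init__(self, dim, num_iters=2):
        super().__init__()
        self.refine_regions = CrossAttention(dim)   # regions attend over concepts
        self.refine_concepts = CrossAttention(dim)  # concepts attend over regions
        self.num_iters = num_iters

    def forward(self, regions, concepts):
        # regions: (B, Nr, D) visual region features (e.g. from a detector)
        # concepts: (B, Nt, D) textual concept embeddings (e.g. predicted tags)
        for _ in range(self.num_iters):
            new_regions = self.refine_regions(regions, concepts)
            new_concepts = self.refine_concepts(concepts, regions)
            regions, concepts = new_regions, new_concepts
        return regions, concepts


# Illustrative usage with arbitrary sizes.
mia = MutualIterativeAttention(dim=512, num_iters=2)
regions = torch.randn(4, 36, 512)   # e.g. 36 detected regions per image
concepts = torch.randn(4, 10, 512)  # e.g. 10 textual concepts per image
fused_regions, fused_concepts = mia(regions, concepts)
```

The alternating update is one common way to make the attention "mutual" and "iterative"; the paper's actual module may differ in the attention form, update order, or number of refinement rounds.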