Coreference as an indicator of context scope in multimodal narrative

Nikolai Ilinykh, Shalom Lappin, Asad Sayeed, Sharid Loáiciga

arXiv.org Artificial Intelligence 

We demonstrate that large multimodal language models differ substantially from humans in the distribution of coreferential expressions in a visual storytelling task. We introduce several metrics to quantify the characteristics of coreferential patterns in both human- and machine-written texts. Humans distribute coreferential expressions in a way that maintains consistency across texts and images, interleaving references to different entities in a highly varied way. Machines are less able to track such mixed references, even as their perceived generation quality improves.
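As a rough illustration of the kind of measurement the abstract alludes to (this is a hypothetical sketch, not the paper's actual metric set), one can treat a story as a sequence of entity mentions and quantify both how evenly references are spread across entities and how often consecutive mentions switch between entities:

```python
# Hypothetical sketch: NOT the paper's metrics. Given a story's sequence of
# entity mentions, measure (a) how evenly references spread across entities
# (Shannon entropy) and (b) how often adjacent mentions switch entities
# (interleaving rate).
from collections import Counter
from math import log2


def mention_entropy(mentions: list[str]) -> float:
    """Shannon entropy (bits) of the entity-mention distribution."""
    counts = Counter(mentions)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())


def interleaving_rate(mentions: list[str]) -> float:
    """Fraction of adjacent mention pairs referring to different entities."""
    if len(mentions) < 2:
        return 0.0
    switches = sum(a != b for a, b in zip(mentions, mentions[1:]))
    return switches / (len(mentions) - 1)


# Toy comparison: an interleaved mention sequence vs. a blocked one.
interleaved = ["girl", "dog", "girl", "ball", "dog", "girl"]
blocked = ["girl", "girl", "girl", "dog", "dog", "ball"]
print(mention_entropy(interleaved), interleaving_rate(interleaved))  # ~1.46, 1.0
print(mention_entropy(blocked), interleaving_rate(blocked))          # ~1.46, 0.4
```

The toy example shows why a single distributional statistic is not enough: both sequences mention the same entities equally often (identical entropy), but only the interleaving rate distinguishes the highly varied, human-like ordering from the blocked, machine-like one.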