Coreference as an indicator of context scope in multimodal narrative
Ilinykh, Nikolai, Lappin, Shalom, Sayeed, Asad, Loáiciga, Sharid
arXiv.org Artificial Intelligence
We demonstrate that large multimodal language models differ substantially from humans in the distribution of coreferential expressions in a visual storytelling task. We introduce a number of metrics to quantify the characteristics of coreferential patterns in both human- and machine-written texts. Humans distribute coreferential expressions in a way that maintains consistency across texts and images, interleaving references to different entities in a highly varied way. Machines are less able to track mixed references, despite achieving perceived improvements in generation quality.
Mar-7-2025