Large Language Models Preserve Semantic Isotopies in Story Continuations
arXiv.org Artificial Intelligence
In this work, we explore the relevance of textual semantics to Large Language Models (LLMs), extending previous insights into the connection between distributional semantics and structural semantics. We investigate whether LLM-generated texts preserve semantic isotopies. We design a story continuation experiment using 10,000 ROCStories prompts completed by five LLMs. We first validate GPT-4o's ability to extract isotopies on a linguistic benchmark, then apply the model to the generated stories. We then analyze structural (coverage, density, spread) and semantic properties of isotopies to assess how they are affected by completion. Results show that LLM completion within a given token horizon preserves semantic isotopies across multiple properties.
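The abstract names three structural properties of an isotopy (coverage, density, spread) without defining them. A minimal sketch of how such properties might be computed over a tokenized story is given below; the formulas here are plausible stand-ins for illustration, not the paper's actual definitions, and `isotopy_terms` (the set of lexical items assumed to share a seme) is a hypothetical input.

```python
def isotopy_properties(sentences, isotopy_terms):
    """Toy coverage/density/spread for one isotopy over a story.

    sentences: list of tokenized sentences (lists of lowercase tokens).
    isotopy_terms: set of lexical items assumed to carry the shared seme.
    """
    # Indices of sentences in which the isotopy is realized at least once.
    hits = [i for i, sent in enumerate(sentences)
            if any(tok in isotopy_terms for tok in sent)]
    n_sents = len(sentences)
    n_tokens = sum(len(s) for s in sentences)
    occurrences = sum(tok in isotopy_terms for s in sentences for tok in s)

    # Coverage: share of sentences the isotopy touches.
    coverage = len(hits) / n_sents if n_sents else 0.0
    # Density: isotopy occurrences per token.
    density = occurrences / n_tokens if n_tokens else 0.0
    # Spread: distance between first and last realization,
    # normalized by story length.
    spread = ((hits[-1] - hits[0]) / (n_sents - 1)
              if len(hits) > 1 and n_sents > 1 else 0.0)
    return coverage, density, spread
```

For example, in a three-sentence story where "dog" appears in the first and last sentences, coverage is 2/3 and spread is 1.0 (the isotopy brackets the whole story).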
Oct-7-2025