Fundamental Principles of Linguistic Structure are Not Represented by o3
Murphy, Elliot, Leivada, Evelina, Dentella, Vittoria, Günther, Fritz, Marcus, Gary
Instead of scaling to unprecedented levels of compute via architectures that are fundamentally grounded in token prediction, a return to more traditional design features of the human mind (predicate-argument structure, variable binding, constituent structure, minimal compositional binding; Donatelli & Koller 2023) may be needed to orchestrate a more reliable expertise in human language (Ramchand 2024). This could be implemented through neuro-symbolic approaches. Still, it is also true that mainstream theoretical linguistics (e.g., the minimalist enterprise) was in some ways ill-equipped to predict which patterns of linguistic activity might be (un)approachable by LLMs. To illustrate, one potential weakness in recent generative grammar theorizing has been an underestimation of the extent to which lexical information drives composition. This type of information may permit LLMs to abductively infer certain elements of grammatical rules, in whatever format this ultimately takes (Ramchand 2024). Future research should more carefully apply the tools of linguistics to isolate specific sub-components of syntax that might in principle be achievable by language models, given specific design features. For instance, with LLMs "complete recovery of syntax might be very difficult computationally" (Marcolli et al. 2025: 13), even if we assume that attention modules can in principle "satisfy the same algebraic structure" as what Marcolli et al. postulate as being necessary for syntax-semantics interface mappings.
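To make concrete what "predicate-argument structure" and "variable binding" amount to as explicit representations, in contrast with flat token sequences, here is a minimal illustrative sketch. It is not drawn from the paper or from any particular neuro-symbolic system; all class and variable names (Var, Predicate, Quantifier, etc.) are hypothetical.

```python
# Illustrative sketch only: a symbolic encoding of two design features the
# abstract names, predicate-argument structure and variable binding.
# All names here are hypothetical, not from the paper under discussion.

from dataclasses import dataclass


@dataclass(frozen=True)
class Var:
    """A bindable variable, e.g. the x in 'every student x, x left'."""
    name: str


@dataclass(frozen=True)
class Predicate:
    """An explicit predicate-argument structure: leave(x), see(x, y), ..."""
    head: str
    args: tuple


@dataclass(frozen=True)
class Quantifier:
    """A binder taking scope over a restrictor and a body, binding one variable."""
    det: str          # e.g. "every", "some"
    var: Var
    restrictor: Predicate
    body: Predicate


# "Every student left" as a bound, compositional object: the quantifier
# explicitly binds x in both restrictor and body, a relation that a flat
# string of tokens never makes explicit.
x = Var("x")
every_student_left = Quantifier(
    det="every",
    var=x,
    restrictor=Predicate("student", (x,)),
    body=Predicate("leave", (x,)),
)

print(every_student_left)
```

The point of the sketch is only that binding and argumenthood are first-class objects in such a representation, whereas an architecture grounded in token prediction must recover these relations implicitly, if at all.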
arXiv.org Artificial Intelligence
Feb-15-2025