No Such Thing as a General Learner: Language models and their dual optimization
Emmanuel Chemla and Ryan M. Nefdt
arXiv.org Artificial Intelligence
…to this question, we first argue that neither humans nor LLMs are general learners, in a variety of senses. We make a novel case for how in particular LLMs follow a dual-optimization process: they are optimized during their training (which is typically compared to language acquisition), and modern LLMs have also been selected, through a process akin to natural selection in a species. From this perspective, we argue that the performance of LLMs, whether similar or dissimilar to that of humans, does not weigh easily on important debates about the importance of human cognitive biases for language.

In section 4, we discuss the consequences of this for the current field, which is structured around benchmarks mostly concerned with measures of the final, trained states of LLMs. In section 5, we apply our arguments to the evaluations more focused on the learning stages of LLMs. One debate asks whether LLMs are not too powerful, often phrased around the question as to whether 'impossible' languages, which allegedly cannot be learned by humans, can be learned by LLMs. We add to the debate the fact that, even when trained to learn possible languages, parts of the languages that LLMs learn are indeed impossible. This shows that the biases of LLMs are different from ours, and reminds us that an adequate model of learning has to learn…
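The dual-optimization idea can be pictured as two nested loops. The following is a minimal sketch (an illustration of the abstract's framing, not code from the paper): an inner loop trains a model by gradient descent, standing in for an LLM's training run, while an outer loop selects among trained models, standing in for the selection-like process by which the field keeps only the best-performing designs. The toy task, the choice of learning rate as the "heritable" trait, and all numbers are invented for illustration.

```python
import random

random.seed(0)
DATA = [(x, 2.0 * x) for x in range(-5, 6)]  # toy target function: y = 2x

def train(lr, steps=100):
    """Inner optimization: fit the single weight w by SGD on squared error."""
    w = random.uniform(-1.0, 1.0)
    for _ in range(steps):
        x, y = random.choice(DATA)
        w -= lr * 2.0 * (w * x - y) * x  # gradient of (w*x - y)**2 w.r.t. w
    return w

def loss(w):
    """Mean squared error of the linear model w*x on the toy data."""
    return sum((w * x - y) ** 2 for x, y in DATA) / len(DATA)

def select(generations=5, pop_size=8):
    """Outer optimization: keep the learning rates whose trained models win."""
    pop = [random.uniform(0.001, 0.03) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda lr: loss(train(lr)))
        survivors = ranked[: pop_size // 2]                      # selection
        children = [min(0.035, lr * random.uniform(0.8, 1.25))   # mutation
                    for lr in survivors]
        pop = survivors + children
    best_lr = min(pop, key=lambda lr: loss(train(lr)))
    return best_lr, loss(train(best_lr))

best_lr, best_loss = select()
print(f"selected lr={best_lr:.4f}, loss of its trained model={best_loss:.4f}")
```

The point of the sketch is that the final model reflects two distinct pressures: what the inner training run could learn, and which inner runs the outer selection kept — so its behavior cannot be read off as the product of training alone.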
Aug-21-2024