Why machines do not understand: A response to Søgaard

Jobst Landgrebe, Barry Smith

arXiv.org, Artificial Intelligence

Some defenders of so-called 'artificial intelligence' believe that machines can understand language. In particular, Søgaard has argued in this journal for a thesis of this sort, on the basis of the ideas (1) that where there is semantics there is also understanding and (2) that machines are not only capable of what he calls 'inferential semantics', but can even (with the help of inputs from sensors) 'learn' referential semantics (Søgaard, 2022). We show that he goes wrong because he pays insufficient attention to the difference between language as used by humans and the sequences of inert symbols that arise when language is stored on hard drives or in books in libraries. So-called large language models (LLMs), such as those built into ChatGPT and GPT-4, contain encodings of natural-language symbol sequences which represent morphological and syntactic relationships between their constituent symbols. This means that a model of this sort can represent both the internal structure of words and the ways in which words are put together to form phrases, sentences and paragraphs.
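To make the distinction concrete, the following minimal sketch (our illustration, not code from either paper) shows the kind of relationship such a model encodes: statistical relations among inert symbols, derived purely from co-occurrence in text, with no reference to anything outside the symbol sequences. The corpus and function name are hypothetical.

```python
# Minimal sketch (illustration only): a toy bigram model over word symbols.
# It captures which symbols tend to follow which in a corpus -- a relation
# between symbols, not between words and things in the world.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased a dog",
]

# Count bigram frequencies: a purely distributional encoding of the text.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def next_word_distribution(word):
    """Relative frequencies of the symbols that follow `word` in the corpus."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# The model 'knows' that 'sat' is always followed by 'on' here, without
# any grounding in sitting, cats, or mats.
print(next_word_distribution("sat"))  # {'on': 1.0}
print(next_word_distribution("the"))  # uniform over 'cat', 'mat', 'dog', 'rug'
```

A real LLM replaces the bigram table with billions of learned parameters and far richer morphological and syntactic structure, but the point of the sketch stands: what is stored are relations between symbols, which is why, on the authors' view, such encodings fall short of referential semantics.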
