Chomsky


GPT-2 and the Nature of Intelligence

#artificialintelligence

OpenAI's GPT-2 has been discussed everywhere from The New Yorker to The Economist. What does it really tell us about natural and artificial intelligence?

The Economist: Which technologies are worth watching in 2020?

GPT-2: I would say it is hard to narrow down the list. The world is full of disruptive technologies with real and potentially huge global impacts. The most important is artificial intelligence, which is becoming exponentially more powerful.

Consider two classic hypotheses about the development of language and cognition. One main line of Western intellectual thought, often called nativism, goes back to Plato and Kant; in recent memory it has been developed by Noam Chomsky, Steven Pinker, Elizabeth Spelke, and others (including myself).
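For readers curious how such an "interview" is produced mechanically, here is a minimal sketch that samples a GPT-2 continuation from an interview-style prompt using the Hugging Face transformers package. The model size, prompt wording, and sampling settings are illustrative assumptions, not the ones The Economist actually used.

```python
# Minimal sketch: sampling a GPT-2 continuation for an interview-style prompt.
# Assumes the Hugging Face `transformers` and `torch` packages; the prompt and
# sampling settings are illustrative, not those used for the Economist piece.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Q: Which technologies are worth watching in 2020?\nA:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling keeps the continuation fluent but varied between runs.
output_ids = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```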


Education Is a System of Indoctrination of the Young - Noam Chomsky

#artificialintelligence

Chomsky has been known to vigorously defend and debate his views and opinions, in philosophy, linguistics, and politics. He has had notable debates with Jean Piaget, Michel Foucault, William F. Buckley, Jr., Christopher Hitchens, George Lakoff, Richard Perle, Hilary Putnam, Willard Quine, and Alan Dershowitz, to name a few. In response to his speaking style being criticized as boring, Chomsky said that "I'm a boring speaker and I like it that way.... I doubt that people are attracted to whatever the persona is.... People are interested in the issues, and they're interested in the issues because they are important."


Biology and Compositionality: Empirical Considerations for Emergent-Communication Protocols

arXiv.org Artificial Intelligence

Significant advances have been made in artificial systems by using biological systems as a guide. However, there is often little interaction between computational models for emergent communication and biological models of the emergence of language. Many researchers in language origins and emergent communication take compositionality as their primary target for explaining how simple communication systems can become more like natural language. However, there is reason to think that compositionality is the wrong target on the biological side, and so too the wrong target on the machine-learning side. As such, the purpose of this paper is to explore this claim. This has theoretical implications for language origins research more generally, but the focus here will be the implications for research on emergent communication in computer science and machine learning, specifically regarding the types of programmes that might be expected to work and those that will not. I further suggest an alternative approach for future research that focuses on reflexivity, rather than compositionality, as a target for explaining how simple communication systems may become more like natural language. I end by providing some references to the language origins literature that may be of use to researchers in machine learning.
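The compositionality the abstract questions is commonly operationalized in emergent-communication work as topographic similarity: the correlation between pairwise distances in meaning space and pairwise distances in message space. The sketch below computes that metric on a toy meaning/message set; the data and the choice of metric are illustrative assumptions, not the paper's own experiments.

```python
# Sketch of topographic similarity, a common proxy for compositionality in
# emergent-communication work: the Spearman correlation between pairwise
# distances in meaning space and pairwise distances in message space.
# The toy meanings and messages below are invented for illustration only.
from itertools import combinations
from scipy.stats import spearmanr

def hamming(a, b):
    # Number of differing attributes between two meaning tuples.
    return sum(x != y for x, y in zip(a, b))

def edit_distance(a, b):
    # Levenshtein distance between two message strings (single-row DP).
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

# Meanings are attribute tuples (color, shape); messages are agent utterances.
meanings = [("red", "circle"), ("red", "square"), ("blue", "circle"), ("blue", "square")]
messages = ["aa", "ab", "ba", "bb"]  # a perfectly compositional toy code

meaning_d = [hamming(m1, m2) for m1, m2 in combinations(meanings, 2)]
message_d = [edit_distance(s1, s2) for s1, s2 in combinations(messages, 2)]

rho, _ = spearmanr(meaning_d, message_d)
print(f"topographic similarity (Spearman rho): {rho:.2f}")  # 1.00 for this toy code
```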


AI winter - Wikipedia

#artificialintelligence

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The term was coined by analogy to the idea of a nuclear winter.[2] The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later. The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky--two leading AI researchers who had survived the "winter" of the 1970s--warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.[2] Hype is common in many emerging technologies, such as the railway mania or the dot-com bubble. The AI winter is primarily a collapse in the perception of AI by government bureaucrats and venture capitalists.


Lexical semantics - Wikipedia

#artificialintelligence

Lexical semantics (also known as lexicosemantics) is a subfield of linguistic semantics. The units of analysis in lexical semantics are lexical units, which include not only words but also sub-words or sub-units such as affixes, and even compound words and phrases. Lexical units make up the catalogue of words in a language, the lexicon. Lexical semantics looks at how the meaning of the lexical units correlates with the structure of the language, or syntax. This is referred to as the syntax-semantics interface.[1] Lexical units, also referred to as syntactic atoms, can either stand alone, as in the case of root words or parts of compound words, or necessarily attach to other units, as prefixes and suffixes do. The former are called free morphemes and the latter bound morphemes.[2]
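A toy sketch of the free/bound distinction, assuming a tiny hand-built lexicon and hand-made segmentations (both invented for illustration; real morphological analysis needs a proper lexicon and morphotactics):

```python
# Toy illustration of free vs. bound morphemes over a tiny hand-built lexicon.
# The words and segmentations below are invented for illustration only.
LEXICON = {
    "un-":   {"type": "bound", "role": "prefix"},
    "happy": {"type": "free",  "role": "root"},
    "-ness": {"type": "bound", "role": "suffix"},
    "dog":   {"type": "free",  "role": "root"},
    "-s":    {"type": "bound", "role": "suffix"},
}

def describe(segmentation):
    """Report whether each lexical unit in a segmented word is free or bound."""
    for unit in segmentation:
        entry = LEXICON[unit]
        print(f"  {unit:>6}: {entry['type']} morpheme ({entry['role']})")

for word, parts in {"unhappiness": ["un-", "happy", "-ness"],
                    "dogs": ["dog", "-s"]}.items():
    print(word)
    describe(parts)
```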


A Mathematical Model for Linguistic Universals

arXiv.org Artificial Intelligence

We present a Markov model at the discourse level for Steven Pinker's "mentalese", or chains of mental states that transcend the spoken/written forms. Such (potentially) universal temporal structures of textual patterns lead us to a language-independent semantic representation, or a translationally-invariant word embedding, thereby forming the common ground for both comprehensibility within a given language and translatability between different languages. Applying our model to documents of moderate lengths, without relying on external knowledge bases, we reconcile Noam Chomsky's "poverty of stimulus" paradox with statistical learning of natural languages. We human beings distinguish ourselves from other animals (1-3), in that our brain development (4-6) enables us to convey sophisticated ideas and to share individual experiences, via languages (7-9). Texts written in natural languages constitute a major medium that perpetuates our civilizations (10), as a cumulative body of knowledge.
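The abstract does not spell out the paper's discourse-level state space, so the sketch below estimates a first-order Markov transition matrix over hypothetical coarse sentence labels, just to make the modelling idea concrete; the labels and the toy "document" are assumptions for illustration.

```python
# Minimal sketch of a first-order Markov model at the discourse level.
# The abstract does not specify the paper's state space, so the coarse
# sentence labels below (CLAIM, EVIDENCE, CONCLUSION) are hypothetical.
from collections import Counter, defaultdict

# A toy "document" already mapped to one discourse state per sentence.
states = ["CLAIM", "EVIDENCE", "EVIDENCE", "CLAIM", "EVIDENCE", "CONCLUSION"]

# Count transitions between consecutive sentence states.
counts = defaultdict(Counter)
for prev, nxt in zip(states, states[1:]):
    counts[prev][nxt] += 1

# Normalize counts into transition probabilities P(next | previous).
transition = {
    prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
    for prev, nxts in counts.items()
}

for prev, row in transition.items():
    print(prev, "->", row)
```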


Semantics, not syntax, creates NLU - Pat Inc - Medium

#artificialintelligence

A scientific hypothesis starts the process of scientific enquiry. False hypotheses can start the path to disaster, as was seen with the geocentric model of the 'universe', in which heavenly bodies moved in circular orbits. It became heresy to suggest that orbits aren't circular around a stationary Earth, and the effort to preserve circular orbits led to epicycles. It's a good story worth studying in school to appreciate how a hypothesis is critical to validating science. Here's an important hypothesis: "The fundamental aim in the linguistic analysis of a language L is to separate the grammatical sequences which are the sentences of L from the ungrammatical sequences which are not sentences of L and to study the structure of the grammatical sequences."
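The quoted aim can be made concrete with a toy grammar: a recognizer for a tiny language L that accepts its grammatical sequences and rejects the ungrammatical ones. The grammar, the sentences, and the use of NLTK below are illustrative assumptions, not the article's own examples.

```python
# Toy illustration of the quoted aim: separating the grammatical sequences of a
# tiny language L from the ungrammatical ones, using a small context-free grammar.
import nltk

grammar = nltk.CFG.fromstring("""
  S   -> NP VP
  NP  -> Det N
  VP  -> V NP
  Det -> 'the' | 'a'
  N   -> 'dog' | 'ball'
  V   -> 'chased' | 'saw'
""")
parser = nltk.ChartParser(grammar)

candidates = [
    "the dog chased a ball",   # grammatical in L
    "dog the ball a chased",   # ungrammatical in L
]
for sentence in candidates:
    tokens = sentence.split()
    # A sentence is in L exactly when the parser finds at least one parse tree.
    grammatical = any(True for _ in parser.parse(tokens))
    print(f"{sentence!r}: {'grammatical' if grammatical else 'ungrammatical'}")
```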


A Science Journal Funded by Peter Thiel Is Running Articles Dismissing Climate Change and Evolution

Mother Jones

But Inference, which bills itself as a "quarterly review of the sciences," was offering me a chance to write about a topic of my own choosing (subject to their approval). They also promised to pay me "appropriately" for my work, and the timing would have been great for book promotion. While I waited for an answer, I went to Inference's website. It looked like a real science publication -- featuring the original writing of scientists and other thinkers I respect, including MIT's Noam Chomsky and George Ellis at the University of Cape Town. There were 13 issues dating back to 2014, covering a mix of subjects including physics, biology, and linguistics.


A New Capability Maturity Model for Deep Learning – Intuition Machine – Medium

#artificialintelligence

How can we understand progress in Deep Learning without a map? I created one such map a couple of years ago, but it needs a drastic overhaul. In "Five Capability Levels of Deep Learning Intelligence", I proposed a hierarchy of capabilities meant to track the progress of Deep Learning development. Specifically, you begin with a feed-forward network at the first level. That would be followed by memory-enhanced networks, examples of which include the LSTM and the Neural Turing Machine (NTM).
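A minimal sketch of those first two levels, written in PyTorch (an assumption on my part; the original post is framework-agnostic): a plain feed-forward classifier next to a memory-enhanced LSTM classifier that keeps state across time steps.

```python
# Sketch of the first two capability levels described above, in PyTorch
# (PyTorch is an assumption here; the original post names no framework).
import torch
import torch.nn as nn

# Level 1: a plain feed-forward network -- no memory across inputs.
feed_forward = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

# Level 2: a memory-enhanced network -- an LSTM carries state across time steps.
class LSTMClassifier(nn.Module):
    def __init__(self, input_size=16, hidden_size=32, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                  # x: (batch, time, features)
        outputs, _ = self.lstm(x)
        return self.head(outputs[:, -1])   # classify from the last hidden state

x_static = torch.randn(8, 16)              # one vector per example
x_sequence = torch.randn(8, 10, 16)        # a sequence of 10 vectors per example
print(feed_forward(x_static).shape)        # torch.Size([8, 4])
print(LSTMClassifier()(x_sequence).shape)  # torch.Size([8, 4])
```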


The Future is Now Smartlogic

#artificialintelligence

About two weeks ago, I saw an article (actually, one of my colleagues posted it on our intranet) from the MIT Technology Review about the limitations of Artificial Intelligence. The article is here for those of you who want to read it in full, but the fundamental point is this: while AI has made great strides in the last 20 years or so (see the recent win by Google's AlphaGo over Lee Sedol, who is thought to be one of the best Go players of all time), it is still fundamentally inadequate in one respect: we have not yet built a machine that can carry on a conversation with anything remotely approximating human facility. Quite simply, the computer does not understand the meaning of the words it is using and is therefore unable to use them intelligently. The reason for this, according to the article, is that "words often have meaning based on context and the appearance of the letters and words." It's not enough to be able to identify a concept represented by a bunch of letters strung together.
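The claim that "words often have meaning based on context" can be illustrated by comparing the contextual embeddings a model assigns to the same word in different sentences. The sketch below uses Hugging Face's bert-base-uncased as an example model; that choice, and the example sentences, are my assumptions, since the article names no particular system.

```python
# Sketch of "meaning based on context": the same surface word ("bank") gets
# different contextual embeddings in different sentences. bert-base-uncased
# is used as an example model; the article itself names no model.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence, word):
    """Return the contextual embedding of `word`'s first occurrence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]       # (tokens, dim)
    position = inputs["input_ids"][0].tolist().index(
        tokenizer.convert_tokens_to_ids(word))
    return hidden[position]

river = embed_word("She sat on the bank of the river.", "bank")
money = embed_word("He deposited cash at the bank.", "bank")
river2 = embed_word("Reeds grew along the muddy bank.", "bank")

cos = torch.nn.functional.cosine_similarity
print("river-bank vs. money-bank:", cos(river, money, dim=0).item())
print("river-bank vs. river-bank:", cos(river, river2, dim=0).item())
```

On typical runs the two river senses come out more similar to each other than either is to the financial sense, which is the point the article is gesturing at: meaning depends on surrounding context, not on the letter string alone.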