Collaborating Authors

Noam Chomsky


Mathematical Structure of Syntactic Merge

Marcolli, Matilde, Chomsky, Noam, Berwick, Robert

arXiv.org Artificial Intelligence

The syntactic Merge operation of the Minimalist Program in linguistics can be described mathematically in terms of Hopf algebras, with a formalism similar to the one arising in the physics of renormalization. This mathematical formulation of Merge has good descriptive power, as phenomena empirically observed in linguistics can be justified from simple mathematical arguments. It also provides a possible mathematical model for externalization and for the role of syntactic parameters.
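The core operation is easy to state even without the algebra: in the Minimalist Program, Merge takes two syntactic objects and forms the unordered set {a, b}, and syntactic structure is built by applying it recursively. A minimal Python sketch of Merge as binary set formation follows (an illustration only; the paper's Hopf-algebra formalism and renormalization analogy are not modeled here):

    # Minimal sketch: Merge(a, b) = {a, b}, unordered binary set formation.
    # Syntactic objects are lexical items (strings) or frozensets produced
    # by earlier applications of Merge; frozensets are hashable, so they nest.

    def merge(a, b):
        """Form the unordered pair {a, b} from two syntactic objects."""
        return frozenset({a, b})

    # Build {the, {read, {the, book}}} bottom-up:
    dp = merge("the", "book")   # {'the', 'book'}
    vp = merge("read", dp)      # {'read', frozenset({'the', 'book'})}
    print(vp)

Using a set rather than a sequence makes the absence of linear order explicit: word order is taken to arise only at externalization, one of the phenomena the paper's formalism is meant to model.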


NOAM CHOMSKY: AI ISN'T COMING FOR US ALL – SkyMagzines

#artificialintelligence

The world's preeminent linguist has spoken -- and he seems mighty tired of everyone's whining about artificial intelligence as it stands today. In an op-ed for the New York Times, Noam Chomsky said that although the current spate of AI chatbots such as OpenAI's ChatGPT and Microsoft's Bing AI "have been hailed as the first glimmers on the horizon of artificial general intelligence" -- the point at which AIs are able to think and act in ways superior to humans -- we absolutely are not anywhere near that level yet. "That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments," the Massachusetts Institute of Technology cognitive scientist mused. "However useful these programs may be in some narrow domains," Chomsky notes, there's no way that machine learning as it is today could compete with the human mind. Headlines about AI coming for our jobs and taking over our future are, as the public intellectual writes, like something out of a tragicomedy by Argentinian writer Jorge Luis Borges -- and should be taken as such.


Inaccessible Neural Language Models Could Reinvigorate Linguistic Nativism

Perrine, Patrick

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have been making big waves in the machine learning community within the past few years. The impressive scalability of LLMs due to the advent of deep learning can be seen as a continuation of empiricist linguistic methods, as opposed to rule-based linguistic methods grounded in a nativist perspective. Current LLMs are generally inaccessible to resource-constrained researchers, due to a variety of factors including closed source code. This work argues that this lack of accessibility could instill a nativist bias in researchers new to computational linguistics, given that new researchers may have only rule-based, nativist approaches available to study when producing new work. Also, given that numerous critics of deep learning claim that LLMs and related methods may soon lose their relevancy, we speculate that such an event could trigger a new wave of nativism in the language processing community. To prevent such a dramatic shift, and to keep hybrid methods of rules and deep learning in favor, we call upon researchers to open-source their LLM code wherever possible, so that both empiricist and hybrid approaches remain accessible.


An epic AI Debate--and why everyone should be at least a little bit worried about AI going into 2023

#artificialintelligence

What do Noam Chomsky, living legend of linguistics, Kai-Fu Lee, perhaps the most famous AI researcher in all of China, and Yejin Choi, the 2022 MacArthur Fellowship winner who was profiled earlier this week in The New York Times Magazine--and more than a dozen other scientists, economists, researchers, and elected officials--all have in common? They are all worried about the near-term future of AI. They are all worried about different things. Each spoke last week at the December 23 AGI Debate (co-organized by Montreal.AI's Vince Boucher and myself). No summary can capture all that was said (though Tiernan Ray's 8,000-word account at ZDNet comes close), but here are a few of the many concerns that were raised: Noam Chomsky, who led off the night, was worried about whether the current approach to artificial intelligence would ever tell us anything about the thing he cares about most: what makes the human mind what it is?


Noam Chomsky and GPT-3

#artificialintelligence

"You can't go to a physics conference and say: I've got a great theory. It accounts for everything and is so simple it can be captured in two words: "Anything goes."" Every now and then engineers make an advance, and scientists and lay people begin to ponder the question of whether that advance might yield important insight into the human mind. Descartes wondered whether the mind might work on hydraulic principles; throughout the second half of the 20th century, many wondered whether the digital computer would offer a natural metaphor for the mind. The latest hypothesis to attract notice, both within the scientific community, and in the world at large, is the notion that a technology that is popular today, known as large language models, such as OpenAI's GPT-3, might offer important insight into the mechanics of the human mind. Enthusiasm for such models has grown rapidly; OpenAI's Chief Science Officer Ilya Sutskever recently suggested that such systems could conceivably be "slightly conscious".


What Is Generative Grammar?

#artificialintelligence

Generative grammar is a theory of human language that posits that the grammatical structure of sentences is produced by an explicit, rule-governed generative process in the human mind. The theory was originally developed by Noam Chomsky in the late 1950s and 1960s. Chomsky gave the term "generative grammar" its classic exposition in his 1965 book "Aspects of the Theory of Syntax", where he argued that his theory was a significant departure from the prevailing structuralist theories of the time, such as those of Ferdinand de Saussure and Roman Jakobson. In Chomsky's view, structuralist theories were not sufficiently explanatory; generative grammar, in contrast, aims at an explanatory adequacy that structuralist theories lack.
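To make "generative" concrete: a generative grammar is a finite set of rewrite rules from which sentences are derived by recursive expansion. A toy context-free fragment in Python (a hypothetical illustration, not a grammar from Chomsky's work) might look like this:

    import random

    # A toy generative grammar: nonterminals expand via rewrite rules,
    # starting from S, until only terminal words remain.
    RULES = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"], ["Det", "N", "PP"]],
        "PP":  [["P", "NP"]],
        "VP":  [["V", "NP"]],
        "Det": [["the"], ["a"]],
        "N":   [["linguist"], ["theory"]],
        "V":   [["proposes"], ["criticizes"]],
        "P":   [["of"]],
    }

    def generate(symbol="S"):
        """Recursively expand a symbol into a list of terminal words."""
        if symbol not in RULES:        # terminal: emit the word itself
            return [symbol]
        expansion = random.choice(RULES[symbol])
        return [w for part in expansion for w in generate(part)]

    print(" ".join(generate()))  # e.g. "the linguist proposes a theory"

Because NP can contain a PP that contains another NP, this finite rule set derives unboundedly many sentences, which is the sense in which such a grammar "generates" a language.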


Noam Chomsky on the Future of Deep Learning

#artificialintelligence

For the past few weeks, I've been engaged in an email exchange with my favourite anarcho-syndicalist, Noam Chomsky. I reached out to him initially to ask whether recent developments in ANNs (artificial neural networks) had caused him to reconsider his famous linguistic theory, Universal Grammar. Our conversation touched on the possible limitations of Deep Learning and how well ANNs really model biological brains, and also meandered into more philosophical territory. I'm not going to quote Professor Chomsky directly in this article, as our discussion was informal, but I will attempt to summarise the key takeaways. Noam Chomsky is first and foremost a professor of linguistics (considered by many to be "the father of modern linguistics"), but he is probably better known outside of academic circles as an activist, philosopher and historian.


Who is the Father Of Artificial Intelligence?

#artificialintelligence

"Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." John McCarthy, the Father of Artificial Intelligence, was a pioneer of the field: he is credited not only as a founder of AI but also as the one who coined the term. McCarthy coined the term Artificial Intelligence in 1955, in his proposal for the famous Dartmouth conference held in 1956. That conference, attended by ten computer scientists, saw McCarthy explore ways in which machines could learn and reason like humans.


Word Sense Disambiguation

#artificialintelligence

The history and development of Artificial Intelligence have seen numerous peaks and troughs. Hype around what machines could accomplish led to boosts in AI funding, while unmet expectations crippled the industry until the next breakthrough. The term "AI Winter" refers to such periods of reduced funding and interest in artificial intelligence development. During the Cold War, there was increased interest in Machine Translation to automate the translation of Russian documents into English. This period also coincided with massive strides in linguistics and the early career of the famed linguist Noam Chomsky.


GPT-2 and the Nature of Intelligence

#artificialintelligence

OpenAI's GPT-2 has been discussed everywhere from The New Yorker to The Economist. What does it really tell us about natural and artificial intelligence?

The Economist: Which technologies are worth watching in 2020?

GPT-2: I would say it is hard to narrow down the list. The world is full of disruptive technologies with real and potentially huge global impacts. The most important is artificial intelligence, which is becoming exponentially more powerful.

Consider two classic hypotheses about the development of language and cognition. One main line of Western intellectual thought, often called nativism, goes back to Plato and Kant; in recent memory it has been developed by Noam Chomsky, Steven Pinker, Elizabeth Spelke, and others (including myself).