abstract thought
Can a new book crack one of neuroscience's hardest problems? Not quite
The ideas presented in George Lakoff and Srini Narayanan's The Neural Mind are fascinating, but the writing is far less compelling. This is a book review in two parts: the first is about the ideas presented in The Neural Mind: How Brains Think, which are fascinating; the second is about the actual experience of reading it. The book tackles one of the biggest questions in neuroscience: how do neurons perform every kind of human thought, from planning motor actions to composing sentences and musing about philosophy? The two authors bring very different perspectives.
The Emergence of Abstract Thought in Large Language Models Beyond Any Language
Chen, Yuxin, Zhao, Yiran, Zhang, Yang, Zhang, An, Kawaguchi, Kenji, Joty, Shafiq, Li, Junnan, Chua, Tat-Seng, Shieh, Michael Qizhe, Zhang, Wenxuan
As large language models (LLMs) continue to advance, their capacity to function effectively across a diverse range of languages has shown marked improvement. Preliminary studies observed that the hidden activations of LLMs often resemble English, even when responding to non-English prompts, which led to the widespread assumption that LLMs may "think" in English. However, more recent results showing strong multilingual performance, even surpassing English performance on specific tasks in other languages, challenge this view. In this work, we find that LLMs progressively develop a core language-agnostic parameter space: a remarkably small subset of parameters whose deactivation causes significant performance degradation across all languages. This compact yet critical set of parameters underlies the model's ability to generalize beyond individual languages, supporting the emergence of abstract thought that is not tied to any specific linguistic system. Specifically, we identify language-related neurons, those that are consistently activated during the processing of particular languages, and categorize them as either shared (active across multiple languages) or exclusive (specific to one). As LLMs undergo continued development, we observe a marked increase in both the proportion and functional importance of shared neurons, while exclusive neurons progressively diminish in influence. These shared neurons constitute the backbone of the core language-agnostic parameter space, supporting the emergence of abstract thought. Motivated by these insights, we propose neuron-specific training strategies tailored to LLMs' language-agnostic levels at different development stages. Experiments across diverse LLM families support our approach.
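The paper's actual pipeline is not given in this abstract, but the shared/exclusive split it describes can be illustrated with a minimal sketch: count how consistently each neuron fires on prompts in each language, call it language-related above a consistency threshold, and label it shared if it is related to two or more languages. The firing criterion and thresholds here are assumptions for illustration only.

```python
import numpy as np

def classify_language_neurons(activations, act_threshold=0.0, consistency=0.9):
    """Toy sketch: split neurons into shared vs. language-exclusive.

    activations: dict mapping language -> (n_prompts, n_neurons) array.
    A neuron is 'related' to a language if it fires (above act_threshold)
    on at least `consistency` of that language's prompts.
    """
    related = {}
    for lang, acts in activations.items():
        fire_rate = (acts > act_threshold).mean(axis=0)  # per-neuron firing rate
        related[lang] = set(np.flatnonzero(fire_rate >= consistency))
    all_related = set().union(*related.values())
    # Shared = related to 2+ languages; exclusive = related to exactly one.
    shared = {n for n in all_related
              if sum(n in s for s in related.values()) >= 2}
    exclusive = all_related - shared
    return shared, exclusive

# Toy example: neuron 0 fires for both languages, neuron 1 only for English,
# neuron 2 for neither.
en = np.array([[1.0, 1.0, -1.0], [0.9, 0.8, -0.5]])
fr = np.array([[0.7, -1.0, -1.0], [0.6, -0.2, -0.3]])
shared, exclusive = classify_language_neurons({"en": en, "fr": fr})
```

On this toy input, neuron 0 lands in the shared set and neuron 1 in the exclusive set; the paper's reported trend is that, as models mature, the shared set grows in both size and functional importance.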
The Creation of Abstract Thoughts in the Brain - Neuroscience News
Here, in two fMRI experiments, we demonstrate a mechanism of abstraction built upon the valuation of sensory features. Human volunteers learned novel association rules based on simple visual features. Reinforcement-learning algorithms revealed that, with learning, high-value abstract representations increasingly guided participant behaviour, resulting in better choices and higher subjective confidence. We also found that the brain area computing value signals – the ventromedial prefrontal cortex – prioritised and selected latent task elements during abstraction, both locally and through its connection to the visual cortex. Such a coding scheme predicts a causal role for valuation. Hence, in a second experiment, we used multivoxel neural reinforcement to test for the causality of feature valuation in the sensory cortex, as a mechanism of abstraction.
Artificial intelligence helps reveal how people process abstract thought: Study of deep neural networks suggests knowledge comes via sensory experience
"As we rely more and more on these systems, it is important to know how they work and why," said Cameron Buckner, assistant professor of philosophy and author of a paper exploring the topic published in the journal Synthese. Better understanding how the systems work, in turn, led him to insights into the nature of human learning. Philosophers have debated the origins of human knowledge since the days of Plato -- is it innate, based on logic, or does knowledge come from sensory experience in the world? Deep Convolutional Neural Networks, or DCNNs, suggest human knowledge stems from experience, a school of thought known as empiricism, Buckner concluded. These neural networks -- multi-layered artificial neural networks, with nodes replicating how neurons process and pass along information in the brain -- demonstrate how abstract knowledge is acquired, he said, making the networks a useful tool for fields including neuroscience and psychology.
Artificial intelligence helps reveal how people process abstract thought
As artificial intelligence becomes more sophisticated, much of the public attention has focused on how successfully these technologies can compete against humans at chess and other strategy games. A philosopher from the University of Houston has taken a different approach, deconstructing the complex neural networks used in machine learning to shed light on how humans process abstract learning. "As we rely more and more on these systems, it is important to know how they work and why," said Cameron Buckner, assistant professor of philosophy and author of a paper exploring the topic published in the journal Synthese. Better understanding how the systems work, in turn, led him to insights into the nature of human learning. Philosophers have debated the origins of human knowledge since the days of Plato - is it innate, based on logic, or does knowledge come from sensory experience in the world?
Google's DeepMind AI is now training machines using IQ tests to improve abstract thought
AI has moved a step closer to achieving human-like thought, after a new project developed machines capable of abstract thought to pass parts of an IQ test. Experts from DeepMind, which is owned by Google parent company Alphabet, put machine learning systems through their paces with IQ tests, which are designed to measure a number of reasoning skills. The puzzles in the test involve a series of seemingly random shapes, which participants need to study to determine the rules that dictate the pattern. Once they have worked out the rules of the puzzle, they should be able to accurately pick the next shape in the sequence. DeepMind researchers hope that developing AI capable of thinking outside the box could lead to machines dreaming up novel solutions to problems that humans may never have considered.
DeepMind AI takes IQ tests to probe its ability for abstract thought
Will artificial intelligences ever be able to match humans in abstract thought, or are they just very fancy number crunchers? Researchers at Google DeepMind are trying to find out by challenging AIs to solve abstract reasoning puzzles similar to those found in IQ tests. If you have ever taken an IQ test, you'll know that one kind of question involves looking at sets of abstract shapes and choosing which should come next in a given sequence. These puzzles are known as Raven's progressive …
Move Over, Coders--Physicists Will Soon Rule Silicon Valley
At least, that's what Oscar Boykin says. He majored in physics at the Georgia Institute of Technology and in 2002 he finished a physics PhD at UCLA. But four years ago, physicists at the Large Hadron Collider in Switzerland discovered the Higgs boson, a subatomic particle first predicted in the 1960s. As Boykin points out, everyone expected it. It didn't change anything or give physicists anything new to strive for.
Seven outstanding scientific breakthroughs in 2016
December 27, 2016 --With excitement swirling around the possibility of a ninth planet, a rebound in the global tiger population for the first time in a century, and the DNA sequenced in space for the first time, 2016 has been a year full of scientific wonder. But as the year comes to a close, there are some breakthroughs particularly worth highlighting. In February, a century after Albert Einstein predicted their existence, an international team of researchers confirmed that they had actually detected a ripple in the fabric of spacetime for the first time. The detection of gravitational waves came across as a "chirp" across the detectors that make up the Laser Interferometer Gravitational-wave Observatory (LIGO), but the researchers say it was the result of two large celestial bodies, possibly black holes, colliding some 1.3 billion years ago. Then, in June, the scientists announced that the cosmos had chirped again.