DiCarlo
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > United States > California > Santa Clara County > Stanford (0.05)
- Europe > Belgium > Flanders > Flemish Brabant > Leuven (0.04)
- (3 more...)
- North America > United States (0.68)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (0.97)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
Brain-Model Evaluations Need the NeuroAI Turing Test
Feather, Jenelle, Khosla, Meenakshi, Murty, N. Apurva Ratan, Nayebi, Aran
What makes an artificial system a good model of intelligence? The classical test proposed by Alan Turing focuses on behavior, requiring that an artificial agent's behavior be indistinguishable from that of a human. While behavioral similarity provides a strong starting point, two systems with very different internal representations can produce the same outputs. Thus, in modeling biological intelligence, the field of NeuroAI often aims to go beyond behavioral similarity and achieve representational convergence between a model's activations and the measured activity of a biological system. This position paper argues that the standard definition of the Turing Test is incomplete for NeuroAI, and proposes a stronger framework called the ``NeuroAI Turing Test'', a benchmark that extends beyond behavior alone and \emph{additionally} requires models to produce internal neural representations that are empirically indistinguishable from those of a brain up to measured individual variability, i.e., the difference between a computational model and the brain is no greater than the difference between one brain and another. While the brain is not necessarily the ceiling of intelligence, it remains the only universally agreed-upon example, making it a natural reference point for evaluating computational models. By proposing this framework, we aim to shift the discourse from loosely defined notions of brain inspiration to a systematic and testable standard centered on both behavior and internal representations, providing a clear benchmark for neuroscientific modeling and AI development.
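The abstract's indistinguishability criterion can be sketched concretely. One common way to compare representations across systems is the representational dissimilarity matrix (RDM); the sketch below uses it purely as an illustration — the paper does not prescribe this specific metric, and all function names here are hypothetical.

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for every pair of stimuli.
    `activations` has shape (n_stimuli, n_units)."""
    return 1.0 - np.corrcoef(activations)

def rdm_distance(a, b):
    """Distance between two systems' RDMs (upper triangle only,
    since RDMs are symmetric with a zero diagonal)."""
    iu = np.triu_indices_from(a, k=1)
    return float(np.linalg.norm(a[iu] - b[iu]))

def passes_neuroai_test(model_acts, brain1_acts, brain2_acts):
    """The model 'passes' if its representation is no farther from
    each brain than the two brains are from each other."""
    m, b1, b2 = rdm(model_acts), rdm(brain1_acts), rdm(brain2_acts)
    inter_brain = rdm_distance(b1, b2)
    return (rdm_distance(m, b1) <= inter_brain and
            rdm_distance(m, b2) <= inter_brain)
```

A model whose activations match one subject's exactly trivially passes, since its distance to the second subject equals the inter-subject distance; the interesting cases are models trained independently of either brain.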
- North America > United States > California (0.28)
- Oceania > Australia (0.14)
- Europe > Austria > Vienna (0.14)
- Information Technology > Artificial Intelligence > Issues > Turing's Test (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Neuroscience (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.92)
Fixing the problems of deep neural networks will require better training data and learning algorithms
Over the past decade, vision scientists have turned to deep neural networks (DNNs) to model biological vision. The popularity of DNNs comes from their ability to rival human performance on visual tasks [1] and the seemingly concomitant correspondence of their hidden units with biological vision [2]. Bowers and colleagues [3] marshal evidence from psychology and neuroscience to argue that while DNNs and biological systems may achieve similar accuracy on visual benchmarks, they often do so by relying on qualitatively different visual features and strategies [4-6]. Based on these findings, Bowers and colleagues call for a re-evaluation of what DNNs can tell us about biological vision and suggest dramatic adjustments going forward, potentially even moving on from DNNs altogether. Are DNNs poorly suited to model biological vision?
- Europe > France > Occitanie > Haute-Garonne > Toulouse (0.05)
- North America > United States > Rhode Island > Providence County > Providence (0.05)
- North America > Mexico > Puebla (0.04)
Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex
Linsley, Drew, Rodriguez, Ivan F., Fel, Thomas, Arcaro, Michael, Sharma, Saloni, Livingstone, Margaret, Serre, Thomas
One of the most impactful findings in computational neuroscience over the past decade is that the object recognition accuracy of deep neural networks (DNNs) correlates with their ability to predict neural responses to natural images in the inferotemporal (IT) cortex. This discovery supported the long-held theory that object recognition is a core objective of the visual cortex, and suggested that more accurate DNNs would serve as better models of IT neuron responses to images. Since then, deep learning has undergone a revolution of scale: billion parameter-scale DNNs trained on billions of images are rivaling or outperforming humans at visual tasks including object recognition. Have today's DNNs become more accurate at predicting IT neuron responses to images as they have grown more accurate at object recognition? Surprisingly, across three independent experiments, we find this is not the case. DNNs have become progressively worse models of IT as their accuracy has increased on ImageNet. To understand why DNNs experience this trade-off and evaluate if they are still an appropriate paradigm for modeling the visual system, we turn to recordings of IT that capture spatially resolved maps of neuronal activity elicited by natural images. These neuronal activity maps reveal that DNNs trained on ImageNet learn to rely on different visual features than those encoded by IT and that this problem worsens as their accuracy increases. We successfully resolved this issue with the neural harmonizer, a plug-and-play training routine for DNNs that aligns their learned representations with humans. Our results suggest that harmonized DNNs break the trade-off between ImageNet accuracy and neural prediction accuracy that assails current DNNs and offer a path to more accurate models of biological vision.
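The "neural prediction accuracy" this abstract refers to is typically measured by fitting a linear map from DNN features to recorded neuron responses and scoring correlation on held-out images. The sketch below shows that standard recipe with closed-form ridge regression; it is a minimal illustration with hypothetical names, not the authors' actual pipeline.

```python
import numpy as np

def neural_predictivity(features, responses, alpha=1.0, train_frac=0.8):
    """Fit a ridge regression from DNN features (n_images, n_features)
    to neuron responses (n_images, n_neurons) on a training split, then
    return the median held-out Pearson r across neurons."""
    n = features.shape[0]
    n_train = int(n * train_frac)
    Xtr, Xte = features[:n_train], features[n_train:]
    Ytr, Yte = responses[:n_train], responses[n_train:]
    # Closed-form ridge solution: W = (X'X + alpha*I)^-1 X'Y
    d = Xtr.shape[1]
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(d), Xtr.T @ Ytr)
    pred = Xte @ W
    rs = [np.corrcoef(pred[:, j], Yte[:, j])[0, 1]
          for j in range(Yte.shape[1])]
    return float(np.median(rs))
```

The paper's finding can then be stated as: plotting this score against ImageNet accuracy across model generations yields a negative trend, not the positive one the original benchmark results suggested.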
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.14)
- Europe > France > Occitanie > Haute-Garonne > Toulouse (0.04)
- North America > United States > Rhode Island > Providence County > Providence (0.04)
- (2 more...)
AI Researchers Fight Noise by Turning to Biology
Artificial intelligence sees things we don't -- often to its detriment. While machines have gotten incredibly good at recognizing images, it's still easy to fool them. Simply add a tiny amount of noise to the input images, undetectable to the human eye, and the AI suddenly classifies school buses, dogs or buildings as completely different objects, like ostriches. In a paper posted online in June, Nicolas Papernot of the University of Toronto and his colleagues studied different kinds of machine learning models that process language and found a way to fool them by meddling with their input text in a process invisible to humans. The hidden instructions are only seen by the computer when it reads the code behind the text to map the letters to bytes in its memory. Papernot's team showed that even tiny additions, like single characters that encode for white space, can wreak havoc on the model's understanding of the text.
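The class of invisible-text attack described here can be illustrated with zero-width Unicode code points: two strings that render identically on screen encode differently at the byte level, which is all the model sees. The sketch below, including a simple sanitizer defense, is an illustration of the general idea, not a reproduction of Papernot's method.

```python
# Zero-width code points that render as nothing on screen but change
# the bytes a model's tokenizer sees (an illustrative subset).
INVISIBLE = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def sanitize(text: str) -> str:
    """Strip invisible code points before the text reaches a model."""
    return "".join(ch for ch in text if ch not in INVISIBLE)

clean = "school bus"
poisoned = "school\u200b bus"  # zero-width space: looks identical on screen
# clean == poisoned evaluates to False: the strings differ at the byte level,
# and the zero-width space adds three bytes in UTF-8 (e2 80 8b).
```

Filtering known invisible code points is a cheap first line of defense, though a robust pipeline would normalize the full Unicode input rather than rely on a fixed blocklist.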
- North America > Canada > Ontario > Toronto (0.55)
- North America > United States > Massachusetts (0.05)
- North America > United States > California > San Diego County > San Diego (0.05)
- Asia > Japan > Honshū > Tōhoku > Fukushima Prefecture > Fukushima (0.05)
Spider-Man: Miles Morales' Muscle Update Utilizes Machine Learning
Spider-Man: Miles Morales received a new update this week that added a new "Advanced Tech" suit and realistic simulated muscle deformation. Now a developer from Insomniac Games has revealed technical details on how the muscle deformation works. Even though Spider-Man: Miles Morales released last November in a complete, well-running, and well-polished state, Insomniac appears to be committed to adding new features and technical improvements to the game after launch. One of the largest updates came in December, when Insomniac released a new "Performance RT" mode that combined Ray Tracing and 60FPS, two features that had previously been separate options. This new PS5-exclusive update for Miles Morales adds realistic simulations for Miles' muscles deforming, as well as cloth simulations for the suit on top.
Insomniac may be using Sony AI machine learning for Spider-Man on PS5
Insomniac Games is using machine learning to enhance in-game visuals on the PlayStation 5, and the studio could be working alongside Sony's AI division on experimental new technology. Insomniac Games recently turned heads by confirming the PlayStation 5 can support machine learning. The team is using it in creative ways, starting with a new PS5 update for Spider-Man: Miles Morales that significantly changes muscular appearance in the game. The new technique creates more realistic "muscle deformation," a term used in animation to describe how models are transformed and manipulated in 3D rigs. According to Lead Character Technical Director Josh DiCarlo, the technique uses machine learning inference, meaning Insomniac is feeding the algorithm data in real time as it runs on the PlayStation 5. DiCarlo says this technique doesn't come at a graphical cost.