
Collaborating Authors

McDermott


End-to-end Topographic Auditory Models Replicate Signatures of Human Auditory Cortex

Al-Tahan, Haider, Deb, Mayukh, Feather, Jenelle, Murty, N. Apurva Ratan

arXiv.org Artificial Intelligence

The human auditory cortex is topographically organized. Neurons with similar response properties are spatially clustered, forming smooth maps for acoustic features such as frequency in early auditory areas, and modular regions selective for music and speech in higher-order cortex. Yet evaluations of current computational models of auditory perception do not measure whether such topographic structure is present in a candidate model. Here, we show that cortical topography is not present in the previous best-performing models at predicting human auditory fMRI responses. To encourage the emergence of topographic organization, we adapt a cortical wiring-constraint loss originally designed for models of visual perception. The resulting class of topographic auditory models, TopoAudio, is trained to classify speech and environmental sounds from cochleagram inputs, with an added constraint that nearby units on a 2D cortical sheet develop similar tuning. Despite this additional constraint, TopoAudio achieves accuracy on benchmark tasks comparable to that of the unconstrained, non-topographic baseline models. TopoAudio also predicts fMRI responses in the brain as well as standard models do, but unlike standard models, it develops smooth topographic maps for tonotopy and amplitude modulation (common properties of early auditory representations), as well as clustered response modules for music and speech (higher-order selectivity observed in the human auditory cortex). TopoAudio is the first end-to-end, biologically grounded auditory model to exhibit emergent topography, and our results emphasize that a wiring-length constraint can serve as a general-purpose regularization tool for achieving biologically aligned representations.
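The wiring-length idea is straightforward to sketch: assign every unit a fixed coordinate on a 2D sheet and add a penalty that grows when nearby units respond differently. Below is a minimal PyTorch sketch of one such penalty, offered as an illustration of the general technique rather than the authors' implementation; the function name, the Gaussian neighborhood, and the width parameter `sigma` are assumptions.

```python
import torch

def topographic_loss(activations, positions, sigma=1.0):
    """activations: (batch, n_units); positions: (n_units, 2) sheet coordinates."""
    # Response similarity: correlation of each pair of units across the batch.
    a = activations - activations.mean(dim=0, keepdim=True)
    a = a / (a.norm(dim=0, keepdim=True) + 1e-8)
    resp_sim = a.T @ a                                # (n_units, n_units), in [-1, 1]

    # Gaussian neighborhood: ~1 for nearby units on the sheet, ~0 for distant ones.
    dist = torch.cdist(positions, positions)
    neighborhood = torch.exp(-dist ** 2 / (2 * sigma ** 2))

    # Penalize dissimilar tuning (low correlation) between spatially close units.
    return (neighborhood * (1.0 - resp_sim)).mean()
```

In training, a term like this would be added to the classification loss with a small weight (e.g., `loss = task_loss + topo_weight * topographic_loss(h, pos)`), trading a little task freedom for spatial smoothness.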



AI/ML at the Edge: 4 things CIOs should know

#artificialintelligence

Latency almost always matters when it comes to running artificial intelligence/machine learning (AI/ML) workloads. "Great AI requires a lot of data, and it demands it immediately." That's both the blessing and the curse for any sector that generates tons of machine data outside its centralized clouds or data centers and wants to feed it to an ML model or other form of automation – industrial and manufacturing are prominent examples, but the principle applies widely across businesses. Whether you're working with IoT data on a factory floor, or medical diagnostic data in a healthcare facility – or one of many other scenarios where AI/ML use cases are rolling out – you probably can't do so optimally if you're trying to send everything (or close to it) on a round-trip from the edge to the cloud and back again. In fact, if you're dealing with huge volumes of data, your trip might never get off the ground. "I've seen situations in manufacturing facilities ...


The autonomous enterprise is near, but there are still some missing pieces

#artificialintelligence

Joe McKendrick is an author and independent analyst who tracks the impact of information technology on management and markets. He has authored numerous research reports in partnership with Forbes Insights, IDC, and Unisphere Research, a division of Information Today, Inc. Building and supporting the artificial intelligence infrastructure that guides our businesses is not an easy job. The applications, data, and networks behind the scenes have to perform as close to flawlessly as possible, in real time. The good news is that AI itself can be employed to provide relief to stressed IT teams. AIOps -- artificial intelligence for IT operations -- is paving the way to autonomous operation of critical enterprise systems.


Where did that sound come from?

#artificialintelligence

The human brain is finely tuned not only to recognize particular sounds, but also to determine which direction they came from. By comparing differences in sounds that reach the right and left ear, the brain can estimate the location of a barking dog, wailing fire engine, or approaching car. MIT neuroscientists have now developed a computer model that can also perform that complex task. The model, which consists of several convolutional neural networks, not only performs the task as well as humans do, it also struggles in the same ways that humans do. "We now have a model that can actually localize sounds in the real world," says Josh McDermott, an associate professor of brain and cognitive sciences and a member of MIT's McGovern Institute for Brain Research.
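The binaural cue described here is easy to make concrete: a source off to one side reaches the nearer ear slightly earlier, and that interaural time difference (ITD) can be recovered classically by cross-correlating the two ear signals. The sketch below is that textbook method, not the MIT model, which instead learns localization end-to-end with convolutional networks; all names in it are illustrative.

```python
import numpy as np

def estimate_itd(left, right, sample_rate):
    """Cross-correlation ITD estimate; positive means the left ear heard it first."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)       # best-matching offset in samples
    return lag / sample_rate

# Example: a 500 Hz tone arriving 0.3 ms earlier at the left ear.
sr = 44100
t = np.arange(0, 0.05, 1 / sr)
left = np.sin(2 * np.pi * 500 * t)
right = np.sin(2 * np.pi * 500 * (t - 0.0003))    # delayed copy
print(f"ITD ~ {estimate_itd(left, right, sr) * 1000:.2f} ms")   # ~0.3 ms
```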


Perfecting pitch perception

#artificialintelligence

New research from MIT neuroscientists suggests that natural soundscapes have shaped our sense of hearing, optimizing it for the kinds of sounds we most often encounter. In a study reported Dec. 14 in the journal Nature Communications, researchers led by McGovern Institute for Brain Research associate investigator Josh McDermott used computational modeling to explore factors that influence how humans hear pitch. Their model's pitch perception closely resembled that of humans -- but only when it was trained using music, voices, or other naturalistic sounds. Humans' ability to recognize pitch -- essentially, the rate at which a sound repeats -- gives melody to music and nuance to spoken language. Although this is arguably the best-studied aspect of human hearing, researchers are still debating which factors determine the properties of pitch perception, and why it is more acute for some types of sounds than others. McDermott, who is also an associate professor in MIT's Department of Brain and Cognitive Sciences, and an Investigator with the Center for Brains, Minds, and Machines (CBMM) at MIT, is particularly interested in understanding how our nervous system perceives pitch because cochlear implants, which send electrical signals about sound to the brain in people with profound deafness, don't replicate this aspect of human hearing very well.
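"The rate at which a sound repeats" is the fundamental frequency (F0). As a point of contrast with the trained deep network in the study, a classical F0 estimator simply finds the period at which a waveform best matches a shifted copy of itself via autocorrelation; the sketch below is that baseline method, with an assumed search range of 50-500 Hz.

```python
import numpy as np

def estimate_f0(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Autocorrelation F0 estimate in Hz, searched between fmin and fmax."""
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(signal) - 1:]                 # keep non-negative lags
    lo = int(sample_rate / fmax)                  # shortest candidate period
    hi = int(sample_rate / fmin)                  # longest candidate period
    period = lo + np.argmax(corr[lo:hi])
    return sample_rate / period

sr = 16000
t = np.arange(0, 0.1, 1 / sr)
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
print(f"estimated F0: {estimate_f0(tone, sr):.1f} Hz")   # ~220 Hz
```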


Machine learning is booming in medicine. It's also facing a credibility crisis

#artificialintelligence

The mad dash accelerated as quickly as the pandemic. Researchers sprinted to see whether artificial intelligence could unravel Covid-19's many secrets -- and for good reason. There was a shortage of tests and treatments for a skyrocketing number of patients. Maybe AI could detect the illness earlier on lung images, and predict which patients were most likely to become severely ill. Hundreds of studies flooded onto preprint servers and into medical journals claiming to demonstrate AI's ability to perform those tasks with high accuracy.


Instagram Opens Up to Help Businesses Handle Customer Service

WSJ.com: WSJD - Technology

Instagram is now allowing developers and businesses to begin integrating the messages they get from consumers on its platform into the outside tools many companies use to manage customer communications. To do so, developers and businesses use an API (application programming interface), which lets two applications communicate with one another. Until now, the third-party tools that help manage such contacts could not connect with direct messages on Instagram, leaving companies grasping for details such as order history when customers contacted them on the platform. The platform's new tool, the Messenger API for Instagram, is meant to help customer service agents and social teams get a more unified look at their increasingly digital customers.
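For a sense of what such an integration looks like, a Send-API-style call posts a JSON payload naming the recipient and the message. The sketch below follows the general Messenger Platform pattern that the Messenger API for Instagram builds on; the API version, access token, and recipient ID are placeholder assumptions, so treat Meta's official documentation, not this sketch, as the interface reference.

```python
import requests

ACCESS_TOKEN = "<page-access-token>"          # placeholder credential
RECIPIENT_ID = "<instagram-scoped-user-id>"   # placeholder recipient

resp = requests.post(
    "https://graph.facebook.com/v12.0/me/messages",  # assumed version/path
    params={"access_token": ACCESS_TOKEN},
    json={
        "recipient": {"id": RECIPIENT_ID},
        "message": {"text": "Thanks for reaching out -- an agent will reply shortly."},
    },
)
print(resp.status_code, resp.json())
```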


Why AI is Harder Than We Think

Mitchell, Melanie

arXiv.org Artificial Intelligence

Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI spring") and periods of disappointment, loss of confidence, and reduced funding ("AI winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.


Deep Neural Networks Help to Explain Living Brains

#artificialintelligence

In the winter of 2011, Daniel Yamins, a postdoctoral researcher in computational neuroscience at the Massachusetts Institute of Technology, would at times toil past midnight on his machine vision project. He was painstakingly designing a system that could recognize objects in pictures, regardless of variations in size, position and other properties -- something that humans do with ease. The system was a deep neural network, a type of computational device inspired by the neurological wiring of living brains. "I remember very distinctly the time when we found a neural network that actually solved the task," he said. It was 2 a.m., a tad too early to wake up his adviser, James DiCarlo, or other colleagues, so an excited Yamins took a walk in the cold Cambridge air. "I was really pumped," he said. It would have counted as a noteworthy accomplishment in artificial intelligence alone, one of many that would make neural networks the darlings of AI technology over the next few years.