Collaborating Authors: deepsqueak


Revisiting Wright: Improving supervised classification of rat ultrasonic vocalisations using synthetic training data

Scott, K. Jack, Speers, Lucinda J., Bilkey, David K.

arXiv.org Artificial Intelligence

Rodents communicate through ultrasonic vocalizations (USVs). These calls are of interest because they provide insight into the development and function of vocal communication, and may prove to be useful as a biomarker for dysfunction in models of neurodevelopmental disorders. Rodent USVs can be categorised into different components and while manual classification is time-consuming, advances in neural computing have allowed for fast and accurate identification and classification. Here, we adapt a convolutional neural network (CNN), VocalMat, created for analysing mouse USVs, for use with rats. We codify a modified schema, adapted from that previously proposed by Wright et al. (2010), for classification, and compare the performance of our adaptation of VocalMat with a benchmark CNN, DeepSqueak. Additionally, we test the effect of inserting synthetic USVs into the training data of our classification network in order to reduce the workload involved in generating a training set. Our results show that the modified VocalMat outperformed the benchmark software on measures of both call identification and classification. Additionally, we found that augmenting the training data with synthetic images resulted in a marked improvement in the accuracy of VocalMat when it was subsequently used to analyse novel data. The resulting accuracy on the modified Wright categorisations was sufficiently high to allow for the application of this software in rat USV classification in laboratory conditions. Our findings also show that inserting synthetic USV calls into the training set leads to improvements in accuracy with little extra time-cost.
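The paper itself does not ship code, but the synthetic-call idea is easy to picture. Below is a minimal sketch, assuming numpy and the soundfile package, of how one synthetic rat-like call might be generated: a short frequency sweep with a smooth amplitude envelope, written out so it can be mixed into training recordings. Every parameter here (sampling rate, sweep range, call duration, file name) is an illustrative assumption, not a value from the study.

```python
# Hypothetical sketch of one synthetic rat-like call: a short upward
# frequency sweep with a smooth envelope, saved as a WAV file so it can
# be inserted into training recordings. All parameters are illustrative.
import numpy as np
import soundfile as sf  # pip install soundfile

fs = 250_000                        # sampling rate (Hz); rat USVs commonly fall around 22-80 kHz
dur = 0.05                          # 50 ms call
t = np.arange(int(fs * dur)) / fs

f0, f1 = 50_000, 60_000             # upward sweep from 50 kHz to 60 kHz
inst_freq = f0 + (f1 - f0) * t / dur
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
call = np.sin(phase) * np.hanning(t.size)   # taper onset/offset to avoid clicks

sf.write("synthetic_usv.wav", call.astype("float32"), fs)
```

Varying the sweep direction, duration, and added noise per call would yield a diverse pool of synthetic examples for augmentation.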


This New AI Can Detect The Calls of Animals Swimming in an Ocean of Noise

#artificialintelligence

The ocean is swimming in sound, and a new artificial intelligence tool could help scientists sift through all that noise to track and study marine mammals. The tool is called DeepSqueak, not because it measures dolphin calls in the ocean underworld, but because it is based on a deep learning algorithm that was first used to categorize the different ultrasonic squeals of mice. Now, researchers are applying the technology to vast datasets of marine bioacoustics. Given that much of the ocean is out of our physical reach, underwater sound could help us understand where marine mammals swim, their density and abundance, and how they interact with one another. Already, recordings of whale songs have helped identify an unknown population of blue whales in the Indian Ocean and a never-before-heard species of beaked whale.


Artificial intelligence helps decode sounds of the animal kingdom

#artificialintelligence

Artificial intelligence is helping us understand the language of animals. The technology can analyze hours of animal audio in a fraction of the time the same work would take a human. "If you're manually trying to isolate these calls from audio files, it takes a really long time," said Kevin Coffey, a professor at the University of Washington. Coffey is also one of the creators of DeepSqueak, an A.I. program designed to pick up on high-pitched rat calls that human ears often miss. "In rats, these calls are often related to positive or negative affect," Coffey said.


Deep learning deciphers what rats are saying

#artificialintelligence

For many years, researchers have known that rodents' squeaks can tell a lot about how the animals are feeling. Much like a wagging tail on a dog, certain vocalizations indicate the rodents are happy. Conversely, other vocalizations indicate the rodents are stressed, or even depressed. But why were researchers interested in the rodents' moods? They wanted to understand the animals' responses to various stimuli.


'DeepSqueak' A.I. Decodes Mice Chatter

#artificialintelligence

In what is somehow the cutest science story of the new year so far, scientists at the University of Washington have announced a new artificial intelligence system for decoding mouse squeaks. Dubbed DeepSqueak, the software program can analyze rodent vocalizations and then pattern-match the audio to behaviors observed in laboratory settings. As such, the software can be used to partially decode the language of mice and other rodents. Researchers hope that the technology will prove useful across a broad range of medical and psychological studies. Published this week in the journal Neuropsychopharmacology, the study is based on a novel use of sonogram technology, which transforms an audio signal into an image or series of graphs.
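To make the sonogram step concrete, here is a minimal sketch, assuming the soundfile, scipy, and matplotlib packages, of turning an audio clip into the kind of time-frequency image a vision network can process. The file name and STFT settings are illustrative assumptions, not DeepSqueak's actual parameters.

```python
# Minimal sketch: convert an audio recording into a sonogram image.
# Assumes a mono WAV file; file name and STFT settings are illustrative.
import numpy as np
import soundfile as sf
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

audio, fs = sf.read("recording.wav")
f, t, Sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=384)

plt.pcolormesh(t, f / 1000, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (kHz)")
plt.savefig("sonogram.png", dpi=150)
```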


AI Interprets What Rodents are Saying

#artificialintelligence

Artificial intelligence (AI) has improved greatly in recent years, largely due to advances in deep learning, a method of machine-based learning. Deep learning's superior pattern recognition has spawned a number of advancements in computer vision, translation, speech recognition, and other fields. Deep learning algorithms are being applied in many industries for a variety of purposes. Last month, researchers in the Psychiatry and Behavioral Science department at the University of Washington School of Medicine announced the creation of "DeepSqueak," a deep learning system that can detect and analyze the vocalizations of rodents. Modern science depends on laboratory rodents to serve as mammalian proxies for human test subjects.


DeepSqueak is a deep-learning algorithm used to study ultrasonic rat chatter

#artificialintelligence

Scientists at the University of Washington have released an interesting tool for studying the ultrasonic vocalizations of rats. It is a convolutional neural network capable of analyzing and categorizing rat calls. Studying rat vocalizations is time-consuming for a human. For one thing, most squeaks that rats make are above the 20 kHz threshold of human hearing. So analysis requires slowing recordings down by as much as a factor of 20 to hear the highest vocalizations. This also means that one hour of audio becomes 20 hours of analysis.
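The slow-down trick is simple to reproduce. A minimal sketch, assuming the soundfile package and illustrative file names: writing the same samples back at one-twentieth of the original sample rate plays the recording 20 times slower, so a 50 kHz call drops to an audible 2.5 kHz.

```python
# Slow a recording 20x by rewriting the samples at 1/20 the sample rate,
# bringing ultrasonic calls into the audible range. File names are illustrative.
import soundfile as sf

audio, fs = sf.read("rat_recording.wav")             # e.g. fs = 250_000 Hz
sf.write("rat_recording_slow.wav", audio, fs // 20)  # one hour becomes 20 hours
```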


DeepSqueak is an A.I. That Reveals What Rats are Chatting About

#artificialintelligence

Earlier studies also focused only on simple aspects of vocalizations, such as their frequency range, which can correlate with happy or unhappy emotional states. With DeepSqueak, however, it is possible to look at more subtle features, such as the order of calls and their temporal association with actions and events. Right now, not enough is known about exactly what all rat squeaks refer to. But tools like this make a very promising platform to build upon. As biologists compile more calls over time, it hopefully won't be long before a rodent Rosetta Stone becomes a reality. A paper describing the work was recently published in the journal Neuropsychopharmacology.
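As a toy illustration of what "order of calls" analysis can mean, the sketch below, with entirely made-up labels and onset times, computes inter-call intervals and counts label-to-label transitions from a detected call sequence.

```python
# Hypothetical sketch: given detected call labels and onset times (however
# they were produced), compute inter-call intervals and transition counts.
from collections import Counter

calls = [("trill", 0.12), ("flat", 0.45), ("trill", 0.80), ("trill", 1.10)]  # (label, onset in s)

intervals = [t2 - t1 for (_, t1), (_, t2) in zip(calls, calls[1:])]
transitions = Counter((a, b) for (a, _), (b, _) in zip(calls, calls[1:]))

print(intervals)    # gaps between consecutive calls, in seconds
print(transitions)  # e.g. how often a "flat" call follows a "trill"
```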


'DeepSqueak' Helps Researchers Decode Rodent Chatter

#artificialintelligence

Rodents engage in social communication through a rich repertoire of ultrasonic vocalizations (USVs). Recording and analysis of USVs has broad utility during diverse behavioral tests and can be performed noninvasively in almost any rodent behavioral model to provide rich insights into the emotional state and motor function of the test animal. Despite strong evidence that USVs serve an array of communicative functions, technical and financial limitations have been barriers to most laboratories adopting vocalization analysis. Recently, deep learning has revolutionized the fields of machine hearing and vision by allowing computers to perform human-like activities including seeing, listening, and speaking. Such systems are constructed from biomimetic, "deep" artificial neural networks.


'DeepSqueak' Helps Researchers Decode Rodent Chatter

#artificialintelligence

Two scientists at the University of Washington (UW) School of Medicine have developed a software program to identify and decode rodent vocalizations, representing the first use of deep artificial neural networks in squeak detection. The DeepSqueak deep neural network converts audio signals into an image, or sonogram, which can be further refined with machine-vision algorithms developed for self-driving cars. Said the UW School of Medicine's Russell Marx, "DeepSqueak uses biomimetic algorithms that learn to isolate vocalizations by being given labeled examples of vocalizations and noise." According to co-developer Kevin Coffey, the program can distinguish between about 20 kinds of rodent calls.
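DeepSqueak's actual detector is not reproduced here, but the labeled-examples idea maps onto a very small supervised model. Below is a minimal sketch, assuming PyTorch and made-up patch dimensions, of a CNN that scores fixed-size sonogram patches as "vocalization" versus "noise"; it illustrates the training setup described above, not the published architecture.

```python
# Minimal sketch, not DeepSqueak's architecture: a tiny CNN that classifies
# 64x64 sonogram patches as vocalization vs. noise from labeled examples.
import torch
import torch.nn as nn

class CallVsNoiseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # logits: [noise, call]

    def forward(self, x):            # x: (batch, 1, 64, 64) sonogram patches
        h = self.features(x)
        return self.classifier(h.flatten(1))

net = CallVsNoiseNet()
patches = torch.randn(8, 1, 64, 64)               # stand-in for real patches
labels = torch.randint(0, 2, (8,))                # stand-in for human labels
loss = nn.CrossEntropyLoss()(net(patches), labels)
loss.backward()                                    # one supervised training step
```

A patch classifier like this only says whether a call is present; localizing calls in time and frequency is what motivates the machine-vision approach mentioned above.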