rodent vocalization
Vocal Call Locator Benchmark (VCL) for localizing rodent vocalizations from multi-channel audio
Understanding the behavioral and neural dynamics of social interactions is a goal of contemporary neuroscience. Many machine learning methods have emerged in recent years to make sense of the complex video and neurophysiological data that result from these experiments. Less focus has been placed on understanding how animals process acoustic information, including social vocalizations. A critical step to bridge this gap is determining the senders and receivers of acoustic information in social interactions. While sound source localization (SSL) is a classic problem in signal processing, existing approaches are limited in their ability to localize animal-generated sounds in standard laboratory environments.
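Classical SSL pipelines typically start by estimating the time difference of arrival (TDOA) of a sound between pairs of microphones, since relative delays constrain the source position. As a generic illustration of that first step (not the benchmark's own method), the sketch below estimates the inter-channel lag from the peak of a cross-correlation; the function name `tdoa_crosscorr` and the synthetic pulse signal are hypothetical.

```python
import numpy as np

def tdoa_crosscorr(sig_a, sig_b, fs):
    """Estimate the time difference of arrival between two microphone
    channels from the peak of their full cross-correlation.
    A positive result means sig_a lags sig_b."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)  # lag in samples
    return lag / fs                                 # lag in seconds

# Synthetic check: the same pulse arriving 5 samples later on channel A.
fs = 48_000                      # assumed sample rate (Hz)
pulse = np.zeros(1024)
pulse[100] = 1.0                 # impulse on the reference channel
delayed = np.roll(pulse, 5)      # 5-sample delayed copy
print(tdoa_crosscorr(delayed, pulse, fs))  # 5-sample lag -> 5/48000 s
```

Real rodent calls are broadband and reverberant, so practical systems replace the raw cross-correlation with more robust variants (e.g. phase-transform weighting) and then triangulate from several microphone pairs; this sketch only shows the core delay estimate.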
AI Interprets What Rodents are Saying
Artificial intelligence (AI) has improved greatly in recent years, largely due to advances in deep learning, a machine learning approach built on multi-layer neural networks. Deep learning's superior pattern recognition has driven advances in computer vision, machine translation, speech recognition, and other applications, and its algorithms are now used across many industries. Last month, researchers in the Psychiatry and Behavioral Science department at the University of Washington School of Medicine announced the creation of "DeepSqueak," a deep learning system that can detect and analyze the vocalizations of rodents. Modern science depends on laboratory rodents to serve as mammalian proxies for human test subjects.