Echolocation


A robot bat sheds new light on how they hunt in darkness

Popular Science

The lesser long-nosed bat (Leptonycteris yerbabuenae) is a medium-sized bat found in Central and North America. Biologists and engineers have joined forces to build a new robot bat that's helping us understand how bats use echolocation to hunt for food. By creating a robot that can echolocate, the team mimicked a bat's flight path and explained how bats can quickly determine whether or not their prey is sitting on a leaf. This new bat's-eye view is detailed in a recently published study, led in part by bat scientist and Smithsonian Tropical Research Institute research associate Inga Geipel.



The 2025 Ig Nobel Prizes honor garlicky babies, drunk bats, and more

Popular Science

The annual awards celebrate achievements that make us "laugh, then think." In the weeks before the Nobel Prizes are announced, the scientific community gathers every year for something a little more lighthearted: the Ig Nobel Prizes. Awarded to "honor achievements so surprising that they make people LAUGH, then THINK," the prizes mark their 35th anniversary this year. These awards celebrate science's more unusual contributions, honor the imaginative, and, perhaps most importantly, spur people's interest in science, medicine, and technology. This year's honorees brought us pizza-eating lizards, tipsy bats, nail growth, and more, all celebrating the joy and fun of asking any and all questions.


Do elephants really call to each other by name?

Al Jazeera

In a remarkable experiment where artificial intelligence meets elephants, researchers have successfully demonstrated that the giant mammals call to each other using individual names. According to a new study published in Nature Ecology and Evolution, African savannah elephants in Kenya were observed and recorded, with machine-learning software called Elephant Voices used to analyse calls made between two herds of elephants. The research took place in Samburu National Reserve and Amboseli National Park over four years, including 14 months of fieldwork in which elephants were tracked and observed and their "calls" recorded. Some 469 unique calls, or "rumbles", were captured from the African elephants in the experiment. It has long been known that elephants are highly social animals.


Sperm whale clicks could be the closest thing to a human language yet

New Scientist

Sperm whale calls are far more complex than we thought – and could be an animal communication system that is the closest thing to human language yet discovered. The claim is based on an analysis of thousands of exchanges made by east Caribbean sperm whales (Physeter macrocephalus), which were recorded over several years. "It's really extraordinary to see the possibility of another species on this planet having the capacity for communication," says Daniela Rus at the Massachusetts Institute of Technology. "We used to believe that we are the only ones." Sperm whales are long-lived animals with complex social lives, with females and their young living in small groups.


Efficient and Robust Spiking Neural Circuit for Navigation Inspired by Echolocating Bats

Neural Information Processing Systems

We demonstrate a spiking neural circuit for azimuth angle detection inspired by the echolocation circuits of the horseshoe bat Rhinolophus ferrumequinum and use it to devise a model for navigation and target tracking, capturing several key aspects of information transmission in biology. Our network, using only a simple local-information-based sensor implementing the cardioid angular gain function, operates at a biological spike rate of approximately 10 Hz.
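As a toy illustration of the sensing principle in this abstract (not the paper's spiking circuit), the sketch below encodes azimuth in the firing rates of two cardioid-gain sensors and recovers the angle from their rate ratio. The ±45° sensor orientations, the rate-coding model, and the grid-search inversion are illustrative assumptions; only the cardioid gain function and the ~10 Hz rate come from the abstract.

```python
import numpy as np

def cardioid_gain(azimuth, orientation):
    """Cardioid angular gain: 1 toward the sensor's facing direction, 0 behind it."""
    return 0.5 * (1.0 + np.cos(azimuth - orientation))

def spike_rates(azimuth, base_rate=10.0, sensor_offset=np.pi / 4):
    """Firing rates (Hz) of a left/right sensor pair, ~10 Hz peak as in the paper."""
    left = base_rate * cardioid_gain(azimuth, +sensor_offset)
    right = base_rate * cardioid_gain(azimuth, -sensor_offset)
    return left, right

def estimate_azimuth(left_rate, right_rate, sensor_offset=np.pi / 4):
    """Invert the normalized left/right rate difference by a grid search."""
    grid = np.linspace(-np.pi / 2, np.pi / 2, 1801)  # 0.1 degree resolution
    gl = cardioid_gain(grid, +sensor_offset)
    gr = cardioid_gain(grid, -sensor_offset)
    observed = (left_rate - right_rate) / (left_rate + right_rate)
    predicted = (gl - gr) / (gl + gr)  # denominator is >= 1 - cos(offset) > 0
    return grid[np.argmin(np.abs(predicted - observed))]

true_azimuth = np.deg2rad(20.0)
left, right = spike_rates(true_azimuth)
print(np.rad2deg(estimate_azimuth(left, right)))  # close to 20 degrees
```

The normalized difference (left − right)/(left + right) cancels the unknown overall rate, so only the angular dependence of the cardioid gains is needed to recover the azimuth.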


Accurate Gaussian-Process-based Distance Fields with applications to Echolocation and Mapping

Gentil, Cedric Le, Ouabi, Othmane-Latif, Wu, Lan, Pradalier, Cedric, Vidal-Calleja, Teresa

arXiv.org Artificial Intelligence

This paper introduces a novel method to estimate distance fields from noisy point clouds using Gaussian Process (GP) regression. Distance fields, or distance functions, have gained popularity for applications such as point cloud registration, odometry, SLAM, path planning, and shape reconstruction. A distance field provides a continuous representation of the scene, defined as the shortest distance between any query point and the closest surface. The key concept of the proposed method is the transformation of a GP-inferred latent scalar field into an accurate distance field by using a reverting function related to the kernel inverse. The latent field can be interpreted as a smooth occupancy map. This paper provides the theoretical derivation of the proposed method as well as a novel uncertainty proxy for the distance estimates. The improved performance compared with existing distance fields is demonstrated in simulated experiments. The level of accuracy of the proposed approach enables novel applications that rely on precise distance estimation: this work presents echolocation and mapping frameworks for ultrasonic guided-wave sensing in metallic structures. These methods leverage the proposed distance field with a physics-based measurement model accounting for the propagation of the ultrasonic waves in the material. Real-world experiments are conducted to demonstrate the soundness of these frameworks.
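A minimal numerical sketch of the reverting-function idea, assuming a squared-exponential kernel, for which inverting the kernel gives the closed form d = l·sqrt(−2·ln f) applied to the latent field f. The 2-D toy scene, length scale, and noise level are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf(a, b, length_scale):
    """Squared-exponential kernel between point sets of shape (n, d) and (m, d)."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def latent_occupancy(query, surface_pts, length_scale=0.5, noise=1e-4):
    """GP posterior mean of a latent field regressed to 1 on the surface points."""
    K = rbf(surface_pts, surface_pts, length_scale) + noise * np.eye(len(surface_pts))
    alpha = np.linalg.solve(K, np.ones(len(surface_pts)))
    return rbf(query, surface_pts, length_scale) @ alpha

def revert_to_distance(f, length_scale=0.5):
    """Reverting function for the RBF kernel: f = exp(-d^2 / (2 l^2))
    near a surface, so d = l * sqrt(-2 ln f)."""
    f = np.clip(f, 1e-12, 1.0)
    return length_scale * np.sqrt(-2.0 * np.log(f))

# Toy scene: a flat wall at x = 1, sampled densely along y.
wall = np.column_stack([np.ones(41), np.linspace(-2.0, 2.0, 41)])
query = np.array([[0.0, 0.0]])  # sensor at the origin, 1 m from the wall
dist = revert_to_distance(latent_occupancy(query, wall))
print(dist)  # close to the true distance of 1.0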


The Audio-Visual BatVision Dataset for Research on Sight and Sound

Brunetto, Amandine, Hornauer, Sascha, Yu, Stella X., Moutarde, Fabien

arXiv.org Artificial Intelligence

Vision research has shown remarkable success in understanding our world, propelled by datasets of images and videos. Sensor data from radar, LiDAR, and cameras has supported research in robotics and autonomous driving for at least a decade. However, while visual sensors may fail in some conditions, sound has recently shown potential to complement sensor data. Simulated room impulse responses (RIR) in 3D apartment models became a benchmark dataset for the community, fostering a range of audio-visual research. In simulation, depth is predictable from sound by learning bat-like perception with a neural network. Concurrently, the same was achieved in reality by using RGB-D images and echoes of chirping sounds. Biomimicking bat perception is an exciting new direction but needs dedicated datasets to explore its potential. Therefore, we collected the BatVision dataset to provide large-scale echoes in complex real-world scenes to the community. We equipped a robot with a speaker to emit chirps and a binaural microphone to record their echoes. Synchronized RGB-D images from the same perspective provide visual labels of the traversed spaces. We sampled environments ranging from modern US office spaces to historic French university grounds, indoors and outdoors, with large architectural variety. This dataset will allow research on robot echolocation, general audio-visual tasks, and sound phenomena unavailable in simulated data. We show promising results for audio-only depth prediction and show how state-of-the-art work developed for simulated data can also succeed on our dataset. The data can be downloaded at https://forms.gle/W6xtshMgoXGZDwsE7


Echolocation could give small robots the ability to find lost people

Engadget

Scientists and roboticists have long looked to nature for inspiration when developing new features for machines. In this case, researchers from the University of Toronto were inspired by bats and other animals that rely on echolocation to design a method that gives small robots the ability to navigate by themselves -- one that doesn't need expensive hardware or components too large or too heavy for tiny machines. In fact, according to PopSci, the team used only the integrated audio hardware of an e-puck robot, and built an audio extension deck with a cheap microphone and speakers for a tiny flying drone that can fit in the palm of your hand. The system works just like bat echolocation: it emits sounds across frequencies, which the robot's microphone picks up as they bounce off walls.


Blind as a bat: audible echolocation on small robots

Dümbgen, Frederike, Hoffet, Adrien, Kolundžija, Mihailo, Scholefield, Adam, Vetterli, Martin

arXiv.org Artificial Intelligence

For safe and efficient operation, mobile robots need to perceive their environment, and in particular, perform tasks such as obstacle detection, localization, and mapping. Although robots are often equipped with microphones and speakers, the audio modality is rarely used for these tasks. Compared to the localization of sound sources, for which many practical solutions exist, algorithms for active echolocation are less developed and often rely on hardware requirements that are out of reach for small robots. We propose an end-to-end pipeline for sound-based localization and mapping that is targeted at, but not limited to, robots equipped with only simple buzzers and low-end microphones. The method is model-based, runs in real time, and requires no prior calibration or training. We successfully test the algorithm on the e-puck robot with its integrated audio hardware, and on the Crazyflie drone, for which we design a reproducible audio extension deck. We achieve centimeter-level wall localization on both platforms when the robots are static during the measurement process. Even in the more challenging setting of a flying drone, we can successfully localize walls, which we demonstrate in a proof-of-concept multi-wall localization and mapping demo.
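The chirp-based wall localization described above can be sketched as a matched filter on the recorded audio: the echo appears as a secondary correlation peak, and its delay relative to the direct path converts to a distance. This is a simplified single-reflection simulation, not the authors' model-based pipeline; the sample rate, chirp parameters, attenuation, and guard interval are illustrative assumptions.

```python
import numpy as np

FS = 48_000             # sample rate in Hz (assumed, typical low-end audio)
SPEED_OF_SOUND = 343.0  # m/s in air

def chirp(duration=0.01, f0=2_000.0, f1=8_000.0):
    """Linear up-chirp, the kind of sweep a simple buzzer can approximate."""
    t = np.arange(int(duration * FS)) / FS
    return np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / duration * t**2))

def simulate_echo(signal, wall_distance, attenuation=0.3):
    """Direct path plus one delayed, attenuated reflection off a wall."""
    delay = int(round(2 * wall_distance / SPEED_OF_SOUND * FS))
    recording = np.zeros(len(signal) + delay)
    recording[: len(signal)] += signal                              # direct path
    recording[delay : delay + len(signal)] += attenuation * signal  # echo
    return recording

def estimate_distance(recording, signal):
    """Matched filter: locate the echo peak after the direct-path peak."""
    corr = np.correlate(recording, signal, mode="valid")
    direct = np.argmax(corr)
    guard = int(0.001 * FS)  # skip the direct peak's immediate sidelobes
    echo = direct + guard + np.argmax(corr[direct + guard :])
    return (echo - direct) * SPEED_OF_SOUND / (2 * FS)

s = chirp()
rec = simulate_echo(s, wall_distance=1.5)
print(estimate_distance(rec, s))  # around 1.5 m
```

The round trip divides by two because the sound travels to the wall and back; range resolution here is set by the sample rate and the chirp bandwidth, which is why a wideband sweep outperforms a single tone.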