How to Use AI to Talk to Whales--and Save Life on Earth
Before Michelle Fournet moved to Alaska on a whim in her early twenties, she'd never seen a whale. She took a job on a whale watching boat and, each day she was out on the water, gazed at the grand shapes moving under the surface. For her entire life, she realized, the natural world had been out there, and she'd been missing it. "I didn't even know I was bereft," she recalls. Later, as a graduate student in marine biology, Fournet wondered what else she was missing.
- North America > United States > Alaska (0.27)
- North America > United States > New Hampshire (0.05)
Hearing better with skin than ears: Research team develops a sound-sensing skin-attachable acoustic sensor
Voice recognition technology is increasingly prevalent. It is a convenient technology with broad applications. However, to get the most out of it, users must stand near the device and articulate carefully. What if the skin on our bodies could recognize voices without the use of devices? Professor Kilwon Cho and Dr. Siyoung Lee of the Department of Chemical Engineering, together with Professor Wonkyu Moon and Dr. Junsoo Kim of the Department of Mechanical Engineering at POSTECH, have developed a microphone that detects sound by applying polymer materials to microelectromechanical systems (MEMS). The small, thin microphone demonstrates a wider auditory field than human ears and can be easily attached to the skin. The work was recently published in Advanced Materials.
Sound location inspired by bat ears could help robots navigate outdoors
Sound location technology has often been patterned around the human ear, but why do that when bats are clearly better at it? Virginia Tech researchers have certainly asked that question. They've developed a sound location system that mates a bat-like ear design with a deep neural network to pinpoint sounds within half a degree -- a pair of human ears is only accurate within nine degrees, and even the latest technology stops at 7.5 degrees. The system flutters the outer ear to create Doppler shift signatures related to the sound's source. As the patterns are too complex to easily decipher, the team trained the neural network to provide the source direction for every received echo.
AI-generated sound effects are now fooling human ears
If you'll permit us to spoil a little bit of movie magic, many of the sound effects you hear in film and TV are actually recreated and edited in later by Foley artists. Now, researchers are attempting to create sound effect-generating artificial intelligence to see if it can do the job well enough to fool the general population. In a recent study, a small cohort of participants fell for the trick: most of them believed that the AI-generated noises were real, IEEE Spectrum reports. Sometimes, they even chose the AI version over a video's original audio. In the study, which was published in June in IEEE Transactions on Multimedia, 41 of the 53 participants were fooled by the AI-generated sounds.
Amazon 'human error' let Alexa user eavesdrop on 1,700 private audio files from another person
Researchers at China's Zhejiang University published a study last year showing that many of the most popular smart speakers and smartphones equipped with digital assistants could be easily tricked into being controlled by hackers. They used a technique called DolphinAttack, which translates voice commands into ultrasonic frequencies that are too high for the human ear to hear. While the commands go unheard by humans, the ultrasonic audio can still be picked up, recovered, and interpreted by speech recognition systems. The team was able to launch the attacks, which use frequencies above 20 kHz, with less than £2.20 ($3) of equipment attached to a Galaxy S6 Edge: an external battery, an amplifier, and an ultrasonic transducer.
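The core idea behind this kind of attack can be illustrated in miniature: amplitude-modulate a baseband "voice" signal onto an ultrasonic carrier, so the transmitted energy sits above the ~20 kHz limit of human hearing while remaining recoverable after demodulation. The sketch below is a simplified illustration of that modulation step only (the function name `ultrasonic_modulate`, the 25 kHz carrier, and the 400 Hz test tone are assumptions for the example, not details from the actual DolphinAttack work):

```python
import numpy as np

def ultrasonic_modulate(voice, fs, carrier_hz=25_000.0, depth=1.0):
    """Amplitude-modulate a baseband signal onto an ultrasonic carrier.

    The result is inaudible to people; a nonlinearity in a microphone's
    analog front end can demodulate it back into the audible band, which
    is the effect the DolphinAttack research exploited.
    """
    t = np.arange(len(voice)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Classic AM: (1 + depth * m(t)) * carrier, with m(t) normalized to [-1, 1].
    m = voice / max(np.max(np.abs(voice)), 1e-12)
    return (1.0 + depth * m) * carrier

# Example: a 400 Hz tone standing in for a voice command, modulated onto
# a 25 kHz carrier. Sampling at 96 kHz keeps the ultrasonic band representable.
fs = 96_000
t = np.arange(fs) / fs                      # 1 second of samples
voice = np.sin(2 * np.pi * 400 * t)
tx = ultrasonic_modulate(voice, fs)

# All transmitted energy is near the carrier (25 kHz +/- 400 Hz sidebands),
# i.e. entirely above the 20 kHz threshold of human hearing.
spectrum = np.abs(np.fft.rfft(tx))
freqs = np.fft.rfftfreq(len(tx), 1 / fs)
peak = freqs[np.argmax(spectrum)]           # dominant frequency, in Hz
```

The real attack additionally depends on hardware nonlinearity in the target microphone to self-demodulate the signal; this sketch only shows why the broadcast itself is inaudible.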
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Speech > Speech Recognition (1.00)
Woman says her Echo device recorded and sent a private conversation
Be careful what you say around your Echo devices. A Portland woman was shocked to discover that an Echo had recorded and sent audio of a private conversation to one of her contacts without her knowledge, according to KIRO 7. The woman, identified only as Danielle, said her family had installed the popular voice-activated speakers throughout their home. It wasn't until a random contact called to let them know that he'd received a call from Alexa that they realized their device had mistakenly transmitted a private conversation. The contact, one of her husband's employees, told the woman to "unplug your Alexa devices right now." "We unplugged all of them and he proceeded to tell us that he had received audio files of recordings from inside our house," the woman said.
- North America > United States > California > Alameda County > Berkeley (0.05)
- Asia > China (0.05)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.75)
- Information Technology > Artificial Intelligence > Speech > Speech Recognition (0.53)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.52)
Alexa, Siri, and Google Assistant can hear silent commands that you can't
A series of studies has shown that it's possible to secretly give silent commands to voice assistants like Amazon Alexa and Google Assistant without their owners ever knowing. According to the New York Times, researchers in both China and the U.S. have carried out experiments proving that commands undetectable to the human ear can be communicated to voice assistants like Siri, Alexa, and Google Assistant. The findings raise a variety of security concerns, as they reveal just how vulnerable voice assistant data could be. In one study conducted by Georgetown University and the University of California, Berkeley in 2016, student researchers successfully hid secret voice commands with the help of white noise. The students were able to get smart devices to switch over to airplane mode and navigate to websites by hiding commands in white noise that was played through YouTube videos and loudspeakers.
- North America > United States > California > Alameda County > Berkeley (0.26)
- Asia > China (0.26)
Study finds hackers can control Siri, Alexa and Google Assistant using inaudible commands
Researchers have developed a way to hijack popular voice assistants, including Apple's Siri, Google's Assistant, and Amazon's Echo, right under the user's nose. All it takes is slipping some secret commands into music playing on the radio, YouTube videos, or white noise for someone to control your smart speaker. The commands are undetectable to the human ear, so there's little the device owner can do to stop it. Luckily, the disconcerting vulnerability was only carried out for the study, which was conducted by researchers from the University of California, Berkeley. But it still highlights a critical flaw that experts warn could be used for far more nefarious purposes, such as unlocking doors, wiring money, or purchasing items online, according to the New York Times.
- North America > United States > California > Alameda County > Berkeley (0.25)
- Asia > China (0.06)
Microsoft hits a speech recognition milestone with a system just as good as human ears
It's a red-letter day at Microsoft Research: a team working on speech recognition has hit a serious symbolic goal with a system that's as good as you at hearing what people are saying. Specifically, the system has a "word error rate" of 5.9 percent, on par with professional human transcribers. Even they don't hear things perfectly, of course, but 94 percent accuracy is more than good enough for conversation. "This accomplishment is the culmination of over twenty years of effort," said Geoffrey Zweig, one of the researchers, in a Microsoft blog post. Indeed, speech recognition is one of those tasks that's been pursued for decades by pretty much every major tech business and research outfit.
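The "word error rate" figure quoted above is the standard metric for speech recognition: the word-level edit distance between the system's transcript and a reference transcript (substitutions, insertions, and deletions), divided by the number of reference words. A 5.9 percent WER therefore corresponds to roughly 94 percent accuracy. Here is a minimal sketch of the computation (the function name and the example sentences are illustrative, not from the Microsoft system):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference words,
    computed via word-level Levenshtein edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # deleting all remaining ref words
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # inserting all remaining hyp words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a five-word reference -> WER of 0.2 (20%).
wer = word_error_rate("turn on the kitchen lights",
                      "turn on the kitten lights")
```

By this measure, "on par with professional human transcribers" means the system's edit distance from the reference is about the same as a human transcriber's, not that either is perfect.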
This Algorithm Is Good Enough to Fool the Human Ear
Researchers at MIT have created robots that pass an audio Turing Test, fooling humans into thinking that artificially-created sounds are natural. A robot's sounds are crucial to its interactions with people. The sounds a robot makes can identify its purpose and usefulness. But current "approaches in AI only focus on one of the five sense modalities, with vision researchers using images, speech researchers using audio, and so on," says Abhinav Gupta, an assistant professor of robotics at Carnegie Mellon University not involved in MIT's work. "This paper is a step in the right direction to mimic learning the way humans do, by integrating sound and sight."