cochlear implant
Enhancing Cochlear Implant Signal Coding with Scaled Dot-Product Attention
Essaid, Billel, Kheddar, Hamza, Batel, Noureddine
Cochlear implants (CIs) play a vital role in restoring hearing for individuals with severe to profound sensorineural hearing loss by directly stimulating the auditory nerve with electrical signals. While traditional coding strategies, such as the advanced combination encoder (ACE), have proven effective, they are limited in adaptability and precision. This paper investigates the use of deep learning (DL) techniques to generate electrodograms for CIs, presenting our model as an advanced alternative. We compared the performance of our model with the ACE strategy by evaluating the intelligibility of reconstructed audio signals using the short-time objective intelligibility (STOI) metric. The results indicate that our model achieves a STOI score of 0.6031, closely approximating the 0.6126 score of the ACE strategy, while offering potential advantages in flexibility and adaptability. This study underscores the benefits of incorporating artificial intelligence (AI) into CI technology, such as enhanced personalization and efficiency.
- Africa > Middle East > Algeria (0.05)
- Europe > Switzerland (0.04)
- Health & Medicine > Consumer Health (0.57)
- Health & Medicine > Therapeutic Area > Otolaryngology (0.35)
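The attention mechanism named in the title is the standard scaled dot-product formulation. A minimal NumPy sketch is shown below; the feature dimensions and the self-attention setup over spectral frames are illustrative assumptions, since the abstract does not describe the model's architecture:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (T_q, T_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 4 time frames of 8-dim spectral features attend over themselves.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (4, 8); each attention-weight row sums to 1
```

In an electrodogram-generation model, such a layer would let every output frame draw on context from all input frames instead of a fixed local window.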
Subtitling Your Life
A little over thirty years ago, when he was in his mid-forties, my friend David Howorth lost all hearing in his left ear, a calamity known as single-sided deafness. "It happened literally overnight," he said. "My doctor told me, 'We really don't understand why.' " At the time, he was working as a litigator in the Portland, Oregon, office of a large law firm. His hearing loss had no impact on his job--"In a courtroom, you can get along fine with one ear"--but other parts of his life were upended. The brain pinpoints sound sources in part by analyzing minute differences between left-ear and right-ear arrival times, the same process that helps bats and owls find prey they can't see.
- North America > United States > Oregon > Multnomah County > Portland (0.24)
- North America > United States > New York (0.04)
- North America > United States > Connecticut > Hartford County > West Hartford (0.04)
- Leisure & Entertainment (1.00)
- Media (0.94)
- Health & Medicine > Therapeutic Area > Otolaryngology (0.50)
Pruning-aware Loss Functions for STOI-Optimized Pruned Recurrent Autoencoders for the Compression of the Stimulation Patterns of Cochlear Implants at Zero Delay
Hinrichs, Reemt, Ostermann, Jörn
Cochlear implants (CIs) are surgically implanted hearing devices that restore a sense of hearing in people suffering from profound hearing loss. Wireless streaming of audio from external devices to CI signal processors has become commonplace. Specialized compression of the stimulation patterns of a CI by deep recurrent autoencoders can decrease the power consumption of such a wireless streaming application through bit-rate reduction at zero latency. While previous research achieved considerable bit-rate reductions, model sizes were ignored, although they can be of crucial importance in hearing aids due to their limited computational resources. This work investigates maximizing the objective speech intelligibility of the coded stimulation patterns of deep recurrent autoencoders while minimizing model size. For this purpose, a pruning-aware loss is proposed, which captures the impact of pruning during training. Training with this pruning-aware loss is compared to conventional magnitude-informed pruning and is found to yield considerable improvements in objective intelligibility, especially at higher pruning rates. After fine-tuning, little to no degradation of objective intelligibility is observed up to a pruning rate of about 55%. The proposed pruning-aware loss yields substantial gains in objective speech intelligibility after pruning compared to the magnitude-informed baseline for pruning rates above 45%.
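The magnitude-informed baseline the paper compares against can be sketched in a few lines of NumPy; the pruning-aware loss itself is not specified in the abstract, so only the baseline is shown, and the matrix size and pruning rate here are illustrative:

```python
import numpy as np

def magnitude_prune(weights, rate):
    """Zero out the smallest-magnitude fraction `rate` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(rate * flat.size)                 # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only larger-magnitude weights
    return weights * mask

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64))
W_pruned = magnitude_prune(W, 0.55)           # ~55 % of weights zeroed
sparsity = 1.0 - np.count_nonzero(W_pruned) / W.size
print(round(sparsity, 2))  # ~0.55
```

A pruning-aware loss would, per the abstract, fold the effect of this masking into training rather than applying it only afterwards.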
Advanced Artificial Intelligence Algorithms in Cochlear Implants: Review of Healthcare Strategies, Challenges, and Perspectives
Essaid, Billel, Kheddar, Hamza, Batel, Noureddine, Lakas, Abderrahmane, Chowdhury, Muhammad E. H.
Automatic speech recognition (ASR) plays a pivotal role in our daily lives, offering utility not only for interacting with machines but also for facilitating communication for individuals with either partial or profound hearing impairments. The process involves receiving the speech signal in analogue form, followed by various signal processing algorithms to make it compatible with devices of limited capacity, such as cochlear implants (CIs). Unfortunately, these implants, equipped with a finite number of electrodes, often result in speech distortion during synthesis. Despite efforts by researchers to enhance received speech quality using various state-of-the-art signal processing techniques, challenges persist, especially in scenarios involving multiple sources of speech, environmental noise, and other circumstances. The advent of new artificial intelligence (AI) methods has ushered in cutting-edge strategies to address the limitations and difficulties associated with traditional signal processing techniques dedicated to CIs. This review comprehensively surveys advancements in CI-based ASR and speech enhancement, among other related aspects. The primary objective is to provide a thorough overview of metrics and datasets, to explore the capabilities of AI algorithms in this biomedical field, and to summarize and comment on the best results obtained. Additionally, the review delves into potential applications and suggests future directions to bridge existing research gaps in this domain.
- Asia > Middle East > UAE (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- North America > Cuba > La Habana Province > Havana (0.04)
- Research Report > Promising Solution (1.00)
- Overview (1.00)
- Research Report > New Finding (0.93)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Health Care Technology (1.00)
ElectrodeNet -- A Deep Learning Based Sound Coding Strategy for Cochlear Implants
Huang, Enoch Hsin-Ho, Chao, Rong, Tsao, Yu, Wu, Chao-Min
ElectrodeNet, a deep learning based sound coding strategy for the cochlear implant (CI), is proposed to emulate the advanced combination encoder (ACE) strategy by replacing the conventional envelope detection with various artificial neural networks. The extended ElectrodeNet-CS strategy further incorporates channel selection (CS). Network models of deep neural network (DNN), convolutional neural network (CNN), and long short-term memory (LSTM) were trained using the Fast Fourier Transform bins and channel envelopes obtained from the processing of clean speech by the ACE strategy. Objective speech understanding was estimated for ElectrodeNet using CI simulations with the short-time objective intelligibility (STOI) and normalized covariance metric (NCM) measures. Sentence recognition tests for vocoded Mandarin speech were conducted with normal-hearing listeners. DNN, CNN, and LSTM based ElectrodeNets exhibited strong correlations with ACE in objective and subjective scores, as measured by mean squared error (MSE), linear correlation coefficient (LCC), and Spearman's rank correlation coefficient (SRCC). The ElectrodeNet-CS strategy was capable of producing N-of-M compatible electrode patterns using a modified DNN network to embed maxima selection, and performed similarly to, or even slightly better than, ACE on average in STOI and sentence recognition. The methods and findings demonstrate the feasibility and potential of using deep learning in CI coding strategies.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- Asia > Taiwan > Taiwan Province > Taipei (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Consumer Health (1.00)
- Health & Medicine > Therapeutic Area > Otolaryngology (0.92)
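The N-of-M selection that ElectrodeNet-CS reproduces amounts to keeping the largest channel envelopes per frame. A NumPy sketch follows; the frame count, the 22-channel electrode array, and 8 maxima are illustrative assumptions, and the ACE filterbank and loudness mapping that surround this step are not shown:

```python
import numpy as np

def n_of_m_select(envelopes, n):
    """Keep the n largest channel envelopes per frame; zero the rest (N-of-M)."""
    out = np.zeros_like(envelopes)
    for t, frame in enumerate(envelopes):
        idx = np.argsort(frame)[-n:]   # indices of the n spectral maxima
        out[t, idx] = frame[idx]
    return out

# Toy example: 3 frames x 22 channels, keep 8 maxima per frame.
rng = np.random.default_rng(2)
env = rng.random((3, 22))
stim = n_of_m_select(env, 8)
print((stim > 0).sum(axis=1))  # [8 8 8] active channels per frame
```

In ElectrodeNet-CS this hard maxima selection is embedded in a modified DNN so the whole strategy remains trainable end to end.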
Google Is Using AI to Make Hearing Aids More Personalized
Earlier this year, Cochlear, the manufacturer of cochlear implants, announced a collaboration with Google and Australian Hearing Hub members, the National Acoustic Laboratories (NAL), Macquarie University, the Shepherd Centre, and NextSense. The aim is to improve existing hearing-assistance technologies, like hearing aids and cochlear implants, and to develop new solutions for folks experiencing hearing loss. There's a growing awareness that it's important to protect our hearing. Nevertheless, the world faces a hearing loss crisis. According to the World Health Organization, more than 1.5 billion people worldwide live with hearing loss today (430 million with disabling hearing loss), but it predicts that by 2050, those figures will grow to 2.5 billion and 700 million, respectively.
A Novel Channel Selection System in Cochlear Implants Using Artificial Neural Network
State-of-the-art speech processors in cochlear implants perform channel selection using a spectral maxima strategy. This strategy can lead to confusions when high frequency features are needed to discriminate between sounds. We present in this paper a novel channel selection strategy based upon pattern recognition which allows "smart" channel selections to be made. The proposed strategy is implemented using multi-layer perceptrons trained on a multi-speaker labelled speech database. The inputs to the network are the energy coefficients of N energy channels.
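The idea of scoring channels with a network instead of picking raw spectral maxima can be sketched as below. The layer sizes, untrained random weights, and the top-k readout are hypothetical stand-ins; the paper's actual MLP is trained on a labelled speech database:

```python
import numpy as np

rng = np.random.default_rng(3)
N, H = 22, 16                                    # N energy channels, H hidden units (illustrative)
W1, b1 = rng.standard_normal((N, H)) * 0.1, np.zeros(H)
W2, b2 = rng.standard_normal((H, N)) * 0.1, np.zeros(N)

def smart_select(energies, n_active=8):
    """Score channels with a small MLP, then stimulate the n_active highest-scoring ones."""
    h = np.tanh(energies @ W1 + b1)              # hidden layer
    scores = h @ W2 + b2                         # one selection score per channel
    chosen = np.argsort(scores)[-n_active:]      # select by learned score, not raw energy
    return np.sort(chosen)

picked = smart_select(rng.random(N))
print(len(picked))  # 8 channels selected
```

The key design difference from spectral maxima selection is that the score for each channel depends on the whole energy pattern, so a trained network can favour low-energy but discriminative high-frequency channels.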
'Mind-reading' device can analyse the brainwaves of non-verbal, paralysed patients
A new device has been created that can analyse the brainwaves of non-verbal, paralysed patients and turn them into sentences on a computer screen in real time. The 'mind-reading' machine is capable of decoding brain activity as a person silently attempts to spell out words phonetically to create full sentences. Experts say their neuroprosthesis speech device has the potential to restore communication to people who cannot speak or type due to paralysis. Previous research had shown that a similar system was able to decode up to 50 words. However, this was limited to a specific vocabulary and the participant had to attempt to speak the words out loud, which required significant effort, given their paralysis.
A virtual reality-based method for examining audiovisual prosody perception
Meister, Hartmut, Winter, Isa Samira, Waechtler, Moritz, Sandmann, Pascale, Abdellatif, Khaled
Prosody plays a vital role in verbal communication. Acoustic cues of prosody have been examined extensively. However, prosodic characteristics are not only perceived auditorily, but also visually based on head and facial movements. The purpose of this report is to present a method for examining audiovisual prosody using virtual reality. We show that animations based on a virtual human provide motion cues similar to those obtained from video recordings of a real talker. The use of virtual reality opens up new avenues for examining multimodal effects of verbal communication. We discuss the method in the framework of examining prosody perception in cochlear implant listeners.
Deaf education vote is the latest parents' rights battleground in L.A.
The Los Angeles Unified School District is poised to vote on a controversial proposal that could reshape education for thousands of deaf and hard-of-hearing students, a key battle in a long national fight over how such children learn language. Oscar winner Marlee Matlin and the American Civil Liberties Union are among those urging the Board of Education to pass Resolution 029-21/22 at its meeting Tuesday, inaugurating a new Department of Deaf and Hard of Hearing Education. Students would be eligible to receive the state seal of biliteracy on their diplomas, and ASL would be offered as a language course in some high schools. The resolution also would introduce ASL-English bilingual instruction for many of the district's youngest deaf learners -- a move supporters say is critical to language equity and opponents say robs parents of choice and runs afoul of federal education law. "For 400 years at least there's been a big battle between people who think children with hearing loss should speak, and people who think they should use sign language -- it's a very old argument," said Alison M. Grimes, director of audiology and newborn hearing at UCLA Health.
- North America > United States > California > Los Angeles County > Los Angeles (0.25)
- North America > United States > District of Columbia > Washington (0.05)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Otolaryngology (0.95)
- Education > Educational Setting > K-12 Education (0.90)