hearing aids


Elehear Delight Hearing Aids Review: Good Fit, Poor Sound

WIRED

Even moderate volume settings led to blunt, distorted, and often painful amplification, and the companion app is clunky at best. "Delight" is a bold choice of name for any type of tech product, but it's especially ambitious in the world of hearing aids, where "begrudgingly tolerate" is the highest praise typically offered. Undaunted, Elehear's latest over-the-counter release aims to raise the bar on user satisfaction, featuring a major design change and leveraging a new AI algorithm (naturally) to improve noise reduction and reduce feedback. Designed as an in-the-ear device with discretion in mind, the Delight cuts a much different profile than the more traditional, behind-the-ear Beyond Pro and Beyond hearing aids. The big question: can it perform as well as those BTE offerings?


New Hearing Aid Company, Fortell, Brings in Steve Martin and Others as Fans

WIRED

Well, Who Do You Know? AI-powered startup Fortell has become a secret handshake for the privileged hearing-impaired crowd who swear by the product. Now, it wants to be in your ears. A secret is percolating at dinner parties, salons, and cocktail gatherings among the august New York City elite. It's whispered in the circles of financial masters of the universe, Hollywood stars, and owners of sports teams. Many haven't heard of it--or if they did hear, they might not have made out the words through noisy cross-conversations. Once they do know--particularly if they're boomers--they want it desperately. Fortell is a hearing aid, one that claims to use AI to provide a dramatically superior aural experience. The chosen few included in its beta test claim that it seems to top the performance of high-end devices they'd been unhappily using. These testers have made pilgrimages to Fortell's headquarters on the fifth floor of a WeWork facility in New York City's trendy SoHo neighborhood, where they were fitted for the hearing aids--which from the outside look pretty much like standard, over-the-ear, teardrop-shaped devices. But the big moment comes when a Fortell staffer takes them down to street level.


Speech Separation for Hearing-Impaired Children in the Classroom

Olalere, Feyisayo, van der Heijden, Kiki, Stronks, H. Christiaan, Briaire, Jeroen, Frijns, Johan H. M., Güçlütürk, Yagmur

arXiv.org Artificial Intelligence

Figure 1 illustrates the spatialization process: simulating room and listener acoustic properties (A), modeling talkers' movement trajectories (B), and synthesizing classroom speech mixtures (C); the numbers (1)-(5) correspond to the steps itemized in Section II-B. The separation model is trained to output time-domain waveforms for each speaker with no interference from the other speaker or background noise. This setup enables the model not only to separate overlapping speech but also to preserve the spatial distinctions associated with each moving source.

B. Simulation of Overlapping Speech for Classroom Conditions

To capture the reverberant and spatial characteristics typical of classroom environments, we developed a spatialization pipeline for generating training and evaluation data (see Fig. 1). This pipeline consists of five main components, explained below in detail:

1) Simulation of room impulse responses (RIRs)
2) Application of head-related impulse responses (HRIRs)
3) Generation of binaural room impulse responses (BRIRs)
4) Modeling of talkers' movement trajectories
5) Synthesis of the classroom speech data

1) Room Impulse Responses: To simulate naturalistic reverberant classroom acoustics, we generated RIRs that capture direct sound, early reflections, and reverberation. These RIRs were used to spatialize source signals in simulated classroom environments with varying geometry, reverberation, and source-listener distances. We used the Pyroomacoustics Python package [35], which implements the image-source method to model sound propagation in rectangular (shoebox) rooms. A total of 30 classrooms were simulated, with dimensions randomly sampled from a range of 8.5 × 8.5 × 3 m to 10 × 10 × 3.5 m (length × width × height), reflecting typical U.S. classroom sizes [36], [37].
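The RIR and synthesis steps can be illustrated with a self-contained toy. The paper uses Pyroomacoustics' image-source method; the sketch below substitutes a hand-built impulse response (a direct-path spike plus a decaying reflection tail) so it runs with NumPy alone. `toy_rir`, `spatialize_mix`, the reverberation time, and all geometry values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

fs = 16000
c = 343.0  # speed of sound, m/s

def toy_rir(distance_m, rt60=0.5, n_taps=4000, seed=0):
    """Toy room impulse response: a direct-path spike with 1/r attenuation
    plus an exponentially decaying noise tail standing in for early
    reflections and reverberation."""
    rng = np.random.default_rng(seed)
    rir = np.zeros(n_taps)
    delay = int(fs * distance_m / c)          # direct-sound arrival sample
    rir[delay] = 1.0 / max(distance_m, 1.0)   # 1/r distance attenuation
    t = np.arange(n_taps) / fs
    tail = rng.standard_normal(n_taps) * np.exp(-6.9 * t / rt60)  # ~60 dB decay over rt60
    tail[: delay + 1] = 0.0                   # tail starts after the direct sound
    return rir + 0.05 * tail

def spatialize_mix(dry_a, dry_b, dist_a, dist_b):
    """Convolve each dry talker with its RIR and sum into one mixture
    (step 5, synthesis of the classroom speech data)."""
    wet_a = np.convolve(dry_a, toy_rir(dist_a, seed=1))
    wet_b = np.convolve(dry_b, toy_rir(dist_b, seed=2))
    mix = np.zeros(max(len(wet_a), len(wet_b)))
    mix[: len(wet_a)] += wet_a
    mix[: len(wet_b)] += wet_b
    return mix

# Two synthetic "talkers" at different distances from the listener.
talker_a = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)
talker_b = np.sin(2 * np.pi * 330 * np.arange(fs) / fs)
mixture = spatialize_mix(talker_a, talker_b, dist_a=2.0, dist_b=5.0)
```

In the paper's pipeline, the hand-built `toy_rir` would be replaced by RIRs from `pyroomacoustics`, further convolved with HRIRs to obtain binaural BRIRs before mixing.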


Advances in Intelligent Hearing Aids: Deep Learning Approaches to Selective Noise Cancellation

Khan, Haris, Asif, Shumaila, Nasir, Hassan, Bhatti, Kamran Aziz, Sheikh, Shahzad Amin

arXiv.org Artificial Intelligence

The integration of artificial intelligence into hearing assistance marks a paradigm shift from traditional amplification-based systems to intelligent, context-aware audio processing. This systematic literature review evaluates advances in AI-driven selective noise cancellation (SNC) for hearing aids, highlighting technological evolution, implementation challenges, and future research directions. We synthesize findings across deep learning architectures, hardware deployment strategies, clinical validation studies, and user-centric design. The review traces progress from early machine learning models to state-of-the-art deep networks, including Convolutional Recurrent Networks for real-time inference and Transformer-based architectures for high-accuracy separation. Key findings include significant gains over traditional methods, with recent models achieving up to 18.3 dB SI-SDR improvement on noisy-reverberant benchmarks, alongside sub-10 ms real-time implementations and promising clinical outcomes. Yet, challenges remain in bridging lab-grade models and real-world deployment - particularly around power constraints, environmental variability, and personalization. Identified research gaps include hardware-software co-design, standardized evaluation protocols, and regulatory considerations for AI-enhanced hearing devices. Future work must prioritize lightweight models, continual learning, context-based classification, and clinical translation to realize transformative hearing solutions for millions globally.


Subtitling Your Life

The New Yorker

A little over thirty years ago, when he was in his mid-forties, my friend David Howorth lost all hearing in his left ear, a calamity known as single-sided deafness. "It happened literally overnight," he said. "My doctor told me, 'We really don't understand why.' " At the time, he was working as a litigator in the Portland, Oregon, office of a large law firm. His hearing loss had no impact on his job--"In a courtroom, you can get along fine with one ear"--but other parts of his life were upended. The brain pinpoints sound sources in part by analyzing minute differences between left-ear and right-ear arrival times, the same process that helps bats and owls find prey they can't see.


Windows 11's 2024 Update: 5 big changes I really like (and more)

PCWorld

The big Windows 11 2024 Update (also known as Windows 11 24H2) is both a brand-new operating system and one that's been out for several months now. And its best features are reserved for those who have invested in a next-gen Copilot PC powered by chips from Qualcomm, Intel, and AMD. These seeming contradictions are at the heart of Windows 11 24H2, which begins a "phased" rollout today that will last several weeks. But when you get it and what you get with it will all depend on whether you own a Copilot PC. In other words, there's a set of basic features that everyone will receive (including new energy-saving features for laptops and desktops, improved smartphone integration, plus support for Wi-Fi 7 and the upgraded 80Gbps capabilities of USB4), along with more advanced features that are only available to Copilot PC users.


Towards sub-millisecond latency real-time speech enhancement models on hearables

Dementyev, Artem, Reddy, Chandan K. A., Wisdom, Scott, Chatlani, Navin, Hershey, John R., Lyon, Richard F.

arXiv.org Artificial Intelligence

Low latency models are critical for real-time speech enhancement applications, such as hearing aids and hearables. However, the sub-millisecond latency space for resource-constrained hearables remains underexplored. We demonstrate speech enhancement using a computationally efficient minimum-phase FIR filter, enabling sample-by-sample processing to achieve mean algorithmic latency of 0.32 ms to 1.25 ms. With a single microphone, we observe a mean SI-SDRi of 4.1 dB. The approach shows generalization with a DNSMOS increase of 0.2 on unseen audio recordings. We use a lightweight LSTM-based model of 644k parameters to generate FIR taps. We show through benchmarks that our system can run on a low-power DSP with 388 MIPS and a mean end-to-end latency of 3.35 ms. We provide a comparison with baseline low-latency spectral masking techniques. We hope this work will enable a better understanding of latency and can be used to improve the comfort and usability of hearables.
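The abstract's central idea can be sketched in a few lines: instead of block-based spectral masking, the enhancer applies a short causal FIR filter sample by sample, so algorithmic latency is bounded by the filter's span rather than by an FFT frame. In the paper the taps come from a 644k-parameter LSTM; the fixed 8-tap moving average below is only an illustrative stand-in, and `streaming_fir` and all signal values are toy choices, not the authors' code:

```python
import numpy as np

def streaming_fir(x, taps):
    """Apply a causal FIR filter one sample at a time, as a hearable's DSP
    would: latency comes from the filter span, not from a block/FFT size."""
    state = np.zeros(len(taps))    # delay line of past input samples
    y = np.empty_like(x)
    for n, sample in enumerate(x):
        state = np.roll(state, 1)  # shift the delay line by one sample
        state[0] = sample
        y[n] = taps @ state        # dot product = FIR output for this sample
    return y

fs = 16000
t = np.arange(fs // 10) / fs
clean = np.sin(2 * np.pi * 300 * t)          # stand-in for speech
noise = 0.5 * np.sin(2 * np.pi * 6000 * t)   # stand-in for high-frequency noise
taps = np.ones(8) / 8.0                      # 8-tap moving-average "enhancer"
enhanced = streaming_fir(clean + noise, taps)
```

At 16 kHz, an 8-tap delay line spans 0.5 ms, the sub-millisecond regime the paper targets; in the paper's system the tap values would additionally be updated over time by the LSTM.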


Apple earphones will turn into hearing aids with new software update - here's how it works

Daily Mail - Science & tech

Apple revealed its new iPhone last night with typical fanfare, along with a nifty little feature for its earphones that are already on the market. A software update to the firm's £229 AirPods Pro 2, first released in 2022, will let wearers use them as a clinical-grade hearing aid. The feature boosts specific sounds in real time without a lag – such as another person's speech or environmental noises like traffic. Apple hopes it will shake up the market for hearing aids, which typically cost £400 or more. It will be available as part of iOS 18, Apple's new software update coming later this month.


The first Cadenza challenges: using machine learning competitions to improve music for listeners with a hearing loss

Dabike, Gerardo Roa, Akeroyd, Michael A., Bannister, Scott, Barker, Jon P., Cox, Trevor J., Fazenda, Bruno, Firth, Jennifer, Graetzer, Simone, Greasley, Alinka, Vos, Rebecca R., Whitmer, William M.

arXiv.org Artificial Intelligence

It is well established that listening to music is an issue for those with hearing loss, and hearing aids are not a universal solution. How can machine learning be used to address this? This paper details the first application of the open challenge methodology to use machine learning to improve audio quality of music for those with hearing loss. The first challenge was a stand-alone competition (CAD1) and had 9 entrants. The second was a 2024 ICASSP grand challenge (ICASSP24) and attracted 17 entrants. The challenge tasks concerned demixing and remixing pop/rock music to allow a personalised rebalancing of the instruments in the mix, along with amplification to correct for raised hearing thresholds. The software baselines provided for entrants to build upon used two state-of-the-art demix algorithms: Hybrid Demucs and Open-Unmix. Evaluation of systems was done using the objective metric HAAQI, the Hearing-Aid Audio Quality Index. No entrants improved on the best baseline in CAD1 because there was insufficient room for improvement. Consequently, for ICASSP24 the scenario was made more difficult by using loudspeaker reproduction and specified gains to be applied before remixing. This also made the scenario more useful for listening through hearing aids. 9 entrants scored better than the best ICASSP24 baseline. Most entrants used a refined version of Hybrid Demucs and NAL-R amplification. The highest scoring system combined the outputs of several demixing algorithms in an ensemble approach. These challenges are now open benchmarks for future research with the software and data being freely available.
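The demix-then-remix recipe described above reduces, at its core, to applying a per-stem gain in dB and summing, once a separator such as Hybrid Demucs has produced the stems. A minimal sketch, in which the `remix` helper, the stem names, and the gain values are illustrative choices rather than anything specified by the challenge:

```python
import numpy as np

def remix(stems, gains_db):
    """Apply a per-stem gain in dB to each demixed stem and sum the result
    back into a single personalised mix."""
    out = np.zeros_like(next(iter(stems.values())))
    for name, signal in stems.items():
        out = out + signal * 10 ** (gains_db.get(name, 0.0) / 20.0)
    return out

fs = 16000
t = np.arange(fs) / fs
stems = {
    "vocals": np.sin(2 * np.pi * 440 * t),    # stand-in for a demixed vocal stem
    "bass": 0.5 * np.sin(2 * np.pi * 80 * t), # stand-in for a demixed bass stem
}
# A listener who wants vocals lifted by 6 dB and bass pulled back by 6 dB.
personalised = remix(stems, {"vocals": 6.0, "bass": -6.0})
```

In the challenge pipeline, the remixed signal would then pass through prescription amplification (e.g. NAL-R gains matched to the listener's audiogram) before evaluation with HAAQI.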


Signia Pure Charge&Go IX Hearing Aids Review: Great AI-Powered Audio, for a Price

WIRED

Signia was an early mover in the in-ear hearing aid world when it released its Active Pro line two years ago, and the industry has continued to evolve dramatically since. While there are plenty more in-ear aids on the market today, Signia's bread and butter is found in the more traditional side of the hearing aid world, with new behind-the-ear models launching regularly. The latest of these is the Pure Charge&Go IX. The IX in the name isn't a Roman numeral nine but rather shorthand for Integrated Xperience, which Signia claims is "the world's first hearing tech platform capable of pinpointing multiple conversation partners in real time, providing unprecedented sound clarity and definition for wearers in multi-speaker scenarios." The company says the IX is built around a wholly new platform focused on optimizing multiparty conversations in noisy environments.