earphone


Apple earphones will turn into hearing aids with new software update - here's how it works

Daily Mail - Science & tech

Apple revealed its new iPhone last night with typical fanfare, along with a nifty little feature for its earphones that are already on the market. A software update to the firm's £229 AirPods Pro 2, first released in 2022, will let wearers use them as a clinical-grade hearing aid. The feature boosts specific sounds in real time without a lag – such as another person's speech or environmental noises like traffic. Apple hopes it will shake up the market for hearing aids, which typically cost £400 or more. It will be available as part of iOS 18, Apple's new software update coming later this month.


Wireless Earphone-based Real-Time Monitoring of Breathing Exercises: A Deep Learning Approach

Wazir, Hassam Khan, Waghoo, Zaid, Kapila, Vikram

arXiv.org Artificial Intelligence

Several therapy routines include deep breathing exercises as a key component, and patients undergoing such therapies must perform these exercises regularly. Assessing the outcome of a therapy and tailoring its course necessitates monitoring a patient's compliance. While compliance monitoring is routine in a clinical environment, it is challenging in an at-home setting, which lacks the specialized equipment and skilled professionals needed to effectively monitor a patient's performance of a therapy routine. For some types of therapies, these challenges can be addressed with consumer-grade hardware, such as earphones and smartphones. To accurately monitor breathing exercises using wireless earphones, this paper proposes a framework with the potential to assess a patient's compliance with an at-home therapy. The proposed system performs real-time detection of breathing phases and channels with high accuracy by processing a 500 ms audio signal through two convolutional neural networks. The first network, called a channel classifier, distinguishes between nasal breathing, oral breathing, and a pause. The second network, called a phase classifier, determines whether the audio segment comes from inhalation or exhalation. Under k-fold cross-validation, the channel and phase classifiers achieved maximum F1 scores of 97.99% and 89.46%, respectively. The results demonstrate the potential of commodity earphones for real-time breathing channel and phase detection in breathing therapy compliance monitoring.
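The two-classifier pipeline described in the abstract can be sketched as follows. This is an illustrative mock-up only: the paper's actual CNN architectures, sampling rate, and feature extraction are not given here, so the two classifiers are stand-in rules, and only the 500 ms windowing and the channel/phase label fusion follow the description above.

```python
SAMPLE_RATE = 16_000          # assumed sampling rate (not stated in the abstract)
WINDOW = SAMPLE_RATE // 2     # 500 ms of audio per classification step

def mean_abs(xs):
    return sum(abs(x) for x in xs) / len(xs)

def channel_classifier(segment):
    """Stand-in for the CNN that labels a segment nasal / oral / pause."""
    # Placeholder rule: near-silent segments are treated as a pause.
    return "pause" if mean_abs(segment) < 1e-3 else "nasal"

def phase_classifier(segment):
    """Stand-in for the CNN that labels a segment inhalation / exhalation."""
    # Placeholder rule: rising energy suggests inhalation, falling exhalation.
    half = len(segment) // 2
    return "inhale" if mean_abs(segment[half:]) >= mean_abs(segment[:half]) else "exhale"

def monitor(audio):
    """Slide a 500 ms window over the stream and fuse the two labels."""
    labels = []
    for start in range(0, len(audio) - WINDOW + 1, WINDOW):
        seg = audio[start:start + WINDOW]
        ch = channel_classifier(seg)
        # The phase label is only meaningful when breathing is present.
        ph = phase_classifier(seg) if ch != "pause" else None
        labels.append((ch, ph))
    return labels

# One second of silence followed by one second of constant-level "breath" signal.
audio = [0.0] * SAMPLE_RATE + [0.1] * SAMPLE_RATE
print(monitor(audio))
# → [('pause', None), ('pause', None), ('nasal', 'inhale'), ('nasal', 'inhale')]
```

Running the channel classifier first and gating the phase classifier on it mirrors how the paper separates the two decisions rather than training a single multi-way classifier.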


MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning

Sprague, Zayne, Ye, Xi, Bostrom, Kaj, Chaudhuri, Swarat, Durrett, Greg

arXiv.org Artificial Intelligence

While large language models (LLMs) equipped with techniques like chain-of-thought prompting have demonstrated impressive capabilities, they still fall short in their ability to reason robustly in complex settings. However, evaluating LLM reasoning is challenging because system capabilities continue to grow while benchmark datasets for tasks like logical deduction have remained static. We introduce MuSR, a dataset for evaluating language models on multistep soft reasoning tasks specified in a natural language narrative. This dataset has two crucial features. First, it is created through a novel neurosymbolic synthetic-to-natural generation algorithm, enabling the construction of complex reasoning instances that challenge GPT-4 (e.g., murder mysteries roughly 1000 words in length) and which can be scaled further as more capable LLMs are released. Second, our dataset instances are free-text narratives corresponding to real-world domains of reasoning; this makes the benchmark much more challenging than other synthetically crafted benchmarks while remaining realistic and tractable for human annotators to solve with high accuracy. We evaluate a range of LLMs and prompting techniques on this dataset and characterize the gaps that remain for techniques like chain-of-thought to perform robust reasoning.
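An evaluation loop of the kind the abstract describes can be sketched in a few lines. Everything here is an assumption for illustration: the instance fields (`narrative`, `question`, `answer`), the chain-of-thought suffix, and the `ask_model` stub are invented, and MuSR's actual format and scoring may differ.

```python
# Illustrative-only harness for scoring a model on multiple-choice narrative
# reasoning instances with chain-of-thought prompting.
COT_SUFFIX = "\nLet's think step by step, then answer with one option."

def ask_model(prompt):
    """Stand-in for a real LLM call; returns a fixed answer for the demo."""
    return "Option 1"

def accuracy(instances):
    """Prompt the model on each instance and report the fraction answered correctly."""
    correct = 0
    for inst in instances:
        prompt = inst["narrative"] + "\n" + inst["question"] + COT_SUFFIX
        if ask_model(prompt).strip() == inst["answer"]:
            correct += 1
    return correct / len(instances)

demo = [
    {"narrative": "A short murder-mystery narrative...",
     "question": "Who is the most likely suspect?",
     "answer": "Option 1"},
    {"narrative": "Another narrative...",
     "question": "Who is the most likely suspect?",
     "answer": "Option 2"},
]
print(accuracy(demo))  # → 0.5
```

Swapping `ask_model` for a call to an actual LLM, and `COT_SUFFIX` for a different prompting technique, is how one would compare methods on such a benchmark.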


Personalized Audio Quality Preference Prediction

Wang, Chung-Che, Lin, Yu-Chun, Hsu, Yu-Teng, Jang, Jyh-Shing Roger

arXiv.org Artificial Intelligence

This paper proposes to use both audio input and subject information to predict a listener's personalized preference between two audio segments with the same content at different quality levels. A siamese network compares the inputs and predicts the preference. Several structures for each side of the siamese network are investigated; an LDNet with PANNs' CNN6 as the encoder and a multi-layer perceptron block as the decoder yields the largest improvement over a baseline model that uses only audio input, raising overall accuracy from 77.56% to 78.04%. Experimental results also show that using all of the subject information (age, gender, and the specifications of the headphones or earphones) is more effective than using only part of it.
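The siamese structure described here, a shared encoder applied to both segments plus subject metadata feeding the decision, can be sketched as below. This is a hedged mock-up: the real model uses an LDNet with a CNN6 encoder and an MLP decoder, whereas here the encoder is a trivial hand-written feature extractor and the decoder a fixed linear rule, purely to show the weight-sharing shape, and the way subject information modulates the score is an invented, illustrative assumption.

```python
def encode(audio):
    """Shared encoder applied identically to both branches (weight sharing)."""
    energy = sum(x * x for x in audio) / len(audio)
    peak = max(abs(x) for x in audio)
    return [energy, peak]

def predict_preference(audio_a, audio_b, subject):
    """Return 'A' or 'B': which segment the subject is predicted to prefer.

    `subject` is a dict of listener metadata (age, gender, device specs);
    the weighting rule below is illustrative, not the paper's decoder.
    """
    feat_a, feat_b = encode(audio_a), encode(audio_b)
    # Invented rule: down-weight peak level for listeners using earphones.
    w_peak = 0.5 if subject.get("device") == "earphones" else 1.0
    score_a = feat_a[0] + w_peak * feat_a[1]
    score_b = feat_b[0] + w_peak * feat_b[1]
    return "A" if score_a >= score_b else "B"

clean = [0.5, -0.5, 0.5, -0.5]      # higher-quality, higher-level segment
degraded = [0.1, -0.1, 0.1, -0.1]   # low-level, degraded segment
print(predict_preference(clean, degraded, {"device": "earphones"}))  # → A
```

Because both segments pass through the same `encode` function, the comparison depends only on the quality difference between them, which is the point of the siamese design.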


Klipsch launches wireless ANC earphones with artificial intelligence

#artificialintelligence

Founded in 1946 and known for quality speakers, Klipsch celebrates its 75th anniversary this year. Part of that blowout features the launch of two new sets of earphones offering not only active noise cancellation (ANC) but artificial intelligence (AI)-based gestures and a new sound-enhancement system. The iconic, high-end brand said its new Klipsch T5 II True Wireless ANC earphones will feature "truly hands-free operation" through Bragi embedded AI. Bragi itself once made headphones but now only makes software. With that kind of operating system installed, you can take a call by nodding your head or reject a call by shaking your head.


Global Big Data Conference

#artificialintelligence

Klipsch, the great American speaker maker, has been helping audio enthusiasts to annoy their neighbors since 1946. The brand has an iconic status in the USA and has been at the forefront of so much pioneering audio technology thanks to founder Paul W Klipsch. The company is celebrating its 75th anniversary and, as part of the celebrations, it has announced two new pairs of true wireless earphones with active noise cancellation (ANC) technology and artificial intelligence. Klipsch claims both models have been designed to provide ultimate comfort and performance. The new Klipsch T5 II True Wireless ANC earphones will be the first earphones to offer a frictionless experience driven by a built-in operating system with Bragi embedded AI.


Klipsch Announces True Wireless Earphones With Artificial Intelligence

#artificialintelligence

The new Klipsch T5 II ANC earphones use AI for a seamless experience with advanced gestures.


These Headphones Translate Foreign Languages on the Fly

WIRED

A few years ago, I spent a day at Suntory's Yamazaki Distillery outside of Kyoto, Japan. There's a bar at the end of the tour, and (pro tip) it's one of the only places in the world you can get Suntory's whiskies at cost. When I purchased my first glass, a pair of Japanese men who'd taken the Shinkansen in from Tokyo waved me over to their table. Through pantomime, one of them offered me a taste of the whisky in his glass, and we ended up spending hours sampling spirits and talking about Japanese whisky through the magic of Google Translate on our phones. It was a halting, awkward way to have a conversation, but it was glorious, and it still stands as one of the best experiences of my life.


Cornell researchers created an earphone that can track facial expressions

Engadget

Researchers from Cornell University have created an earphone system that can track a wearer's facial expressions even when they're wearing a mask. C-Face can monitor cheek contours and convert the wearer's expression into an emoji. That could allow people to, for instance, convey their emotions during group calls without having to turn on their webcam. "This device is simpler, less obtrusive and more capable than any existing ear-mounted wearable technologies for tracking facial expressions," Cheng Zhang, director of Cornell's SciFi Lab and senior author of a paper on C-Face, said in a statement. "In previous wearable technology aiming to recognize facial expressions, most solutions needed to attach sensors on the face and even with so much instrumentation, they could only recognize a limited set of discrete facial expressions."


A wearable system to assist visually impaired people

#artificialintelligence

New technological advances could have important implications for people affected by disabilities, offering valuable assistance throughout their everyday lives. One key example is the guidance that technological tools could provide to the visually impaired (VI), individuals who are partially or entirely blind. With this in mind, researchers at CloudMinds Technologies Inc., in China, have recently created a new deep-learning-powered wearable assistive system for VI individuals. This system, presented in a paper pre-published on arXiv, consists of a wearable terminal, a powerful processor, and a smartphone. The wearable terminal has two key components: an RGBD camera and an earphone.