Inside the labs where glasses are redesigned for a hyper-visual world
I went to EssilorLuxottica's Paris facilities to learn how the digital age is reshaping eyes and redefining eyewear. Restaurants are surprisingly good age tests. When the menu lands, do you squint at the tiny fonts, tilt the page toward some inadequate candle, or blast it with your phone flashlight just to read it? Do you ask a friend to tell you the options because you refuse to wear the readers you know, in your heart, you probably need? And when did restaurants get so loud?
- Information Technology > Hardware (0.70)
- Information Technology > Communications > Mobile (0.47)
- Information Technology > Artificial Intelligence > Robots (0.47)
Proactive Hearing Assistants that Isolate Egocentric Conversations
Hu, Guilin, Itani, Malek, Chen, Tuochao, Gollakota, Shyamnath
We introduce proactive hearing assistants that automatically identify and separate the wearer's conversation partners, without requiring explicit prompts. Our system operates on egocentric binaural audio and uses the wearer's self-speech as an anchor, leveraging turn-taking behavior and dialogue dynamics to infer conversational partners and suppress others. To enable real-time, on-device operation, we propose a dual-model architecture: a lightweight streaming model runs every 12.5 ms for low-latency extraction of the conversation partners, while a slower model runs less frequently to capture longer-range conversational dynamics. Results on real-world 2- and 3-speaker conversation test sets, collected with binaural egocentric hardware from 11 participants totaling 6.8 hours, show generalization in identifying and isolating conversational partners in multi-conversation settings. Our work marks a step toward hearing assistants that adapt proactively to conversational dynamics and engagement. More information can be found on our website: https://proactivehearing.cs.washington.edu/
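The dual-rate design described in the abstract can be sketched as a simple scheduling loop. Only the two model classes and the 12.5 ms hop come from the text; the helper names, the masking scheme, and the one-second slow-model period are illustrative assumptions:

```python
# Hypothetical sketch of a dual-rate hearing-assistant loop.
# The 12.5 ms hop comes from the abstract; everything else
# (model stubs, the 1 s slow-model period) is assumed.

FAST_HOP_MS = 12.5          # lightweight streaming model cadence
SLOW_PERIOD_MS = 1000.0     # slower conversational-dynamics model (assumed)

class FastExtractor:
    """Low-latency model: applies the current partner estimate per frame."""
    def __init__(self):
        self.partner_mask = None  # refreshed by the slow model

    def step(self, frame):
        # Apply the latest partner mask to one 12.5 ms audio frame.
        if self.partner_mask is None:
            return frame
        return [s * m for s, m in zip(frame, self.partner_mask)]

class SlowConversationModel:
    """Infers conversation partners from turn-taking over a longer window."""
    def infer_partners(self, history):
        # Placeholder: mark every channel as a conversation partner.
        n = len(history[-1]) if history else 0
        return [1.0] * n

def run(frames):
    fast, slow = FastExtractor(), SlowConversationModel()
    history, out = [], []
    elapsed_ms, next_slow_ms = 0.0, 0.0
    for frame in frames:
        history.append(frame)
        if elapsed_ms >= next_slow_ms:        # slow model runs less often
            fast.partner_mask = slow.infer_partners(history)
            next_slow_ms += SLOW_PERIOD_MS
        out.append(fast.step(frame))          # fast model runs every frame
        elapsed_ms += FAST_HOP_MS
    return out
```

The point of the split is that per-frame latency is bounded by the small model, while the expensive turn-taking inference amortizes over many frames.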
- Asia > China > Beijing > Beijing (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
Nike's Robotic Shoe Gets Humans One Step Closer to Cyborg
Project Amplify is Nike's latest attempt to put some spring in your step with help from a powered mechanism that enhances the natural movement of the human ankle and lower leg. If you want to run faster or farther, you have options. You can put in the work, getting up 40 minutes earlier to train, changing your diet, going harder and longer on each of your runs to build up strength. Or, you can strap on one of Nike's new robot shoes and mechanically boost your speed, your stamina, and your overall performance in a flash. Sounds way easier, and probably more fun too.
- North America > United States > Oregon > Washington County > Beaverton (0.05)
- North America > United States > Massachusetts (0.05)
- North America > United States > California (0.05)
- (3 more...)
- Leisure & Entertainment > Sports (0.70)
- Health & Medicine (0.49)
- Information Technology > Security & Privacy (0.48)
Aria Gen 2 Pilot Dataset
Kong, Chen, Fort, James, Kang, Aria, Wittmer, Jonathan, Green, Simon, Shen, Tianwei, Zhao, Yipu, Peng, Cheng, Solaira, Gustavo, Berkovich, Andrew, Raina, Nikhil, Baiyya, Vijay, Oleinik, Evgeniy, Huang, Eric, Zhang, Fan, Straub, Julian, Schwesinger, Mark, Pesqueira, Luis, Pan, Xiaqing, Engel, Jakob Julian, Ren, Carl, Yan, Mingfei, Newcombe, Richard
The Aria Gen 2 Pilot Dataset (A2PD) is an egocentric multimodal open dataset captured using the state-of-the-art Aria Gen 2 glasses. To facilitate timely access, A2PD is released incrementally with ongoing dataset enhancements. The initial release features Dia'ane, our primary subject, who records her daily activities alongside friends, each equipped with Aria Gen 2 glasses. It encompasses five primary scenarios: cleaning, cooking, eating, playing, and outdoor walking. In each of the scenarios, we provide comprehensive raw sensor data and output data from various machine perception algorithms. These data illustrate the device's ability to perceive the wearer, the surrounding environment, and interactions between the wearer and the environment, while maintaining robust performance across diverse users and conditions. The A2PD is publicly available at projectaria.com, with open-source tools and usage examples provided in Project Aria Tools.
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots (0.70)
A Semantic-Aware Framework for Safe and Intent-Integrative Assistance in Upper-Limb Exoskeletons
Chen, Yu, Miao, Shu, Wu, Chunyu, Mu, Jingsong, OuYang, Bo, Li, Xiang
Upper-limb exoskeletons are primarily designed to provide assistive support by accurately interpreting and responding to human intentions. In home-care scenarios, exoskeletons are expected to adapt their assistive configurations based on the semantic information of the task, adjusting appropriately in accordance with the nature of the object being manipulated. However, existing solutions often lack the ability to understand task semantics or collaboratively plan actions with the user, limiting their generalizability. To address this challenge, this paper introduces a semantic-aware framework that integrates large language models into the task planning framework, enabling the delivery of safe and intent-integrative assistance. The proposed approach begins with the exoskeleton operating in transparent mode to capture the wearer's intent during object grasping. Once semantic information is extracted from the task description, the system automatically configures appropriate assistive parameters. In addition, a diffusion-based anomaly detector is used to continuously monitor the state of human-robot interaction and trigger real-time replanning in response to detected anomalies. During task execution, online trajectory refinement and impedance control are used to ensure safety and regulate human-robot interaction. Experimental results demonstrate that the proposed method effectively aligns with the wearer's cognition, adapts to semantically varying tasks, and responds reliably to anomalies.
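The abstract names impedance control for regulating human-robot interaction and anomaly-triggered replanning. A minimal one-degree-of-freedom sketch of that combination is below; the spring-damper law is textbook impedance control, while the gains and the softening factor are illustrative assumptions, not the paper's parameters:

```python
# Minimal 1-DoF impedance-control sketch for an assistive joint.
# F = K*(x_des - x) + D*(x_des_dot - x_dot) is the standard form;
# the gain values and the anomaly hook are assumptions.

def impedance_force(x, x_dot, x_des, x_des_dot, k=40.0, d=6.0):
    """Spring-damper interaction force toward a desired trajectory."""
    return k * (x_des - x) + d * (x_des_dot - x_dot)

def soften_on_anomaly(k, d, anomaly, scale=0.25):
    """On a detected anomaly, cut stiffness so the wearer can override."""
    return (k * scale, d * scale) if anomaly else (k, d)
```

Lowering the virtual stiffness when the anomaly detector fires is one simple way to keep the device compliant while the planner replans.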
Projecting the New Body: How Body Image Evolves During Learning to Walk with a Wearable Robot
Advances in wearable robotics challenge the traditional definition of human motor systems, as wearable robots redefine body structure, movement capability, and wearers' perception of their own bodies. While these devices can empower the wearer's motor performance, there is limited understanding of how wearers update their perceived body image, especially during dynamic movements, while learning to use these devices. This study aimed to fill that gap by examining how body image changes as individuals learn to walk with a robotic prosthetic leg over multi-day training. We measured gait performance and perceived body image via the Selected Coefficient of Perceived Motion (SCoMo) after each training session. Based on human motor learning theory extended to wearer-robot systems, we hypothesized that the perceived body image when walking with a robotic leg co-evolves with actual gait improvement, becoming more certain and more accurate to the actual motion. Our results confirmed that motor learning improved both the physical and the perceived gait pattern toward normal, indicating that through practice the wearers incorporated the robotic leg into their sensorimotor systems to enable wearer-robot movement coordination. However, a persistent discrepancy between perceived and actual motion remained, likely due to the wearers' lack of direct sensation and control of the prosthesis. Additionally, perceptual overestimation in the later training sessions might limit further motor improvement. These findings suggest that enhancing the human sense of wearable robots and frequently recalibrating the perceived body image are essential for effective training with lower-limb wearable robots and for developing more embodied assistive technologies.
- North America > United States > North Carolina > Orange County > Chapel Hill (0.14)
- North America > United States > Ohio > Franklin County > Columbus (0.04)
- North America > United States > North Carolina > Wake County > Raleigh (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.93)
Scientists reveal how humans will have superpowers by 2030
By 2030, rapid technological advancements are expected to reshape humanity, unlocking abilities once confined to science fiction--from superhuman strength to enhanced senses. Robotic exoskeletons may soon allow people to lift heavy objects with ease, while AI-powered wearables, such as smart glasses and earbuds, could provide real-time information and immersive augmented reality experiences. Healthcare may be revolutionized by microscopic nanobots capable of repairing tissue and fighting disease from within the bloodstream, potentially extending human lifespans. Developers are also working on contact lenses with infrared vision and devices that allow users to "feel" digital objects, paving the way for entirely new ways to experience the world. Tech pioneers like former Google engineer Ray Kurzweil believe these innovations are early steps toward the merging of humans and machines, with brain-computer interfaces offering direct access to digital intelligence.
- Health & Medicine > Therapeutic Area (0.55)
- Health & Medicine > Health Care Technology (0.40)
- Information Technology > Architecture > Real Time Systems (0.91)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.55)
- Information Technology > Human Computer Interaction > Interfaces > Virtual Reality (0.38)
- Information Technology > Artificial Intelligence > Cognitive Science > Neuroscience (0.37)
Spatial Speech Translation: Translating Across Space With Binaural Hearables
Chen, Tuochao, Wang, Qirui, He, Runlin, Gollakota, Shyam
Imagine being in a crowded space where people speak a different language and having hearables that transform the auditory space into your native language, while preserving the spatial cues for all speakers. We introduce spatial speech translation, a novel concept for hearables that translate speakers in the wearer's environment, while maintaining the direction and unique voice characteristics of each speaker in the binaural output. To achieve this, we tackle several technical challenges spanning blind source separation, localization, real-time expressive translation, and binaural rendering to preserve the speaker directions in the translated audio, while achieving real-time inference on the Apple M2 silicon. Our proof-of-concept evaluation with a prototype binaural headset shows that, unlike existing models, which fail in the presence of interference, we achieve a BLEU score of up to 22.01 when translating between languages, despite strong interference from other speakers in the environment. User studies further confirm the system's effectiveness in spatially rendering the translated speech in previously unseen real-world reverberant environments. Taking a step back, this work marks the first step towards integrating spatial perception into speech translation.
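The abstract lists the pipeline stages explicitly: blind source separation, localization, expressive translation, and binaural rendering. A stage-by-stage skeleton is sketched below; only the stage order comes from the text, and every function body is a placeholder assumption:

```python
# Hypothetical skeleton of the spatial speech translation pipeline
# (separation -> localization -> translation -> binaural rendering).
# All bodies are placeholders; only the stage ordering is from the abstract.

def separate(mixture):
    # Blind source separation: split the binaural mix into per-speaker tracks.
    return [mixture]  # placeholder: treat the mix as a single source

def localize(sources):
    # Estimate a direction (azimuth, degrees) for each separated source.
    return [0.0 for _ in sources]

def translate(sources):
    # Speech-to-speech translation preserving each speaker's voice.
    return sources  # placeholder passthrough

def render_binaural(sources, directions):
    # Re-spatialize each translated track at its original direction.
    return list(zip(sources, directions))

def spatial_speech_translation(mixture):
    sources = separate(mixture)
    directions = localize(sources)      # directions must survive translation
    translated = translate(sources)
    return render_binaural(translated, directions)
```

The key structural constraint the sketch captures is that per-speaker directions are estimated before translation and reattached afterward, so spatial cues survive the language change.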
- North America > United States > Washington > King County > Seattle (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Asia > Japan > Honshū > Kantō > Kanagawa Prefecture > Yokohama (0.05)
- (6 more...)
- Research Report > Experimental Study (0.67)
- Research Report > New Finding (0.46)
EnchantedClothes: Visual and Tactile Feedback with an Abdomen-Attached Robot through Clothes
Yamamoto, Takumi, Yoshimura, Rin, Sugiura, Yuta
Wearable robots are designed to be worn on the human body. Taking advantage of their physical form, various applications for wearable robots are being considered. This study proposes a wearable robot worn on the abdomen and a new way of interacting with it. Our robot enables a variety of applications related to communication between the wearer and surrounding people through visual and tactile feedback. The contributions of this research are (1) the proposal of a novel wearable robot worn on the abdomen and (2) a new interaction with it.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > Canada > Quebec (0.04)
- (3 more...)
Wearable sensors monitor factory worker fatigue in real time
Manufacturing jobs have some of the highest injury rates of any industry, often due to workers' high levels of physical and mental fatigue. In an attempt to improve job sites, researchers have designed a system of wearable sensors that rely on machine learning to monitor workers for signs of physical strain and tiredness. In doing so, they hope their new devices will help prevent accidents and injuries. The design is detailed in a study published by a team at Northwestern University in the October issue of PNAS Nexus. To measure fatigue and physical health, researchers developed an interconnected array of six wearable sensors placed across a wearer's torso and arms.
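The article describes six sensors across the torso and arms feeding a machine-learning fatigue monitor. A toy fusion step is sketched below; the feature choice (mean absolute acceleration per sensor) and the alert threshold are assumptions, not the Northwestern team's actual pipeline:

```python
# Illustrative fusion of six torso/arm wearable-sensor streams into one
# fatigue score. Feature and threshold are assumptions for the sketch.

def sensor_feature(samples):
    """Mean absolute acceleration over one sensor's sample window."""
    return sum(abs(s) for s in samples) / len(samples)

def fatigue_score(windows):
    """Average the per-sensor features across all six sensors."""
    assert len(windows) == 6, "system uses six sensors across torso and arms"
    return sum(sensor_feature(w) for w in windows) / len(windows)

def should_alert(score, threshold=0.8):
    """Flag the worker for a break once the fused score crosses threshold."""
    return score >= threshold
```

In a real deployment the hand-built feature and fixed threshold would be replaced by the trained model the article mentions; the sketch only shows the many-sensors-to-one-decision shape of the system.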