Apple Vision
Meta Poached Apple's Top Design Guys to Fix Its Software UI
Meta wants to make its AI hardware slicker and more fashion-forward, and its software more usable. The way to do both, it seems, is to hire design maestros away from Apple. Meta has made a big move to poach two prominent designers from the rival tech giant, likely putting them to work on its next generation of AI hardware and the software that runs on it. Alan Dye, formerly Apple's vice president of Human Interface Design, will join Meta to head up a new design studio within Reality Labs.
The Vision Pro Was An Expensive Misstep. Now Apple Has to Catch Up With Smart Glasses
Having reportedly shelved work on a cheaper Vision Pro, Apple is apparently pivoting its focus to smart glasses--and hoping it's not too late. If Apple wants to match the grip Meta has on the smart glasses market, it just might have to simplify its face computers. According to an internal announcement reported in Bloomberg by serial Apple leaker Mark Gurman, Apple has deprioritized efforts to make a lighter, more affordable version of its Vision Pro headset in favor of focusing on AI-enabled smart glasses. Apple now seems to be aiming to launch a pair of Meta-style smart glasses in 2027, with another pair featuring a display on the lens aimed for release in 2028--if not before.
This Startup Wants to Put Its Brain-Computer Interface in the Apple Vision Pro
California-based Cognixion announced today that it is launching a clinical trial to give paralyzed patients with speech disorders the ability to communicate without an invasive brain implant. The trial will test the startup's wearable brain-computer interface technology integrated with a modified version of the Apple Vision Pro headset, helping paralyzed people communicate with their thoughts. Cognixion is one of several companies, including Elon Musk's Neuralink, that are developing a brain-computer interface, or BCI: a system that captures brain signals and translates them into commands to control external devices. While Neuralink and others are working on implants that are surgically placed in the head, Cognixion's technology is noninvasive.
My Virtual Avatar No Longer Looks Terrible in the Apple Vision Pro
Remember Apple's Vision Pro? That's the $3,499 mixed-reality headset the company launched early in 2024 that failed to garner much public interest. Apple has steamed ahead with updates for the platform over the past year, and soon there will be a new version: visionOS 26. I got a chance to try out a few of the new capabilities, and two stuck out to me more than the others. First is the upgrade to Personas, the spatial avatar the headset creates based on your likeness using the onboard cameras.
Beyond the Monitor: Mixed Reality Visualization and AI for Enhanced Digital Pathology Workflow
Jai Prakash Veerla, Partha Sai Guttikonda, Helen H. Shang, Mohammad Sadegh Nasr, Cesar Torres, Jacob M. Luber
Pathologists rely on gigapixel whole-slide images (WSIs) to diagnose diseases like cancer, yet current digital pathology tools hinder diagnosis. The immense scale of WSIs, often exceeding 100,000 × 100,000 pixels, clashes with the limited views traditional monitors offer. This mismatch forces constant panning and zooming, increasing pathologists' cognitive load, causing diagnostic fatigue, and slowing the adoption of digital methods. PathVis, our mixed-reality visualization platform for Apple Vision Pro, addresses these challenges. It transforms the pathologist's interaction with data, replacing cumbersome mouse-and-monitor navigation with intuitive exploration using natural hand gestures, eye gaze, and voice commands in an immersive workspace. PathVis integrates AI to enhance diagnosis. An AI-driven search function instantly retrieves and displays the top five similar patient cases side by side, improving diagnostic precision and efficiency through rapid comparison. Additionally, a multimodal conversational AI assistant offers real-time image interpretation support and aids collaboration among pathologists across multiple Apple devices. By merging the directness of traditional pathology with advanced mixed-reality visualization and AI, PathVis improves diagnostic workflows, reduces cognitive strain, and makes pathology practice more effective and engaging. The PathVis source code and a demo video are publicly available at: https://github.com/jaiprakash1824/Path_Vis
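The "top five similar cases" search the abstract describes is, in general form, an embedding-based nearest-neighbor lookup. The sketch below shows that generic pattern, not PathVis's actual implementation: the function name `top_k_similar` and the idea of precomputed per-case embedding vectors are assumptions for illustration.

```python
import numpy as np

def top_k_similar(query_emb, case_embs, k=5):
    """Indices of the k cases most similar to the query, by cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb)
    c = case_embs / np.linalg.norm(case_embs, axis=1, keepdims=True)
    sims = c @ q                   # cosine similarity of each stored case to the query
    return np.argsort(-sims)[:k]  # best matches first
```

In a real system the case embeddings would come from a vision model run over the slide images, and at this scale the brute-force dot product would typically be replaced by an approximate nearest-neighbor index.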
Synchron's Brain-Computer Interface Now Has Nvidia's AI
Neurotech company Synchron has unveiled the latest version of its brain-computer interface, which uses Nvidia technology and the Apple Vision Pro to enable individuals with paralysis to control digital and physical environments with their thoughts. In a video demonstration at the Nvidia GTC conference this week in San Jose, California, Synchron showed how its system allows one of its trial participants, Rodney Gorham, who is paralyzed, to control multiple devices in his home. From his sun-filled living room in Melbourne, Australia, Gorham can play music from a smart speaker, adjust the lighting, turn on a fan, activate an automatic pet feeder, and run a robotic vacuum. Gorham has lost the use of his voice and much of his body to amyotrophic lateral sclerosis, or ALS, a degenerative disease that weakens muscles over time and eventually leads to paralysis.
The Humane Ai Pin Will Become E-Waste Next Week
The story of the infamous Humane Ai Pin is coming to an end. This week, the company announced that HP--known for its computers and printers that always seem to need a refill--will acquire several assets from Humane in a $116 million deal expected to close at the end of the month. HP will get more than 300 patents and patent applications, a few Humane employees--including founders Imran Chaudhri and Bethany Bongiorno--and Humane's Cosmos operating system. Late in 2024, Humane looked to license this operating system so that third parties could inject its AI voice assistant into other products, like cars. Humane became Silicon Valley's "next big thing" in late 2023 when it unveiled its AI wearable, equipped with a ChatGPT-powered assistant and a laser-projected display, that promised to replace your smartphone.
LoXR: Performance Evaluation of Locally Executing LLMs on XR Devices
Dawar Khan, Xinyu Liu, Omar Mena, Donggang Jia, Alexandre Kouyoumdjian, Ivan Viola
Abstract--The deployment of large language models (LLMs) on extended reality (XR) devices has great potential to advance the field of human-AI interaction. In the case of direct, on-device model inference, selecting the appropriate model and device for specific tasks remains challenging. In this paper, we deploy 17 LLMs across four XR devices--Magic Leap 2, Meta Quest 3, Vivo X100s Pro, and Apple Vision Pro--and conduct a comprehensive evaluation. We devise an experimental setup and evaluate performance on four key metrics: performance consistency, processing speed, memory usage, and battery consumption. For each of the 68 model-device pairs, we assess performance under varying string lengths, batch sizes, and thread counts, analyzing the tradeoffs for real-time XR applications. We finally propose a unified evaluation method based on Pareto optimality theory to select the optimal device-model pairs from the quality and speed objectives. We believe our findings offer valuable insight to guide future optimization efforts for LLM deployment on XR devices. Our evaluation method can serve as standard groundwork for further research and development in this emerging field. All supplemental materials are available at nanovis.org/Loxr.html.

These models can describe a wide variety of topics, respond at various levels of abstraction, and communicate effectively in multiple languages, and they have proven capable of providing users with accurate and contextually appropriate responses. LLMs have quickly found applications in tasks such as spelling and grammar correction [2], generating text on specified topics [3], integration into automated chatbot services, and even generating source code from loosely defined software specifications [4]. Research on language models, and on their multimodal variants integrating language with vision and other modalities, has recently experienced rapid growth.
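The Pareto-optimality selection the abstract mentions can be sketched in a few lines under one common reading: a device-model pair survives only if no other pair is at least as good on both objectives (quality and speed) and strictly better on at least one. This is a generic sketch, not the authors' evaluation code, and the pair names and scores below are invented.

```python
def pareto_front(pairs):
    """Return names of (name, quality, speed) tuples not dominated by any other.

    Higher is better on both objectives. A pair is dominated when some other
    pair is at least as good on both and strictly better on at least one.
    """
    front = []
    for name, q, s in pairs:
        dominated = any(
            q2 >= q and s2 >= s and (q2 > q or s2 > s)
            for n2, q2, s2 in pairs
            if n2 != name
        )
        if not dominated:
            front.append(name)
    return front
```

With hypothetical scores such as `[("Quest3+7B", 0.8, 20), ("VisionPro+13B", 0.9, 10)]`, both pairs would survive, since each beats the other on one objective.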
For instance, in computer vision, language models are combined with visual signals to achieve tasks such as verbal scene description and even open-world scene-graph generation [5]. These technologies enable detailed interpretation of everyday objects, inference of relationships among them, and estimates of physical properties like size, weight, distance, and speed. In user interaction and visualization research, LLMs serve as verbal interfaces to control software functionality or adjust visualization parameters [6], [7]. Through prompt engineering or fine-tuning, loosely defined text can be translated into specific commands that execute desired actions within a system, supported by language model APIs. The capabilities of language models continue to improve significantly from one version to the next.

Xinyu Liu is with King Abdullah University of Science and Technology (KAUST), Saudi Arabia, and also with the University of Electronic Science and Technology of China, Chengdu, China.
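The text-to-command pattern described here usually has the model emit structured output (for example JSON) that the host application parses into a concrete action. A minimal sketch of the host side, with the prompt template, key names, and fallback behavior all assumed for illustration rather than taken from [6] or [7]:

```python
import json

# Hypothetical prompt template; a real system would add examples and constraints.
COMMAND_PROMPT = (
    "Translate the user's request into a JSON command with keys "
    '"action" and "params". Request: {request}'
)

def parse_command(model_output):
    """Parse the model's JSON reply into an (action, params) pair.

    Returns (None, {}) when the reply is not valid JSON, so the caller
    can retry or re-prompt the model instead of crashing.
    """
    try:
        cmd = json.loads(model_output)
        return cmd.get("action"), cmd.get("params", {})
    except json.JSONDecodeError:
        return None, {}
```

The dispatch step, mapping the parsed `action` string to an actual function in the visualization system, is where the application-specific work lives.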
The future of Apple Vision Pro is in medicine
Apple's $3,500 Vision Pro sounds like a bargain compared to the price of a fresh, medical-grade cadaver. And some medical institutions have started practicing surgery using the spatial-computing headset, which doesn't require a physical human body. Replacing cadavers is just one example of how the Vision Pro has made its way into the medical field since it hit the market in February 2024. On January 30-31, 2025, Sharp Healthcare hosted the inaugural Spatial Computing Health Care Summit, where medical providers gathered to discuss their use of spatial computing, which embeds digital objects into a live feed of the real world. The same tech that allows people to play virtual Battleship with each other has moved into applications that include everything from training and education to full-fledged operations on human patients.
Revisiting the 3 Biggest Hardware Flops of 2024: Apple Vision Pro, Rabbit R1, Humane Ai Pin
The year began with such promise. Back in January, I remember sitting in a presentation hall at a Las Vegas hotel during CES 2024 as Rabbit CEO Jesse Lyu unveiled the R1. This colorful and fun pocket-sized AI companion promised to do everything, from ordering an Uber to answering all your vexing questions. My story on the R1 had just gone live and within hours--I'm not trying to pat myself on the back here--there were a lot of eyeballs on it. The device was unlike anything that had come before, and showed us a novel vision of how these newfangled AI agents would fit into our lives.