Google Glass


Google unveils plans to try again with smart glasses in 2026

BBC News

Google plans to launch smart glasses powered by artificial intelligence (AI) in 2026, after its previous high-profile attempt to enter the market ended in failure. The tech giant set expectations high in 2013 when it unveiled Google Glass, billed by some as the future of technology despite its odd appearance, with a bulky screen positioned above the right eye. Google pulled the product in 2015, less than seven months after its UK release, but is now planning to re-enter the market with a cleaner-looking pair of smart glasses. The move comes after Meta has already made waves with its smart specs, which had sold two million pairs as of February. Google's new tech will let users interact with its own AI products, such as its chatbot Gemini.


A Smart-Glasses for Emergency Medical Services via Multimodal Multitask Learning

Jin, Liuyi, Gunawardena, Pasan, Haroon, Amran, Wang, Runzhi, Lee, Sangwoo, Stoleru, Radu, Middleton, Michael, Huo, Zepeng, Kim, Jeeeun, Moats, Jason

arXiv.org Artificial Intelligence

Emergency Medical Technicians (EMTs) operate in high-pressure environments, making rapid, life-critical decisions under heavy cognitive and operational loads. We present EMSGlass, a smart-glasses system powered by EMSNet, the first multimodal multitask model for Emergency Medical Services (EMS), and EMSServe, a low-latency multimodal serving framework tailored to EMS scenarios. EMSNet integrates text, vital signs, and scene images to construct a unified real-time understanding of EMS incidents. Trained on real-world multimodal EMS datasets, EMSNet simultaneously supports up to five critical EMS tasks with superior accuracy compared to state-of-the-art unimodal baselines. Built on top of PyTorch, EMSServe introduces a modality-aware model splitter and a feature caching mechanism, achieving adaptive and efficient inference across heterogeneous hardware while addressing the challenge of asynchronous modality arrival in the field. By optimizing multimodal inference execution in EMS scenarios, EMSServe achieves 1.9x -- 11.7x speedup over direct PyTorch multimodal inference. A user study evaluation with six professional EMTs demonstrates that EMSGlass enhances real-time situational awareness, decision-making speed, and operational efficiency through intuitive on-glass interaction. In addition, qualitative insights from the user study provide actionable directions for extending EMSGlass toward next-generation AI-enabled EMS systems, bridging multimodal intelligence with real-world emergency response workflows.
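The serving idea in this abstract, encoding each modality as soon as it arrives rather than blocking the whole model until the last input lands, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not EMSServe's actual implementation: the class, the method names, and the concatenation-based fusion below are all hypothetical.

import torch
import torch.nn as nn

class CachedMultimodalServer:
    """Sketch of feature caching for asynchronous modality arrival:
    each input is encoded on arrival and its feature is cached, so the
    final fusion pass is cheap once all modalities are present."""

    def __init__(self, encoders: dict, fusion_head: nn.Module):
        self.encoders = encoders      # e.g. {"text": ..., "vitals": ..., "image": ...}
        self.fusion_head = fusion_head
        self.cache = {}

    @torch.no_grad()
    def submit(self, modality: str, x: torch.Tensor):
        # Encode immediately on arrival; the expensive per-modality work
        # overlaps with waiting for the remaining inputs.
        self.cache[modality] = self.encoders[modality](x)

    @torch.no_grad()
    def infer(self) -> torch.Tensor:
        # Fuse in a fixed modality order; a real system would also handle
        # missing modalities with masking or learned defaults.
        feats = [self.cache[m] for m in self.encoders if m in self.cache]
        if not feats:
            raise RuntimeError("no modality features cached yet")
        return self.fusion_head(torch.cat(feats, dim=-1))

# Usage with stand-in linear encoders (real ones would be a text model,
# a vitals time-series model, and an image backbone):
encoders = {
    "text":   nn.Linear(128, 64),
    "vitals": nn.Linear(16, 64),
    "image":  nn.Linear(512, 64),
}
server = CachedMultimodalServer(encoders, fusion_head=nn.Linear(3 * 64, 5))
server.submit("vitals", torch.randn(1, 16))   # vitals often arrive first
server.submit("text", torch.randn(1, 128))
server.submit("image", torch.randn(1, 512))
logits = server.infer()                       # one cheap fusion pass

Overlapping per-modality encoding with input arrival is one plausible source of the kind of speedup the abstract reports over a single monolithic multimodal pass, though the paper's 1.9x to 11.7x figures also rest on its model splitter and hardware-aware placement.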


Smartphones Are So Over

The Atlantic - Technology

Today, Snap, the parent company of Snapchat, one of the most popular social-media apps for teenage users, is announcing a new computer that you wear directly on your face. The latest in its Spectacles line of smart glasses, which the company has been working on for about a decade, shows you interactive imagery through its lenses, placing plants or imaginary pets or even a golf-putting range into the real world around you. So-called augmented reality (or AR) is nothing new, and neither is wearable tech. Meta makes a pair of smart glasses in partnership with Ray-Ban, and claims they're so popular that the company can't make them fast enough. Amazon sells an Alexa-infused version of the famous Carrera frames, which make you look like a mob boss with access to an AI assistant (Alexa, where's the best place to hide a body?).


AI Could Change How Blind People See the World

WIRED

For her 38th birthday, Chela Robles and her family made a trek to One House, her favorite bakery in Benicia, California, for a brisket sandwich and brownies. On the car ride home, she tapped a small touchscreen on her temple and asked for a description of the world outside. "A cloudy sky," the response came back through her Google Glass. Robles lost the ability to see in her left eye when she was 28, and in her right eye a year later. Blindness, she says, denies you small details that help people connect with one another, like facial cues and expressions.


Time to Put Humans Deeper into the AI Design Process - RTInsights

#artificialintelligence

An important part of the process is to bring in people from across disciplines, even if they have conflicting perspectives. A few years back, experts and pundits alike were predicting that the highways of the 2020s would be packed with autonomous vehicles. One glance at the roads makes it clear that, for better or worse, human drivers still dominate, as driverless vehicles have hit many roadblocks: their ability to make judgements in unforeseen situations is still questionable, as is the ability of human riders to adapt to and trust their robot drivers. Autonomous vehicles are just one example of the broader need for human-centered design, the theme of the recent Stanford Human-Centered Artificial Intelligence fall conference, at which experts urged more human involvement from the very start of AI development efforts.


Smart Headset, Computer Vision and Machine Learning for Efficient Prawn Farm Management

Xi, Mingze, Rahman, Ashfaqur, Nguyen, Chuong, Arnold, Stuart, McCulloch, John

arXiv.org Artificial Intelligence

Understanding the growth and distribution of prawns is critical for optimising feed and harvest strategies. An inadequate understanding of prawn growth can lead to reduced financial gain, for example when crops are harvested too early. The key to maintaining a good understanding of prawn growth is frequent sampling. However, the most commonly adopted sampling practice, the cast-net approach, cannot sample prawns at high frequency because it is expensive and laborious. An alternative is to sample prawns from the feed trays that farm workers already inspect each day, which would allow growth data to be collected daily. But measuring prawns manually each day is itself a laborious task. In this article, we propose a new approach that uses smart glasses, a depth camera, computer vision and machine learning to detect prawn distribution and growth from feed trays. A smart headset was built to allow farmers to collect prawn data while performing daily feed tray checks, and a computer vision and machine learning pipeline was developed and demonstrated to detect the growth trends of prawns in four prawn ponds over a growing season.
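One concrete piece of such a pipeline can be sketched briefly: given a prawn detected on the feed tray, the depth camera lets a head-to-tail pixel span be converted into a physical length via the pinhole camera model. The sketch below is an assumption-laden illustration, not the paper's pipeline; the focal lengths, the coordinates, and the upstream detector are all hypothetical.

import numpy as np

# Assumed focal lengths in pixels; these would come from camera calibration.
FX, FY = 615.0, 615.0

def pixel_to_metric_length(p1, p2, depth_m, fx=FX, fy=FY):
    """Convert a head-to-tail pixel span (p1 -> p2) into metres, using the
    measured depth from the camera to the feed tray (pinhole model)."""
    du = (p2[0] - p1[0]) / fx
    dv = (p2[1] - p1[1]) / fy
    return depth_m * np.hypot(du, dv)

# Example: a detected prawn spanning pixels (120, 200) -> (180, 240)
# on a tray 0.8 m below the headset camera.
length_m = pixel_to_metric_length((120, 200), (180, 240), 0.8)
print(f"estimated prawn length: {length_m * 100:.1f} cm")   # ~9.4 cm

Per-prawn length estimates like this, collected at every daily tray check, are what would turn routine feed-tray inspections into the high-frequency growth curves that cast-net sampling cannot provide.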


Google Says New Eyeglasses Can Translate Languages in Real Time

#artificialintelligence

Ten years after introducing "Google Glass," Alphabet Inc. has created a new kind of smart eyeglasses. The company says the wearable computer device can translate different languages in real time. A working model, or prototype, of the yet-unnamed device was presented to the public this week during the yearly Google I/O developer conference. Google did not say when the glasses might go on sale to the public. The first Google Glass device included a wearable camera that could film what the wearers saw.


Science Fiction or Science Fact: Is the Metaverse the Future of Reality?

#artificialintelligence

Thanks to science fiction movies and literature, many people are already familiar with the idea of virtual worlds. One only needs to think of various popular films and books to imagine how people might navigate a different reality via an avatar. Increasingly, however, companies want to turn fiction into reality, with Facebook (now known as Meta) just one example of a brand leading the charge, evolving from a pure social media platform into a tech company pushing for an entirely virtual universe known as the metaverse. But what exactly is the metaverse? What opportunities does it offer for everyday life?


Google's second try at computer glasses translates conversations in real time

#artificialintelligence

May 11 (Reuters) - The science fiction is harder to see in Google's second try at glasses with a built-in computer. A decade after the debut of Google Glass, a nubby, sci-fi-looking pair of specs that filmed what wearers saw but raised concerns about privacy and received low marks for design, the Alphabet Inc (GOOGL.O) unit on Wednesday previewed a yet-unnamed pair of standard-looking glasses that display translations of conversations in real time and showed no hint of a camera. The new augmented-reality glasses were just one of several longer-term products Google unveiled at its annual Google I/O developer conference, aimed at bridging the real world and the company's digital universe of search, Maps and other services using the latest advances in artificial intelligence. "What we're working on is technology that enables us to break down language barriers, taking years of research in Google Translate and bringing that to glasses," said Eddie Chung, a director of product management at Google, calling the capability "subtitles for the world." Selling more hardware could help Google increase profit by keeping users in its network of technology, where it does not have to split ad sales with device makers such as Apple Inc (AAPL.O) and Samsung Electronics Co (005930.KS) that help distribute its services.


The use of Augmented Reality in Real Estate is no longer a gimmick

#artificialintelligence

Technology has completely infiltrated the built environment. Between IoT connectivity in buildings, indoor wayfinding and virtual tours, real estate is no longer the tech-averse industry it once was. In this context of digitization, Augmented Reality (AR) is finding new uses: the technology has evolved from a marketing gimmick into a solid strategy for asset managers to improve their built environments. The premise of AR is the real-time integration of digital information to "augment" the user's physical experience.