

A flexible lens controlled by light-activated artificial muscles promises to let soft machines see

Robohub

Inspired by the human eye, our biomedical engineering lab at Georgia Tech has designed an adaptive lens made of soft, light-responsive, tissue-like materials. Adjustable camera systems usually require a set of bulky, moving, solid lenses and a pupil in front of a camera chip to adjust focus and intensity. In contrast, human eyes perform the same functions using soft, flexible tissues in a highly compact form. Our lens, called the photo-responsive hydrogel soft lens, or PHySL, replaces rigid components with soft polymers acting as artificial muscles. The polymers are composed of a hydrogel, a water-based polymer material.
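The optics behind this kind of adaptive focus can be sketched with the standard thin-lens approximation: squeezing a soft lens steepens its surface curvature, which shortens its focal length. This is a generic illustration, not the PHySL's actual geometry; the refractive index and radii below are assumed values.

```python
def focal_length_mm(radius_mm, n=1.41):
    """Thin-lens estimate for a symmetric biconvex lens in air:
    1/f = (n - 1) * (2/R). A smaller radius of curvature R gives a
    shorter focal length, which is how a soft lens refocuses without
    any rigid parts moving. n = 1.41 is an assumed refractive index."""
    return 1.0 / ((n - 1.0) * (2.0 / radius_mm))

relaxed = focal_length_mm(radius_mm=10.0)  # flatter lens: longer focal length
squeezed = focal_length_mm(radius_mm=6.0)  # steeper curvature: shorter focal length
```

An artificial muscle that tunes the curvature continuously between these two states sweeps the focal length continuously as well, mimicking accommodation in the human eye.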


Apple snails can regrow their eyeballs

Popular Science

If you step on a snail, you'll know it. Despite their slow speed and simple bodies, apple snails (Pomacea canaliculata) have eyes that are anatomically similar to human eyes. Both species have complex camera-like eyes with a lens, cornea, and retina that visually capture the world around them. Unlike humans, apple snails can regrow their peepers if they are injured or amputated.


Why don't we trust technology in sport?

BBC News

For a few minutes on Sunday afternoon, Wimbledon's Centre Court became the perfect encapsulation of the current tensions between humans and machines. When Britain's Sonay Kartal hit a backhand long on a crucial point, her opponent Anastasia Pavlyuchenkova knew it had landed out. She said the umpire did too. But the electronic line-calling system - which fully replaced human line judges this year following earlier trials - remained silent. The human umpire eventually declared the point should be replayed.


Have scientists discovered a new colour called 'olo'?

Al Jazeera

A team of scientists claims to have discovered a new colour that humans cannot see without the help of technology. The researchers, based in the United States, said they were able to "experience" the colour, which they named "olo", by firing laser pulses into their eyes using a device named after the Wizard of Oz. Olo cannot be seen with the naked eye, but the five people who have seen it describe it as being similar to teal. Professors from the University of California, Berkeley and the University of Washington School of Medicine published an article in the journal Science Advances on April 18 in which they put forth their discovery of a hue beyond the gamut of human vision. They explained that they had devised a technique called Oz, which can "trick" the human eye into seeing olo.


TCL announces the NXTPAPER 11 Plus tablet at CES 2025, featuring a new nano-etched display

Engadget

Similar to last year, TCL is showing off a new generation of NXTPAPER tech this week at CES 2025. The new NXTPAPER 11 Plus tablet is built around the (also new) NXTPAPER 4.0 screen, which uses "nano-matrix lithography" to improve clarity and sharpness. Color accuracy has improved too, with the new output measuring just one on the Delta-E scale (a metric of how the human eye perceives color differences; values of one or lower are considered imperceptible). Since 2021, TCL has been trying to craft a screen that's easier on the eyes with its NXTPAPER tech, and now it's throwing AI into the mix (perhaps unsurprisingly). The new AI-powered Smart Eye Comfort Mode on the 11 Plus tablet adjusts output based on different usage scenarios, and the Personalized Eye Comfort Modes let users adjust eye-comfort settings to their liking.
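For context, the simplest Delta-E formulation (CIE76) is just Euclidean distance between two colors in the CIELAB space. TCL does not say which Delta-E variant it measured, so this is an illustrative sketch with made-up color values:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 Delta-E: Euclidean distance between two (L*, a*, b*) colors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

target = (50.0, 10.0, 10.0)    # intended color (illustrative values)
displayed = (50.5, 10.3, 9.6)  # what the panel actually shows

print(delta_e_cie76(target, displayed))  # below 1, so the difference is imperceptible
```

Later Delta-E variants (CIE94, CIEDE2000) add perceptual weighting, but the "one or lower is imperceptible" rule of thumb is usually quoted against this kind of distance.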


AI detects woman's breast cancer after routine screening missed it: 'Deeply grateful'

FOX News

A U.K. woman is thanking artificial intelligence for saving her life. Sheila Tooth of Littlehampton, West Sussex, had her breast cancer successfully detected by AI after routine testing came back "normal," according to a report by SWNS. Tooth, 68, was told she was clear of breast cancer after her last mammogram was reviewed by two radiologists. Her mammogram was then analyzed by an AI system, Mammography Intelligent Assessment, as part of a system being tested by University Hospitals Sussex.


Staring at your phone before bed DOESN'T make it harder to fall asleep, claims new study that contradicts official health advice

Daily Mail - Science & tech

We're often told by health experts not to look at our phone just before bedtime as it affects our sleep. But according to a new study, there may not be much scientific basis to this at all. Experts say there's no decent evidence that exposing our eyes to 'blue light' from a screen makes it harder to fall asleep. This contradicts official advice from health experts including the NHS, which tells people to avoid using phones an hour before bedtime due to blue light. Instead, the researchers think smartphones are interfering with sleep simply because we can't put them down at night.


Transparency Attacks: How Imperceptible Image Layers Can Fool AI Perception

McKee, Forrest, Noever, David

arXiv.org Artificial Intelligence

This paper investigates a novel algorithmic vulnerability in which imperceptible image layers confound multiple vision models into arbitrary label assignments and captions. We explore image preprocessing methods to introduce stealth transparency, which triggers AI misinterpretation of what the human eye perceives. The research compiles a broad attack surface, spanning traditional watermarking, steganography, and background-foreground miscues. We demonstrate dataset poisoning using the attack to mislabel a collection of grayscale landscapes and logos using either a single attack layer or randomly selected poisoning classes. For example, an image that the human eye perceives as a military tank is mislabeled as a bridge by object classifiers based on convolutional networks (YOLO, etc.) and vision transformers (ViT, GPT-Vision, etc.). A notable limitation of the attack stems from its dependency on the background (hidden) grayscale layer roughly matching the transparent foreground image that the human eye perceives. This dependency limits the practical success rate without manual tuning and exposes the hidden layers when the image is placed on the opposite display theme (e.g., a light transparent foreground becomes visible against a light background, so the attack works best against a light-theme image viewer or browser). The stealth transparency confounds established vision systems, with consequences including evading facial recognition and surveillance, defeating digital watermarking, content filtering, and dataset curation, misleading automotive and drone autonomy, tampering with forensic evidence, and misclassifying retail products. This method stands in contrast to traditional adversarial attacks, which typically modify pixel values in ways that are either slightly perceptible or entirely imperceptible to both humans and machines.
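The compositing asymmetry the abstract describes can be sketched in a few lines. This is my own simplification under an assumed pipeline behavior, not the authors' code: a viewer alpha-composites each pixel over its theme background, while a preprocessing pipeline that drops the alpha channel reads the raw color channel, i.e. the hidden layer, regardless of theme.

```python
def viewer_flatten(value, alpha, theme_bg):
    """What a human sees: the pixel alpha-composited (source-over) onto the
    viewer's theme background (0 = black theme, 255 = white theme)."""
    return round(alpha * value + (1.0 - alpha) * theme_bg)

# A mostly transparent pixel whose raw channel stores a dark "hidden" value.
hidden_value, alpha = 40, 0.1

on_light_theme = viewer_flatten(hidden_value, alpha, theme_bg=255)  # near-white to a human
on_dark_theme = viewer_flatten(hidden_value, alpha, theme_bg=0)     # near-black: hidden content leaks through
model_input = hidden_value  # a pipeline that ignores alpha sees 40 regardless of theme
```

The theme dependency in the abstract falls out directly: the human-visible rendering changes with the viewer background, while the alpha-blind model input does not.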


Imperceptible CMOS camera dazzle for adversarial attacks on deep neural networks

Stein, Zvi, Stern, Adrian

arXiv.org Artificial Intelligence

Despite the outstanding performance of deep neural networks, they are vulnerable to adversarial attacks. While there are many invisible attacks in the digital domain, most physical world adversarial attacks are visible. Here we present an invisible optical adversarial attack that uses a light source to dazzle a CMOS camera with a rolling shutter. We present the photopic conditions required to keep the attacking light source completely invisible while sufficiently jamming the captured image so that a deep neural network applied to it is deceived.
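The mechanism relies on a flicker that is too fast for the human eye yet slow relative to a rolling shutter's row-by-row readout. A toy simulation (my own sketch with illustrative parameters, not the authors' photometric model) shows how such a source imprints spatial bands on a frame:

```python
import math

def rolling_shutter_rows(n_rows, line_delay_s, flicker_hz, scene=0.5):
    """Illustrative rolling-shutter model: each sensor row samples the
    scene at a slightly later time, so a light source flickering far
    above the human flicker-fusion rate still produces row-to-row
    brightness variation, i.e. horizontal banding."""
    rows = []
    for r in range(n_rows):
        t = r * line_delay_s
        # attacking source intensity at this row's exposure time (0..1)
        attack = 0.5 * (1.0 + math.sin(2.0 * math.pi * flicker_hz * t))
        rows.append(min(1.0, scene + attack))  # clip at sensor saturation
    return rows

# 480 rows, 30 us per row, 5 kHz flicker: bands repeat every ~7 rows
bands = rolling_shutter_rows(n_rows=480, line_delay_s=30e-6, flicker_hz=5000)
```

To a human observer the same 5 kHz source looks like steady light, which is what keeps the attack invisible while the captured image is jammed.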