cornea


Safe Robotic Capsule Cleaning with Integrated Transpupillary and Intraocular Optical Coherence Tomography

Lai, Yu-Ting, Foroutani, Yasamin, Barzelay, Aya, Tsao, Tsu-Chin

arXiv.org Artificial Intelligence

Secondary cataract is one of the most common causes of vision loss after cataract surgery, arising from the proliferation of residual lens materials that naturally grow on the lens capsule. A potential treatment is capsule cleaning, a surgical procedure that requires enhanced visualization of the entire capsule and tool manipulation on the thin membrane. This article presents a robotic system capable of performing the capsule cleaning procedure by integrating a standard transpupillary and an intraocular optical coherence tomography (OCT) probe on a surgical instrument for equatorial capsule visualization and real-time tool-to-tissue distance feedback. Using robot precision, the developed system enables complete capsule mapping in the pupillary and equatorial regions with in-situ calibration of refractive index and fiber offset, which remain open challenges in obtaining an accurate capsule model. To demonstrate effectiveness, the capsule mapping strategy was validated through five experimental trials on an eye phantom that showed reduced root-mean-square errors in the constructed capsule model, while the cleaning strategy was performed in three ex-vivo pig eyes without tissue damage. Capsule cleaning is a potential treatment for preventing blindness due to residual lens materials that develop around the capsular bag after cataract surgery [1]. The procedure requires precise instrument maneuvers and timely sensing of the environment to achieve successful surgical outcomes. Although transpupillary OCT and the digital microscope exhibit sufficient resolution to visualize the posterior capsule (PC) and other tissues, the shadowing effect created by the iris limits the visibility of the equatorial region, leaving the amount and location of residual lens tissue unknown (Figure 1) [2].
Although polishing is theoretically feasible, many surgeons choose to skip it to avoid an increased risk of capsule rupture [3], possibly due to uncharacterized equatorial regions and inaccurate manual manipulation on the thin capsule membrane (error approximately 200-350 µm) [4], [5]. Unlike manual intervention, accurate tooltip positioning and enhanced sensing can be achieved with a robotic system, which has the potential to assist and enable the polishing procedure. This work was supported by U.S. NIH/R01EY029689 and NIH/R01EY030595. Yu-Ting Lai, Yasamin Foroutani, and Tsu-Chin Tsao are with the Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA, USA.
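The real-time tool-to-tissue distance feedback described above can be pictured as a simple closed loop: read the sensed distance, compare it to a safety margin, and step the tool accordingly. The function names, the proportional control law, and the target margin below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a tool-to-tissue distance feedback loop.
def approach_capsule(read_distance_um, move_tool_um, target_um=150.0,
                     gain=0.5, tolerance_um=5.0, max_steps=200):
    """Advance the tool until the sensed tool-to-tissue distance
    reaches the target safety margin."""
    for _ in range(max_steps):
        error = read_distance_um() - target_um  # positive: still too far away
        if abs(error) <= tolerance_um:
            return True  # within tolerance of the safety margin
        move_tool_um(gain * error)  # proportional step toward the tissue
    return False

# Toy stand-in for the intraocular OCT distance sensor and robot stage.
class FakeEye:
    def __init__(self, distance_um):
        self.distance_um = distance_um
    def read(self):
        return self.distance_um
    def move(self, delta_um):
        self.distance_um -= delta_um  # moving forward shrinks the gap

eye = FakeEye(1000.0)
converged = approach_capsule(eye.read, eye.move)
```

With the gain of 0.5 the error halves on each step, so the loop settles geometrically onto the margin; a real controller would also bound step size and handle sensor dropout.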


A portable diagnosis model for Keratoconus using a smartphone

Li, Yifan, Ho, Peter, Chong, Jo Woon

arXiv.org Artificial Intelligence

Keratoconus (KC) is a corneal disorder that results in blurry and distorted vision. Traditional diagnostic tools, while effective, are often bulky, costly, and require professional operation. In this paper, we present a portable and innovative methodology for diagnosing KC. Our proposed approach first captures the image reflected on the eye's cornea when a smartphone screen-generated Placido disc sheds its light on the eye, then applies a two-stage diagnosis to identify a KC cornea and pinpoint the location of the KC on the cornea. The first stage estimates the height and width of the Placido discs extracted from the captured image to determine whether the cornea has KC. In this KC identification, k-means clustering is implemented to discern statistical characteristics, such as the height and width values of the extracted Placido discs, of the non-KC (control) and KC-affected groups. The second stage involves the creation of a distance matrix, providing precise localization of KC on the cornea, which is critical for efficient treatment planning. The analysis of these distance matrices, paired with a logistic regression model and robust statistical analysis, reveals a clear distinction between the control and KC groups. The logistic regression model, which classifies small areas on the cornea as either control or KC-affected based on the corresponding inter-disc distances in the distance matrix, reported a classification accuracy of 96.94%, indicating that we can effectively pinpoint the protrusion caused by KC. This comprehensive, smartphone-based method is expected to detect KC and streamline timely treatment.
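The first stage described above amounts to clustering (height, width) features of the extracted Placido discs into control and KC-affected groups. A minimal sketch with plain 2-means follows; the feature values are made up for demonstration, and the paper's image-extraction pipeline and exact features are not reproduced here.

```python
# Illustrative 2-means clustering of Placido-disc (height, width) features.
def kmeans_2(points, iters=20):
    """Plain 2-means on 2-D feature points; returns centroids and groups."""
    c0, c1 = points[0], points[-1]  # crude initialization from the data
    groups = ([], [])
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
            d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
            groups[0 if d0 <= d1 else 1].append(p)
        def mean(g, old):
            if not g:
                return old
            return (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
        c0, c1 = mean(groups[0], c0), mean(groups[1], c1)
    return c0, c1, groups

# Hypothetical (height, width) features: KC distorts the reflected discs,
# so their measured dimensions deviate from the regular control pattern.
control = [(10.0, 10.1), (10.2, 9.9), (9.9, 10.0)]
kc      = [(13.5, 7.8), (13.9, 8.1), (13.2, 7.5)]
c0, c1, groups = kmeans_2(control + kc)
```

In practice one would use a vetted implementation such as scikit-learn's `KMeans` with multiple initializations rather than this single-seed toy.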


Reimagining partial thickness keratoplasty: An eye mountable robot for autonomous big bubble needle insertion

Wang, Y., Opfermann, J. D., Yu, J., Yi, H., Kaluna, J., Biswas, R., Zuo, R., Gensheimer, W., Krieger, A., Kang, J. U.

arXiv.org Artificial Intelligence

Autonomous surgical robots have demonstrated significant potential to standardize surgical outcomes, driving innovations that enhance safety and consistency regardless of individual surgeon experience. Deep anterior lamellar keratoplasty (DALK), a partial-thickness corneal transplant surgery aimed at replacing the anterior part of the cornea above the Descemet membrane (DM), would greatly benefit from an autonomous surgical approach, as it relies heavily on surgeon skill and suffers high perforation rates. In this study, we proposed a novel autonomous surgical robotic system (AUTO-DALK) based on a customized neural network capable of precise needle control and consistent big bubble demarcation on cadaver and live rabbit models. We demonstrate the feasibility of an AI-based image-guided vertical drilling approach for big bubble generation, in contrast to the conventional horizontal needle approach. Our system integrates an optical coherence tomography (OCT) fiber optic distal sensor into the eye-mountable micro robotic system, which automatically segments OCT M-mode depth signals to identify corneal layers using a custom deep learning algorithm. It enables the robot to autonomously guide the needle to targeted tissue layers via a depth-controlled feedback loop. We compared autonomous needle insertion performance and resulting pneumo-dissection using AUTO-DALK against 1) freehand insertion, 2) OCT sensor-guided manual insertion, and 3) teleoperated robotic insertion, reporting significant improvements in insertion depth, pneumo-dissection depth, task completion time, and big bubble formation. Ex vivo and in vivo results indicate that the AI-driven AUTO-DALK system is a promising solution to standardize pneumo-dissection outcomes for partial-thickness keratoplasty.
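The depth-controlled feedback idea above hinges on locating corneal layer reflections in the OCT depth signal and expressing the needle tip position relative to them. The paper uses a custom deep-learning segmenter; the simple peak picking below is only a conceptual stand-in, and the A-scan values are fabricated for illustration.

```python
# Conceptual sketch: locate the corneal surface and Descemet membrane (DM)
# peaks in an OCT A-scan, then report needle depth as a fraction of the
# surface-to-DM distance.
def find_layer_peaks(ascan, threshold=0.5):
    """Return indices of local maxima above threshold (candidate layers)."""
    peaks = []
    for i in range(1, len(ascan) - 1):
        if ascan[i] > threshold and ascan[i] >= ascan[i - 1] and ascan[i] > ascan[i + 1]:
            peaks.append(i)
    return peaks

def insertion_fraction(ascan, tip_index):
    """Needle depth as a fraction of corneal thickness (surface -> DM)."""
    peaks = find_layer_peaks(ascan)
    surface, dm = peaks[0], peaks[-1]  # assume first/last strong peaks
    return (tip_index - surface) / (dm - surface)

# Toy A-scan: strong reflections at the surface (index 10) and DM (index 90).
ascan = [0.0] * 100
ascan[10], ascan[90] = 1.0, 0.9
frac = insertion_fraction(ascan, tip_index=74)
```

A controller could then advance the needle until this fraction reaches a target depth just above DM, which is the role the deep-learning segmentation plays in the closed loop.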


What You See is What You Grasp: User-Friendly Grasping Guided by Near-eye-tracking

Wang, Shaochen, Zhang, Wei, Zhou, Zhangli, Cao, Jiaxi, Chen, Ziyang, Chen, Kang, Li, Bin, Kan, Zhen

arXiv.org Artificial Intelligence

This work presents a next-generation human-robot interface that can infer and realize the user's manipulation intention via sight alone. Specifically, we develop a system that integrates near-eye-tracking and robotic manipulation to enable user-specified actions (e.g., grasp, pick-and-place), where visual information is merged with human attention to create a mapping for desired robot actions. To enable sight-guided manipulation, a head-mounted near-eye-tracking device is developed to track eyeball movements in real time, so that the user's visual attention can be identified. To improve grasping performance, a transformer-based grasp model is then developed. Stacked transformer blocks are used to extract hierarchical features, where the number of channels is expanded at each stage while the resolution of the feature maps is squeezed. Experimental validation demonstrates that the eye-tracking system yields low gaze estimation error and the grasping system yields promising results on multiple grasping datasets. This work is a proof of concept for gaze interaction-based assistive robots, which hold great promise for helping the elderly or people with upper-limb disabilities in their daily lives. A demo video is available at https://www.youtube.com/watch?v=yuZ1hukYUrM
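The step that merges visual information with human attention can be pictured as cropping a region of interest around the estimated gaze point for the grasp network to process. The ROI size, the clamping behavior, and the image dimensions below are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: turn a gaze estimate into an image region of interest.
def gaze_roi(gaze_xy, img_w, img_h, roi=128):
    """Square ROI centered on the gaze point, clamped inside the image."""
    half = roi // 2
    x = min(max(gaze_xy[0], half), img_w - half)  # clamp center horizontally
    y = min(max(gaze_xy[1], half), img_h - half)  # clamp center vertically
    return (x - half, y - half, x + half, y + half)  # left, top, right, bottom

# Gaze near the left edge of a 640x480 frame: the box slides inward so the
# crop stays fully inside the image.
box = gaze_roi((30, 400), img_w=640, img_h=480)
```

The grasp model then only has to rank candidate grasps inside this attention-selected crop rather than over the whole scene.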


REGISTRATION FORM*

#artificialintelligence

PUBLIC TALK & PANEL DISCUSSION Organized by: National Centre for Cell Science (DBT-NCCS), Pune, India 28 February 2022 (Monday) 02.30 - 04.00 p.m. (including Q&A) ** Link to join the webinar: https://meet.goto.com/135934909 Summary: The theme for the National Science Day 2022 is "Integrated Approach in Science & Technology for a Sustainable Future". The time has come to recognize that the integration of diverse disciplines in science and technology is key to solving societal problems. This holds true not only for research oriented towards immediate application, but also for addressing crucial fundamental questions in science. This public talk and panel discussion aims to highlight the power of interdisciplinary collaborations exemplified by 'CASSPER' and 'CoRNeA', two tools created using AI to unravel the secrets of the structures and interactions of protein molecules.


How to spot deepfakes? Look at light reflection in the eyes

#artificialintelligence

University at Buffalo computer scientists have developed a tool that automatically identifies deepfake photos by analyzing light reflections in the eyes. The tool proved 94% effective with portrait-like photos in experiments described in a paper accepted at the IEEE International Conference on Acoustics, Speech and Signal Processing to be held in June in Toronto, Canada. "The cornea is almost like a perfect semisphere and is very reflective," says the paper's lead author, Siwei Lyu, Ph.D., SUNY Empire Innovation Professor in the Department of Computer Science and Engineering. "So, anything that is coming to the eye with a light emitting from those sources will have an image on the cornea. "The two eyes should have very similar reflective patterns because they're seeing the same thing.
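The intuition quoted above — real eyes see the same light sources, so the specular highlights on the two corneas should match, while synthesized faces often break this consistency — can be sketched as a similarity score between binary highlight masks. The IoU-style comparison and the toy masks below are illustrative assumptions, not the researchers' published method.

```python
# Toy sketch: compare corneal highlight masks from the left and right eyes.
def highlight_similarity(mask_left, mask_right):
    """IoU between flattened binary highlight masks of the two eyes."""
    inter = sum(a & b for a, b in zip(mask_left, mask_right))
    union = sum(a | b for a, b in zip(mask_left, mask_right))
    return inter / union if union else 1.0

# Flattened 4x4 binary masks of detected corneal highlights (made up).
real_left  = [0,1,1,0, 0,1,1,0, 0,0,0,0, 0,0,0,0]
real_right = [0,1,1,0, 0,1,1,0, 0,0,0,0, 0,0,0,0]
fake_right = [1,0,0,0, 0,0,0,1, 0,1,0,0, 0,0,0,0]

real_score = highlight_similarity(real_left, real_right)  # consistent eyes
fake_score = highlight_similarity(real_left, fake_right)  # mismatched eyes
```

A detector would threshold this score: portraits whose two eyes disagree strongly are flagged as likely composites.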


Computer program has near-perfect record spotting deepfakes by examining reflection in the eyes

Daily Mail - Science & tech

Computer scientists have developed a tool that detects deepfake photos with near-perfect accuracy. The system, which analyzes light reflections in a subject's eyes, proved 94 percent effective in experiments. In real portraits, the light reflected in our eyes is generally the same shape and color, because both eyes are looking at the same thing. Since deepfakes are composites made from many different photos, most omit this crucial detail. Deepfakes became a particular concern during the 2020 US presidential election, raising fears they would be used to discredit candidates and spread disinformation.


New Deepfake Spotting Tool Proves 94% Effective – Here's the Secret of Its Success

#artificialintelligence

Question: Which of these people are fake? University at Buffalo deepfake spotting tool proves 94% effective with portrait-like photos, according to study. University at Buffalo computer scientists have developed a tool that automatically identifies deepfake photos by analyzing light reflections in the eyes. The tool proved 94% effective with portrait-like photos in experiments described in a paper accepted at the IEEE International Conference on Acoustics, Speech and Signal Processing to be held in June in Toronto, Canada. "The cornea is almost like a perfect semisphere and is very reflective," says the paper's lead author, Siwei Lyu, PhD, SUNY Empire Innovation Professor in the Department of Computer Science and Engineering.


Working out the mystery of ectasia risk with artificial intelligence

#artificialintelligence

This article was reviewed by Renato Ambrósio, Jr, MD, PhD. Ectasia is an intriguing and mysterious complication of laser-vision-correction (LVC) procedures. The potentially devastating problem underscores the importance of determining the susceptibility of the cornea to developing progressive ectasia, and of going beyond detecting just mild or subclinical keratoconus. The corneal structure as well as the potential impact of LVC should be considered to predict ectasia risk in every patient. "The LVC procedure and eye rubbing are the primary environmental culprits in the development of ectasia in any cornea," said Renato Ambrósio, Jr, MD, PhD. "So, a basic factor for avoiding ectasia is educating the patient not to rub the eye."


Using artificial intelligence for early detection of keratoconus

#artificialintelligence

CERA researchers are investigating the use of artificial intelligence to help detect signs of keratoconus, thanks to new funding from the Perpetual 2020 IMPACT Philanthropy Application Program. CERA Senior Research Fellow Dr Srujana Sahebjada is devoted to improving the quality of life for people with keratoconus, a condition that affects the cornea, the clear front window of the eye. Keratoconus usually affects teenagers and young adults. For people with this condition, the cornea gets thinner over time and develops a bulging cone-like shape, which causes vision problems. In advanced cases, a corneal transplant will be required to correct or restore vision.