phosphene



Why you 'see' things in the dark, according to an ophthalmologist

Popular Science

Science explains why we see flickers of light and patterns in the darkness. Our eyes sometimes really do play tricks on us at night. In 1999, Daniel Myrick and Eduardo Sánchez shot The Blair Witch Project, one of the definitive horror films of the era, on a budget of roughly $60,000. The film is a study in omission, in the conspicuous absence of the visual effects characteristic of the genre. In lieu of baroque prosthetic gore and over-the-top CGI effects, the movie leans into silence and darkness for much of its 81-minute run time.



Evaluating Deep Human-in-the-Loop Optimization for Retinal Implants Using Sighted Participants

Schoinas, Eirini, Rastogi, Adyah, Carter, Anissa, Granley, Jacob, Beyeler, Michael

arXiv.org Artificial Intelligence

Human-in-the-loop optimization (HILO) is a promising approach for personalizing visual prostheses by iteratively refining stimulus parameters based on user feedback. Previous work demonstrated HILO's efficacy in simulation, but its performance with human participants remains untested. Here we evaluate HILO using sighted participants viewing simulated prosthetic vision to assess its ability to optimize stimulation strategies under realistic conditions. Participants selected between phosphenes generated by competing encoders to iteratively refine a deep stimulus encoder (DSE). We tested HILO in three conditions: standard optimization, threshold misspecifications, and out-of-distribution parameter sampling. Participants consistently preferred HILO-generated stimuli over both a naïve encoder and the DSE alone, with log odds favoring HILO across all conditions. We also observed key differences between human and simulated decision-making, highlighting the importance of validating optimization strategies with human participants. These findings support HILO as a viable approach for adapting visual prostheses to individuals.
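The core loop described in the abstract — present two candidate stimuli, record which one the participant prefers, and use that choice to narrow the parameter estimate — can be illustrated with a toy one-dimensional version. This is a minimal sketch, not the authors' algorithm: the hypothetical `true_param` and the simulated `prefers` oracle stand in for a real patient parameter and real human feedback, and ternary search replaces the paper's Bayesian optimizer.

```python
# Toy preference-based optimization loop (illustrative only).
# Assumption: a single scalar patient parameter in [0, 1], and a
# simulated participant who prefers the candidate closer to the truth.
true_param = 0.73          # hypothetical ground-truth patient parameter
lo, hi = 0.0, 1.0          # current belief interval

def prefers(a, b):
    """Simulated participant: picks the encoder setting closer to truth."""
    return abs(a - true_param) < abs(b - true_param)

for _ in range(20):
    # Propose two competing candidates inside the current interval.
    a = lo + (hi - lo) / 3
    b = hi - (hi - lo) / 3
    if prefers(a, b):
        hi = b             # preference implies the optimum lies toward a
    else:
        lo = a             # otherwise it lies toward b

estimate = (lo + hi) / 2   # converges near true_param after 20 comparisons
```

Twenty paired comparisons shrink the interval by a factor of (2/3) per step, so the estimate lands very close to the hidden parameter — the same information-from-choices principle that lets HILO refine a deep stimulus encoder from user feedback alone.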


Hybrid Neural Autoencoders for Stimulus Encoding in Visual and Other Sensory Neuroprostheses

Granley, Jacob, Relic, Lucas, Beyeler, Michael

arXiv.org Artificial Intelligence

Sensory neuroprostheses are emerging as a promising technology to restore lost sensory function or augment human capabilities. However, sensations elicited by current devices often appear artificial and distorted. Although current models can predict the neural or perceptual response to an electrical stimulus, an optimal stimulation strategy solves the inverse problem: what is the required stimulus to produce a desired response? Here, we frame this as an end-to-end optimization problem, where a deep neural network stimulus encoder is trained to invert a known and fixed forward model that approximates the underlying biological system. As a proof of concept, we demonstrate the effectiveness of this Hybrid Neural Autoencoder (HNA) in visual neuroprostheses. We find that HNA produces high-fidelity patient-specific stimuli representing handwritten digits and segmented images of everyday objects, and significantly outperforms conventional encoding strategies across all simulated patients. Overall, this is an important step towards the long-standing challenge of restoring high-quality vision to people living with incurable blindness, and may prove a promising solution for a variety of neuroprosthetic technologies.
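The key idea — train an encoder end-to-end through a known, frozen forward model so that the predicted percept matches the target — can be sketched in a few lines. This is a deliberately simplified stand-in, not the paper's architecture: a frozen random linear map plays the role of the biological forward model, and a linear encoder trained by gradient descent replaces the deep stimulus encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, known forward model: stimulus (electrode amplitudes) -> percept.
# A frozen random linear map stands in for the phosphene model here.
n_pixels, n_electrodes = 16, 8
F = rng.normal(size=(n_pixels, n_electrodes))  # frozen; never updated

# Trainable linear "stimulus encoder": target percept -> stimulus.
W = np.zeros((n_electrodes, n_pixels))

X = rng.normal(size=(64, n_pixels))  # stand-in target images

def loss(W):
    P = (X @ W.T) @ F.T              # percept predicted by the fixed model
    return np.mean((P - X) ** 2)     # end-to-end reconstruction error

loss_init = loss(W)
lr = 0.01
for _ in range(500):
    S = X @ W.T                      # encoded stimuli
    err = (S @ F.T) - X              # error measured in percept space
    grad = (err @ F).T @ X / len(X)  # gradient flows *through* frozen F
    W -= lr * grad                   # only the encoder is updated

loss_final = loss(W)                 # reconstruction error drops sharply
```

The essential property is that the loss is computed on the forward model's output while only the encoder's weights receive updates — exactly the "autoencoder with a frozen decoder" structure the abstract describes, here reduced to linear algebra for clarity.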


Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses

Granley, Jacob, Riedel, Alexander, Beyeler, Michael

arXiv.org Artificial Intelligence

Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons. Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge. We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system. To investigate the feasibility of adapting brain-like CNNs for modeling visual prostheses, we developed a proof-of-concept model to predict the perceptions resulting from electrical stimulation. We show that a neurologically-inspired decoding of CNN activations produces qualitatively accurate phosphenes, comparable to phosphenes reported by real patients. Overall, this is an essential first step towards building brain-like models of electrical stimulation, which may not just improve the quality of vision provided by cortical prostheses but could also further our understanding of the neural code of vision.
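The decoding step the abstract alludes to — mapping unit activations back into a visual percept via each unit's receptive field — can be illustrated with a toy renderer. This is only a sketch of the general idea, not the authors' model: it assumes each stimulated unit contributes a Gaussian blob at its receptive-field center, scaled by its activation, which is one simple way phosphenes are often rendered in simulation.

```python
import numpy as np

# Toy phosphene renderer: decode unit activations into an image by
# placing a Gaussian blob at each unit's receptive-field center.
size = 32
yy, xx = np.mgrid[0:size, 0:size]

def render_phosphene(cx, cy, activation, sigma=2.0):
    """One unit's contribution: a blob at (cx, cy) scaled by activation."""
    return activation * np.exp(
        -((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2)
    )

# Summing contributions from two hypothetical stimulated units yields
# the simulated percept: two phosphenes of different brightness.
percept = render_phosphene(10, 20, 1.0) + render_phosphene(22, 8, 0.5)
```

Retinotopy, current spread, and the CNN-derived activations that drive the real model are all omitted; the point is only the structure of the decoding, from per-unit activations to a spatial percept.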