Pupil Size


Gradient-Guided Exploration of Generative Model's Latent Space for Controlled Iris Image Augmentations

Mitcheff, Mahsa, Khan, Siamul Karim, Czajka, Adam

arXiv.org Artificial Intelligence

Developing reliable iris recognition and presentation attack detection methods requires diverse datasets that capture realistic variations in iris features and a wide spectrum of anomalies. Because of the rich texture of iris images, which spans a wide range of spatial frequencies, synthesizing same-identity iris images while controlling specific attributes remains challenging. In this work, we introduce a new iris image augmentation strategy by traversing a generative model's latent space toward latent codes that represent same-identity samples but with some desired iris image properties manipulated. The latent space traversal is guided by a gradient of specific geometrical, textural, or quality-related iris image features (e.g., sharpness, pupil size, iris size, or pupil-to-iris ratio) and preserves the identity represented by the image being manipulated. The proposed approach can be easily extended to manipulate any attribute for which a differentiable loss term can be formulated. Additionally, our approach can operate on images randomly generated by a pre-trained GAN model or on real-world iris images: GAN inversion can project any given iris image into the latent space and obtain its corresponding latent code.
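The core idea of the abstract, gradient-guided traversal of a latent code under a differentiable attribute loss plus an identity-preservation term, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generator`, `attribute_loss`, and `identity_loss` are placeholders for a pre-trained GAN and whatever differentiable objectives (e.g., a pupil-size estimator, an iris-matcher similarity) one plugs in.

```python
import torch

def traverse_latent(generator, z, attribute_loss, identity_loss,
                    steps=100, lr=0.01, id_weight=1.0):
    """Walk a latent code toward a desired attribute value while
    penalizing identity drift. All callables are assumed differentiable."""
    z = z.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        img = generator(z)
        # Combined objective: push the attribute toward its target,
        # keep the rendered identity close to the original.
        loss = attribute_loss(img) + id_weight * identity_loss(img)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```

Any attribute with a differentiable loss (sharpness, pupil-to-iris ratio, etc.) slots into `attribute_loss` unchanged, which is what makes the strategy easy to extend.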


Modelling the Interplay of Eye-Tracking Temporal Dynamics and Personality for Emotion Detection in Face-to-Face Settings

Seikavandi, Meisam J., Fimland, Jostein, Narcizo, Fabricio Batista, Barrett, Maria, Vucurevich, Ted, Boldt, Jesper Bünsow, Dittberner, Andrew Burke, Burelli, Paolo

arXiv.org Artificial Intelligence

Accurate recognition of human emotions is critical for adaptive human-computer interaction, yet remains challenging in dynamic, conversation-like settings. This work presents a personality-aware multimodal framework that integrates eye-tracking sequences, Big Five personality traits, and contextual stimulus cues to predict both perceived and felt emotions. Seventy-three participants viewed speech-containing clips from the CREMA-D dataset while providing eye-tracking signals, personality assessments, and emotion ratings. Our neural models captured temporal gaze dynamics and fused them with trait and stimulus information, yielding consistent gains over SVM and literature baselines. Results show that (i) stimulus cues strongly enhance perceived-emotion predictions (macro F1 up to 0.77), while (ii) personality traits provide the largest improvements for felt emotion recognition (macro F1 up to 0.58). These findings highlight the benefit of combining physiological, trait-level, and contextual information to address the inherent subjectivity of emotion. By distinguishing between perceived and felt responses, our approach advances multimodal affective computing and points toward more personalized and ecologically valid emotion-aware systems.
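The fusion described above, temporal gaze dynamics combined with trait and stimulus vectors, can be sketched as a small recurrent model. Layer sizes, the GRU encoder, and all names here are assumptions for illustration, not the authors' reported architecture.

```python
import torch
import torch.nn as nn

class EmotionFusionNet(nn.Module):
    """Illustrative late-fusion model: a GRU encodes the eye-tracking
    sequence; its final state is concatenated with Big Five trait
    scores and a stimulus-cue vector before classification."""
    def __init__(self, gaze_dim=4, hidden=32, n_traits=5,
                 n_stimulus=6, n_classes=3):
        super().__init__()
        self.gru = nn.GRU(gaze_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + n_traits + n_stimulus, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, gaze_seq, traits, stimulus):
        _, h = self.gru(gaze_seq)                    # h: (1, batch, hidden)
        fused = torch.cat([h[-1], traits, stimulus], dim=-1)
        return self.head(fused)                      # emotion logits
```

Dropping the `traits` or `stimulus` branch from the concatenation gives the ablations the abstract contrasts (stimulus cues helping perceived emotion, traits helping felt emotion).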


AI-guided digital intervention with physiological monitoring reduces intrusive memories after experimental trauma

deBettencourt, Megan T., Sakthivel, Sruthi, Holmes, Emily A., Chevillet, Mark

arXiv.org Artificial Intelligence

Trauma prevalence is vast globally. Evidence-based digital treatments can help, but most require human guidance. Human guides provide tailored instructions and responsiveness to internal cognitive states, but limit scalability. Can generative AI and neurotechnology provide a scalable alternative? Here we test ANTIDOTE, combining AI guidance and pupillometry to automatically deliver and monitor an evidence-based digital treatment, specifically the Imagery Competing Task Intervention (ICTI), to reduce intrusive memories after psychological trauma. One hundred healthy volunteers were exposed to videos of traumatic events and randomly assigned to an intervention or active control condition. As predicted, intervention participants reported significantly fewer intrusive memories over the following week. Post-hoc assessment against clinical rubrics confirmed the AI guide delivered the intervention successfully. Additionally, pupil size tracked intervention engagement and predicted symptom reduction, providing a candidate biomarker of intervention effectiveness. These findings open a path toward rigorous AI-guided digital interventions that can scale to trauma prevalence.


Eye Movements as Indicators of Deception: A Machine Learning Approach

Foucher, Valentin, de Leon-Martinez, Santiago, Moro, Robert

arXiv.org Artificial Intelligence

Gaze may enhance the robustness of lie detectors but remains under-studied. This study evaluated the efficacy of AI models (using fixations, saccades, blinks, and pupil size) for detecting deception in Concealed Information Tests across two datasets. The first, collected with an EyeLink 1000, contains gaze data from a computerized experiment in which 87 participants revealed, concealed, or faked the value of a previously selected card. The second, collected with Pupil Neon, involved 36 participants performing a similar task but facing an experimenter. XGBoost achieved accuracies up to 74% in a binary classification task (Revealing vs. Concealing) and 49% in a more challenging three-class task (Revealing vs. Concealing vs. Faking). Feature analysis identified saccade number, duration, amplitude, and maximum pupil size as the most important features for deception prediction. These results demonstrate the feasibility of using gaze and AI to enhance lie detectors and encourage future research to improve on these results.
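The pipeline is a standard gradient-boosted classifier over aggregated gaze features. The sketch below uses scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost (the paper's actual model) and synthetic data in place of the eye-tracking recordings; the feature list mirrors the ones the abstract names as most important.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in features per trial:
# [saccade number, saccade duration, saccade amplitude, max pupil size]
X = rng.normal(size=(200, 4))
# Toy binary label (Revealing vs. Concealing) correlated with two features.
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
importances = clf.feature_importances_   # analogous to the paper's feature analysis
train_acc = clf.score(X, y)
```

With real data one would of course report held-out accuracy (e.g., via cross-validation) rather than training accuracy; `feature_importances_` gives the per-feature ranking analogous to the abstract's analysis.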


Modelling Emotions in Face-to-Face Setting: The Interplay of Eye-Tracking, Personality, and Temporal Dynamics

Seikavandi, Meisam Jamshidi, Fimland, Jostein, Barrett, Maria, Burelli, Paolo

arXiv.org Artificial Intelligence

Accurate emotion recognition is pivotal for nuanced and engaging human-computer interactions, yet remains difficult to achieve, especially in dynamic, conversation-like settings. In this study, we showcase how integrating eye-tracking data, temporal dynamics, and personality traits can substantially enhance the detection of both perceived and felt emotions. Seventy-three participants viewed short, speech-containing videos from the CREMA-D dataset while being recorded for eye-tracking signals (pupil size, fixation patterns), Big Five personality assessments, and self-reported emotional states. Our neural network models combined these diverse inputs--including stimulus emotion labels for contextual cues--and yielded marked performance gains compared to the state of the art. Specifically, perceived valence predictions reached a macro F1-score of 0.76, and models incorporating personality traits and stimulus information demonstrated significant improvements in felt emotion accuracy. These results highlight the benefit of unifying physiological, individual, and contextual factors to address the subjectivity and complexity of emotional expression. Beyond validating the role of user-specific data in capturing subtle internal states, our findings inform the design of future affective computing and human-agent systems, paving the way for more adaptive and cross-individual emotional intelligence in real-world interactions.


EyePreserve: Identity-Preserving Iris Synthesis

Khan, Siamul Karim, Tinsley, Patrick, Mitcheff, Mahsa, Flynn, Patrick, Bowyer, Kevin W., Czajka, Adam

arXiv.org Artificial Intelligence

Synthesis of same-identity biometric iris images, both for existing and non-existing identities while preserving the identity across a wide range of pupil sizes, is complex due to the intricate iris muscle constriction mechanism, requiring a precise model of non-linear iris texture deformations to be embedded into the synthesis pipeline. This paper presents the first method of fully data-driven, identity-preserving, pupil size-varying synthesis of iris images. This approach is capable of synthesizing images of irises with different pupil sizes representing non-existing identities, as well as non-linearly deforming the texture of iris images of existing subjects given the segmentation mask of the target iris image. Iris recognition experiments suggest that the proposed deformation model not only preserves the identity when changing the pupil size but also offers better similarity between same-identity iris samples with significant differences in pupil size, compared to state-of-the-art linear and non-linear (bio-mechanical-based) iris deformation models. Two immediate applications of the proposed approach are: (a) synthesis of, or enhancement of, existing biometric datasets for iris recognition, mimicking those acquired with iris sensors, and (b) helping forensic human experts examine iris image pairs with significant differences in pupil dilation. Source codes and weights of the models are made available with the paper.


AI-powered glaucoma screening test delivers rapid results

#artificialintelligence

A new rapid screening test for glaucoma could help advance early detection of the disease, a leading cause of irreversible blindness. Developed by a research team of engineers and ophthalmologists led by RMIT University in Melbourne, Australia, the test uses infrared sensors to monitor eye movement and can produce accurate results within seconds. About 80 million people worldwide have glaucoma, with more than 111 million expected to be living with the disease by 2040. The loss of sight is usually gradual and 50% of people with glaucoma do not know they have it. Currently, glaucoma is diagnosed through a 30-minute eye pressure test delivered by an ophthalmologist.


It's in your eyes! People with large pupils are more INTELLIGENT, study finds

Daily Mail - Science & tech

People who have larger pupils in their eyes are more intelligent than those with smaller pupils, according to a new study. Volunteers sat reasoning, attention and memory tests so the Georgia Institute of Technology team could investigate the link between pupil size and intelligence. They found that as well as being linked to arousal and exhaustion, pupil dilation can be used to understand individual differences in intelligence, discovering that the larger the pupils, the higher the intelligence. Differences in baseline pupil size between those scoring highest and those scoring lowest on intelligence tests could be seen with the unaided eye. The team say this could be due to people with larger pupils having better regulation of activity in a brain region linked to intelligence and memory.


End-to-End Models for the Analysis of Pupil Size Variations and Diagnosis of Parkinson's Disease

Zanca, Dario, Rufa, Alessandra, Canessa, Andrea, Sabatini, Silvio

arXiv.org Machine Learning

It is well known that a systematic analysis of pupil size variations, recorded by means of an eye-tracker, is a rich source of information about a subject's cognitive state. In this work we present end-to-end models for the diagnosis of Parkinson's disease (PD) based on the raw pupil size signal. Long-range registrations (10 minutes) of pupil size were collected in scotopic conditions (complete darkness, 0 lux) from 21 healthy subjects and 15 subjects diagnosed with PD. 1-D convolutional neural network models are trained for classification of short-range sequences (10 to 60 seconds of registration). The model provides predictions with high average accuracy on a held-out test set. A temporal analysis of the model performance allowed the characterization of pupil size variations in PD and healthy subjects during a resting state. Dataset and codes are released for reproducibility and benchmarking purposes.
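An end-to-end 1-D CNN over a raw pupil-size sequence can be as simple as the sketch below. Kernel sizes, channel counts, and the class name are illustrative assumptions, not the paper's exact model; the input is a single-channel signal of a few hundred to a few thousand samples (e.g., 10-60 s of registration).

```python
import torch
import torch.nn as nn

class PupilCNN(nn.Module):
    """Minimal 1-D CNN for two-class (PD vs. healthy) classification
    of a raw pupil-size sequence."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global pooling -> length-independent
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):              # x: (batch, 1, samples)
        return self.classifier(self.features(x).squeeze(-1))
```

The global average pooling makes the model agnostic to sequence length, which is convenient when comparing 10-second and 60-second windows as the paper does.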


The AI that can predict your personality simply by looking into your eyes

Daily Mail - Science & tech

This technology could be put in smartphones that understand and predict our behaviour, potentially offering personalised support. It could also be used by robot companions for older people, or in self-driving cars and interactive video games. Dr Loetscher says the findings also provide an important bridge between tightly controlled laboratory studies and the study of natural eye movements in real-world environments. 'This research has tracked and measured the visual behaviour of people going about their everyday tasks, providing more natural responses than if they were in a lab. 'And thanks to our machine-learning approach, we not only validate the role of personality in explaining eye movement in everyday life, but also reveal new eye movement characteristics as predictors of personality traits.' 'Personality traits characterise an individual's patterns of behaviour, thinking, and feeling', researchers wrote previously in their paper published in Frontiers in Human Neuroscience. 'Studies reporting relationships between personality traits and eye movements suggest that people with similar traits tend to move their eyes in similar ways.' Researchers found that people who were neurotic usually blinked faster, while people who were open to new experiences moved their eyes more from side to side. People who had high levels of conscientiousness had greater fluctuations in their pupil size. Optimists spent less time looking at negative emotional stimuli (such as an image of skin cancer) than people who were pessimistic.