facial feature
Actress sues Avatar director for 'theft' of facial features
Film-maker James Cameron and Disney are being sued by an actress who has accused the director of using her likeness as the basis for one of the lead characters in his hit film series Avatar. German-born US actress Q'orianka Kilcher, who is of indigenous Peruvian descent, alleged that in 2005, when she was 14, Cameron extracted her facial features from a photograph of her portraying Pocahontas in another film, The New World. In court documents filed on Tuesday in California, her team claimed Cameron directed his design team to use the image as the foundation for the character of Neytiri, depicted on screen by Zoe Saldaña. BBC News has contacted Cameron and Disney for comment. The Avatar movies blend live-action performance with computer-generated characters.
- Europe > United Kingdom (0.52)
- North America > United States > California (0.25)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
Gamers Hate Nvidia's DLSS 5. Developers Aren't Crazy About It, Either
Nvidia's new AI upscaling gaming technology struck gamers as uncanny and off-putting. Developers don't seem to like it either, but it could be "the default" in a few years. Nvidia announced a new version of its DLSS AI upscaling technology for its graphics cards earlier this week at its GPU Technology Conference (GTC), which it calls the "Super Bowl of AI". But unlike previous versions of DLSS, which used AI to improve frame rates in video games, DLSS 5 has a much more ambitious calling: using generative AI to make character faces in games look more realistic and detailed. The demonstration received sharp blowback on social media, with many finding the effect off-putting, reacting with outright disgust, and calling it yet another example of "AI slop".
- North America > United States > California > San Francisco County > San Francisco (0.04)
- North America > Mexico (0.04)
- Europe > Slovakia (0.04)
- Europe > Czechia (0.04)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Information Technology (1.00)
- Information Technology > Communications > Social Media (0.51)
- Information Technology > Artificial Intelligence > Games (0.51)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.35)
- Information Technology > Artificial Intelligence > Applied AI (0.34)
- North America > United States (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > China > Beijing > Beijing (0.04)
CSGaze: Context-aware Social Gaze Prediction
Madan, Surbhi, Ghosh, Shreya, Subramanian, Ramanathan, Dhall, Abhinav, Gedeon, Tom
A person's gaze offers valuable insights into their focus of attention, level of social engagement, and confidence. In this work, we investigate how contextual cues combined with visual scene and facial information can be effectively utilized to predict and interpret social gaze patterns during conversational interactions. We introduce CSGaze, a context-aware multimodal approach that leverages facial and scene information as complementary inputs to enhance social gaze pattern prediction from multi-person images. The model also incorporates a fine-grained attention mechanism centered on the principal speaker, which helps in better modeling social gaze dynamics. Experimental results show that CSGaze performs competitively with state-of-the-art methods on GP-Static, UCO-LAEO and AVA-LAEO. Our findings highlight the role of contextual cues in improving social gaze prediction. Additionally, we provide initial explainability through generated attention scores, offering insights into the model's decision-making process. We also demonstrate our model's generalizability by testing it on open-set datasets, demonstrating its robustness across diverse scenarios.
- Research Report > New Finding (0.68)
- Research Report > Promising Solution (0.48)
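The abstract's fine-grained attention mechanism centered on the principal speaker can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the feature shapes, the dot-product scoring, and the function name `speaker_centered_weights` are all assumptions, a stand-in for attending to people in a scene relative to whoever is speaking.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def speaker_centered_weights(person_feats, speaker_idx):
    """Attention over people in a scene, centered on the principal speaker.

    Each person's weight is the scaled dot-product similarity between
    their feature vector and the speaker's, normalized with softmax.
    """
    q = person_feats[speaker_idx]                          # speaker's features as query
    scores = person_feats @ q / np.sqrt(person_feats.shape[-1])
    return softmax(scores)                                 # weights sum to 1

rng = np.random.default_rng(0)
people = rng.standard_normal((4, 32))  # facial features for 4 people in one image
w = speaker_centered_weights(people, speaker_idx=0)
print(w.shape)  # (4,)
```

A full model would learn the query/key projections rather than comparing raw features, but the normalization structure is the same.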
Cross-Enhanced Multimodal Fusion of Eye-Tracking and Facial Features for Alzheimer's Disease Diagnosis
Nie, Yujie, Ni, Jianzhang, Ye, Yonglong, Zhang, Yuan-Ting, Wing, Yun Kwok, Xu, Xiangqing, Ma, Xin, Fan, Lizhou
Accurate diagnosis of Alzheimer's disease (AD) is essential for enabling timely intervention and slowing disease progression. Multimodal diagnostic approaches offer considerable promise by integrating complementary information across behavioral and perceptual domains. Eye-tracking and facial features, in particular, are important indicators of cognitive function, reflecting attentional distribution and neurocognitive state. However, few studies have explored their joint integration for auxiliary AD diagnosis. In this study, we propose a multimodal cross-enhanced fusion framework that synergistically leverages eye-tracking and facial features for AD detection. The framework incorporates two key modules: (a) a Cross-Enhanced Fusion Attention Module (CEFAM), which models inter-modal interactions through cross-attention and global enhancement, and (b) a Direction-Aware Convolution Module (DACM), which captures fine-grained directional facial features via horizontal-vertical receptive fields. To support this work, we constructed a synchronized multimodal dataset, including 25 patients with AD and 25 healthy controls (HC), by recording aligned facial video and eye-tracking sequences during a visual memory-search paradigm, providing an ecologically valid resource for evaluating integration strategies. Extensive experiments on this dataset demonstrate that our framework outperforms traditional late fusion and feature concatenation methods, achieving a classification accuracy of 95.11% in distinguishing AD from HC, highlighting superior robustness and diagnostic performance by explicitly modeling inter-modal dependencies and modality-specific contributions. Introduction Alzheimer's disease (AD), a progressive and irreversible neurodegenerative disorder, represents the primary cause of dementia in older adults [1]. It typically begins with mild memory loss and gradually progresses to severe impairments in executive and cognitive functions [2].
Within the global aging population, more than 150 million people worldwide will be affected by AD or other forms of dementia [3], imposing a substantial burden on both families and healthcare systems. Early and accurate identification of Alzheimer's disease is vital to initiate interventions that may slow progression and improve quality of life. Clinically, the diagnosis of AD primarily relies on biomarker analysis, neuroimaging techniques, and neuropsychological assessments.
- Asia > China > Hong Kong (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Europe > Sweden (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
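The cross-attention at the heart of the fusion module described above — one modality attending to another so their interactions are modeled explicitly, rather than simply concatenating features — can be sketched in plain numpy. This is a generic scaled dot-product cross-attention, not the paper's CEFAM; the sequence lengths and dimensions are arbitrary assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: one modality attends to another."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (Tq, Tk) affinity between modalities
    weights = softmax(scores, axis=-1)       # each query row sums to 1
    return weights @ values                  # (Tq, d) fused representation

rng = np.random.default_rng(0)
eye = rng.standard_normal((5, 16))   # 5 eye-tracking feature vectors
face = rng.standard_normal((8, 16))  # 8 facial feature vectors

# Eye-tracking features query the facial features:
fused = cross_attention(eye, face, face)
print(fused.shape)  # (5, 16)
```

Unlike late fusion or concatenation, each fused vector here is a weighted mix of the other modality's features, which is what lets the model capture inter-modal dependencies.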
When Face Recognition Doesn't Know Your Face Is a Face
An estimated 100 million people live with facial differences. As face recognition tech becomes widespread, some say they're getting blocked from accessing essential systems and services. Autumn Gardiner thought updating her driving license would be straightforward. After getting married last year, she headed to the local Department of Motor Vehicles office in Connecticut to get her name changed on her license. While she was there, Gardiner recalls, officials said she needed to update her photo.
- North America > United States > Connecticut (0.25)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.05)
- North America > United States > Oregon (0.04)
- (7 more...)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Transportation > Ground > Road (0.68)
The Influence of Facial Features on the Perceived Trustworthiness of a Social Robot
Barrow, Benedict, Moore, Roger K.
Trust and the perception of trustworthiness play an important role in decision-making and our behaviour towards others, and this is true not only of human-human interactions but also of human-robot interactions. While significant advances have been made in recent years in the field of social robotics, there is still some way to go before we fully understand the factors that influence human trust in robots. This paper presents the results of a study into the first impressions created by a social robot's facial features, based on the hypothesis that a 'babyface' engenders trust. By manipulating the back-projected face of a Furhat robot, the study confirms that eye shape and size have a significant impact on the perception of trustworthiness. The work thus contributes to an understanding of the design choices that need to be made when developing social robots so as to optimise the effectiveness of human-robot interaction. Trust is a fundamental building block for any society to function properly.
- Europe > United Kingdom > England > South Yorkshire > Sheffield (0.04)
- Oceania > New Zealand > South Island > Canterbury Region > Christchurch (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- (3 more...)
- North America > United States (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > China > Beijing > Beijing (0.04)
The Importance of Facial Features in Vision-based Sign Language Recognition: Eyes, Mouth or Full Face?
Pham, Dinh Nam, Avramidis, Eleftherios
Non-manual facial features play a crucial role in sign language communication, yet their importance in automatic sign language recognition (ASLR) remains underexplored. While prior studies have shown that incorporating facial features can improve recognition, related work often relies on hand-crafted feature extraction and fails to go beyond the comparison of manual features versus the combination of manual and facial features. In this work, we systematically investigate the contribution of distinct facial regions (eyes, mouth, and full face) using two different deep learning models (a CNN-based model and a transformer-based model) trained on an SLR dataset of isolated signs with randomly selected classes. Through quantitative performance and qualitative saliency map evaluation, we reveal that the mouth is the most important non-manual facial feature, significantly improving accuracy. Our findings highlight the necessity of incorporating facial features in ASLR.
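Comparing facial regions as the abstract describes presupposes a way to isolate each region from a frame. A minimal sketch, under assumptions: the landmark index ranges loosely follow the common 68-point iBUG layout, and the `crop_region` helper and `REGIONS` table are illustrative names, not the paper's code.

```python
import numpy as np

# Assumed landmark index ranges (68-point layout); not taken from the paper.
REGIONS = {
    "eyes": range(36, 48),
    "mouth": range(48, 68),
    "full_face": range(0, 68),
}

def crop_region(frame, landmarks, region, pad=4):
    """Crop one facial region from a frame using its landmark subset.

    frame: (H, W, C) image; landmarks: (68, 2) array of (x, y) points.
    """
    pts = landmarks[list(REGIONS[region])]
    x0, y0 = pts.min(axis=0).astype(int) - pad     # top-left, padded
    x1, y1 = pts.max(axis=0).astype(int) + pad     # bottom-right, padded
    h, w = frame.shape[:2]
    return frame[max(y0, 0):min(y1, h), max(x0, 0):min(x1, w)]

frame = np.zeros((128, 128, 3), dtype=np.uint8)    # stand-in video frame
rng = np.random.default_rng(2)
lms = rng.uniform(20, 100, size=(68, 2))           # stand-in landmarks
eyes = crop_region(frame, lms, "eyes")
print(eyes.ndim)  # 3
```

Each cropped region would then be fed to the CNN- or transformer-based model, letting the regions be compared under otherwise identical training.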
Deepfake Detection Via Facial Feature Extraction and Modeling
Carter, Benjamin, Dilla, Nathan, Callahan, Micheal, Ambala, Atuhaire
The rise of deepfake technology brings forth new questions about the authenticity of various forms of media found online today. Videos and images generated by artificial intelligence (AI) have become increasingly difficult to differentiate from genuine media, resulting in the need for new models to detect artificially-generated media. While many models have attempted to solve this, most focus on direct image processing, adapting a convolutional neural network (CNN) or a recurrent neural network (RNN) that directly interacts with the video image data. This paper introduces an approach of using solely facial landmarks for deepfake detection. Using a dataset consisting of both deepfake and genuine videos of human faces, this paper describes an approach for extracting facial landmarks for deepfake detection, focusing on identifying subtle inconsistencies in facial movements instead of raw image processing. Experimental results demonstrated that this feature extraction technique is effective across neural network models: the same facial landmarks were tested on three architectures, with promising performance metrics indicating potential for real-world applications. The findings discussed in this paper include RNN and artificial neural network (ANN) models with accuracies of 96% and 93%, respectively, with a CNN model hovering around 78%. This research challenges the assumption that raw image processing is necessary to identify deepfake videos by presenting a facial feature extraction approach compatible with various neural network models while requiring fewer parameters.
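The landmark-based idea — classify motion inconsistencies rather than raw pixels — implies turning a clip's landmark trajectories into a compact feature vector. A minimal numpy sketch, not the paper's pipeline: the statistics chosen and the name `landmark_motion_features` are assumptions about what such a front end might compute.

```python
import numpy as np

def landmark_motion_features(landmarks):
    """Summarize per-frame facial landmarks as motion statistics.

    landmarks: array of shape (frames, points, 2) with (x, y) per landmark.
    Returns a fixed-length vector of frame-to-frame displacement statistics,
    the kind of low-dimensional input an ANN or RNN could classify.
    """
    deltas = np.diff(landmarks, axis=0)       # per-frame displacement of each point
    speed = np.linalg.norm(deltas, axis=-1)   # (frames-1, points) movement magnitude
    return np.concatenate([
        speed.mean(axis=0),   # average movement of each landmark
        speed.std(axis=0),    # jitter: deepfakes may show inconsistent motion
        speed.max(axis=0),    # largest single-frame jump
    ])

rng = np.random.default_rng(1)
clip = rng.standard_normal((30, 68, 2)).cumsum(axis=0)  # 30 frames, 68 landmarks
feat = landmark_motion_features(clip)
print(feat.shape)  # (204,)
```

Because the classifier sees 204 numbers per clip instead of full video frames, models built on such features need far fewer parameters than pixel-level CNNs, which is the trade-off the paper highlights.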