Artificial intelligence can guess your personality based on a SELFIE

Daily Mail - Science & tech

Facial recognition technology can determine a person's personality by analysing an emotionless selfie, a study claims. Researchers built an artificial neural network that assessed 128 different factors of a person's face, such as the width of the mouth and the height of the lips or eyes. It used the data from these readings to categorise a person based on five personality traits: conscientiousness, neuroticism, extraversion, agreeableness, and openness. When compared with questionnaires filled in by the volunteers, the AI was accurate 58 per cent of the time. The researchers note that pure chance would be right 50 per cent of the time, and that human judges are less consistent than the facial recognition method.
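The setup described above, a neural network mapping a fixed vector of facial measurements to scores on the five traits, can be sketched roughly as follows. This is a minimal, hypothetical illustration with synthetic data; the 128-feature input, the network size, and the evaluation are assumptions, not the study's actual code.

```python
# Hypothetical sketch: a small neural network mapping 128 facial
# measurements to Big Five trait scores. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 1,000 faces x 128 facial measurements
# (e.g. mouth width, lip/eye height), and self-reported Big Five scores
# (conscientiousness, neuroticism, extraversion, agreeableness, openness).
X = rng.normal(size=(1000, 128))
y = rng.normal(size=(1000, 5))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# The study reports a pairwise-comparison style accuracy; here we simply
# report held-out R^2 as a stand-in evaluation.
print("held-out R^2:", model.score(X_test, y_test))
```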


AI just got way creepier because it can now read your personality from a selfie

#artificialintelligence

Can you tell what a completely random person is like just by staring at a selfie? Selfies don't automatically give away personality traits to the human eye. This is where AI has just gotten more fascinating, or downright scarier, depending on how you see it. Artificial neural networks can now figure out what your next potential date (or anyone else) might be like just by analysing a photo. AI developed by a team of Russian scientists can predict traits like agreeableness, neuroticism, openness and extraversion from scanned photos alone. It could be revolutionary for finding optimal matches, not just for dating but also for customer service and online tutoring, among other things.


Analyzing Personality through Social Media Profile Picture Choice

AAAI Conferences

The content of images users post to their social media is driven in part by personality. In this study, we analyze how Twitter profile images vary with the personality of the users posting them. In our main analysis, we use profile images from over 66,000 users whose personality we estimate based on their tweets. To facilitate interpretability, we focus our analysis on aesthetic and facial features and control for demographic variation in image features and personality. Our results show significant differences in profile picture choice between personality traits, and that these can be harnessed to predict personality traits with robust accuracy. For example, agreeable and conscientious users display more positive emotions in their profile pictures, while users high in openness prefer more aesthetic photos.
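As a rough sketch of this kind of analysis, predicting a trait estimated from tweets using image features while including demographic covariates as controls, one might do something like the following. The feature set, sample sizes, and model below are illustrative assumptions, not the paper's pipeline.

```python
# Illustrative sketch (assumed names and synthetic data): predict a
# personality trait from profile-image features, with demographic
# covariates included so their variance is accounted for.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 66000

# Placeholder features: aesthetic statistics (color, sharpness), facial
# features (smiling, glasses), plus demographic controls (age, gender).
image_feats = rng.normal(size=(n, 10))
demographics = rng.normal(size=(n, 2))
X = np.hstack([image_feats, demographics])
y = rng.normal(size=n)  # trait score estimated from the user's tweets

# Cross-validated prediction-observation correlation for the trait.
pred = cross_val_predict(RidgeCV(), X, y, cv=5)
r, _ = pearsonr(y, pred)
print(f"cross-validated correlation: r = {r:.3f}")
```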


Is Your Profile Picture Worth 1000 Words? Photo Characteristics Associated with Personality Impression Agreement

AAAI Conferences

Social-Networking Websites (SNWs) are rapidly becoming a central medium for social exchange. A basic question is how well people are able to get to know each other through these websites. In this study, we explore characteristics of profile photographs and their association with impression agreement. Using a specially designed social networking website, we examined 1,316 first impressions of profile owners who had posted photographs as part of a complete profile. The results suggest that photographs in which the profile owners were smiling, outdoors, and shown with others were associated with higher impression agreement. Several gender interactions suggested that other aspects of the photographs, including head covering and apparent weight, also affected impression agreement depending on the gender of the profile owner and visitors. These results are interpreted in light of the literature on interpersonal perception.
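To make the idea of a gender interaction concrete: in a regression of impression agreement on a photo characteristic, an interaction term with owner gender tests whether that characteristic's association with agreement differs for male and female profile owners. The sketch below is a hypothetical illustration with made-up variables, not the study's analysis.

```python
# Hypothetical illustration of a photo-characteristic x gender interaction
# on impression agreement (variable names and data are made up).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1316

df = pd.DataFrame({
    "agreement":    rng.normal(size=n),          # impression agreement score
    "smiling":      rng.integers(0, 2, size=n),  # photo characteristics
    "head_covered": rng.integers(0, 2, size=n),
    "owner_female": rng.integers(0, 2, size=n),  # profile-owner gender
})

# The interaction term asks whether head covering relates to agreement
# differently for female vs. male profile owners.
fit = smf.ols("agreement ~ smiling + head_covered * owner_female", data=df).fit()
print(fit.summary())
```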


Machines Learn Appearance Bias in Face Recognition

arXiv.org Artificial Intelligence

We seek to determine whether state-of-the-art, black box face recognition techniques can learn first-impression appearance bias from human annotations. With FaceNet, a popular face recognition architecture, we train a transfer learning model on human subjects' first impressions of personality traits in other faces. We measure the extent to which this appearance bias is embedded and benchmark learning performance for six different perceived traits. In particular, we find that our model is better at judging a person's dominance based on their face than other traits like trustworthiness or likeability, even for emotionally neutral faces. We also find that our model tends to predict emotions for deliberately manipulated faces with higher accuracy than for randomly generated faces, just like a human subject. Our results lend insight into the manner in which appearance biases may be propagated by standard face recognition models.
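The transfer-learning benchmark described here, freezing a face-recognition network and fitting a lightweight model on its embeddings to predict human first-impression ratings, might look roughly like the sketch below. It assumes the face embeddings have already been extracted; the embedding dimension, trait list, and regressor are illustrative choices, not the paper's implementation.

```python
# Rough sketch of the transfer-learning idea: fit a simple regressor on
# frozen face-recognition embeddings to predict first-impression ratings.
# Embeddings are assumed precomputed; all data here is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_faces = 2000

embeddings = rng.normal(size=(n_faces, 128))   # e.g. FaceNet-style embeddings
traits = ["trustworthy", "dominant", "likeable"]
ratings = {t: rng.normal(size=n_faces) for t in traits}  # mean annotator ratings

# Benchmark how well each perceived trait can be read off the embeddings.
for trait in traits:
    scores = cross_val_score(Ridge(alpha=1.0), embeddings, ratings[trait],
                             cv=5, scoring="r2")
    print(f"{trait:12s} mean cross-validated R^2 = {scores.mean():.3f}")
```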