
Artificial Intelligence knows when you feel lonely


Researchers at the University of California San Diego have devised an artificial intelligence (AI) tool that predicts the level of loneliness in adults with 94% accuracy. The tool used Natural Language Processing (NLP) technology developed by IBM to process large amounts of unstructured natural speech and text data, and analysed factors such as cognition, mobility, sleep and physical activity to understand the process of aging. The tool is an example of how AI can be built into devices to detect mental health conditions. Market research firm Gartner predicts that by 2022, your personal device will know more about your emotional state than your own family members do.
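The article does not describe the UCSD/IBM pipeline in detail, but the core idea of scoring loneliness from unstructured speech transcripts can be illustrated with a minimal text classifier. Everything below is a hypothetical sketch: the transcripts, labels, and model choice (TF-IDF features plus logistic regression) are invented stand-ins, not the researchers' actual system.

```python
# Hypothetical sketch of transcript-based loneliness classification.
# NOT the UCSD/IBM pipeline; data, labels and model are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "I spend most evenings by myself and rarely talk to anyone",
    "Nobody calls me anymore and the house feels empty",
    "I had dinner with friends and we laughed all night",
    "My family visits every weekend and we cook together",
]
labels = [1, 1, 0, 0]  # 1 = lonely, 0 = not lonely (toy labels)

# TF-IDF turns each transcript into a weighted word-count vector;
# logistic regression then learns a linear decision boundary.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(transcripts, labels)

prediction = model.predict(["I feel alone most of the time"])[0]
print(prediction)
```

In a real system the input would be hours of interview speech transcribed to text, and the labels would come from validated loneliness scales rather than toy annotations.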

Prediction of Human Empathy based on EEG Cortical Asymmetry

Humans constantly interact with digital devices that disregard their feelings. However, the synergy between humans and technology can be strengthened if the technology is able to distinguish and react to human emotions. Models that rely on unconscious indications of human emotions, such as (neuro)physiological signals, hold promise for personalization of feedback and adaptation of the interaction. The current study elaborated on adopting a predictive approach to studying human emotional processing based on brain activity. More specifically, we investigated the proposition of predicting self-reported human empathy based on EEG cortical asymmetry in different areas of the brain. Different types of predictive models, i.e. multiple linear regression analyses as well as binary and multiclass classification, were evaluated. Results showed that lateralization of brain oscillations at specific frequency bands is an important predictor of self-reported empathy scores. Additionally, prominent classification performance was found during the resting state, which suggests that emotional stimulation is not required for accurate prediction of empathy -- as a personality trait -- based on EEG data. Our findings not only contribute to the general understanding of the mechanisms of empathy, but also facilitate a better grasp of the advantages of applying a predictive approach compared to hypothesis-driven studies in neuropsychological research. More importantly, our results could be employed in the development of brain-computer interfaces that assist people with difficulties in expressing or recognizing emotions.
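The regression setup described above can be sketched in a few lines: compute a cortical asymmetry score as the log power difference between homologous right and left channels in a frequency band, then regress self-reported scores on it. This is an illustrative toy, not the paper's pipeline; the channel names (F3/F4), the alpha band edges, and all signals and scores below are synthetic assumptions.

```python
# Illustrative sketch (not the paper's pipeline): linear regression of
# an "empathy score" on frontal alpha-band asymmetry. All data synthetic.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                          # assumed sampling rate in Hz
n_subjects, n_samples = 40, fs * 10

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density in the alpha band via FFT."""
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

X, y = [], []
for _ in range(n_subjects):
    f3 = rng.normal(size=n_samples)   # simulated left frontal channel
    f4 = rng.normal(size=n_samples)   # simulated right frontal channel
    # Asymmetry: log power on the right minus log power on the left.
    faa = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
    X.append(faa)
    y.append(50 + 8 * faa + rng.normal(scale=1.0))  # synthetic score

X, y = np.array(X), np.array(y)
slope, intercept = np.polyfit(X, y, 1)  # simple linear regression fit
print(round(slope, 1))
```

Because the synthetic scores were generated with a known linear dependence on asymmetry, the fitted slope recovers that relationship; in the actual study this step would be a multiple regression over asymmetry scores from several brain areas and frequency bands.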

Towards a Human-Centred Cognitive Model of Visuospatial Complexity in Everyday Driving

We develop a human-centred, cognitive model of visuospatial complexity in everyday, naturalistic driving conditions. With a focus on visual perception, the model incorporates quantitative, structural, and dynamic attributes identifiable in the chosen context; the human-centred basis of the model lies in its behavioural evaluation with human subjects with respect to psychophysical measures pertaining to embodied visuoauditory attention. We report preliminary steps to apply the developed cognitive model of visuospatial complexity for human-factors guided dataset creation and benchmarking, and for its use as a semantic template for the (explainable) computational analysis of visuospatial complexity.

A method to introduce emotion recognition in gaming


Virtual Reality (VR) is opening up exciting new frontiers in the development of video games, paving the way for increasingly realistic, interactive and immersive gaming experiences. VR consoles, in fact, allow gamers to feel like they are almost inside the game, overcoming limitations associated with display resolution and latency issues. An interesting further integration for VR would be emotion recognition, as this could enable the development of games that respond to a user's emotions in real time. With this in mind, a team of researchers at Yonsei University and Motion Device Inc. have recently proposed a deep-learning-based technique that could enable emotion recognition during VR gaming experiences. Their paper was presented at the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces.
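The article does not detail the network architecture, but the classification step at the heart of such a system can be sketched with a small neural network mapping feature vectors to emotion labels. This is a hedged stand-in: the feature dimensionality, emotion set, and data below are all invented, and a scikit-learn MLP replaces whatever deep network the Yonsei team actually used.

```python
# Hedged sketch of the emotion-classification step; NOT the paper's
# model. Features, labels and cluster structure are entirely synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
emotions = ["happy", "sad", "angry"]  # assumed toy emotion set

# Simulate three well-separated clusters of 16-dim "facial features",
# one cluster per emotion, 50 samples each.
centers = rng.normal(scale=3.0, size=(3, 16))
X = np.vstack([c + rng.normal(size=(50, 16)) for c in centers])
y = np.repeat(emotions, 50)

# Small multilayer perceptron standing in for a deep network.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0)
clf.fit(X, y)
accuracy = clf.score(X, y)
print(accuracy)
```

In a real VR setting the inputs would come from headset-mounted cameras or physiological sensors rather than synthetic clusters, and the model would be evaluated on held-out users, not training accuracy.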

How Facebook's brain-machine interface measures up


Somewhat unceremoniously, Facebook this week provided an update on its brain-computer interface project, preliminary plans for which it unveiled at its F8 developer conference in 2017. In a paper published in the journal Nature Communications, a team of scientists at the University of California, San Francisco, backed by Facebook Reality Labs -- Facebook's Pittsburgh-based division devoted to augmented reality and virtual reality R&D -- described a prototypical system capable of reading and decoding study subjects' brain activity while they speak. It's impressive no matter how you slice it: the researchers managed to make out full, spoken words and phrases in real time. Study participants (who were prepping for epilepsy surgery) had a patch of electrodes placed on the surface of their brains; a technique called electrocorticography (ECoG) -- the direct recording of electrical potentials associated with activity from the cerebral cortex -- was used to derive rich insights from these recordings. A set of machine learning algorithms equipped with phonological speech models learned to decode specific speech sounds from the data and to distinguish between questions and responses.
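One part of the decoding task described above, telling questions apart from responses, amounts to a binary classification over per-trial electrode features. The sketch below illustrates that idea only: the electrode count, the use of high-gamma-like features, and all signals are invented assumptions, and a plain logistic regression with cross-validation stands in for the UCSF team's actual decoder.

```python
# Illustrative sketch, NOT the UCSF decoder: classifying synthetic
# "question" vs "response" trials from invented per-electrode features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_electrodes = 200, 64

# 0 = question, 1 = response; give the two classes slightly different
# mean activity per electrode, mimicking a task-related power shift.
labels = rng.integers(0, 2, size=n_trials)
shift = rng.normal(scale=0.5, size=n_electrodes)
X = rng.normal(size=(n_trials, n_electrodes)) + np.outer(labels, shift)

# 5-fold cross-validated accuracy of a linear decoder.
acc = cross_val_score(LogisticRegression(max_iter=1000),
                      X, labels, cv=5).mean()
print(round(acc, 2))
```

The real system worked on continuous neural recordings with phonological language models on top; this sketch only shows why a linear decoder can separate two trial types once their electrode-level activity patterns differ.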