Japanese scientists use AI to read minds

#artificialintelligence

Imagine a reality where computers can visualise what you are thinking. In late December, Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani released the results of their recent research on using artificial intelligence to decode thoughts on the preprint platform bioRxiv. Machine learning has previously been used to study brain scans (MRI, or magnetic resonance imaging) and to generate visualisations of what a person is thinking of when viewing simple, binary images such as black-and-white letters or simple geometric shapes. The scientists from Kyoto, however, developed a new technique for "decoding" thoughts using deep neural networks (artificial intelligence). The new technique allows them to decode more sophisticated "hierarchical" images, which have multiple layers of colour and structure, such as a picture of a bird or of a man wearing a cowboy hat.
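The article describes the approach only at a high level: brain activity is mapped onto the hierarchical feature representations of a deep neural network, and an image is then reconstructed from the decoded features. The sketch below is a rough illustration of the first step only, not the authors' pipeline; the use of ridge regression, the array names, and the data shapes are all assumptions.

```python
# Minimal sketch (not the authors' code): decode DNN feature vectors from fMRI
# voxel patterns with ridge regression. Names, shapes, and data are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_trials, n_voxels, n_features = 200, 1000, 256            # assumed sizes
voxels = rng.standard_normal((n_trials, n_voxels))          # fMRI pattern per viewed image
dnn_features = rng.standard_normal((n_trials, n_features))  # DNN features of the viewed images

# Fit one regularised linear decoder for all DNN feature dimensions at once.
decoder = Ridge(alpha=10.0)
decoder.fit(voxels[:150], dnn_features[:150])

# Decode features for held-out brain activity; a reconstruction step would then
# search for an image whose DNN features match these predictions.
predicted = decoder.predict(voxels[150:])
print(predicted.shape)  # (50, 256)
```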


New artificial intelligence system can detect dementia

#artificialintelligence

Japanese scientists say they have developed a novel machine-learning technique that can detect dementia with over 90 per cent accuracy from conversations between humans and avatars on a computer. The technique, developed by a team from Osaka University and the Nara Institute of Science and Technology, learns characteristics of the speech of elderly people as they answer simple questions. Dementia could be distinguished by combining features of the disorder, such as the delay in responding to the avatars' questions (which varies with the content of the question), intonation, the articulation rate of the voice, and the percentage of nouns and verbs in an utterance. An avatar is a graphical representation of a user or a character on a computer. "If this technology is further developed, it will become possible to know whether or not an elderly individual is in the early stages of dementia through conversation with computer avatars at home on a daily basis," said researcher Takashi Kudo.
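The article does not detail the model, but the idea of combining response delay, prosodic and lexical features into a single classifier can be sketched as follows. This is a hypothetical illustration, not the Osaka/NAIST system: the feature set, the toy data, and the choice of logistic regression are all assumptions.

```python
# Hypothetical sketch: combine speech-derived features into a dementia classifier.
# Feature values, labels, and the classifier choice are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each row: [response delay (s), articulation rate (syllables/s),
#            pitch variation (semitones), noun ratio, verb ratio]
X = np.array([
    [2.8, 3.1, 1.5, 0.18, 0.10],   # toy examples of impaired-sounding responses
    [3.2, 2.9, 1.2, 0.15, 0.09],
    [2.5, 3.3, 1.8, 0.20, 0.12],
    [0.9, 5.0, 3.0, 0.30, 0.18],   # toy examples of typical responses
    [1.1, 4.8, 2.7, 0.28, 0.17],
    [0.8, 5.2, 3.2, 0.31, 0.19],
])
y = np.array([1, 1, 1, 0, 0, 0])   # 1 = dementia, 0 = healthy (toy labels)

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=3)
print("cross-validated accuracy:", scores.mean())
```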


A Review on Neural Network Models of Schizophrenia and Autism Spectrum Disorder

arXiv.org Artificial Intelligence

This survey presents the most relevant neural network models of autism spectrum disorder and schizophrenia, from the first connectionist models to recent deep network architectures. We analyzed and compared the most representative symptoms with their neural model counterparts, detailing the alteration introduced into the network that generates each symptom and identifying their strengths and weaknesses. For completeness, we additionally cross-compared Bayesian and free-energy approaches. Models of schizophrenia mainly focused on hallucinations and delusional thoughts, using neural disconnections or inhibitory imbalance as the predominant alteration. Models of autism focused instead on perceptual difficulties, mainly excessive attention to environmental details, implemented as excessive inhibitory connections or increased sensory precision. We found an excessively narrow view of these psychopathologies, centred on one specific and simplified effect and usually constrained by the technical idiosyncrasies of the network used. Recent theories and evidence on sensorimotor integration and body perception, combined with modern neural network architectures, offer a broader and novel spectrum for approaching these psychopathologies, outlining future research in neural network computational psychiatry, a powerful asset for understanding the inner processes of the human brain.
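The recurring modelling move described in the abstract, i.e. introducing a single alteration such as an inhibitory imbalance and observing its effect on network behaviour, can be illustrated with a toy rate network. The sketch below is not any model from the survey; the network size, weights, dynamics, and the chosen alteration are all assumptions.

```python
# Toy illustration of the "inhibitory imbalance" style of alteration: scale the
# inhibitory weights of a small rate network and compare overall activity.
import numpy as np

rng = np.random.default_rng(1)
n = 50
W = rng.standard_normal((n, n)) / np.sqrt(n)
inhibitory = W < 0                        # treat negative weights as inhibition

def simulate(inhibition_scale, steps=200, dt=0.1):
    """Leaky rate dynamics: dr/dt = -r + tanh(W_scaled @ r + drive)."""
    W_scaled = W.copy()
    W_scaled[inhibitory] *= inhibition_scale
    r = np.zeros(n)
    drive = rng.standard_normal(n) * 0.5  # constant external input
    for _ in range(steps):
        r += dt * (-r + np.tanh(W_scaled @ r + drive))
    return np.abs(r).mean()

print("balanced inhibition:", simulate(1.0))
print("reduced inhibition :", simulate(0.5))  # weaker inhibition typically raises activity
```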


A stochastic model of human visual attention with a dynamic Bayesian network

arXiv.org Machine Learning

Recent studies in human vision science suggest that human responses to stimuli on a visual display are non-deterministic: people may attend to different locations in the same visual input at the same time. Based on this knowledge, we propose a new stochastic model of visual attention that introduces a dynamic Bayesian network to predict where humans are likely to focus in a video scene. The proposed model is composed of a dynamic Bayesian network with four layers. Our model provides a framework that simulates and combines the visual saliency response and the cognitive state of a person to estimate the most probable attended regions. Sample-based inference with a Markov chain Monte Carlo-based particle filter, together with stream processing on multi-core processors, enables us to estimate human visual attention in near real time. Experimental results demonstrate that our model predicts human visual attention significantly better than previous deterministic models.
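The abstract does not give the filter equations, so the following is only a generic bootstrap particle filter tracking an attended location from a saliency map; the random-walk motion model, the saliency-based weighting, and the toy saliency map itself are assumptions, not the paper's four-layer dynamic Bayesian network.

```python
# Generic bootstrap particle filter tracking an attended (x, y) location from a
# saliency map. Illustrative only; not the paper's model.
import numpy as np

rng = np.random.default_rng(2)
H, W, n_particles = 60, 80, 500

def saliency_frame(t):
    """Toy saliency map: a bright blob drifting across the frame."""
    ys, xs = np.mgrid[0:H, 0:W]
    cx, cy = 20 + 0.5 * t, 30
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * 8.0 ** 2))

# Initialise particles uniformly over the frame.
particles = np.column_stack([rng.uniform(0, W, n_particles),
                             rng.uniform(0, H, n_particles)])

for t in range(50):
    # Transition: random-walk motion model for the attended point.
    particles += rng.normal(0, 2.0, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, W - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, H - 1)

    # Observation: weight each particle by the saliency at its location.
    sal = saliency_frame(t)
    weights = sal[particles[:, 1].astype(int), particles[:, 0].astype(int)] + 1e-12
    weights /= weights.sum()

    # Resample (simple multinomial resampling for brevity).
    idx = rng.choice(n_particles, n_particles, p=weights)
    particles = particles[idx]

print("estimated attended location (x, y):", particles.mean(axis=0))
```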


A Visualization of Dementia Care Skills Based on Multimodal Communication Features

AAAI Conferences

We have developed a visualization system for dementia care skills based on multimodal communication features. The purpose of our system is to provide effective learning of dementia care to trainees. Because dementia care skills are difficult to visualize and describe, they are hard for trainees to acquire. We focus on Humanitude®, a non-pharmacological comprehensive intervention with verbal and non-verbal communication, which is a care methodology of French origin for vulnerable elderly people. The multimodal methodology uses four techniques to relate to elderly people with dementia (i.e., gaze, speech, touch, and opportunities to stand on their own feet). We analyzed care videos of Humanitude instructors to extract multimodal communication features, then designed and filmed video content demonstrating the extracted features. These have been shown to be effective, in combination with practice and reflection, for acquiring dementia care skills. Trainees could also use the system for self-reflection and teaching.