Reinforcement Learning Models of Human Behavior: Reward Processing in Mental Disorders

arXiv.org Artificial Intelligence

For the AI community, developing agents that react differently to different types of rewards can help us understand a wide spectrum of multi-agent interactions in complex real-world socioeconomic systems. Empirically, the proposed model outperforms Q-Learning and Double Q-Learning in artificial scenarios with certain reward distributions and in gambling tasks based on real-world human decision making. Moreover, from the behavioral modeling perspective, our parametric framework can be viewed as a first step towards a unifying computational model that captures reward-processing abnormalities across multiple mental conditions as well as user preferences in long-term recommendation systems.
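The abstract does not spell out the update rule, but the core idea of parameterized reward processing can be illustrated with a small variant of Q-Learning in which positive and negative rewards are scaled by separate gain parameters. The sketch below is a minimal Python illustration; the class name and the parameters lambda_pos and lambda_neg are hypothetical stand-ins, not the paper's exact parameterization.

```python
import numpy as np

class BiasedQLearner:
    """Minimal sketch of a Q-learner with separate gains on positive and
    negative rewards. lambda_pos and lambda_neg are hypothetical parameters;
    the paper's exact parameterization may differ."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95,
                 lambda_pos=1.0, lambda_neg=1.0):
        self.Q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma = alpha, gamma
        self.lambda_pos, self.lambda_neg = lambda_pos, lambda_neg

    def update(self, s, a, r, s_next):
        # Scale rewards and punishments asymmetrically: lambda_neg > 1 mimics
        # oversensitivity to losses, lambda_neg < 1 blunted loss sensitivity.
        r_eff = self.lambda_pos * r if r >= 0 else self.lambda_neg * r
        td_target = r_eff + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.alpha * (td_target - self.Q[s, a])

    def act(self, s, eps=0.1):
        # Epsilon-greedy action selection.
        if np.random.rand() < eps:
            return np.random.randint(self.Q.shape[1])
        return int(self.Q[s].argmax())
```

Sweeping these two gains yields a family of agents with distinct reward-processing profiles, which is one plausible reading of how a parametric framework of this kind could span multiple conditions.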


A rational decision making framework for inhibitory control

Neural Information Processing Systems

Intelligent agents are often faced with the need to choose actions with uncertain consequences, and to modify those actions according to ongoing sensory processing and changing task demands. The requisite ability to dynamically modify or cancel planned actions is known as inhibitory control in psychology. We formalize inhibitory control as a rational decision-making problem, and apply it to the classical stop-signal task. Using Bayesian inference and stochastic control tools, we show that the optimal policy systematically depends on various parameters of the problem, such as the relative costs of different action choices, the noise level of sensory inputs, and the dynamics of changing environmental demands. Our normative model accounts for a range of behavioral data in humans and animals in the stop-signal task, suggesting that the brain implements statistically optimal, dynamically adaptive, and reward-sensitive decision-making in the context of inhibitory control problems.
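To make the normative framing concrete, here is a minimal sketch of sequential Bayesian inference in a stop-signal trial: the agent accumulates noisy sensory samples, updates the posterior log-odds that a stop signal is present, and cancels the planned action once its belief crosses a threshold set by the relative error costs. The Gaussian observation model, the prior p_stop, and the threshold value are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def stop_signal_policy(samples, p_stop=0.25, mu_go=0.0, mu_stop=1.0,
                       sigma=1.0, threshold=0.8):
    """Sequentially update the posterior that a stop signal is present and
    cancel the go response once belief crosses a cost-derived threshold.
    All numeric settings here are illustrative assumptions."""
    # Prior log-odds of this being a stop trial.
    log_odds = np.log(p_stop / (1.0 - p_stop))
    for t, x in enumerate(samples):
        # Log-likelihood ratio of one noisy sensory sample, stop vs. go.
        log_odds += (-0.5 * ((x - mu_stop) / sigma) ** 2
                     + 0.5 * ((x - mu_go) / sigma) ** 2)
        belief = 1.0 / (1.0 + np.exp(-log_odds))
        if belief > threshold:
            return "cancel", t  # inhibit the planned action at sample t
    return "go", len(samples)
```

In this reading, the systematic dependence on costs and sensory noise falls out naturally: a higher cost of failing to stop lowers the effective threshold, and noisier inputs (larger sigma) slow the growth of belief and delay cancellation.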


Researchers hope voice assistants can spot signs of dementia

#artificialintelligence

An effort to use voice-assistant devices like Amazon's Alexa to detect signs of memory problems in people has gotten a boost with a grant from the federal government. Researchers from Dartmouth-Hitchcock and the University of Massachusetts Boston will get a four-year, $1.2 million grant from the National Institute on Aging. The team hopes to develop a system that would use machine and deep learning techniques to detect changes in speech patterns to determine if someone is at risk of developing dementia or Alzheimer's. "We are tackling a significant and complicated data-science question: whether the collection of long-term speech patterns of individuals at home will enable us to develop new speech-analysis methods for early detection of this challenging disease," Xiaohui Liang, an assistant professor of computer science from the University of Massachusetts Boston, said in a statement. "Our team envisions that the changes in the speech patterns of individuals using the voice assistant systems may be sensitive to their decline in memory and function over time."


Doctor Alexa Will See You Now: Is Amazon Primed To Come To Your Rescue?

#artificialintelligence

Now that it's upending the way you play music, cook, shop, hear the news and check the weather, the friendly voice emanating from your Amazon Alexa-enabled smart speaker is poised to wriggle its way into all things health care. Amazon has big ambitions for its devices. It thinks Alexa, the virtual assistant inside them, could help doctors diagnose mental illness, autism, concussions and Parkinson's disease. It even hopes Alexa will detect when you're having a heart attack. At present, Alexa can perform a handful of health care-related tasks: "She" can track blood glucose levels, describe symptoms, access post-surgical care instructions, monitor home prescription deliveries and make same-day appointments at the nearest urgent care center.


Personalized 'deep learning' equips robots for autism therapy: Machine learning network offers personalized estimates of children's behavior

#artificialintelligence

This type of therapy works best, however, if the robot can smoothly interpret the child's own behavior during the therapy, whether he or she is interested and excited or paying attention. Researchers at the MIT Media Lab have now developed a type of personalized machine learning that helps robots estimate the engagement and interest of each child during these interactions, using data that are unique to that child. Armed with this personalized "deep learning" network, the robots perceived the children's responses in agreement with assessments by human experts, with a correlation score of 60 percent, the scientists report June 27 in Science Robotics. It can be challenging for human observers to reach high levels of agreement about a child's engagement and behavior. Their correlation scores are usually between 50 and 55 percent.
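The article does not describe the network itself, but the personalization idea, learning a shared predictor from all children and then nudging it toward an individual child's own data, can be sketched with a simple regression stand-in. Everything below (the ridge model, the residual-correction scheme, the function names) is an illustrative assumption rather than the MIT team's architecture.

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: a toy stand-in for a deep network."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def personalized_engagement_model(pooled_X, pooled_y, child_X, child_y,
                                  lam_shared=1.0, lam_personal=10.0):
    """Sketch of personalization: fit a shared predictor on pooled data from
    all children, then fit a small per-child correction on that child's own
    data. Hypothetical scheme, not the paper's method."""
    w_shared = fit_ridge(pooled_X, pooled_y, lam_shared)
    # Fit a per-child correction on the shared model's residuals; the strong
    # penalty keeps the correction small when the child has little data.
    residual = child_y - child_X @ w_shared
    w_delta = fit_ridge(child_X, residual, lam_personal)
    return w_shared + w_delta

def agreement(pred, expert):
    # Pearson correlation, the agreement measure cited in the article.
    return float(np.corrcoef(pred, expert)[0, 1])
```

The design choice this illustrates is the trade-off the article hints at: a purely shared model ignores how differently individual children express engagement, while a purely per-child model has too little data, so the personalized correction sits in between.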