Council Post: How AI Can Help Manage Mental Health In Times Of Crisis

#artificialintelligence

Much has been written in the past few weeks about the COVID-19 crisis and the ripple effects that will impact human society. Beyond the immediate effect of the virus on health and mortality, it is clear that we are also facing a global, massive financial crisis that is likely to affect our lives for years to come. These changes, along with the expected prolonged social isolation, are bound to have a devastating effect on our mental health, collectively and individually, and, in turn, cause a dramatic deterioration in overall health and an increase in the prevalence of chronic illness. From research conducted by the World Health Organization, we know that most people affected by emergency situations experience immediate psychological distress, hopelessness and sleep issues -- and that 22% of people are expected to develop depression, anxiety, post-traumatic stress disorder, bipolar disorder or schizophrenia. This escalation comes on top of a baseline of 19.1% of U.S. adults experiencing mental illness (47.6 million people in 2018, according to the Substance Abuse and Mental Health Services Administration).


The Boundaries of Artificial Emotional Intelligence

#artificialintelligence

I'm told I should prepare for the day an artificial intelligence takes my job. This will leave me either destitute and rootless or overwhelmed by a plenitude of time and existential terror, depending on whom you ask. It's apparently time to consider what kind of work only humans can do, and frantically reorient ourselves toward those roles -- lest we be left standing helplessly, as if at the end of some game of robot musical chairs. Emotional labor is a form of work less often considered in these automated future projections. Perhaps this is because the work it takes to smile at a rude customer or to manage their distress is intangible, difficult to quantify and monetize.


COVID-19 Hangover -- Part II

#artificialintelligence

In part I of this blog, I wrote about the most acute problem our society faces today, the climate crisis, and how we can leverage the pandemic-caused lockdown to analyze its consequences as data points for "what-if" scenarios and make better decisions in the future. While the climate crisis must be addressed promptly and aggressively, COVID-19 confronted governments with an acute health crisis, with projections that 80% of the population could be infected in the short term. How will health systems manage the prevention, diagnosis, and treatment of the pandemic while continuing to provide ongoing services and treatments? In this part, I present a few developments in telemedicine, personalized medicine, and drug development powered by AI/ML, showing how they have better equipped us for this fight and how they could be used routinely in the future. Telemedicine used to be a buzzword heard mainly in the context of highly populated countries with a shortage of trained personnel, trying to bridge supply and demand through remote resourcing.


Mental Health Apps: AI Surveillance Enters Our World - Mad In America

#artificialintelligence

In 2018, California's state government began rolling out a new "mental health" initiative. The tech companies of Silicon Valley were creating smartphone apps that could prompt users to seek mental health care, and the state wanted to provide support. After all, researchers claim that more than half of Americans with mental health problems don't receive treatment, and one reason for that might be that treatment is expensive or unavailable in certain regions. Of the thousands of mental health apps in existence today, the state selected two. The first app is called 7 Cups, by a company called 7 Cups of Tea. They're focused on connecting mental health service users, in text-based chat sessions, with what they call "listeners"--volunteers who are trained in "active listening." But, according to The New York Times, the company has been plagued with issues, including listeners having inappropriate conversations with their clients and investigations of its alleged financial misconduct. The other company partnering with the state of California is Mindstrong Health. Their app (branded Mindstrong on March 17, 2020, previously known as Health) is available on the Google Play Store and the Apple App Store. However, you can only use the app if you have been given a code to participate by one of the health insurance companies they've partnered with.



Opportunities of a Machine Learning-based Decision Support System for Stroke Rehabilitation Assessment

arXiv.org Artificial Intelligence

Rehabilitation assessment is critical to determining an adequate intervention for a patient. However, current assessment practices rely mainly on a therapist's experience, and assessment is executed infrequently due to the limited availability of therapists. In this paper, we identified therapists' needs for assessing a patient's functional abilities (e.g., an alternative perspective on assessment with quantitative information on the patient's exercise motions). As a result, we developed an intelligent decision support system that uses reinforcement learning to identify salient features of assessment, assess the quality of motion, and summarize patient-specific analysis. We evaluated this system with seven therapists using a dataset of 15 patients performing three exercises. The evaluation demonstrates that our system is preferred over a traditional system without analysis, presenting more useful information and significantly increasing agreement with therapists' evaluations from 0.6600 to 0.7108 F1-score ($p < 0.05$). We discuss the importance of presenting contextually relevant and salient information, and of adaptation, in developing a human-machine collaborative decision-making system.
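The abstract's headline metric is agreement measured as a binary F1-score. A minimal sketch of how such agreement could be computed (this is illustrative, not the paper's code; the labels and rater names below are made up for the example):

```python
# Illustrative sketch: F1-score agreement between a rater's binary
# quality labels for exercise motions and a consensus "ground truth",
# in the spirit of the reported 0.6600 -> 0.7108 improvement.

def f1_score(truth, predicted):
    """F1 = 2 * precision * recall / (precision + recall) for 0/1 labels."""
    tp = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical labels: 1 = motion judged "compensated", 0 = "normal"
consensus = [1, 0, 1, 1, 0, 1, 0, 0]
therapist_alone = [1, 0, 0, 1, 0, 0, 0, 1]
therapist_with_system = [1, 0, 1, 1, 0, 0, 0, 0]

print(round(f1_score(consensus, therapist_alone), 4))        # 0.5714
print(round(f1_score(consensus, therapist_with_system), 4))  # 0.8571
```

The system-assisted labels agree more closely with the consensus, which is the kind of shift the 0.6600 to 0.7108 result describes.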


Wearable tech uses machine learning and signal processing to provide data-driven mental health therapy

#artificialintelligence

Often, when Feel detects an emotion, the app will ask users to describe what's happening and how they feel. That feedback serves three purposes: It helps the algorithm improve, it provides the therapist with richer information, and it prompts journaling, which brings greater self-insight. Chryssoula says the app "challenged me to be specific and analytical in the ways of improving myself, my negative thoughts, and my circling fears." The app might also suggest one of several exercises. For example, users might be asked to recall the key message from their last therapy session and describe how they plan to use that takeaway in their daily lives.


Suicide Research Could Be the Mortality Breakthrough of the 2020s

#artificialintelligence

We need better ways to help people. What's the medical breakthrough that could save the most lives in the U.S. over the next ten years? In the 2020s, medical research will likely inch forward when it comes to major killers like heart disease and cancer. But the biggest potential to save lives could lie in learning to prevent suicide. The rates of reported suicides have been creeping up over the last two decades.


The ELIZA Effect - 99% Invisible

#artificialintelligence

Throughout Joseph Weizenbaum's life, he liked to tell this story about a computer program he'd created back in the 1960s as a professor at MIT. It was a simple chatbot named ELIZA that could interact with users in a typed conversation. As he enlisted people to try it out, Weizenbaum saw similar reactions again and again -- people were entranced by the program. They would reveal very intimate details about their lives. It was as if they'd just been waiting for someone (or something) to ask.
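What made ELIZA so entrancing was mechanically simple: match a keyword pattern in the user's sentence, reflect first-person words back as second-person, and wrap the fragment in a question. A minimal sketch in that style (illustrative only, not Weizenbaum's original program; the rules below are invented for the example):

```python
# ELIZA-style exchange: pattern match, pronoun reflection, templated reply.
import re

# Swap first-person words for second-person ones when echoing the user.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, reply template) pairs, tried in order.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default when no rule matches

print(respond("I feel anxious about my job"))
# -> Why do you feel anxious about your job?
```

No understanding is involved, yet the reflected question reads as attentive listening, which is exactly the illusion Weizenbaum's users fell for.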


Artificial intelligence moves into health care

#artificialintelligence

The next time you get sick, your care may involve a form of the technology people use to navigate road trips or pick the right vacuum cleaner online. Artificial intelligence is spreading into health care, often as software or a computer program capable of learning from large amounts of data and making predictions to guide care or help patients. It already detects an eye disease tied to diabetes and does other behind-the-scenes work like helping doctors interpret MRI scans and other imaging tests for some forms of cancer. Now, parts of the health system are starting to use it directly with patients. During some clinic and telemedicine appointments, AI-powered software asks patients initial questions about their symptoms that physicians or nurses normally pose.