New AI Mental Health Tools Beat Human Doctors at Assessing Patients

#artificialintelligence

About 20 percent of youth in the United States live with a mental health condition, according to the National Institute of Mental Health. The good news is that mental health professionals have smarter tools than ever before, with artificial intelligence coming to the forefront to help diagnose patients, often with greater accuracy than human clinicians. A new study published in the journal Suicide and Life-Threatening Behavior, for example, showed that machine learning can be up to 93 percent accurate in identifying a suicidal person. The research, led by John Pestian, a professor at Cincinnati Children's Hospital Medical Center, involved 379 teenage patients from three area hospitals. Each patient completed standardized behavioral rating scales and participated in a semi-structured interview, answering five open-ended questions such as "Are you angry?" to stimulate conversation, according to a press release.
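
The release does not detail the study's actual models or features, but the general shape of such a system is a supervised text classifier trained on labeled interview data. Below is a minimal, purely illustrative sketch of that kind of pipeline, assuming scikit-learn; the toy transcripts and labels are invented, not from the study.

```python
# Illustrative sketch only: a text-classification pipeline of the kind a
# suicide-risk study might evaluate. Real systems would add acoustic,
# behavioral-scale, and demographic features; all data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical interview transcripts with clinician-assigned labels
# (1 = elevated risk, 0 = not elevated).
transcripts = [
    "I feel angry all the time and nothing ever gets better",
    "school is stressful but my friends and family support me",
    "I do not see the point in anything anymore",
    "I have been sleeping well and looking forward to the weekend",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a linear classifier.
pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)

# Accuracy figures like the study's 93 percent come from held-out
# evaluation, sketched here as cross-validation on the toy corpus.
scores = cross_val_score(pipeline, transcripts, labels, cv=2, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```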


Evaluating Older Users' Experiences with Commercial Dialogue Systems: Implications for Future Design and Development

arXiv.org Artificial Intelligence

Understanding the needs of a variety of distinct user groups is vital to designing effective, desirable dialogue systems that will be adopted by the largest possible segment of the population. Despite the increasing popularity of dialogue systems in both mobile and home formats, user studies remain relatively infrequent and often sample a segment of the user population that is not representative of the needs of the potential user population as a whole. This is especially the case for users who may be more reluctant adopters, such as older adults. In this paper we discuss the results of a recent user study, performed over a large population of adults aged 50 and over in the Midwestern United States, who have experience using a variety of commercial dialogue systems. We show the common preferences, use cases, and feature gaps identified by older adult users in interacting with these systems. Based on these results, we propose a new, robust user modeling framework that addresses common issues facing older adult users and can be generalized to the wider user population.
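
The abstract does not specify what the proposed framework contains, so the following is a hypothetical sketch of how a dialogue system might represent and act on an older-adult user model; every field and function name here is invented for illustration.

```python
# Hypothetical illustration only: one way a dialogue system could encode
# the preferences and feature gaps such a user study surfaces. Not the
# paper's actual framework.
from dataclasses import dataclass, field


@dataclass
class UserModel:
    """Per-user profile a dialogue system could consult at runtime."""
    age_group: str                        # e.g. "50-64", "65+"
    preferred_modality: str = "voice"     # voice vs. touch/typing
    speech_rate: float = 1.0              # TTS playback speed multiplier
    confirmation_verbosity: str = "high"  # explicit confirmations reduce errors
    common_use_cases: list[str] = field(default_factory=list)
    reported_feature_gaps: list[str] = field(default_factory=list)


def adapt_response(model: UserModel, text: str) -> dict:
    """Shape a system response according to the user model."""
    return {
        "text": text,
        "tts_rate": model.speech_rate,
        "confirm_explicitly": model.confirmation_verbosity == "high",
    }


profile = UserModel(
    age_group="65+",
    speech_rate=0.9,
    common_use_cases=["reminders", "weather"],
    reported_feature_gaps=["medication tracking"],
)
print(adapt_response(profile, "Your appointment is at 3 PM."))
```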


Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

arXiv.org Artificial Intelligence

In recent years, Artificial Intelligence (AI) has achieved notable momentum that may deliver on its promise across many application sectors. For this to occur, the entire community must overcome the barrier of explainability, a problem inherent to the sub-symbolic techniques (e.g., ensembles or Deep Neural Networks) that were largely absent during the previous surge of AI. Paradigms addressing this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, along with a prospect of what is yet to be achieved. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including a second taxonomy built specifically for Deep Learning methods. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material to stimulate future research advances, and to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors without prior bias against its lack of interpretability.
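
To make post-hoc explainability concrete, here is a short, generic example of one technique commonly covered in XAI surveys: permutation feature importance, which scores each input of an opaque model by how much shuffling it degrades held-out performance. This illustrates the concept rather than any specific method from the paper, and assumes scikit-learn is available.

```python
# Generic XAI illustration: permutation feature importance applied to an
# opaque ensemble model (the kind of sub-symbolic learner XAI targets).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Train a black-box model.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Explain it: repeatedly permute each feature on held-out data and
# measure the accuracy drop each permutation causes.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features.
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```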


Artificial intelligence software confirms the results of a large scale comparison of ProHance (Gadoteridol) Injection, 279.3 mg/mL and Gadavist (gadobutrol) Injection in MRI of the brain (the TRUTH study)

#artificialintelligence

Bracco Diagnostics Inc., the U.S. subsidiary of Bracco Imaging S.p.A., a leading global company in the diagnostic imaging business, announced the results of an experimental artificial intelligence (AI) study of two gadolinium-based contrast agents (GBCAs). The study found that ProHance (Gadoteridol) Injection, 279.3 mg/mL and Gadavist (gadobutrol) Injection provided a similar degree and pattern of contrast enhancement in brain magnetic resonance imaging (MRI) of patients with glioblastoma multiforme (GBM) previously enrolled in a large-scale, multicenter, randomized, double-blinded controlled clinical study (the TRUTH study).1 Full study results will be presented at the Radiological Society of North America (RSNA) Annual Meeting on Wednesday, December 4, in Chicago, IL.

GBCAs are widely used imaging agents with a favorable safety profile. Although recent research has shown that gadolinium from these agents may remain in the body for months to years after injection,2 the American College of Radiology and the Food and Drug Administration agree that, based on the available data, there are no known adverse clinical consequences associated with gadolinium retention in the brain.3,4 Nevertheless, some practitioners have concerns, and questions have been raised over whether using a GBCA that retains less gadolinium would come with a tradeoff in the effectiveness of the contrast enhancement. The purpose of this study was to use AI to compare the effectiveness of standard-concentration ProHance (0.5 mmol/mL) with double-concentration Gadavist (1.0 mmol/mL), since animal studies have shown that two to seven times more gadolinium is retained in the brain with Gadavist than with ProHance at up to four weeks after injection.5,6
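
The release does not describe how the AI analysis itself was performed. As a toy illustration of the underlying measurement, the sketch below computes mean percent signal enhancement inside a lesion region of interest from pre- and post-contrast images; all arrays, masks, and values are invented, and numpy is assumed.

```python
# Toy illustration only: quantifying lesion contrast enhancement on
# pre- vs. post-contrast MRI. Not the TRUTH study's methodology.
import numpy as np


def percent_enhancement(pre: np.ndarray, post: np.ndarray, roi: np.ndarray) -> float:
    """Mean percent signal increase within a lesion ROI after contrast."""
    baseline = pre[roi].mean()
    return 100.0 * (post[roi].mean() - baseline) / baseline


rng = np.random.default_rng(0)

# Hypothetical 64x64 slice with a square lesion mask.
roi = np.zeros((64, 64), dtype=bool)
roi[20:30, 20:30] = True

pre = rng.normal(100, 5, (64, 64))                       # pre-contrast image
post_agent_a = pre + roi * rng.normal(80, 5, (64, 64))   # e.g. gadoteridol scan
post_agent_b = pre + roi * rng.normal(82, 5, (64, 64))   # e.g. gadobutrol scan

# Similar numbers for both agents would indicate a similar degree of
# enhancement, which is what the release reports.
print(f"agent A: {percent_enhancement(pre, post_agent_a, roi):.1f}% enhancement")
print(f"agent B: {percent_enhancement(pre, post_agent_b, roi):.1f}% enhancement")
```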