'At 2am, it feels like someone's there': why Nigerians are choosing chatbots to give them advice and therapy
AI platforms offering first-line mental health support have proliferated in Nigeria, where health services are sparse and underfunded. On a quiet evening in her Abuja hotel, Joy Adeboye, 23, sits on her bed clutching her phone, her mind racing and chest tightening. On her screen is yet another abusive message from her stalker - a man she had met nine months earlier at her church. He had asked Adeboye out; when she declined, he began sending her intimidating, insulting and blackmailing messages on social media, as well as spreading false information about her online.
- North America > United States (0.69)
- Africa > Nigeria > Federal Capital Territory > Abuja (0.26)
- Oceania > Australia (0.06)
- (4 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
The big AI job swap: why white-collar workers are ditching their careers
Have you retrained or moved careers because your previous career path was at risk of an artificial intelligence takeover? Did you have a dream profession that you decided not to pursue for fear it would be thwarted by AI? Please include as much detail as possible.
- Europe > United Kingdom (0.14)
- Europe > Sweden > Skåne County > Malmö (0.04)
- Oceania > Australia (0.04)
- (4 more...)
- Education (1.00)
- Banking & Finance (0.94)
- Leisure & Entertainment > Sports (0.68)
- (2 more...)
- Information Technology > Communications > Social Media (0.95)
- Information Technology > Artificial Intelligence > Robots (0.68)
The Download: US immigration agencies' AI videos, and inside the Vitalism movement
Plus: French company Capgemini has confirmed it's no longer working with ICE. The US Department of Homeland Security is using AI video generators from Google and Adobe to make and edit content shared with the public, a new document reveals. The document, released on Wednesday, provides an inventory of which commercial AI tools DHS uses for tasks ranging from generating drafts of documents to managing cybersecurity. It comes as immigration agencies have flooded social media with content to support President Trump's mass deportation agenda - some of which appears to be made with AI - and as workers in tech have put pressure on their employers to denounce the agencies' activities. For the last couple of years, I've been following the progress of a group of individuals who believe death is humanity's "core problem." Put simply, they say death is wrong - for everyone. They've even said it's morally wrong.
- Asia > China (0.08)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.05)
- North America > United States > Massachusetts (0.05)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.74)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.51)
My friends in Italy are using AI therapists. But is that so bad, when a stigma surrounds mental health? Viola Di Grado
An estimated 5 million Italians are in need of mental health support but are unable to afford it. State provision for psychological health services is lamentable.
- North America > United States (0.15)
- Oceania > Australia (0.06)
- Europe > Italy > Sicily > Palermo (0.05)
- (5 more...)
- Information Technology > Communications > Social Media (0.73)
- Information Technology > Artificial Intelligence > Applied AI (0.71)
We're getting intimate with chatbots. A new book asks what this means
AI chatbots can take on many roles in our lives. James Muldoon's Love Machines looks into the relationships we're forging with them Artificial intelligence is now unavoidable - although there are those among us who try. Even if you don't seek out a chatbot, you will see new icons in your current apps to bring them within a single click: WhatsApp, Google Drive, even Microsoft Notepad, the simplest program imaginable. The tech industry is making an enormous and costly bet on AI, and, in turn, is forcing it on users to make good on this investment. Many are embracing it to take over writing, admin or planning, and a minority are going a step further and forming intimate relationships with it.
- Information Technology > Services (0.55)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.49)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
The ascent of the AI therapist
Four new books grapple with a global mental-health crisis and the dawn of algorithmic therapy. [Photo caption: A technician adjusts the wiring inside the Mark I Perceptron, an early AI system designed not by a mathematician but by a psychologist.] More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly young people, and suicide is claiming hundreds of thousands of lives globally each year. Given the clear demand for accessible and affordable mental-health services, it's no wonder that people have looked to artificial intelligence for possible relief.
- North America > United States > Massachusetts (0.04)
- Asia > China (0.04)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.98)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.75)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.30)
It's One of the Hardest Confrontations Anyone Can Have. It Might Be One Good Use of a Controversial Technology.
"Why Did You Do It?" A radical new use of deepfake technology is allowing survivors of abuse to confront their perpetrators. Marina vd Roest hadn't faced the man who abused her in decades when she first sat down in front of the laptop. Confronted with his realistic, blinking, speaking face, she felt "scared like a little child again." "Sometimes I had to close the laptop and get my breath back before opening it and continuing with the conversation," she says. Vd Roest is one of the first people to have tried out a radical new form of therapy that involves putting survivors face-to-face with A.I.-generated deepfakes of their attackers as a means of healing unresolved trauma.
Clinician-Directed Large Language Model Software Generation for Therapeutic Interventions in Physical Rehabilitation
Kim, Edward, Cho, Yuri, Lima, Jose Eduardo E., Muccini, Julie, Jindal, Jenelle, Scheid, Alison, Nelson, Erik, Park, Seong Hyun, Zeng, Yuchen, Sturgis, Alton, Li, Caesar, Dai, Jackie, Kim, Sun Min, Prakash, Yash, Sun, Liwen, Hu, Isabella, Wu, Hongxuan, He, Daniel, Rajca, Wiktor, Halabi, Cathra, Lansberg, Maarten, Hartmann, Bjoern, Seshia, Sanjit A.
Digital health interventions increasingly deliver home exercise programs via sensor-equipped devices such as smartphones, enabling remote monitoring of adherence and performance. However, current software is usually authored before clinical encounters as libraries of modules for broad impairment categories. At the point of care, clinicians can only choose from these modules and adjust a few parameters (for example, duration or repetitions). As a result, individual limitations, goals, and environmental constraints are often not reflected, limiting personalization and benefit. We propose a paradigm in which large language models (LLMs) act as constrained translators that convert clinicians' exercise prescriptions into intervention software. Clinicians remain the decision makers: they design exercises during the encounter, tailored to each patient's impairments, goals, and environment, and the LLM generates matching software. We conducted a prospective single-arm feasibility study with 20 licensed physical and occupational therapists who created 40 individualized upper extremity programs for a standardized patient; 100% of prescriptions were translated into executable software, compared with 55% under a representative template-based digital health intervention (p < 0.01). LLM-generated software correctly delivered 99.7% of instructions and monitored performance with 88.4% accuracy (95% confidence interval, 0.843-0.915). Overall, 90% of therapists judged the system safe for patient interaction and 75% expressed willingness to adopt it in practice. To our knowledge, this is the first prospective evaluation of clinician-directed intervention software generation with an LLM in health care, demonstrating feasibility and motivating larger trials in real patient populations.
- North America > United States > California > San Francisco County > San Francisco (0.28)
- North America > United States > California > Alameda County > Berkeley (0.14)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Therapeutic Area > Musculoskeletal (1.00)
- Health & Medicine > Consumer Health (1.00)
- Health & Medicine > Health Care Technology (0.87)
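The "constrained translator" paradigm in the abstract above can be sketched as a validation gate between free-text prescriptions and executable programs. This is a hypothetical illustration, not the authors' system: the schema, the field names, and the canned `translate()` stub are all invented stand-ins for what their clinician-facing LLM pipeline would produce.

```python
# Hypothetical sketch of "LLM as constrained translator": a clinician's
# free-text exercise prescription is turned into a structured intervention
# program, and anything that does not satisfy the schema is rejected rather
# than delivered to the patient. All names here are illustrative assumptions.
import json

# Fields every executable program must carry (invented for this sketch).
PROGRAM_SCHEMA = {"exercise", "repetitions", "duration_s", "monitored_joint"}

def validate_program(program: dict) -> bool:
    """Reject any generated output that is not executable under the schema."""
    return (
        PROGRAM_SCHEMA <= program.keys()
        and isinstance(program["repetitions"], int)
        and program["repetitions"] > 0
    )

def translate(prescription: str) -> dict:
    # In the real system an LLM would generate this from the prescription;
    # here a canned example stands in so the constraint check can run.
    return {
        "exercise": "seated shoulder flexion",
        "repetitions": 10,
        "duration_s": 120,
        "monitored_joint": "shoulder",
    }

program = translate("10 reps of seated shoulder flexion, monitor the shoulder")
assert validate_program(program)
print(json.dumps(program, indent=2))
```

The design point is the gate, not the stub: because only schema-conformant programs reach the device, the clinician's prescription is the decision, and the model is confined to translation.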
LLM-as-a-Supervisor: Mistaken Therapeutic Behaviors Trigger Targeted Supervisory Feedback
Xu, Chen, Lv, Zhenyu, Lan, Tian, Wang, Xianyang, Ji, Luyao, Cui, Leyang, Yang, Minqiang, Shen, Jian, Dong, Qunxi, Liu, Xiuling, Wang, Juan, Hu, Bin
Although large language models (LLMs) hold significant promise in psychotherapy, their direct application in patient-facing scenarios raises ethical and safety concerns. This work therefore shifts towards developing an LLM as a supervisor to train real therapists. Beyond the privacy constraints on clinical therapist-training data, a fundamental contradiction complicates the training of therapeutic behaviors: clear feedback standards are necessary to ensure a controlled training system, yet there is no absolute "gold standard" for appropriate therapeutic behaviors in practice. In contrast, many common therapeutic mistakes are universal and identifiable, making them effective triggers for targeted feedback that can serve as clearer evidence. Motivated by this, we create a novel therapist-training paradigm: (1) guidelines for mistaken behaviors and targeted correction strategies are first established as standards; (2) a human-in-the-loop dialogue-feedback dataset is then constructed, in which a mistake-prone agent deliberately introduces standard mistakes into otherwise natural interviews, and a supervisor agent locates and identifies the mistakes and provides targeted feedback; (3) after fine-tuning on this dataset, the final supervisor model is provided for real therapist training. Detailed experimental results from automated, human and downstream assessments demonstrate that models fine-tuned on our dataset, MATE, can provide high-quality feedback according to the clinical guideline, showing significant potential for the therapist-training scenario.
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Asia > China > Gansu Province > Lanzhou (0.04)
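The dataset-construction step in the abstract above (a mistake-prone agent emits dialogue turns, and a supervisor locates known mistakes and attaches targeted feedback) can be sketched in a few lines. This is a minimal illustration under invented assumptions: the mistake categories, feedback strings, and turn format are placeholders, not the MATE guideline or data schema.

```python
# Minimal sketch of the human-in-the-loop dialogue-feedback construction:
# turns tagged with a known mistake category get targeted feedback drawn
# from a guideline table. Categories and wording are invented for illustration.

# Guideline mapping mistake categories to correction strategies (assumed).
MISTAKE_GUIDELINE = {
    "premature_advice": "Explore the client's feelings before offering solutions.",
    "judgmental_tone": "Rephrase without evaluating the client's choices.",
}

def supervise(turn: dict):
    """Return targeted feedback if the turn carries a known mistake, else None."""
    mistake = turn.get("mistake")
    if mistake in MISTAKE_GUIDELINE:
        return {
            "turn_id": turn["id"],
            "mistake": mistake,
            "feedback": MISTAKE_GUIDELINE[mistake],
        }
    return None

# A mistake-prone agent's dialogue, with one deliberate standard mistake.
dialogue = [
    {"id": 1, "text": "How have you been feeling this week?", "mistake": None},
    {"id": 2, "text": "You should just quit that job.", "mistake": "premature_advice"},
]

# Keep only turns that triggered feedback: these (turn, feedback) pairs are
# what a supervisor model would later be fine-tuned on.
dataset = [fb for turn in dialogue if (fb := supervise(turn))]
```

In the paper's framing, this trigger-based design is the point: mistakes are identifiable even when ideal behavior is not, so they give the feedback signal a defensible standard.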
Robot-mediated physical Human-Human Interaction in Neurorehabilitation: a position paper
Vianello, Lorenzo, Short, Matthew, Manczurowsky, Julia, Küçüktabak, Emek Barış, Di Tommaso, Francesco, Noccaro, Alessia, Bandini, Laura, Clark, Shoshana, Fiorenza, Alaina, Lunardini, Francesca, Canton, Alberto, Gandolla, Marta, Pedrocchi, Alessandra L. G., Ambrosini, Emilia, Murie-Fernandez, Manuel, Roman, Carmen B., Tornero, Jesus, Leon, Natacha, Sawers, Andrew, Patton, Jim, Formica, Domenico, Tagliamonte, Nevio Luigi, Rauter, Georg, Baur, Kilian, Just, Fabian, Hasson, Christopher J., Novak, Vesna D., Pons, Jose L.
Neurorehabilitation conventionally relies on the interaction between a patient and a physical therapist. Robotic systems can improve and enrich the physical feedback provided to patients after neurological injury, but they under-utilize the adaptability and clinical expertise of trained therapists. In this position paper, we advocate for a novel approach that integrates the therapist's clinical expertise and nuanced decision-making with the strength, accuracy, and repeatability of robotics: Robot-mediated physical Human-Human Interaction. This framework, which enables two individuals to physically interact through robotic devices, has been studied across diverse research groups and has recently emerged as a promising link between conventional manual therapy and rehabilitation robotics, harmonizing the strengths of both approaches. This paper presents the rationale of a multidisciplinary team (including engineers, doctors, and physical therapists) for conducting research that utilizes: a unified taxonomy to describe robot-mediated rehabilitation, a framework of interaction based on social psychology, and a technological approach that makes robotic systems seamless facilitators of natural human-human interaction.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (11 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Research Report > Strength High (0.93)