



A Potential Negative Societal Impacts

Neural Information Processing Systems

In addition, users may become overly dependent on the model's outputs. For the feedback, we ask the person: "Please consider the quality of the output. Give a score (1-5): 1 means its quality is bad, and 5 means its quality is very good." The interface of the user study is shown in Fig. A1. We report the average scores in Tab. We have a total of 1.1M training examples in FIRE. In Fig. A2, we present the curves of AT, ATR, and RR using different amounts of training data. Results show that more data leads to better performance.
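The 1-5 feedback scores described above are averaged per condition before reporting; a minimal sketch of that aggregation, with hypothetical condition names and scores:

```python
# Minimal sketch of averaging 1-5 user-study ratings per condition.
# The condition names and score values below are hypothetical illustrations,
# not figures from the paper.
from statistics import mean

ratings = {
    "baseline": [3, 4, 2, 3, 3],
    "with_FIRE": [4, 5, 4, 5, 4],
}

# Mean rating per condition, rounded for reporting
averages = {name: round(mean(scores), 2) for name, scores in ratings.items()}
print(averages)  # {'baseline': 3.0, 'with_FIRE': 4.4}
```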







A Principle-based Framework for the Development and Evaluation of Large Language Models for Health and Wellness

Winslow, Brent, Shreibati, Jacqueline, Perez, Javier, Su, Hao-Wei, Young-Lin, Nichole, Hammerquist, Nova, McDuff, Daniel, Guss, Jason, Vafeiadou, Jenny, Cain, Nick, Lin, Alex, Schenck, Erik, Rajagopal, Shiva, Chung, Jia-Ru, Venkatakrishnan, Anusha, Lee, Amy Armento, Karimzadehgan, Maryam, Meng, Qingyou, Agarwal, Rythm, Natarajan, Aravind, Giest, Tracy

arXiv.org Artificial Intelligence

The incorporation of generative artificial intelligence into personal health applications presents a transformative opportunity for personalized, data-driven health and fitness guidance, yet also poses challenges related to user safety, model accuracy, and personal privacy. To address these challenges, a novel, principle-based framework was developed and validated for the systematic evaluation of LLMs applied to personal health and wellness. First, the development of the Fitbit Insights explorer, a large language model (LLM)-powered system designed to help users interpret their personal health data, is described. Subsequently, the safety, helpfulness, accuracy, relevance, and personalization (SHARP) principle-based framework is introduced as an end-to-end operational methodology that integrates comprehensive evaluation techniques including human evaluation by generalists and clinical specialists, autorater assessments, and adversarial testing, into an iterative development lifecycle. Through the application of this framework to the Fitbit Insights explorer in a staged deployment involving over 13,000 consented users, challenges not apparent during initial testing were systematically identified. This process guided targeted improvements to the system and demonstrated the necessity of combining isolated technical evaluations with real-world user feedback. Finally, a comprehensive, actionable approach is established for the responsible development and deployment of LLM-powered health applications, providing a standardized methodology to foster innovation while ensuring emerging technologies are safe, effective, and trustworthy for users.
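The SHARP framework evaluates a response along five named dimensions (safety, helpfulness, accuracy, relevance, personalization). One simple way such per-dimension ratings can gate a release is to require every dimension to clear a threshold; the sketch below illustrates that idea with a hypothetical threshold and scoring scale, not the paper's actual decision rule:

```python
# Hypothetical sketch of SHARP-style gating: a response must meet a
# per-dimension threshold on all five dimensions to count as passing.
# The threshold, the 1-5 scale, and the example scores are assumptions.
SHARP_DIMENSIONS = ("safety", "helpfulness", "accuracy", "relevance", "personalization")

def passes_sharp(scores: dict, threshold: float = 4.0) -> bool:
    """Return True only if every SHARP dimension meets the threshold.

    `scores` maps dimension name -> rating (e.g. from autoraters or
    human evaluators); a missing dimension counts as a failure.
    """
    return all(scores.get(dim, 0) >= threshold for dim in SHARP_DIMENSIONS)

example = {"safety": 5, "helpfulness": 4, "accuracy": 4,
           "relevance": 5, "personalization": 3}
print(passes_sharp(example))  # personalization below threshold -> False
```

Requiring all dimensions to pass (rather than averaging) reflects that a single safety failure should not be offset by high scores elsewhere.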


Human-Centred Evaluation of Text-to-Image Generation Models for Self-expression of Mental Distress: A Dataset Based on GPT-4o

He, Sui, Qian, Shenbin

arXiv.org Artificial Intelligence

Effective communication is central to achieving positive healthcare outcomes in mental health contexts, yet international students often face linguistic and cultural barriers that hinder their communication of mental distress. In this study, we evaluate the effectiveness of AI-generated images in supporting self-expression of mental distress. To achieve this, twenty Chinese international students studying at UK universities were invited to describe their personal experiences of mental distress. These descriptions were elaborated using GPT-4o with four persona-based prompt templates rooted in contemporary counselling practice to generate corresponding images. Participants then evaluated the helpfulness of generated images in facilitating the expression of their feelings based on their original descriptions. The resulting dataset comprises 100 textual descriptions of mental distress, 400 generated images, and corresponding human evaluation scores. Findings indicate that prompt design substantially affects perceived helpfulness, with the illustrator persona achieving the highest ratings. This work introduces the first publicly available text-to-image evaluation dataset with human judgment scores in the mental health domain, offering valuable resources for image evaluation, reinforcement learning with human feedback, and multi-modal research on mental health communication.
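The dataset pairs each textual description with four persona-conditioned images and per-image human helpfulness scores, and personas are compared by their mean ratings. A minimal sketch of that per-persona ranking, using hypothetical scores (only the "illustrator" persona name comes from the abstract; the other persona and all values are invented for illustration):

```python
# Hypothetical sketch: rank persona-based prompt templates by mean human
# helpfulness score. The real dataset has 100 descriptions x 4 personas
# = 400 scored images; the scores below are invented for illustration.
from collections import defaultdict
from statistics import mean

# (persona, helpfulness score) pairs — hypothetical values
scores = [
    ("illustrator", 4), ("illustrator", 5),
    ("counsellor", 3), ("counsellor", 4),
]

by_persona = defaultdict(list)
for persona, score in scores:
    by_persona[persona].append(score)

# Sort personas from highest to lowest mean helpfulness
ranking = sorted(by_persona, key=lambda p: mean(by_persona[p]), reverse=True)
print(ranking)  # ['illustrator', 'counsellor']
```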