receptiveness
Persuasion Dynamics in LLMs: Investigating Robustness and Adaptability in Knowledge and Safety with DuET-PD
Tan, Bryan Chen Zhengyu, Chin, Daniel Wai Kit, Liu, Zhengyuan, Chen, Nancy F., Lee, Roy Ka-Wei
Large Language Models (LLMs) can struggle to balance gullibility to misinformation and resistance to valid corrections in persuasive dialogues, a critical challenge for reliable deployment. We introduce DuET-PD (Dual Evaluation for Trust in Persuasive Dialogues), a framework evaluating multi-turn stance-change dynamics across dual dimensions: persuasion type (corrective/misleading) and domain (knowledge via MMLU-Pro, and safety via SALAD-Bench). We find that even a state-of-the-art model like GPT-4o achieves only 27.32% accuracy in MMLU-Pro under sustained misleading persuasions. Moreover, results reveal a concerning trend of increasing sycophancy in newer open-source models. To address this, we introduce Holistic DPO, a training approach balancing positive and negative persuasion examples. Unlike prompting or resist-only training, Holistic DPO enhances both robustness to misinformation and receptiveness to corrections, improving Llama-3.1-8B-Instruct's accuracy under misleading persuasion in safety contexts from 4.21% to 76.54%. These contributions offer a pathway to developing more reliable and adaptable LLMs for multi-turn dialogue. Code is available at https://github.com/Social-AI-Studio/DuET-PD.
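The "Holistic DPO" idea described above — balancing preference pairs across both persuasion types so the model learns to resist misleading persuasion and accept corrective persuasion — might be assembled roughly as follows. This is a minimal sketch; the dialogue fields and pairing scheme are illustrative assumptions, not the paper's actual data format:

```python
def build_holistic_dpo_pairs(dialogues):
    """Assemble balanced DPO preference pairs from persuasion dialogues.

    For misleading persuasion, the preferred ("chosen") response holds the
    original correct stance and the dispreferred ("rejected") one capitulates;
    for corrective persuasion, the roles flip: accepting the correction is
    preferred, resisting it is rejected. Field names are hypothetical.
    """
    pairs = []
    for d in dialogues:
        if d["persuasion"] == "misleading":
            chosen, rejected = d["hold_stance"], d["capitulate"]
        else:  # "corrective"
            chosen, rejected = d["accept_correction"], d["resist"]
        pairs.append({"prompt": d["prompt"], "chosen": chosen, "rejected": rejected})
    return pairs
```

Pairs in this `prompt`/`chosen`/`rejected` shape are the standard input format for common DPO training implementations.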
Investigating Context-Faithfulness in Large Language Models: The Roles of Memory Strength and Evidence Style
Li, Yuepei, Zhou, Kang, Qiao, Qiao, Nguyen, Bach, Wang, Qing, Li, Qi
Retrieval-augmented generation (RAG) improves Large Language Models (LLMs) by incorporating external information into the response generation process. However, how context-faithful LLMs are, and what factors influence their context-faithfulness, remain largely unexplored. In this study, we investigate the impact of memory strength and evidence presentation on LLMs' receptiveness to external evidence. We introduce a method to quantify the memory strength of LLMs by measuring the divergence in their responses to different paraphrases of the same question, a factor not considered by previous works. We also generate evidence in various styles to evaluate its effects. Two datasets are used for evaluation: Natural Questions (NQ), with popular questions, and PopQA, featuring long-tail questions. Our results show that for questions with high memory strength, LLMs are more likely to rely on internal memory, particularly larger LLMs such as GPT-4. On the other hand, presenting paraphrased evidence significantly increases LLMs' receptiveness compared to simple repetition or adding details.
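The memory-strength measure described above — divergence across a model's responses to paraphrases of the same question — could be approximated with a simple pairwise agreement score. This is a hedged simplification of the paper's method, with the LLM call itself left out:

```python
from itertools import combinations

def memory_strength(answers):
    """Crude proxy for memory strength: the fraction of paraphrase pairs
    whose answers agree. High agreement (low divergence) across paraphrases
    suggests the model is drawing on a stable internal memory; low agreement
    suggests weak memory and greater openness to external evidence.
    `answers` holds the model's answer to each paraphrase of one question.
    """
    pairs = list(combinations(answers, 2))
    if not pairs:  # zero or one paraphrase: no divergence observable
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

# Example: answers to three paraphrases of "What is the capital of France?"
print(memory_strength(["Paris", "Paris", "Paris"]))  # 1.0 (strong memory)
print(memory_strength(["Paris", "Lyon", "Paris"]))   # lower: answers diverge
```

In practice a softer comparison (e.g. semantic similarity between answers) would replace exact string equality.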
Promoting Constructive Deliberation: Reframing for Receptiveness
Kambhatla, Gauri, Lease, Matthew, Rajadesingan, Ashwin
To promote constructive discussion of controversial topics online, we propose automatic reframing of disagreeing responses to signal receptiveness to a preceding comment. Drawing on research from psychology, communications, and linguistics, we identify six strategies for reframing. We automatically reframe replies to comments according to each strategy, using a Reddit dataset. Through human-centered experiments, we find that the replies generated with our framework are perceived to be significantly more receptive than the original replies and a generic receptiveness baseline. We illustrate how transforming receptiveness, a particular social science construct, into a computational framework, can make LLM generations more aligned with human perceptions. We analyze and discuss the implications of our results, and highlight how a tool based on our framework might be used for more teachable and creative content moderation.
Personalising Digital Health Behaviour Change Interventions using Machine Learning and Domain Knowledge
Lisowska, Aneta, Wilk, Szymon, Peleg, Mor
We are developing a virtual coaching system that helps patients adhere to behaviour change interventions (BCIs). Our proposed system predicts whether a patient will perform the targeted behaviour and uses counterfactual examples with feature control to guide personalisation of the BCI. We use simulated patient data with varying levels of receptivity to intervention to arrive at a study design that would enable evaluation of our system.
The rise of AI in medicine
By now, it's almost old news that artificial intelligence (AI) will have a transformative role in medicine. Algorithms have the potential to work tirelessly, at faster rates and now with potentially greater accuracy than clinicians. In 2016, it was predicted that 'machine learning will displace much of the work of radiologists and anatomical pathologists'. In the same year, a University of Toronto professor controversially announced that 'we should stop training radiologists now'. But is it really the beginning of the end for some medical specialties?