ReFrame: Layer Caching for Accelerated Inference in Real-Time Rendering
Graphics rendering applications increasingly leverage neural networks in tasks such as denoising, supersampling, and frame extrapolation to improve image quality while maintaining frame rates. The temporal coherence inherent in these tasks presents an opportunity to reuse intermediate results from previous frames and avoid redundant computations. Recent work has shown that caching intermediate features to be reused in subsequent inferences is an effective method to reduce latency in diffusion models. We extend this idea to real-time rendering and present ReFrame, which explores different caching policies to optimize trade-offs between quality and performance in rendering workloads. ReFrame can be applied to a variety of encoder-decoder style networks commonly found in rendering pipelines. Experimental results show that we achieve 1.4x speedup on average with negligible quality loss in three real-time rendering tasks. Code available: https://ubc-aamodt-group.github.io/reframe-layer-caching/
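The core idea in the abstract, refreshing encoder features only every few frames and reusing them in between, can be sketched in a few lines. This is a hypothetical illustration of an interval-based caching policy, not ReFrame's actual implementation; the names `CachedEncoderDecoder` and `refresh_interval` are assumptions.

```python
class CachedEncoderDecoder:
    """Toy encoder-decoder wrapper that reuses cached encoder features.

    The encoder runs only on frame 0 and every `refresh_interval` frames;
    in between, the decoder consumes the stale (but temporally coherent)
    cached features, skipping the encoder entirely.
    """

    def __init__(self, encoder, decoder, refresh_interval=2):
        self.encoder = encoder
        self.decoder = decoder
        self.refresh_interval = refresh_interval
        self._cached_features = None
        self._frame_index = 0

    def infer(self, frame):
        # Refresh the cache on frame 0 and every refresh_interval frames.
        if self._cached_features is None or self._frame_index % self.refresh_interval == 0:
            self._cached_features = self.encoder(frame)
        self._frame_index += 1
        return self.decoder(frame, self._cached_features)
```

With `refresh_interval=2`, the encoder runs on every other frame, so encoder cost is roughly halved at the price of decoding odd frames against one-frame-old features; real systems would pick the policy per workload, as the abstract's quality/performance trade-off suggests.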
Does "Reasoning" with Large Language Models Improve Recognizing, Generating, and Reframing Unhelpful Thoughts?
Qi, Yilin, Lee, Dong Won, Breazeal, Cynthia, Park, Hae Won
Cognitive Reframing, a core element of Cognitive Behavioral Therapy (CBT), helps individuals reinterpret negative experiences by finding positive meaning. Recent advances in Large Language Models (LLMs) have demonstrated improved performance through reasoning-based strategies. This inspires a promising direction of leveraging the reasoning capabilities of LLMs to improve CBT and mental reframing by simulating the process of critical thinking, potentially enabling more effective recognition, generation, and reframing of cognitive distortions. In this work, we investigate the role of various reasoning methods, including pre-trained reasoning LLMs and augmented reasoning strategies such as CoT and self-consistency, in enhancing LLMs' ability to perform cognitive reframing tasks. We find that augmented reasoning methods, even when applied to "outdated" LLMs like GPT-3.5, consistently outperform state-of-the-art pretrained reasoning models on recognizing, generating, and reframing unhelpful thoughts.
Review for NeurIPS paper: Gibbs Sampling with People
Weaknesses: Overall, I thought this was a strong paper. The main concerns I had were as follows: (1) Mode-seeking versus showing the distribution: The aggregated results in the first experiment seem to show much more homogeneity than the results for GSP or MCMCP. One limitation of this approach might be that there is limited exploration of the space, perhaps making it hard to move between modes and also making it more difficult to see the full shape of the distribution, which I have often taken to be a goal in work using MCMCP. The tension between optimization and seeking a distribution is discussed to some extent in the paper, but I would be interested in seeing this discussed more (and perhaps whether GSP without aggregation is likely to lead to more optimization than MCMCP). In the author response, the authors provided additional information suggesting that GSP is more mode-seeking but also does a better job of capturing the distribution.
Change My Frame: Reframing in the Wild in r/ChangeMyView
Peguero, Arturo Martínez, Watanabe, Taro
Recent work in reframing, within the scope of text style transfer, has so far made use of out-of-context, task-prompted utterances to produce neutralizing or optimistic reframes. Our work aims to generalize reframing based on the subreddit r/ChangeMyView (CMV). We build a dataset that leverages the CMV community's interactions and conventions to identify high-value, community-recognized utterances that produce changes of perspective. With this data, we widen the scope of reframing, since the changes in perspective do not occur only in neutral or positive directions. We fine-tune transformer-based models, make use of a modern LLM to refine our dataset, and explore challenges in dataset creation and evaluation around this type of reframing.
Promoting Constructive Deliberation: Reframing for Receptiveness
Kambhatla, Gauri, Lease, Matthew, Rajadesingan, Ashwin
To promote constructive discussion of controversial topics online, we propose automatic reframing of disagreeing responses to signal receptiveness to a preceding comment. Drawing on research from psychology, communications, and linguistics, we identify six strategies for reframing. We automatically reframe replies to comments according to each strategy, using a Reddit dataset. Through human-centered experiments, we find that the replies generated with our framework are perceived to be significantly more receptive than the original replies and a generic receptiveness baseline. We illustrate how transforming receptiveness, a particular social science construct, into a computational framework can make LLM generations more aligned with human perceptions. We analyze and discuss the implications of our results, and highlight how a tool based on our framework might be used for more teachable and creative content moderation.
Socratic Reasoning Improves Positive Text Rewriting
Goel, Anmol, Daheim, Nico, Gurevych, Iryna
Reframing a negative into a positive thought is at the crux of several cognitive approaches to mental health and psychotherapy that could be made more accessible by large language model-based solutions. Such reframing is typically non-trivial and requires multiple rationalization steps to uncover the underlying issue of a negative thought and transform it to be more positive. However, this rationalization process is currently neglected by both datasets and models which reframe thoughts in one step. In this work, we address this gap by augmenting open-source datasets for positive text rewriting with synthetically-generated Socratic rationales using a novel framework called SocraticReframe. SocraticReframe uses a sequence of question-answer pairs to rationalize the thought rewriting process. We show that such Socratic rationales significantly improve positive text rewriting for different open-source LLMs according to both automatic and human evaluations guided by criteria from psychotherapy research.
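As a rough illustration of the augmented data format the abstract describes (a negative thought, a sequence of Socratic question-answer pairs, and a final reframe), one might represent each record as below. The field names and the flattening scheme are assumptions for illustration, not the paper's actual schema.

```python
from dataclasses import dataclass


@dataclass
class SocraticRecord:
    # Hypothetical record layout for a Socratic-rationale-augmented
    # rewriting dataset; names are illustrative, not taken from the paper.
    negative_thought: str
    qa_rationale: list  # ordered (question, answer) pairs leading to the reframe
    reframe: str

    def as_training_text(self):
        # Flatten the rationale into one supervision string so a model is
        # trained to reason step by step before emitting the rewrite.
        steps = "\n".join(f"Q: {q}\nA: {a}" for q, a in self.qa_rationale)
        return f"Thought: {self.negative_thought}\n{steps}\nReframe: {self.reframe}"
```

The point of the intermediate `qa_rationale` field is exactly the gap the abstract identifies: one-step datasets map thought to reframe directly, whereas this layout forces the rationalization steps to appear in the training signal.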
Facilitating Self-Guided Mental Health Interventions Through Human-Language Model Interaction: A Case Study of Cognitive Restructuring
Sharma, Ashish, Rushton, Kevin, Lin, Inna Wanyin, Nguyen, Theresa, Althoff, Tim
Self-guided mental health interventions, such as "do-it-yourself" tools to learn and practice coping strategies, show great promise to improve access to mental health care. However, these interventions are often cognitively demanding and emotionally triggering, creating accessibility barriers that limit their wide-scale implementation and adoption. In this paper, we study how human-language model interaction can support self-guided mental health interventions. We take cognitive restructuring, an evidence-based therapeutic technique to overcome negative thinking, as a case study. In an IRB-approved randomized field study on a large mental health website with 15,531 participants, we design and evaluate a system that uses language models to support people through various steps of cognitive restructuring. Our findings reveal that our system positively impacts emotional intensity for 67% of participants and helps 65% overcome negative thoughts. Although adolescents report relatively worse outcomes, we find that tailored interventions that simplify language model generations improve overall effectiveness and equity.
AI went to Washington and here's what you need to know about this mind-blowing technology
OpenAI CEO Sam Altman said language and cultural inclusivity is "very important" to his company's mission as it builds and trains powerful artificial intelligence systems. On Tuesday, May 16, Mr. Altman went to Washington. And today, the world feels a little scarier. There's rarely a day when we don't hear some new report about the groundbreaking impact – and potential danger – of this technology. Large language models like ChatGPT have caught the world by surprise with the speed of their learning and what they are now able to do.
Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction
Sharma, Ashish, Rushton, Kevin, Lin, Inna Wanyin, Wadden, David, Lucas, Khendra G., Miner, Adam S., Nguyen, Theresa, Althoff, Tim
A proven therapeutic technique to overcome negative thoughts is to replace them with a more hopeful "reframed thought." Although therapy can help people practice and learn this Cognitive Reframing of Negative Thoughts, clinician shortages and mental health stigma commonly limit people's access to therapy. In this paper, we conduct a human-centered study of how language models may assist people in reframing negative thoughts. Based on psychology literature, we define a framework of seven linguistic attributes that can be used to reframe a thought. We develop automated metrics to measure these attributes and validate them with expert judgements from mental health practitioners. We collect a dataset of 600 situations, thoughts and reframes from practitioners and use it to train a retrieval-enhanced in-context learning model that effectively generates reframed thoughts and controls their linguistic attributes. To investigate what constitutes a "high-quality" reframe, we conduct an IRB-approved randomized field study on a large mental health website with over 2,000 participants. Amongst other findings, we show that people prefer highly empathic or specific reframes, as opposed to reframes that are overly positive. Our findings provide key implications for the use of LMs to assist people in overcoming negative thoughts.
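The "retrieval-enhanced in-context learning" step the abstract mentions can be sketched with a toy retriever. The paper's actual similarity metric and prompt template are not specified here, so everything below (Jaccard overlap over bags of words, the `Thought:`/`Reframe:` prompt layout) is an assumption for illustration only.

```python
def jaccard(a, b):
    """Toy similarity: word-set overlap between two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve_examples(query, dataset, k=2):
    # dataset: list of dicts with "thought" and "reframe" keys.
    # Rank practitioner examples by similarity to the new thought.
    ranked = sorted(dataset, key=lambda ex: jaccard(query, ex["thought"]), reverse=True)
    return ranked[:k]

def build_prompt(query, examples):
    # Assemble retrieved (thought, reframe) pairs as in-context
    # demonstrations, ending with the new thought for the LM to reframe.
    blocks = [f"Thought: {ex['thought']}\nReframe: {ex['reframe']}" for ex in examples]
    blocks.append(f"Thought: {query}\nReframe:")
    return "\n\n".join(blocks)
```

A real system would replace `jaccard` with a learned embedding similarity and send the prompt to a language model; the sketch only shows the retrieve-then-prompt shape of the approach.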
Why AI Will Never Replace Managers
Of all the tools managers use to lead their businesses, thinking is the most crucial. It involves two distinct ways of processing information: intuitive and conscious, which the Nobel laureate Daniel Kahneman labeled thinking fast and slow. Today computers increasingly outperform people in both. With their raw calculative power, computers easily beat humans in conscious-reasoning tasks, as long as the rules and parameters of the situation are known. Managers routinely turn to mathematical optimization and simulation to build investment portfolios, make pricing decisions, and understand supply-chain risks.