Black Friday Protein Powder Deals and Supplement Steals (2025)
From protein supplements and electrolytes to greens powders and energy drinks, these are the discounted picks worth snagging. The wellness industry is a wild marketplace. You can't trust the marketing alone, and FDA regulation of supplements is quite limited. It pays to be cautious. So for this year's Black Friday, we sifted through the markdowns, cross-checked claims, verified third-party tests, and sampled the supplements so you don't have to.
3 common alcohol myths, debunked
Breakthroughs, discoveries, and DIY tips sent every weekday. Humans have a long history with alcohol--we've been making and consuming it for over ten thousand years, about as long as we've had agriculture. That's a long time for people to come up with all kinds of ideas about the drug and how it works. So, not surprisingly, some of them are wrong. Here are a few common myths about alcohol, debunked by scientific research.
Best Adaptogen Drinks and Functional Drinks of 2025: Get Clear
We drank adaptogen drinks for weeks, and taste-tested with a trained sommelier. All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. The best adaptogen drinks promise not just to wake you up in the morning, but to offer focus and clarity and maybe even a warm wash of well-being. A different drink might tuck you gently in at night, or sub in for alcohol as a mindful party drink. I've spent months trying some of the most popular functional drinks on the market, bedding down with kava or tryptophan-laced chicha morada, and waking up with caffeine and L-theanine. Many of the new school of nootropic and functional drinks are like kissing cousins of mushroom coffee, except in refreshing soda form. Functional sodas might be chockablock with mushroom adaptogens such as reishi and cordyceps, alongside traditional home anxiety remedies such as ashwagandha or L-theanine. I logged the effects of each soda and held a large taste test with Portland, Oregon, sommelier Sami Gaston, owner of the excellent wine bar Bar Diane and the shop Negociant, to determine how happy you'd be to drink them even if they didn't help you focus better on endless spreadsheets or the hunt for a job. Also check out WIRED's guide to mushroom gummies, or take your wellness in powdered form with the best greens powders and the best protein powders.
Pushing LLMs to Their Logical Reasoning Bound: The Role of Data Reasoning Intensity
Bi, Zhen, Hu, Zhenlin, Yang, Jinnan, Chen, Mingyang, Deng, Cheng, Xue, Yida, Yang, Zeyu, Shen, Qing, Liu, Zhenfang, Zhao, Kang, Zhang, Ningyu, Lou, Jungang
Recent advances in large language models (LLMs) highlight the importance of training data structure and quality in shaping reasoning behavior. However, most existing approaches focus on transforming data formats while neglecting the internal reasoning complexity of training samples, leaving the reasoning potential of data under-explored and underutilized. In this work, we posit that LLM logical reasoning performance is jointly constrained by the potential of the training data and the cognitive capacity of the model. To make this relationship measurable, we introduce Data Reasoning Intensity (DRI), a novel metric that quantifies the latent logical reasoning complexity of samples by decomposing and aggregating their logical structures. This allows us to analyze how well current LLMs utilize logical reasoning signals and identify performance gaps relative to data potential. Based on this insight, we introduce a re-cognizing optimization strategy that systematically enhances the logical reasoning intensity of training data. Rather than increasing data volume, our method re-optimizes existing samples to better align with the LLM's logical reasoning boundary. Extensive experiments show that our approach significantly improves performance and generalization over data-centric strategies. We further validate our method under a reinforcement learning framework. Our results indicate that prioritizing reasoning complexity in data rather than sheer scale or superficial form is essential to realizing LLMs' full cognitive potential.
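The abstract's core idea, quantifying a sample's latent logical complexity by decomposing and aggregating its logical structure, can be illustrated with a toy sketch. This is not the paper's actual DRI metric; the marker list, weights, and length normalization below are all illustrative assumptions.

```python
import re

# Toy sketch of a Data Reasoning Intensity (DRI)-style score: decompose a
# training sample into logical-structure markers and aggregate weighted counts.
# The marker vocabulary and weights here are assumptions for illustration only.
LOGICAL_MARKERS = {
    "if": 1.0, "then": 1.0, "not": 0.5, "and": 0.5, "or": 0.5,
    "all": 1.5, "some": 1.5, "therefore": 2.0, "because": 1.0,
}

def toy_dri(sample: str) -> float:
    """Aggregate weighted logical-marker counts, normalized by token count."""
    tokens = re.findall(r"[a-z]+", sample.lower())
    if not tokens:
        return 0.0
    score = sum(LOGICAL_MARKERS.get(t, 0.0) for t in tokens)
    return score / len(tokens)

low = toy_dri("The cat sat on the mat.")
high = toy_dri("If all cats sleep and some cats purr, then not all are awake.")
```

Under this sketch, a sample dense in conditionals and quantifiers scores higher than plain narrative text, which is the kind of signal a DRI-style metric would use to rank or re-optimize training samples.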
Enhancing Logical Reasoning in Language Models via Symbolically-Guided Monte Carlo Process Supervision
Tan, Xingwei, Valentino, Marco, Akhter, Mahmud, Liakata, Maria, Aletras, Nikolaos
Large language models (LLMs) have shown strong performance in many reasoning benchmarks. However, recent studies have pointed to memorization, rather than generalization, as one of the leading causes for such performance. LLMs, in fact, are susceptible to content variations, demonstrating a lack of robust planning or symbolic abstractions supporting their reasoning process. To improve reliability, many attempts have been made to combine LLMs with symbolic methods. Nevertheless, existing approaches fail to effectively leverage symbolic representations due to the challenges involved in developing reliable and scalable verification mechanisms. In this paper, we propose to overcome such limitations by synthesizing high-quality symbolic reasoning trajectories with stepwise pseudo-labels at scale via Monte Carlo estimation. A Process Reward Model (PRM) can be efficiently trained based on the synthesized data and then used to select more symbolic trajectories. The trajectories are then employed with Direct Preference Optimization (DPO) and Supervised Fine-Tuning (SFT) to improve logical reasoning and generalization. Our results on benchmarks (i.e., FOLIO and LogicAsker) show the effectiveness of the proposed method with gains on frontier and open-weight models. Moreover, additional experiments on claim verification data reveal that fine-tuning on the generated symbolic reasoning trajectories enhances out-of-domain generalizability, suggesting the potential impact of the proposed method in enhancing planning and logical reasoning.
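The Monte Carlo estimation step described above can be sketched in miniature: a reasoning step's pseudo-label is the fraction of rollouts continuing from that prefix that reach a correct final answer. The rollout function and probabilities below are stand-ins, not the paper's implementation.

```python
import random

# Hedged sketch of Monte Carlo stepwise pseudo-labeling for PRM training.
# rollout_success is a stand-in for "continue the symbolic trajectory from
# this prefix and check the final answer"; solve_prob is an assumed stand-in
# for the true difficulty of completing the trajectory correctly.
def rollout_success(prefix_steps, solve_prob):
    return random.random() < solve_prob

def mc_step_label(prefix_steps, solve_prob, n_rollouts=200):
    """Pseudo-label in [0, 1]: estimated success rate from this prefix."""
    wins = sum(rollout_success(prefix_steps, solve_prob) for _ in range(n_rollouts))
    return wins / n_rollouts

random.seed(0)
good = mc_step_label(["parse premises", "apply modus ponens"], solve_prob=0.9)
bad = mc_step_label(["parse premises", "invalid inference"], solve_prob=0.1)
```

A Process Reward Model trained on such labels can then score and select trajectories, as the abstract describes, without hand-annotated step supervision.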
CRANE: Reasoning with constrained LLM generation
Banerjee, Debangshu, Suresh, Tarun, Ugare, Shubham, Misailovic, Sasa, Singh, Gagandeep
Code generation, symbolic math reasoning, and other tasks require LLMs to produce outputs that are both syntactically and semantically correct. Constrained LLM generation is a promising direction to enforce adherence to formal grammar, but prior works have empirically observed that strict enforcement of formal constraints often diminishes the reasoning capabilities of LLMs. In this work, we first provide a theoretical explanation for why constraining LLM outputs to very restrictive grammars that only allow syntactically valid final answers reduces the reasoning capabilities of the model. Second, we demonstrate that by augmenting the output grammar with carefully designed additional rules, it is always possible to preserve the reasoning capabilities of the LLM while ensuring syntactic and semantic correctness in its outputs. Building on these theoretical insights, we propose a reasoning-augmented constrained decoding algorithm, CRANE, which effectively balances the correctness of constrained generation with the flexibility of unconstrained generation. Experiments on multiple open-source LLMs and benchmarks show that CRANE significantly outperforms both state-of-the-art constrained decoding strategies and standard unconstrained decoding, with up to 10 percentage points of accuracy improvement over baselines on the challenging symbolic reasoning benchmarks GSM-symbolic and FOLIO.
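The balance CRANE strikes can be illustrated with a minimal sketch: decode freely in the reasoning region, and apply a grammar mask only inside designated answer delimiters, so constraints never throttle the chain of thought. The delimiters, token stream, and digits-only answer grammar below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of alternating unconstrained/constrained decoding.
# Tokens outside the answer span are never masked; inside the span,
# only tokens matching a toy "digits-only" answer grammar survive.
ANSWER_OPEN, ANSWER_CLOSE = "<ans>", "</ans>"
DIGITS = set("0123456789")

def allowed(token: str, in_answer: bool) -> bool:
    if not in_answer:
        return True  # unconstrained reasoning region
    return token == ANSWER_CLOSE or all(c in DIGITS for c in token)

def constrained_decode(stream):
    out, in_answer = [], False
    for tok in stream:
        if tok == ANSWER_OPEN:
            in_answer = True
            out.append(tok)
            continue
        if not allowed(tok, in_answer):
            continue  # mask grammar-violating tokens inside the answer span
        out.append(tok)
        if tok == ANSWER_CLOSE:
            in_answer = False
    return out

toks = ["Let", "x=6*7", ".", "<ans>", "42", "oops", "</ans>"]
result = constrained_decode(toks)
```

In a real decoder the mask would zero out logits of disallowed next tokens rather than filter a fixed stream, but the structural idea (the grammar applies only where correctness is checked) is the same.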
FoRAG: Factuality-optimized Retrieval Augmented Generation for Web-enhanced Long-form Question Answering
Cai, Tianchi, Tan, Zhiwen, Song, Xierui, Sun, Tao, Jiang, Jiyan, Xu, Yunqi, Zhang, Yinger, Gu, Jinjie
Retrieval Augmented Generation (RAG) has become prevalent in question-answering (QA) tasks due to its ability to use search engines to enhance the quality of long-form question-answering (LFQA). Despite the emergence of various open source methods and web-enhanced commercial systems such as Bing Chat, two critical problems remain unsolved, i.e., the lack of factuality and clear logic in the generated long-form answers. In this paper, we remedy these issues via a systematic study on answer generation in web-enhanced LFQA. Specifically, we first propose a novel outline-enhanced generator to achieve clear logic in the generation of multifaceted answers and construct two datasets accordingly. Then we propose a factuality optimization method based on a carefully designed doubly fine-grained RLHF framework, which contains automatic evaluation and reward modeling at different levels of granularity. Our generic framework subsumes conventional fine-grained RLHF methods as special cases. Extensive experiments verify the superiority of our proposed Factuality-optimized RAG (FoRAG) method on both English and Chinese benchmarks. In particular, when applying our method to Llama2-7B-chat, the derived model FoRAG-L-7B outperforms WebGPT-175B in terms of three commonly used metrics (i.e., coherence, helpfulness, and factuality), while the number of parameters is much smaller (only 1/24 of that of WebGPT-175B). Our datasets and models are made publicly available for better reproducibility: https://huggingface.co/forag.
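The outline-enhanced generation stage can be sketched as a two-pass pipeline: first draft an outline from retrieved evidence, then expand each outline point into a section, so the long-form answer keeps a clear logical skeleton. In the paper both passes are LLM calls; the stand-in functions and evidence structure below are assumptions for illustration.

```python
# Illustrative two-pass sketch of outline-enhanced long-form answer generation.
# draft_outline and expand stand in for LLM calls conditioned on retrieval.
def draft_outline(question, evidence):
    """Pass 1: derive a numbered outline from retrieved evidence."""
    return [f"{i + 1}. {e['title']}" for i, e in enumerate(evidence)]

def expand(question, outline, evidence):
    """Pass 2: expand each outline point into a grounded section."""
    sections = [f"{point}\n{e['snippet']}" for point, e in zip(outline, evidence)]
    return "\n\n".join(sections)

evidence = [
    {"title": "Definition", "snippet": "RAG retrieves documents before generating."},
    {"title": "Benefit", "snippet": "Grounding answers in evidence improves factuality."},
]
outline = draft_outline("What is RAG?", evidence)
answer = expand("What is RAG?", outline, evidence)
```

Separating the outline pass from the expansion pass is what gives the "clear logic" the abstract refers to: the answer's structure is fixed before any section is written.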
Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs
Ahmadian, Arash, Cremer, Chris, Gallé, Matthias, Fadaee, Marzieh, Kreutzer, Julia, Pietquin, Olivier, Üstün, Ahmet, Hooker, Sara
AI alignment in the form of Reinforcement Learning from Human Feedback (RLHF) is increasingly treated as a crucial ingredient for high-performance large language models. Proximal Policy Optimization (PPO) has been positioned by recent literature as the canonical method for the RL part of RLHF. However, it involves both high computational cost and sensitive hyperparameter tuning. We posit that most of the motivational principles that led to the development of PPO are less of a practical concern in RLHF and advocate for a less computationally expensive method that preserves and even increases performance. We revisit the formulation of alignment from human preferences in the context of RL. Keeping simplicity as a guiding principle, we show that many components of PPO are unnecessary in an RLHF context and that far simpler REINFORCE-style optimization variants outperform both PPO and newly proposed "RL-free" methods such as DPO and RAFT. Our work suggests that careful adaptation to the characteristics of LLM alignment makes it possible to benefit from online RL optimization at low cost.
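The simplicity argument can be made concrete with a toy sketch of the REINFORCE weighting step: score sampled completions with a reward, subtract a baseline, and scale each completion's log-probability by the resulting advantage, with no clipping, no value network, and no per-token critic as in PPO. The mean-reward baseline below is one simple choice, not necessarily the variant the paper evaluates.

```python
# Toy sketch of a REINFORCE-style weighting for RLHF. The training loss would
# be -sum(w_i * logprob_i) over sampled completions; here we compute only the
# per-sample weights w_i = r_i - baseline (advantage under a mean baseline).
def reinforce_weights(rewards, baseline=None):
    if baseline is None:
        baseline = sum(rewards) / len(rewards)  # simple mean-reward baseline
    return [r - baseline for r in rewards]

w = reinforce_weights([1.0, 0.0, 0.5])
```

With the mean baseline, the weights sum to zero: above-average completions have their log-probabilities pushed up, below-average ones pushed down. That single subtraction replaces PPO's clipped ratio objective and learned value function in this simplified view.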
The Year We Embraced Our Destruction
The sounds came out of my mouth with an unexpected urgency. The cadence was deliberate--more befitting of an incantation than an order: one large strawberry-lemon-mint Charged Lemonade. The words hung in the air for a moment, giving way to a stillness punctuated only by the soft whir of distant fluorescent lights and the gentle hum of a Muzak cover of Bruce Hornsby's "Mandolin Rain." The time was 9:03 a.m.; the sun had been up for only one hour. I watched the kind woman behind the counter stifle an eye roll, a small mercy for which I will be eternally grateful.