Viome Full Body Intelligence Test Review: Little Clarity, Pricey Supplements

WIRED

Virtually every aspect of your health can be traced back to your microbiome. But some tests are better than others. I admit it: I'm a sucker for metrics. Fitness trackers that keep tabs on my steps and sleep? A DEXA scan that gives me too much information about my body composition? Some of the recipes look tasty.


NAD Supplement 101: Possible Benefits and Precautions Explained (2026)

WIRED

What is NAD+? Here's how it works in your body, why it matters, and whether supplementation is worth the hype. It's more than likely that the NAD+ supplement craze has already crossed your path. The Biebers have infused it. Joe Rogan has podcasted about it. Gwyneth Paltrow swears by it and, of course, sells her own Youth-Boost NAD+ Peptide Rich Cream. NAD+ (short for nicotinamide adenine dinucleotide) is a coenzyme that your body makes naturally--it contributes to energy production and immune function, among other things. The craze reflects a broader shift in how people think about healthy aging and extending their healthspan overall.


Poor Sleep Quality Accelerates Brain Aging

WIRED

Research shows that people who sleep poorly tend to have a brain age older than their actual age. Chronic inflammation caused by poor sleep likely plays a part. While the link between poor sleep and dementia has long been known, it was unclear whether poor sleep habits could cause dementia or whether poor sleep was an early symptom of it. However, new research has revealed that sleep quality may have a direct impact on the rate at which the brain ages. "Our findings provide evidence that poor sleep may contribute to accelerated brain aging," explains Abigail Dove, a neuroepidemiologist at the Karolinska Institute in Sweden, "and point to inflammation as one of the underlying mechanisms."


Data Holds the Key in Slowing Age-Related Illnesses

WIRED

More accurate and individualized health predictions will allow preventive measures to be implemented well in advance. In 2026, we will see the beginning of precision medical forecasting. Just as there have been remarkable advances in weather forecasting with the use of large language models, so will there be in determining an individual's risk of the major age-related diseases (cancer, cardiovascular, and neurodegenerative). These diseases share common threads, such as a long incubation phase, usually two decades or more, before any symptoms manifest. They also share the same biologic underpinnings of immunosenescence and inflammaging--terms that describe an immune system that has lost some of its functionality and protective power, and the accompanying heightened inflammation.


The BEAT-CF Causal Model: A model for guiding the design of trials and observational analyses of cystic fibrosis exacerbations

Mascaro, Steven, Woodberry, Owen, McLeod, Charlie, Messer, Mitch, Selvadurai, Hiran, Wu, Yue, Schultz, Andre, Snelling, Thomas L

arXiv.org Artificial Intelligence

Loss of lung function in cystic fibrosis (CF) occurs progressively, punctuated by acute pulmonary exacerbations (PEx) in which abrupt declines in lung function are not fully recovered. A key component of CF management over the past half century has been the treatment of PEx to slow lung function decline. This has been credited with improvements in survival for people with CF (PwCF), but there is no consensus on the optimal approach to PEx management. BEAT-CF (Bayesian evidence-adaptive treatment of CF) was established to build an evidence-informed knowledge base for CF management. The BEAT-CF causal model is a directed acyclic graph (DAG) and Bayesian network (BN) for PEx that aims to inform the design and analysis of clinical trials comparing the effectiveness of alternative approaches to PEx management. The causal model describes relationships between background risk factors, treatments, and pathogen colonisation of the airways that affect the outcome of an individual PEx episode. The key factors, outcomes, and causal relationships were elicited from CF clinical experts and together represent current expert understanding of the pathophysiology of a PEx episode, guiding the design of data collection and studies and enabling causal inference. Here, we present the DAG that documents this understanding, along with the processes used in its development, providing transparency around our trial design and study processes, as well as a reusable framework for others.
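The abstract describes a DAG relating risk factors, treatments, and outcomes. As a minimal sketch of what such a structure looks like in code (the node names below are invented placeholders, not the published BEAT-CF graph), a DAG can be stored as an adjacency map and validated by topological sort:

```python
from collections import deque

# Toy causal DAG in the spirit of the BEAT-CF model; node names are
# illustrative assumptions only, not the elicited expert graph.
edges = {
    "background_risk": ["lung_function", "pathogen_colonisation"],
    "pathogen_colonisation": ["pex_severity"],
    "treatment": ["pex_severity"],
    "lung_function": ["pex_severity"],
    "pex_severity": ["recovery"],
    "recovery": [],
}

def topological_order(graph):
    """Return a topological order of the nodes, or raise if a cycle exists."""
    indegree = {node: 0 for node in graph}
    for children in graph.values():
        for child in children:
            indegree[child] += 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in graph[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(graph):
        raise ValueError("graph contains a cycle; not a DAG")
    return order

print(topological_order(edges))
```

A valid topological order is exactly the guarantee a causal DAG needs: every cause precedes its effects, which is what lets a Bayesian network factorize the joint distribution over these variables.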




MACD: Multi-Agent Clinical Diagnosis with Self-Learned Knowledge for LLM

Li, Wenliang, Yan, Rui, Zhang, Xu, Chen, Li, Zhu, Hongji, Zhao, Jing, Li, Junjun, Li, Mengru, Cao, Wei, Jiang, Zihang, Wei, Wei, Zhang, Kun, Zhou, Shaohua Kevin

arXiv.org Artificial Intelligence

Large language models (LLMs) have demonstrated notable potential in medical applications, yet they face substantial challenges in handling complex real-world clinical diagnoses using conventional prompting methods. Current prompt engineering and multi-agent approaches typically optimize isolated inferences, neglecting the accumulation of reusable clinical experience. To address this gap, this study proposes a novel Multi-Agent Clinical Diagnosis (MACD) framework, which allows LLMs to self-learn clinical knowledge via a multi-agent pipeline that summarizes, refines, and applies diagnostic insights. It mirrors how physicians develop expertise through experience, enabling more focused and accurate diagnosis on key disease-specific cues. We further extend it to a MACD-human collaborative workflow, where multiple LLM-based diagnostician agents engage in iterative consultations, supported by an evaluator agent and human oversight for cases where agreement is not reached. Evaluated on 4,390 real-world patient cases across seven diseases using diverse open-source LLMs (Llama-3.1 8B/70B, DeepSeek-R1-Distill-Llama 70B), MACD significantly improves primary diagnostic accuracy, outperforming established clinical guidelines with gains up to 22.3% (MACD). In direct comparison with physician-only diagnosis under the same evaluation protocol, MACD achieves comparable or superior performance, with improvements up to 16%. Furthermore, the MACD-human workflow yields an 18.6% improvement over physician-only diagnosis, demonstrating the synergistic potential of human-AI collaboration. Notably, the self-learned clinical knowledge exhibits strong cross-model stability, transferability across LLMs, and capacity for model-specific personalization. This work thus presents a scalable self-learning paradigm that bridges the gap between the intrinsic knowledge of LLMs and the demands of real-world clinical diagnosis.
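The consult-until-consensus loop with human escalation described above can be sketched with stub agents in place of LLM calls. Everything here is a hypothetical stand-in, not the MACD API: `make_diagnostician`, `consult`, and the symptom cues are invented, and the self-learned knowledge is reduced to a cue-to-diagnosis map that agents consult first.

```python
# Hypothetical sketch of a MACD-style consensus loop. Real diagnostician
# agents would be LLMs; these stubs map symptom cues to diagnoses so the
# example runs standalone.

def make_diagnostician(rules):
    """Stand-in for an LLM agent: checks self-learned knowledge, then its
    own rules, returning the first diagnosis whose cue matches a symptom."""
    def diagnose(symptoms, knowledge):
        for cue, dx in list(knowledge.items()) + list(rules.items()):
            if cue in symptoms:
                return dx
        return "unknown"
    return diagnose

def consult(agents, symptoms, knowledge, max_rounds=3):
    """Iterate until all diagnostician agents agree; otherwise escalate
    the case to a human physician, as in the MACD-human workflow."""
    for _ in range(max_rounds):
        votes = {agent(symptoms, knowledge) for agent in agents}
        if len(votes) == 1:
            return votes.pop(), "consensus"
    return None, "escalate_to_physician"

knowledge = {"polyuria": "diabetes"}  # accumulated cue -> diagnosis
agents = [
    make_diagnostician({"chest_pain": "angina"}),
    make_diagnostician({"chest_pain": "angina", "cough": "bronchitis"}),
]
print(consult(agents, {"polyuria", "fatigue"}, knowledge))
```

The design point the sketch captures is that the knowledge store is shared and consulted before each agent's own priors, which is what makes the learned experience reusable across models.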


Does vagus nerve stimulation work? A scientific cure-all explained.

Popular Science

From treating seizures to depression, stimulating the body's longest nerve has real benefits. We all have two vagus nerves--one on the left side of the body and one on the right--both of which connect the brain to the intestines. On TikTok, vagus nerve stimulation sounds like a miracle cure.


Evaluation of Causal Reasoning for Large Language Models in Contextualized Clinical Scenarios of Laboratory Test Interpretation

Bhasuran, Balu, Prosperi, Mattia, Hanna, Karim, Petrilli, John, Washington, Caretia JeLayne, He, Zhe

arXiv.org Artificial Intelligence

This study evaluates causal reasoning in large language models (LLMs) using 99 clinically grounded laboratory test scenarios aligned with Pearl's Ladder of Causation: association, intervention, and counterfactual reasoning. We examined common laboratory tests such as hemoglobin A1c, creatinine, and vitamin D, and paired them with relevant causal factors including age, gender, obesity, and smoking. Two LLMs--GPT-o1 and Llama-3.2-8b-instruct--were tested, with responses evaluated by four medically trained human experts. GPT-o1 demonstrated stronger discriminative performance (overall AUROC = 0.80 ± 0.12) than Llama-3.2-8b-instruct (0.73 ± 0.15), with higher scores across association (0.75 vs 0.72), intervention (0.84 vs 0.70), and counterfactual reasoning (0.84 vs 0.69). Sensitivity (0.90 vs 0.84) and specificity (0.93 vs 0.80) were also greater for GPT-o1, and reasoning ratings showed similar trends. Both models performed best on intervention questions and worst on counterfactuals, particularly in altered-outcome scenarios. These findings suggest GPT-o1 provides more consistent causal reasoning, but refinement is required before adoption in high-stakes clinical applications.
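Pearl's three rungs can be made concrete with a toy structural causal model. The equation, coefficients, and measured values below are illustrative assumptions, not figures from the study; they only show how association, intervention, and counterfactual queries differ on the same graph (obesity -> HbA1c):

```python
# Toy linear structural causal model (SCM) illustrating Pearl's ladder.
# All numbers are invented for illustration, not clinical values.

def hba1c(obesity, u):
    """Structural equation: HbA1c = 5.0 + 1.5 * obesity + individual noise u."""
    return 5.0 + 1.5 * obesity + u

# Rung 1, association: gap observed between groups. Here obesity is the
# only cause, so the observed gap happens to equal the causal effect, 1.5.
assoc = hba1c(1, u=0.0) - hba1c(0, u=0.0)

# Rung 2, intervention: do(obesity=0) for a typical individual (u = 0).
interv = hba1c(0, u=0.0)  # 5.0

# Rung 3, counterfactual: a patient with obesity=1 measured HbA1c = 6.8.
# Abduction: recover that patient's personal noise term from the observation,
u_patient = 6.8 - hba1c(1, u=0.0)  # ~0.3
# then action and prediction: the SAME patient, with obesity forced to 0.
counterfactual = hba1c(0, u=u_patient)  # ~5.3
```

The models' reported weakness on counterfactuals maps onto the extra step this rung demands: an abduction of the individual's latent state before re-running the model, rather than a single forward evaluation.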