particle
Testing for 'Bad Cholesterol' Doesn't Tell the Whole Story
So why don't more doctors use it? For decades, assessing cholesterol risk has been built around a simple idea: lower your "bad" cholesterol and you lower your chance of a heart attack. The test at the center of that approach measures how much low-density lipoprotein, or LDL cholesterol, is circulating in the blood. It has shaped everything from clinical guidelines to the widespread use of statins, medications that reduce LDL. Lowering LDL cholesterol reduces heart attacks, strokes, and early death.
- Europe (0.47)
- North America > United States (0.29)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (1.00)
- Health & Medicine > Therapeutic Area > Endocrinology > Diabetes (0.96)
How time travel could work: Scientists have uncovered a way to send messages into the PAST
Time machines may seem better suited to science fiction than the physics lab, but experts say this futuristic technology could become a reality. Researchers have revealed how time travel could really work by using the laws of quantum physics. While their method won't let you hop back to the time of the dinosaurs, scientists say it could be possible to send messages into the past.
- North America > United States (1.00)
- Europe > United Kingdom (1.00)
- Personal (0.93)
- Research Report > New Finding (0.34)
- Transportation (1.00)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
- (3 more...)
- Information Technology > Communications > Social Media (0.96)
- Information Technology > Communications > Mobile (0.68)
- Information Technology > Artificial Intelligence (0.68)
#AAAI2026 invited talk: machine learning for particle physics
Daniel Whiteson is a particle physicist who uses machine learning and statistical tools to analyze high-energy particle collisions. He is also a dedicated science communicator who has published books and comics and co-hosts a science podcast. In his invited talk at the Fortieth AAAI Conference on Artificial Intelligence (AAAI-26), Daniel shared insights on both these aspects of his career. Daniel works at the Large Hadron Collider (LHC) at CERN, primarily studying proton-proton collisions at 13 TeV, roughly 13,000 times the rest energy of a single proton. The majority of collisions produce known particles, such as electrons or muons.
Sampling for Quality: Training-Free Reward-Guided LLM Decoding via Sequential Monte Carlo
Jelena Markovic-Voronov, Wenhui Zhu, Bo Long, Zhipeng Wang, Suyash Gupta, Kayhan Behdin, Bee-Chung Chen, Deepak Agarwal
We introduce a principled probabilistic framework for reward-guided decoding in large language models, addressing the limitations of standard decoding methods that optimize token-level likelihood rather than sequence-level quality. Our method defines a reward-augmented target distribution over complete sequences by combining model transition probabilities with prefix-dependent reward potentials. Importantly, the approach is training-free: it leaves model weights unchanged and instead modifies the inference distribution via reward potentials, with all gains arising purely from inference-time sampling. To sample from this distribution, we develop Sequential Monte Carlo algorithms, including a computationally efficient prefix-only variant and a lookahead variant whose intermediate targets match the exact marginals of the full sequence distribution. The framework also integrates resample-move updates with Metropolis-Hastings rejuvenation and supports block-wise generation, subsuming common decoding strategies such as temperature sampling and power-tempered objectives. Empirical results across three 7B models show significant gains. On code generation (HumanEval), our method improves base performance by up to 54.9% and surpasses the strongest sampling baselines by 9.1%-15.3%. On mathematical reasoning (MATH500), it achieves gains of up to 8.8%. Notably, it reaches 87.8% on HumanEval and 78.4% on MATH500 with Qwen2.5-7B, consistently outperforming the reinforcement learning method GRPO.
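The core propose-reweight-resample loop behind such samplers can be sketched with a toy character-level stand-in (a minimal illustration, not the paper's algorithm: `lm_probs`, the "count the a's" `reward`, and the inverse temperature `beta` are all invented for the example, and the lookahead and rejuvenation moves described above are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = list("ab$")  # '$' terminates a sequence

def lm_probs(prefix):
    """Toy stand-in for an LLM's next-token distribution."""
    return np.array([0.45, 0.45, 0.10])

def reward(prefix):
    """Prefix-dependent reward potential: here, the number of 'a's."""
    return prefix.count("a")

def smc_decode(n_particles=256, max_len=8, beta=1.0):
    """Propose from the base model, reweight by the reward increment, resample."""
    particles = [""] * n_particles
    weights = np.full(n_particles, 1.0 / n_particles)
    for _ in range(max_len):
        proposed = []
        for p in particles:
            if p.endswith("$"):
                proposed.append(p)          # finished sequences are carried along
                continue
            tok = VOCAB[rng.choice(len(VOCAB), p=lm_probs(p))]
            proposed.append(p + tok)
        # weight by exp(beta * reward increment): the reward-augmented target over prefixes
        inc = np.array([reward(q) - reward(p) for p, q in zip(particles, proposed)])
        weights *= np.exp(beta * inc)
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)  # multinomial resampling
        particles = [proposed[i] for i in idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return particles
```

Raising `beta` biases the surviving particles toward high-reward sequences while every individual token is still proposed by the base model, which is what makes the scheme training-free.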
Scalable Model-Based Clustering with Sequential Monte Carlo
Connie Trojan, Pavel Myshkov, Paul Fearnhead, James Hensman, Tom Minka, Christopher Nemeth
In online clustering problems, there is often a large amount of uncertainty over possible cluster assignments that cannot be resolved until more data are observed. This difficulty is compounded when clusters follow complex distributions, as is the case with text data. Sequential Monte Carlo (SMC) methods give a natural way of representing and updating this uncertainty over time, but have prohibitive memory requirements for large-scale problems. We propose a novel SMC algorithm that decomposes clustering problems into approximately independent subproblems, allowing a more compact representation of the algorithm state. Our approach is motivated by the knowledge base construction problem, and we show that our method is able to accurately and efficiently solve clustering problems in this setting and others where traditional SMC struggles.
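The memory problem the paper targets is visible in the plain SMC baseline: every particle carries a complete set of cluster statistics. A minimal sketch for a 1-D Dirichlet-process Gaussian mixture (this is the standard CRP particle filter, not the decomposed algorithm proposed here; `alpha`, `tau2`, and the unit observation noise are assumed hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_crp(data, n_particles=100, alpha=0.3, tau2=25.0):
    """Plain SMC for a 1-D Dirichlet-process Gaussian mixture.

    CRP concentration `alpha`, cluster means ~ N(0, tau2), unit observation
    noise. Each particle stores one (count, sum) pair per cluster -- the
    per-particle state whose growth motivates the paper's decomposition.
    """
    particles = [[] for _ in range(n_particles)]
    logw = np.zeros(n_particles)
    for t, x in enumerate(data):
        for i, clusters in enumerate(particles):
            probs = []
            for nk, sk in clusters:
                var = 1.0 / (1.0 / tau2 + nk)        # posterior variance of the mean
                mu, pv = var * sk, var + 1.0         # posterior mean, predictive variance
                probs.append(nk * np.exp(-(x - mu) ** 2 / (2 * pv)) / np.sqrt(2 * np.pi * pv))
            pv0 = tau2 + 1.0                         # predictive variance for a new cluster
            probs.append(alpha * np.exp(-x ** 2 / (2 * pv0)) / np.sqrt(2 * np.pi * pv0))
            probs = np.array(probs)
            total = probs.sum()
            logw[i] += np.log(total / (t + alpha))   # incremental marginal likelihood
            k = rng.choice(len(probs), p=probs / total)
            if k == len(clusters):
                clusters.append((1, x))
            else:
                nk, sk = clusters[k]
                clusters[k] = (nk + 1, sk + x)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
        particles = [list(particles[j]) for j in idx]
        logw = np.zeros(n_particles)
    return particles
```

On two well-separated clusters the posterior over assignments concentrates quickly, but note that the particle state grows with the number of clusters and points, which is exactly where this approach becomes prohibitive at scale.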
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > United Kingdom > England (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (4 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Clustering (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
Calorimeter Shower Superresolution with Conditional Normalizing Flows: Implementation and Statistical Evaluation
In High Energy Physics, detailed calorimeter simulations and reconstructions are essential for accurate energy measurements and particle identification, but their high granularity makes them computationally expensive. Developing data-driven techniques capable of recovering fine-grained information from coarser readouts, a task known as calorimeter superresolution, offers a promising way to reduce both computational and hardware costs while preserving detector performance. This thesis investigates whether a generative model originally designed for fast simulation can be effectively applied to calorimeter superresolution. Specifically, the model proposed in arXiv:2308.11700 is re-implemented independently and trained on the CaloChallenge 2022 dataset based on the Geant4 Par04 calorimeter geometry. Finally, the model's performance is assessed through a rigorous statistical evaluation framework, following the methodology introduced in arXiv:2409.16336, to quantitatively test its ability to reproduce the reference distributions.
- Workflow (1.00)
- Research Report > New Finding (0.45)
Alternating Diffusion for Proximal Sampling with Zeroth Order Queries
Hirohane Takagi, Atsushi Nitanda
This work introduces a new approximate proximal sampler that operates solely with zeroth-order information of the potential function. Prior theoretical analyses have revealed that proximal sampling corresponds to alternating forward and backward iterations of the heat flow. The backward step was originally implemented by rejection sampling, whereas we directly simulate the dynamics. Unlike diffusion-based sampling methods that estimate scores via learned models or by invoking auxiliary samplers, our method treats the intermediate particle distribution as a Gaussian mixture, thereby yielding a Monte Carlo score estimator from directly samplable distributions. Theoretically, when the score estimation error is sufficiently controlled, our method inherits the exponential convergence of proximal sampling under isoperimetric conditions on the target distribution. In practice, the algorithm avoids rejection sampling, permits flexible step sizes, and runs with a deterministic runtime budget. Numerical experiments demonstrate that our approach converges rapidly to the target distribution, driven by interactions among multiple particles and by exploiting parallel computation.
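The Gaussian-mixture view yields a closed-form score: smoothing the empirical particle measure by a heat-flow time t gives a mixture whose score is a softmax-weighted pull toward the particles (a sketch consistent with the abstract's description; the variable names and the full sum over particles are our own choices):

```python
import numpy as np

def mixture_score(y, particles, t):
    """Score of the heat-flow-smoothed empirical measure at y.

    rho_t(y) = (1/n) * sum_j N(y; x_j, t*I), so
    grad log rho_t(y) = sum_j w_j(y) * (x_j - y) / t,
    where w_j(y) is a softmax of -||y - x_j||^2 / (2t).
    """
    diff = particles - y                        # (n, d) displacements x_j - y
    logw = -np.sum(diff ** 2, axis=1) / (2.0 * t)
    logw -= logw.max()                          # stabilise the softmax
    w = np.exp(logw)
    w /= w.sum()
    return (w[:, None] * diff).sum(axis=0) / t
```

As a sanity check, when all particles sit at the origin the smoothed measure is exactly N(0, tI) and the score reduces to -y/t.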
Variational PDEs for Acceleration on Manifolds and Application to Diffeomorphisms
We consider the optimization of cost functionals on manifolds and derive a variational approach to accelerated methods on manifolds. We demonstrate the methodology on the infinite-dimensional manifold of diffeomorphisms, motivated by registration problems in computer vision. We build on the variational approach to accelerated optimization of Wibisono, Wilson and Jordan, which applies in finite dimensions, and generalize it to infinite-dimensional manifolds. We derive the continuum evolution equations, which are partial differential equations (PDEs), and relate them to simple mechanical principles. Our approach can also be viewed as a generalization of the $L^2$ optimal mass transport problem: it evolves an infinite number of particles endowed with mass, represented as a mass density.
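For orientation, the finite-dimensional construction of Wibisono, Wilson and Jordan being generalized here is the Bregman Lagrangian; in the Euclidean case with $h(x)=\tfrac12\|x\|^2$ and the standard polynomial scaling, its Euler-Lagrange equation recovers the well-known continuum limit of Nesterov's accelerated method:

```latex
\mathcal{L}(X,\dot X,t) \;=\; e^{\alpha_t+\gamma_t}\Bigl(D_h\bigl(X+e^{-\alpha_t}\dot X,\;X\bigr)-e^{\beta_t}f(X)\Bigr),
\qquad
\ddot X_t+\frac{3}{t}\,\dot X_t+\nabla f(X_t)=0 .
```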
Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks
The performance of neural networks on high-dimensional data distributions suggests that it may be possible to parameterize a representation of a given high-dimensional function with controllably small errors, potentially outperforming standard interpolation methods. We demonstrate, both theoretically and numerically, that this is indeed the case. We map the parameters of a neural network to a system of particles relaxing with an interaction potential determined by the loss function. We show that in the limit that the number of parameters $n$ is large, the landscape of the mean-squared error becomes convex and the representation error in the function scales as $O(n^{-1})$. In this limit, we prove a dynamical variant of the universal approximation theorem showing that the optimal representation can be attained by stochastic gradient descent, the algorithm ubiquitously used for parameter optimization in machine learning. In the asymptotic regime, we study the fluctuations around the optimal representation and show that they arise at a scale $O(n^{-1})$. These fluctuations in the landscape identify the natural scale for the noise in stochastic gradient descent. Our results apply to both single and multi-layer neural networks, as well as standard kernel methods like radial basis functions.
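The parameter-to-particle mapping is easy to make concrete: for a mean-field two-layer net $f(x) = \frac{1}{n}\sum_i c_i \tanh(a_i x + b_i)$, every per-parameter gradient carries a $1/n$ factor, so scaling the step size by $n$ makes each triple $(a_i, b_i, c_i)$ move at an $n$-independent rate, like one particle of an interacting system whose potential is set by the loss (a toy numpy sketch, not the paper's experiments; the $\sin$ target, width, and learning rate are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def train(n=200, steps=4000, lr_scale=0.1):
    """Fit f(x) = (1/n) * sum_i c_i * tanh(a_i*x + b_i) to sin(x) by gradient descent.

    In the mean-field scaling the per-parameter gradients all carry a 1/n
    factor, so the step size is multiplied by n and each particle (a_i, b_i,
    c_i) relaxes at an n-independent rate.
    """
    x = np.linspace(-3.0, 3.0, 64)
    y = np.sin(x)                                   # toy target (our choice)
    a, b, c = (rng.normal(size=n) for _ in range(3))
    lr = lr_scale * n
    for _ in range(steps):
        h = np.tanh(np.outer(x, a) + b)             # hidden activations, (m, n)
        r = (c * h).mean(axis=1) - y                # residuals, (m,)
        s = c * (1.0 - h ** 2)                      # backprop through tanh, (m, n)
        c -= lr * (r[:, None] * h).mean(axis=0) / n
        a -= lr * ((r * x)[:, None] * s).mean(axis=0) / n
        b -= lr * (r[:, None] * s).mean(axis=0) / n
    return float(np.mean(r ** 2))
```

Rerunning `train` with increasing `n` is the natural way to probe the $O(n^{-1})$ representation-error scaling the abstract describes, though verifying the rate carefully requires averaging over initializations.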
Stein Variational Gradient Descent as Moment Matching
Stein variational gradient descent (SVGD) is a non-parametric inference algorithm that evolves a set of particles to fit a given distribution of interest. We analyze the non-asymptotic properties of SVGD, showing that there exists a set of functions, which we call the Stein matching set, whose expectations are exactly estimated by any set of particles that satisfies the fixed point equation of SVGD. This set is the image of Stein operator applied on the feature maps of the positive definite kernel used in SVGD. Our results provide a theoretical framework for analyzing the properties of SVGD with different kernels, shedding insight into optimal kernel choice. In particular, we show that SVGD with linear kernels yields exact estimation of means and variances on Gaussian distributions, while random Fourier features enable probabilistic bounds for distributional approximation. Our results offer a refreshing view of the classical inference problem as fitting Stein's identity or solving the Stein equation, which may motivate more efficient algorithms.
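The fixed-point equation in question arises from the standard SVGD update, which for particles $\{x_j\}$ and kernel $k$ combines a kernel-weighted drift toward high density with a repulsive term (a minimal numpy sketch; the RBF bandwidth `h` and step size `eps` are illustrative rather than tuned):

```python
import numpy as np

def svgd_step(X, grad_logp, h=1.0, eps=0.1):
    """One SVGD update on particles X (n, d) with RBF kernel k(x, y) = exp(-|x - y|^2 / h)."""
    diff = X[:, None, :] - X[None, :, :]             # diff[j, i] = x_j - x_i
    K = np.exp(-np.sum(diff ** 2, axis=-1) / h)      # K[j, i] = k(x_j, x_i)
    attract = K.T @ grad_logp(X)                     # sum_j k(x_j, x_i) grad log p(x_j)
    repulse = -2.0 / h * np.einsum("jid,ji->id", diff, K)  # sum_j grad_{x_j} k(x_j, x_i)
    return X + eps * (attract + repulse) / X.shape[0]

# demo: transport particles initialised near -3 onto a standard normal
rng = np.random.default_rng(0)
X = rng.normal(-3.0, 0.5, size=(200, 1))
for _ in range(1000):
    X = svgd_step(X, lambda Z: -Z)                   # grad log N(0, I) = -x
```

With a linear kernel the fixed point matches the mean and variance of a Gaussian target exactly, as the abstract states; the RBF version above is only approximate, which is one reason kernel choice matters.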