trainability
- North America > United States > Maryland > Prince George's County > College Park (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > California > San Mateo County > Menlo Park (0.04)
- Asia > Middle East > Jordan (0.04)
A Unified Noise-Curvature View of Loss of Trainability
Baveja, Gunbir Singh, Lewandowski, Alex, Schmidt, Mark
Loss of trainability refers to a phenomenon in continual learning where parameter updates no longer make progress on the optimization objective, so accuracy stalls or degrades as the learning problem changes over time. In this paper, we analyze loss of trainability through an optimization lens and find that the phenomenon is not reliably predicted by existing individual indicators such as Hessian rank, sharpness level, weight or gradient norms, gradient-to-parameter ratios, and unit-sign entropy. Motivated by our analysis, we introduce two complementary indicators: a batch-size-aware gradient-noise bound and a curvature-volatility-controlled bound. We then combine these two indicators into a per-layer adaptive noise threshold on the effective step size that anticipates trainability behavior. Using this insight, we propose a step-size scheduler that keeps each layer's effective parameter update below this bound, thereby avoiding loss of trainability. We demonstrate that our scheduler can improve the accuracy maintained by previously proposed approaches, such as concatenated ReLU (CReLU), the Wasserstein regularizer, and L2 weight decay. Surprisingly, our scheduler produces adaptive step-size trajectories that, without tuning, mirror manually engineered step-size decay schedules.
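The mechanics of such a scheduler can be sketched in a few lines. This is a minimal illustration, not the paper's method: the combination rule, the curvature estimate, and the constant `c` are all assumptions introduced here; the abstract only states that a batch-size-aware gradient-noise bound and a curvature-volatility bound are combined into a per-layer cap on the effective step size.

```python
import numpy as np

def noise_aware_step_size(grad_samples, base_lr, curvature_est, c=1.0):
    """Hypothetical sketch: cap one layer's step size so the effective
    update stays below a noise/curvature-derived threshold.

    grad_samples: array of shape (num_microbatches, num_params), the
        per-microbatch gradients for a single layer.
    curvature_est: scalar local-curvature estimate (e.g. a
        Hessian-vector-product norm); assumed given here.
    """
    mean_grad = grad_samples.mean(axis=0)
    # Batch-size-aware gradient-noise estimate: variance of the
    # microbatch gradients around their mean, scaled by 1/B.
    B = grad_samples.shape[0]
    noise = grad_samples.var(axis=0).sum() / B
    # Signal strength of the averaged gradient.
    signal = np.dot(mean_grad, mean_grad)
    # Adaptive threshold: shrinks when noise dominates the signal or
    # when curvature is large (one plausible combination, not the
    # paper's exact bound).
    threshold = c * signal / (signal + noise) / max(curvature_est, 1e-12)
    return min(base_lr, threshold)
```

In use, each layer would call this per update with its own gradient samples and curvature estimate, so noisy or sharply curved layers automatically receive smaller steps than the base learning rate.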
- North America > Canada > Alberta (0.14)
- North America > Canada > British Columbia (0.04)
- Asia > Middle East > Jordan (0.04)
Golden retrievers and humans share 'striking' genetic similarities
The same genes influence intelligence, anxiety, and depression in both species. You're likely not reading too much into your dog's mood: according to researchers at the University of Cambridge, certain genes influencing golden retriever behavior are also traceable to human traits including intelligence, depression, and anxiety. "The findings are really striking," Eleanor Raffan, a neuroscience researcher and study coauthor, said in a statement. "They provide strong evidence that humans and golden retrievers have shared genetic roots for their behavior."
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.25)
- Europe > Ukraine > Kyiv Oblast > Chernobyl (0.05)
- Asia > South Korea (0.05)
- Retail (0.72)
- Health & Medicine > Therapeutic Area > Neurology (0.36)
Dissecting Quantum Reinforcement Learning: A Systematic Evaluation of Key Components
Lazaro, Javier, Vazquez, Juan-Ignacio, Garcia-Bringas, Pablo
Parameterised quantum circuit (PQC)-based Quantum Reinforcement Learning (QRL) has emerged as a promising paradigm at the intersection of quantum computing and reinforcement learning (RL). By design, PQCs create hybrid quantum-classical models, but their practical applicability remains uncertain due to training instabilities, barren plateaus (BPs), and the difficulty of isolating the contribution of individual pipeline components. In this work, we dissect PQC-based QRL architectures through a systematic experimental evaluation of three aspects recurrently identified as critical: (i) data embedding strategies, with Data Reuploading (DR) as an advanced approach; (ii) ansatz design, particularly the role of entanglement; and (iii) post-processing blocks after quantum measurement, with a focus on the underexplored Output Reuse (OR) technique. Using a unified PPO-CartPole framework, we perform controlled comparisons between hybrid and classical agents under identical conditions. Our results show that OR, though purely classical, exhibits distinct behaviour in hybrid pipelines, that DR improves trainability and stability, and that stronger entanglement can degrade optimisation, offsetting classical gains. Together, these findings provide controlled empirical evidence of the interplay between quantum and classical contributions, and establish a reproducible framework for systematic benchmarking and component-wise analysis in QRL.
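The Data Reuploading idea — re-encoding the classical input between trainable layers rather than embedding it once — can be illustrated with a minimal single-qubit simulation. This is a hedged sketch under simplifying assumptions: the paper's pipelines use multi-qubit PQCs with entangling layers inside a PPO agent, whereas here a single qubit is simulated directly with rotation matrices and the function name is invented for illustration.

```python
import numpy as np

def ry(theta):
    # Single-qubit Y-rotation matrix (real-valued, so plain floats suffice).
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def reupload_expectation(x, thetas):
    """Data Reuploading sketch: alternate data-encoding rotations Ry(x)
    with trainable rotations Ry(theta_l), then measure <Z>.

    x: scalar input feature, re-encoded once per layer.
    thetas: trainable angles, one per layer.
    """
    state = np.array([1.0, 0.0])  # start in |0>
    for theta in thetas:
        # Each layer re-uploads the data before its trainable rotation.
        state = ry(theta) @ ry(x) @ state
    # <Z> = |amplitude of |0>|^2 - |amplitude of |1>|^2
    return state[0] ** 2 - state[1] ** 2
```

Because the input `x` reappears in every layer, the circuit's output is a richer (multi-frequency) function of `x` than a single encoding would allow, which is the intuition behind DR improving trainability.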
- Europe > Spain > Basque Country > Biscay Province > Bilbao (0.40)
- North America > United States (0.14)
- North America > Canada > Quebec > Montreal (0.04)
- (2 more...)
- Oceania > Australia > New South Wales > Sydney (0.14)
- Europe > Switzerland > Zürich > Zürich (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- (9 more...)
- North America > United States > Wisconsin (0.04)
- North America > United States > Texas (0.04)
- Europe > Germany > Saarland > Saarbrücken (0.04)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Cologne (0.04)
When Expressivity Meets Trainability: Fewer than n Neurons Can Work
Modern neural networks are often quite wide, incurring large memory and computation costs. It is thus of great interest to train a narrower network. However, training narrow neural nets remains a challenging task. We ask two theoretical questions: Can narrow networks have expressivity as strong as that of wide ones?
- Asia > Myanmar > Tanintharyi Region > Dawei (0.05)
- Asia > China > Guangdong Province > Shenzhen (0.05)
- Asia > China > Hong Kong (0.04)
- (2 more...)
- North America > United States > Wisconsin (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)