Emergence of hybrid computational dynamics through reinforcement learning
Kononov, Roman A., Pospelov, Nikita A., Anokhin, Konstantin V., Nekorkin, Vladimir V., Maslennikov, Oleg V.
Understanding how learning algorithms shape the computational strategies that emerge in neural networks remains a fundamental challenge in machine intelligence. While network architectures receive extensive attention, the role of the learning paradigm itself in determining emergent dynamics remains largely unexplored. Here we demonstrate that reinforcement learning (RL) and supervised learning (SL) drive recurrent neural networks (RNNs) toward fundamentally different computational solutions when trained on identical decision-making tasks. Through systematic dynamical systems analysis, we reveal that RL spontaneously discovers hybrid attractor architectures, combining stable fixed-point attractors for decision maintenance with quasi-periodic attractors for flexible evidence integration. This contrasts sharply with SL, which converges almost exclusively to simpler fixed-point-only solutions. We further show that RL sculpts functionally balanced neural populations through a powerful form of implicit regularization -- a structural signature that enhances robustness and is conspicuously absent in the more heterogeneous solutions found by SL-trained networks. The prevalence of these complex dynamics in RL is controllably modulated by weight initialization and correlates strongly with performance gains, particularly as task complexity increases. Our results establish the learning algorithm as a primary determinant of emergent computation, revealing how reward-based optimization autonomously discovers sophisticated dynamical mechanisms that are less accessible to direct gradient-based optimization. These findings provide both mechanistic insights into neural computation and actionable principles for designing adaptive AI systems.
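The fixed-point portion of such a dynamical systems analysis can be sketched numerically: treat the autonomous RNN update as a map F and minimize the "kinetic energy" q(h) = 0.5*||F(h) - h||^2, whose near-zero minima are fixed (or slow) points. The sketch below uses a random weight matrix in place of a trained network; the size N, learning rate, and tanh update rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # hidden units (illustrative size, not from the paper)

# A random recurrent weight matrix stands in for a trained network.
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
b = rng.normal(scale=0.1, size=N)

def energy(h):
    """q(h) = 0.5 * ||F(h) - h||^2 for F(h) = tanh(W h + b);
    minima with q ~ 0 are fixed points of the autonomous dynamics."""
    r = np.tanh(W @ h + b) - h
    return 0.5 * r @ r

def find_fixed_point(h0, iters=3000, lr=0.1):
    """Gradient descent on q with backtracking: only improving steps
    are accepted, so the energy is monotonically non-increasing."""
    h, q = h0.copy(), energy(h0)
    for _ in range(iters):
        f = np.tanh(W @ h + b)
        r = f - h
        J = (1 - f**2)[:, None] * W   # Jacobian of tanh(W h + b)
        g = (J - np.eye(N)).T @ r     # gradient of q w.r.t. h
        h_new = h - lr * g
        q_new = energy(h_new)
        if q_new < q:
            h, q = h_new, q_new
        else:
            lr *= 0.5                 # overshoot: shrink the step
    return h, q

h0 = rng.normal(size=N)
h_star, q_star = find_fixed_point(h0)
```

Running the search from many initial states and clustering the resulting minima is the usual way to map out a network's attractor structure.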
- Asia > Russia (0.05)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
- Europe > Russia > Volga Federal District > Nizhny Novgorod Oblast > Nizhny Novgorod (0.04)
Revealed: What humans will look like in 1,000 years, according to scientists
Looking back at our primate ancestors, it would be easy to assume that humans today have reached the final chapter of our evolution. However, many scientists believe that the way humans appear today is just the start of the story. Thanks to technology, space travel, and climate change, the world around us is changing faster than ever - and experts believe that humanity will change with it. Now, artificial intelligence (AI) reveals what the humans of the future might look like. With Google's ImageFX AI image generator, MailOnline has used predictions from leading scientists to imagine how the human race might evolve.
- South America > Brazil (0.06)
- Africa > Mauritius (0.06)
- North America > United States > Wisconsin > Dane County > Madison (0.05)
- (3 more...)
Animashree Anandkumar (University of California, Irvine), Daniel Hsu (Columbia University), Sham Kakade (Microsoft Research)
Overcomplete latent representations have been very popular for unsupervised feature learning in recent years. In this paper, we specify which overcomplete models can be identified given observable moments of a certain order. We consider probabilistic admixture or topic models in the overcomplete regime, where the number of latent topics can greatly exceed the size of the observed word vocabulary. While general overcomplete topic models are not identifiable, we establish generic identifiability under a constraint, referred to as topic persistence. Our sufficient conditions for identifiability involve a novel set of "higher order" expansion conditions on the topic-word matrix or the population structure of the model. This set of higher-order expansion conditions allows for overcomplete models and requires the existence of a perfect matching from latent topics to higher-order observed words. We establish that random structured topic models are identifiable with high probability in the overcomplete regime. Our identifiability results allow for general (non-degenerate) distributions for modeling the topic proportions; thus, we can handle arbitrarily correlated topics in our framework. Our identifiability results imply uniqueness of a class of tensor decompositions with structured sparsity, which is contained in the class of Tucker decompositions but is more general than the Candecomp/Parafac (CP) decomposition.
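For reference, the containment asserted in the last sentence can be written out explicitly. In standard textbook notation (factor vectors a, b, c and core tensor G; this is the generic form, not notation from the paper):

```latex
% Tucker decomposition of a third-order tensor T with core tensor G:
T \;=\; \sum_{i=1}^{r_1}\sum_{j=1}^{r_2}\sum_{k=1}^{r_3}
        G_{ijk}\, a_i \otimes b_j \otimes c_k
% The CP decomposition is the special case of a superdiagonal core
% (G_{ijk} = \lambda_i when i = j = k, and 0 otherwise):
T \;=\; \sum_{i=1}^{r} \lambda_i\, a_i \otimes b_i \otimes c_i
```

Decompositions with a sparse but non-diagonal core thus sit strictly between the two classes.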
- North America > United States > California > Orange County > Irvine (0.76)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (4 more...)
Causal Estimation with Functional Confounders
Puli, Aahlad, Perotte, Adler J., Ranganath, Rajesh
Causal inference relies on two fundamental assumptions: ignorability and positivity. We study causal inference when the true confounder value can be expressed as a function of the observed data; we call this setting estimation with functional confounders (EFC). In this setting, ignorability is satisfied, however positivity is violated, and causal inference is impossible in general. We consider two scenarios where causal effects are estimable. First, we discuss interventions on a part of the treatment called functional interventions and a sufficient condition for effect estimation of these interventions called functional positivity. Second, we develop conditions for nonparametric effect estimation based on the gradient fields of the functional confounder and the true outcome function. To estimate effects under these conditions, we develop Level-set Orthogonal Descent Estimation (LODE). Further, we prove error bounds on LODE's effect estimates, evaluate our methods on simulated and real data, and empirically demonstrate the value of EFC.
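The geometric idea behind gradient-field-based estimation can be illustrated with a toy sketch: to vary the input while holding a functional confounder f fixed, project each step onto the tangent space of f's level set. This is only an illustration of the level-set geometry, not the authors' LODE procedure; the function f, the step size, and the direction v below are all invented for the example.

```python
import numpy as np

def grad(f, x, eps=1e-5):
    """Central-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def level_set_step(f, x, v, step=1e-2):
    """Move x along v after projecting out the component normal to the
    level set of f, so that f stays approximately constant."""
    n = grad(f, x)
    n /= np.linalg.norm(n)
    v_tan = v - (v @ n) * n   # tangential component of v
    return x + step * v_tan

# Toy confounder: f(x) = ||x||^2, whose level sets are circles.
f = lambda x: float(x @ x)
x = np.array([1.0, 0.0])
for _ in range(100):
    x = level_set_step(f, x, np.array([0.0, 1.0]))
print(f(x))  # stays close to its initial value 1.0
```

Each projected step changes f only at second order in the step size, which is why small steps can trace out a level set accurately.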
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Virginia (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- (3 more...)
Origins and genetic legacy of prehistoric dogs
Dogs were the first domesticated animal, likely originating from human-associated wolves, but their origin remains unclear. Bergstrom et al. sequenced 27 ancient dog genomes from multiple locations near to, and corresponding in time to, comparable ancient-human DNA sites (see the Perspective by Pavlidis and Somel). By analyzing these genomes along with other ancient and modern dog genomes, the authors found that dogs likely arose once from a now-extinct wolf population. They also found that at least five distinct dog populations existed ∼10,000 years before the present, and that European dog ancestry was largely replaced at later dates. Furthermore, some aspects of dog population genetics mirror those of humans, whereas others differ, pointing to a complex ancestral history for humanity's best friend. Science, this issue p. 557; see also p. 522. Dogs were the first domestic animal, but little is known about their population history and to what extent it was linked to humans. We sequenced 27 ancient dog genomes and found that all dogs share a common ancestry distinct from present-day wolves, with limited gene flow from wolves since domestication but substantial dog-to-wolf gene flow. By 11,000 years ago, at least five major ancestry lineages had diversified, demonstrating a deep genetic history of dogs during the Paleolithic. Coanalysis with human genomes reveals aspects of dog population history that mirror humans, including Levant-related ancestry in Africa and early agricultural Europe. Other aspects differ, including the impacts of steppe pastoralist expansions in West and East Eurasia and a near-complete turnover of Neolithic European dog ancestry.
- Europe > Sweden (0.14)
- Asia > Middle East > Iran (0.06)
- Asia > East Asia (0.05)
- (11 more...)
Self Organizing Classifiers and Niched Fitness
Vargas, Danilo Vasconcellos, Takano, Hirotaka, Murata, Junichi
Learning classifier systems are adaptive learning systems that have been widely applied across a multitude of application domains. However, some generalization problems remain unsolved: fitness and niching pressures are difficult to balance. Here, a new algorithm called Self Organizing Classifiers is proposed that approaches this problem from a different perspective. Instead of balancing the two pressures, they are separated, and no balance is necessary. The proposed algorithm possesses a dynamic population structure that self-organizes to better project the input space onto a map. The niched fitness concept is defined along with this dynamic population structure; both are indispensable for understanding the proposed method. Promising results are shown on two continuous multi-step problems, one of which is more challenging than previous problems of this class in the literature.
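The map-projection idea is reminiscent of a Kohonen self-organizing map, in which a grid of nodes adapts toward the inputs and grid neighbors are dragged along. The sketch below is a generic SOM, not the paper's classifier-population algorithm; the grid size, learning rate, and neighborhood width are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

grid = 5                                  # 5x5 map of nodes
W = rng.random((grid, grid, 2))           # node weights in 2-D input space
coords = np.stack(
    np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), axis=-1
)                                         # grid position of each node

def train_step(x, lr=0.2, sigma=1.0):
    """One SOM update: find the best-matching unit, then pull it and
    its grid neighbors toward the input x."""
    d = np.linalg.norm(W - x, axis=-1)             # node-to-input distances
    bmu = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
    g = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(g ** 2) / (2 * sigma ** 2))       # neighborhood kernel
    W[...] = W + lr * h[..., None] * (x - W)

for _ in range(2000):
    train_step(rng.random(2))             # uniform inputs on the unit square
```

After training, the nodes tile the input distribution, so each node can own a niche of the input space, which is the separation-of-pressures intuition in miniature.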
- Asia > Japan > Kyūshū & Okinawa > Kyūshū > Fukuoka Prefecture > Fukuoka (0.05)
- North America > United States > New York (0.04)
- North America > United States > Michigan (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Evolutionary Systems (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.93)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Fuzzy Logic (0.47)
This Mutation Math Shows How Life Keeps on Evolving
Natural selection has been a cornerstone of evolutionary theory ever since Darwin. Yet mathematical models of natural selection have often been dogged by an awkward problem that seemed to make evolution harder than biologists understood it to be. In a new paper appearing in Communications Biology, a multidisciplinary team of scientists in Austria and the United States identify a possible way out of the conundrum. Their answer still needs to be checked against what happens in nature, but in any case, it could be useful for biotechnology researchers and others who need to promote natural selection under artificial circumstances. Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences. A central premise of the theory of evolution through natural selection is that when beneficial mutations appear, they should spread throughout a population.
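That premise can be made quantitative with a textbook Wright-Fisher simulation (a standard population-genetics model, not the new analysis the article describes): a single mutant of relative fitness 1+s reaches fixation with probability roughly 2s for small s, far above the neutral value 1/N.

```python
import numpy as np

rng = np.random.default_rng(0)

def fixation_prob(N, s, trials=2000):
    """Wright-Fisher simulation: start with one beneficial mutant
    (fitness 1+s) in a population of size N; return the fraction of
    runs in which the mutant takes over the whole population."""
    fixed = 0
    for _ in range(trials):
        k = 1                     # current number of mutant copies
        while 0 < k < N:
            # selection-weighted mutant frequency for the next generation
            p = k * (1 + s) / (k * (1 + s) + (N - k))
            k = rng.binomial(N, p)
        if k == N:
            fixed += 1
    return fixed / trials

print(fixation_prob(N=100, s=0.05))  # roughly 0.1, i.e. about 2s
```

Most runs lose the mutant to drift within a few generations, which is exactly the "awkward problem": even clearly beneficial mutations usually fail to spread.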
- Europe > Austria (0.25)
- North America > United States (0.25)
- Oceania > New Zealand (0.05)
- (2 more...)
A Sparse Graph-Structured Lasso Mixed Model for Genetic Association with Confounding Correction
Ye, Wenting, Liu, Xiang, Wang, Haohan, Xing, Eric P.
While the linear mixed model (LMM) has shown competitive performance in correcting spurious associations arising from population stratification, family structure, and cryptic relatedness, challenges remain in handling the complex structure of genotypic and phenotypic data. For example, geneticists have discovered that some clusters of phenotypes are more co-expressed than others. Hence, a joint analysis that can exploit such relatedness information in a heterogeneous data set is crucial for genetic modeling. We propose the sparse graph-structured linear mixed model (sGLMM), which incorporates relatedness information across traits while correcting for confounding. Our method uncovers the genetic associations of a large number of phenotypes jointly while accounting for the relatedness of these phenotypes. Through extensive simulation experiments, we show that the proposed model outperforms existing approaches and can model correlation arising from both population structure and shared signals. Further, we validate the effectiveness of sGLMM on real-world genomic datasets from two species, plants and humans. On Arabidopsis thaliana data, sGLMM outperforms all other baseline models on 63.4% of traits. We also discuss potentially causal genetic variants for human Alzheimer's disease discovered by our model and justify some of the most important genetic loci.
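A generic form of the graph-structured penalty such a model can use is the graph-guided fused lasso, which adds, on top of the usual sparsity term, a fusion term encouraging related traits to share coefficients. This is an illustrative objective, not the exact sGLMM formulation (here Y are the traits, X the genotypes, B the coefficient matrix with columns β, and E the trait-relatedness graph with edge weights w):

```latex
\min_{B}\; \|Y - XB\|_F^2
  \;+\; \lambda_1 \|B\|_1
  \;+\; \lambda_2 \sum_{(j,k)\in E} w_{jk}\,
        \big\|\beta_{\cdot j} - \beta_{\cdot k}\big\|_1
```

The fusion term shrinks the coefficient vectors of connected traits toward each other, which is one way to encode the co-expression structure the abstract describes.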
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- Asia > China > Beijing > Beijing (0.05)
- Europe (0.04)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.48)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.96)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.94)