science
Do you need more sleep in fall and winter? Probably.
Do you need more sleep in fall and winter? Less sunlight, colder weather, and diet changes make us sleepier, and that's OK. Winter mornings make staying under the covers feel impossible to resist. It's a crisp fall day in mid-November, and though your calendar is filled with evening get-togethers and morning runs, you're feeling sluggish.
Marrying Causal Representation Learning with Dynamical Systems for Science
Causal representation learning promises to extend causal models to hidden causal variables from raw entangled measurements. However, most progress has focused on proving identifiability results in different settings, and we are not aware of any successful real-world application. At the same time, the field of dynamical systems has benefited from deep learning and scaled to countless applications, but it does not allow parameter identification. In this paper, we draw a clear connection between the two and their key assumptions, allowing us to apply identifiable methods developed in causal representation learning to dynamical systems. At the same time, we can leverage scalable differentiable solvers developed for differential equations to build models that are both identifiable and practical.
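The core idea, a differentiable solver makes system parameters identifiable by gradient descent, can be illustrated with a toy example. This is a minimal sketch under assumed dynamics (dx/dt = -a·x) with a finite-difference gradient standing in for the autodiff of a real differentiable solver; it is not the paper's actual method.

```python
import numpy as np

def euler_solve(a, x0, dt, steps):
    # Forward-Euler integration of the toy dynamics dx/dt = -a * x.
    # The whole rollout is a smooth function of a, which is what lets
    # us identify the parameter from observed trajectories below.
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-a * xs[-1]))
    return np.array(xs)

# Synthetic "measurements" generated with a ground-truth parameter.
true_a = 0.7
data = euler_solve(true_a, x0=1.0, dt=0.1, steps=50)

def loss(a):
    # Squared trajectory-fit error for a candidate parameter.
    return float(np.mean((euler_solve(a, 1.0, 0.1, 50) - data) ** 2))

# Gradient descent on the fit error; a central finite difference
# approximates the gradient a differentiable solver would supply.
a_hat, lr, eps = 0.1, 0.05, 1e-5
for _ in range(500):
    grad = (loss(a_hat + eps) - loss(a_hat - eps)) / (2 * eps)
    a_hat -= lr * grad
# a_hat should now be close to true_a
```

In this one-parameter linear system the fit is well behaved, so plain gradient descent recovers the parameter; the identifiability questions the paper addresses arise when the causal variables themselves are hidden behind entangled measurements.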
The Trump Administration Is Turning Science Against Itself
The damage the Trump administration has done to science in a few short months is both well documented and incalculable, but in recent days that assault has taken an alarming twist. Its latest project is not firing researchers or pulling funds, although there's still plenty of that going on. Three "dire wolves" are born in an undisclosed location in the continental United States, and the media goes wild. This is big news for Game of Thrones fans and anyone interested in "de-extinction," the promise of bringing back long-vanished species. There's a lot to unpack here: Are these dire wolves really dire wolves?
Why don't we remember being a baby? New clues in memory mystery.
What's the earliest memory you can recall? While many people's recollections of the past may stretch back into childhood, research shows that the trip down memory lane generally hits a wall once you reach infancy. In some ways, this doesn't make much sense: after all, the first years of a baby's life are when they learn foundational psychological concepts, form relationships with caregivers, and gain a sense of self. Experts have long attributed this "infant amnesia" to the development timeline of the hippocampus, the region of the brain responsible for retaining memories. But according to new evidence from a team at Yale University, the explanation for our early memory blocks may be a bit more complicated.
Using the Tools of Cognitive Science to Understand Large Language Models at Different Levels of Analysis
Ku, Alexander, Campbell, Declan, Bai, Xuechunzi, Geng, Jiayi, Liu, Ryan, Marjieh, Raja, McCoy, R. Thomas, Nam, Andrew, Sucholutsky, Ilia, Veselovsky, Veniamin, Zhang, Liyi, Zhu, Jian-Qiao, Griffiths, Thomas L.
Modern artificial intelligence systems, such as large language models, are increasingly powerful but also increasingly hard to understand. Recognizing this problem as analogous to the historical difficulties in understanding the human mind, we argue that methods developed in cognitive science can be useful for understanding large language models. We propose a framework for applying these methods based on Marr's three levels of analysis. By revisiting established cognitive science techniques relevant to each level and illustrating their potential to yield insights into the behavior and internal organization of large language models, we aim to provide a toolkit for making sense of these new kinds of minds.
Aristotle's Original Idea: For and Against Logic in the era of AI
The ideas that he raised in his study of logical reasoning carried the development of science over the centuries. Any scientific theory's mathematical formalization falls under his idea of Demonstrative Science. Today, in the era of AI, this title of the fatherhood of logic has a renewed significance. Behind it lies his original idea that human reasoning could be studied as a process, and that perhaps there exist universal systems of reasoning that underlie all human reasoning, irrespective of the content of what we are reasoning about. This is a daring idea, as it essentially says that the human mind can study itself, and indeed that it has the capacity to unravel its own self. Irrespective of whether this is possible or not, it is a thought that is a prerequisite for the existence and development of Artificial Intelligence. In this article, we look into Aristotle's work on human thought, his work on reasoning itself but also on how it relates to science and human endeavour more generally, from a modern perspective of Artificial Intelligence, and ask if this can help enlighten our understanding of AI and Science more generally.
LLM Agents Display Human Biases but Exhibit Distinct Learning Patterns
We investigate the choice patterns of Large Language Models (LLMs) in the context of Decisions from Experience tasks that involve repeated choice and learning from feedback, and compare their behavior to human participants. We find that, on the aggregate, LLMs appear to display behavioral biases similar to humans: both exhibit underweighting of rare events and correlation effects. However, more nuanced analyses of the choice patterns reveal that this happens for very different reasons. LLMs exhibit strong recency biases, unlike humans, who appear to respond in more sophisticated ways. While these different processes may lead to similar behavior on average, choice patterns contingent on recent events differ vastly between the two groups. Specifically, phenomena such as "surprise triggers change" and the "wavy recency effect of rare events" are robustly observed in humans but entirely absent in LLMs. Our findings provide insights into the limitations of using LLMs to simulate and predict humans in learning environments and highlight the need for refined analyses of their behavior when investigating whether they replicate human decision-making tendencies.
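One standard account of why experience-based choices underweight rare events is reliance on small samples of recent outcomes. The sketch below is my illustration of that mechanism, not the paper's experimental setup: an agent compares a small random sample of past risky payoffs to a sure payoff. Tiny samples (an extreme form of recency reliance) rarely contain the rare win, so the risky option is undervalued even though its true expected value is higher.

```python
import random

def choice_rate_risky(sample_size, p=0.05, payoff=40.0, safe=1.0,
                      trials=10000, seed=0):
    # Decisions-from-experience sketch: the risky option pays 40 with
    # probability 0.05 (expected value 2.0), else 0; the safe option
    # always pays 1.0. Before each choice the agent recalls a random
    # sample of past risky outcomes and compares the sample mean to
    # the safe payoff. Small samples often contain no rare win, so the
    # rare event is effectively underweighted.
    rng = random.Random(seed)
    history = [payoff if rng.random() < p else 0.0 for _ in range(1000)]
    risky = 0
    for _ in range(trials):
        sample = [rng.choice(history) for _ in range(sample_size)]
        risky += (sum(sample) / sample_size) > safe
    return risky / trials

# An agent leaning on only a couple of recent experiences chooses the
# (better) risky option far less often than one that averages broadly.
small_sample_rate = choice_rate_risky(sample_size=2)
large_sample_rate = choice_rate_risky(sample_size=50)
```

With a sample of 2, the risky option is chosen only when the sample happens to include a win (probability roughly 1 - 0.95², about 10%); with a sample of 50 it is chosen most of the time, which is the underweighting-versus-averaging contrast the abstract refers to.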
Why the Brain Cannot Be a Digital Computer: History-Dependence and the Computational Limits of Consciousness
This paper presents a novel information-theoretic proof demonstrating that the human brain as currently understood cannot function as a classical digital computer. Through systematic quantification of distinguishable conscious states and their historical dependencies, we establish that the minimum information required to specify a conscious state exceeds the physical information capacity of the human brain by a significant factor. Our analysis calculates the bit-length requirements for representing consciously distinguishable sensory "stimulus frames" and demonstrates that consciousness exhibits mandatory temporal-historical dependencies that multiply these requirements beyond the brain's storage capabilities. This mathematical approach offers new insights into the fundamental limitations of computational models of consciousness and suggests that non-classical information processing mechanisms may be necessary to account for conscious experience.
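To make the style of bit-accounting argument concrete, here is a toy version of the calculation. Every number below is an illustrative assumption of mine, not one of the paper's estimates: count distinguishable states, take log2 to get bits per "stimulus frame," then multiply by an assumed frame rate and an assumed window of historical dependence.

```python
import math

# Assumed number of consciously distinguishable sensory states; the
# paper's argument turns on this being very large.
distinguishable_frames = 10 ** 8
bits_per_frame = math.log2(distinguishable_frames)   # ~26.6 bits

# Assumed conscious "frame rate" and history window. Historical
# dependence multiplies the per-frame requirement across the window.
frames_per_second = 10
seconds_of_history = 60 * 60                         # one hour
history_bits = bits_per_frame * frames_per_second * seconds_of_history
```

Even with these modest placeholder numbers the requirement climbs toward a megabit per hour of history per sensory stream; the paper's claim is that a faithful accounting pushes the total past the brain's physical information capacity.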