Reviews: Interval timing in deep reinforcement learning agents
After reading the Author Feedback: The authors responded to all my concerns in an extensive manner. This is an interesting, well-thought-out contribution, and I am happy to increase my score.

Summary: In this paper, the authors investigate how deep reinforcement learning agents with distinct architectures (mainly feed-forward vs. recurrent) learn to solve an interval timing task analogous to the time reproduction task widely used in the human timing literature, implemented in a virtual psychophysics lab (PsychLab/DeepMind Lab). Briefly, in each trial the agent must measure the time interval between a "ready" and a "set" cue, then wait for the same duration before responding by moving its virtual gaze onto a "go" target, so that the duration between the "set" cue and the "go" response matches the interval between "ready" and "set". Time intervals during training are drawn from a discrete uniform distribution.
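The ready-set-go trial structure described in the summary can be sketched as follows. This is a minimal illustration of the task's scoring logic, not the paper's actual environment or reward scheme; the function names, the interval values, and the relative-tolerance rule are assumptions for illustration.

```python
import random

def ready_set_go_trial(sample_interval, produced_interval, tolerance=0.1):
    """Score one trial of a time-reproduction task.

    The agent observes the gap between the "ready" and "set" cues
    (sample_interval, in seconds), then waits before giving its "go"
    response (produced_interval). Here a trial counts as correct when
    the produced interval matches the sample within a relative
    tolerance. (The tolerance rule is an illustrative assumption, not
    the paper's exact reward criterion.)
    """
    error = abs(produced_interval - sample_interval)
    return error <= tolerance * sample_interval

# Training intervals drawn from a discrete uniform distribution, as
# stated in the summary (the specific support is an assumption).
intervals = [0.5, 1.0, 1.5, 2.0]
sample = random.choice(intervals)
```

For example, reproducing a 1.0 s sample with a 1.05 s response would count as correct under a 10% tolerance, while a 1.3 s response would not.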
Large-scale Generative AI Models Lack Visual Number Sense
Testolin, Alberto, Hou, Kuinan, Zorzi, Marco
Humans can readily judge the number of objects in a visual scene, even without counting, and such a skill has been documented in a variety of animal species and in babies prior to language development and formal schooling. Numerical judgments are error-free for small sets, while for larger collections responses become approximate, with variability increasing proportionally to the target number. This response pattern is observed for items of all kinds, despite variation in object features (such as color or shape), suggesting that our visual number sense relies on abstract representations of numerosity. Here, we investigated whether generative Artificial Intelligence (AI) models based on large-scale transformer architectures can reliably name the number of objects in simple visual stimuli or generate images containing a target number of items in the 1-10 range. Surprisingly, none of the foundation models considered performed in a human-like way: They all made striking errors even with small numbers, the response variability often did not increase in a systematic way, and the pattern of errors varied with object category. Our findings demonstrate that advanced AI systems still lack a basic ability that supports an intuitive understanding of numbers, which in humans is foundational for numeracy and mathematical development.
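The response pattern the abstract describes, with variability increasing proportionally to the target number, is known as scalar variability. A minimal simulation of that signature, assuming a Gaussian noise model and an illustrative Weber fraction of 0.15 (neither taken from the paper), looks like this:

```python
import random

def estimate_numerosity(n, weber_fraction=0.15):
    """Noisy numerosity estimate with scalar variability: the standard
    deviation of the estimate grows in proportion to the target n.
    (The Gaussian model and Weber fraction are illustrative assumptions.)"""
    return max(1, round(random.gauss(n, weber_fraction * n)))

random.seed(0)
sds = {}
for n in (2, 8, 32):
    estimates = [estimate_numerosity(n) for _ in range(10000)]
    mean = sum(estimates) / len(estimates)
    sds[n] = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
# Under scalar variability the absolute spread sds[n] grows with n,
# while the coefficient of variation sds[n] / n stays roughly constant
# near the Weber fraction.
```

A human-like system shows this roughly constant coefficient of variation across set sizes; the abstract's finding is that the foundation models tested do not, with response variability that often fails to increase systematically with the target number.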