Reviews: Neural Taskonomy: Inferring the Similarity of Task-Derived Representations from Brain Activity

Neural Information Processing Systems

Having read the authors' response and the other reviews, I think I may have gotten a little overly excited about this paper, but I still think it is innovative and significant. In fact, I found the relationship between the two clusters in Figures 6a and 6b more convincing than I did originally. However, I am dropping my score a notch: while I really like this paper, I believe it is in the top 50%, not the top 15%, of NIPS papers. This is, again, because the analysis is not as rigorous as one would like. Also, after reading the authors' response and the other reviews, I have edited my review a bit below.


Reviews: Neural Taskonomy: Inferring the Similarity of Task-Derived Representations from Brain Activity

Neural Information Processing Systems

This paper pursues the creative idea of using the feature spaces from many different computer vision task networks to help understand fMRI data from different brain regions. The reviewers find the idea interesting, but the results are not particularly clear, definitive, or surprising.


Inferring learning rules from animal decision-making

Neural Information Processing Systems

How animals learn remains an elusive question in neuroscience. Whereas reinforcement learning often focuses on the design of algorithms that enable artificial agents to efficiently learn new tasks, here we develop a modeling framework to directly infer the empirical learning rules that animals use to acquire new behaviors. Specifically, this allows us to: (i) compare different learning rules and objective functions that an animal may be using to update its policy; (ii) estimate distinct learning rates for different parameters of an animal's policy; (iii) identify variations in learning across cohorts of animals; and (iv) uncover trial-to-trial changes that are not captured by normative learning rules. After validating our framework on simulated choice data, we applied our model to data from rats and mice learning perceptual decision-making tasks. We found that certain learning rules were far more capable of explaining trial-to-trial changes in an animal's policy.


HOME-3: High-Order Momentum Estimator with Third-Power Gradient for Convex and Smooth Nonconvex Optimization

Zhang, Wei, Zidan, Arif Hassan, Jahin, Afrar, Bao, Yu, Liu, Tianming

arXiv.org Artificial Intelligence

Momentum-based gradients are essential for optimizing advanced machine learning models, as they not only accelerate convergence but also advance optimizers to escape stationary points. While most state-of-the-art momentum techniques utilize lower-order gradients, such as the squared first-order gradient, there has been limited exploration of higher-order gradients, particularly those raised to powers greater than two. In this work, we introduce the concept of high-order momentum, where momentum is constructed using higher-power gradients, with a focus on the third-power of the first-order gradient as a representative case. Our research offers both theoretical and empirical support for this approach. Theoretically, we demonstrate that incorporating third-power gradients can improve the convergence bounds of gradient-based optimizers for both convex and smooth nonconvex problems. Empirically, we validate these findings through extensive experiments across convex, smooth nonconvex, and nonsmooth nonconvex optimization tasks. Across all cases, high-order momentum consistently outperforms conventional low-order momentum methods, showcasing superior performance in various optimization problems.
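As a rough illustration of the idea, the sketch below mirrors an Adam-style update but accumulates the third power of the gradient and normalizes by its cube root. This is a hedged reconstruction of the concept of third-power momentum, not the authors' exact HOME-3 algorithm; the hyperparameters and normalization are assumptions.

```python
import numpy as np

def home3_step(w, grad, m, v, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One hypothetical third-power-momentum update (illustrative sketch).

    Like Adam, but the second-moment accumulator is replaced by an
    exponential average of grad**3, scaled by its cube-root magnitude.
    """
    m = beta1 * m + (1 - beta1) * grad          # first moment (standard)
    v = beta2 * v + (1 - beta2) * grad ** 3     # third-power moment (assumed form)
    denom = np.abs(v) ** (1.0 / 3.0) + eps      # cube-root scaling
    w = w - lr * m / denom
    return w, m, v
```

On a simple convex objective such as f(w) = w², iterating this update from w = 3 drives w toward the minimum at 0, with the cube-root denominator playing the role Adam's root-mean-square term usually plays.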


Review for NeurIPS paper: Inferring learning rules from animal decision-making

Neural Information Processing Systems

Weaknesses: As the authors point out (lines 148-151), the result relies strongly on the assumption that the correct learning model is REINFORCE, which I think is a very strong assumption. It would be better supported by literature showing that animals can do, or are doing, similar learning. Also, as the authors note, their model is descriptive. Given the nature of a descriptive model, I feel I don't gain much insight from it about how animals learn. For example, the authors found a non-zero update to the bias weight on incorrect trials, which explains the "incorrect" behavior of repeatedly choosing the wrong option. This sounds like noise in the behavior to me, and the model does not explain it further beyond it being noise.


Review for NeurIPS paper: Inferring learning rules from animal decision-making

Neural Information Processing Systems

I want to thank the authors for preparing the detailed rebuttal. This paper was discussed among all the reviewers during the post-rebuttal discussion phase. Overall, all the reviewers are excited about the research topic on inferring the learning rule of animals. There was a clear consensus that the paper should be accepted. The rebuttal did help clarify some of the reviewers' questions and steer their decisions towards acceptance.


Inferring the Future by Imagining the Past

Neural Information Processing Systems

A single panel of a comic book can say a lot: it can depict not only where the characters currently are, but also their motions, their motivations, their emotions, and what they might do next. More generally, humans routinely infer complex sequences of past and future events from a static snapshot of a dynamic scene, even in situations they have never seen before. In this paper, we model how humans make such rapid and flexible inferences. Building on a long line of work in cognitive science, we offer a Monte Carlo algorithm whose inferences correlate well with human intuitions in a wide variety of domains, while only using a small, cognitively-plausible number of samples. Our key technical insight is a surprising connection between our inference problem and Monte Carlo path tracing, which allows us to apply decades of ideas from the computer graphics community to this seemingly-unrelated theory of mind task.


Neural Taskonomy: Inferring the Similarity of Task-Derived Representations from Brain Activity

Neural Information Processing Systems

Convolutional neural networks (CNNs) trained for object classification have been widely used to account for visually-driven neural responses in both human and primate brains. However, because of the generality and complexity of object classification, despite the effectiveness of CNNs in predicting brain activity, it is difficult to draw specific inferences about neural information processing from CNN-derived representations. To address this problem, we used learned representations drawn from 21 computer vision tasks to construct encoding models for predicting brain responses from BOLD5000---a large-scale dataset comprising fMRI scans collected while observers viewed over 5000 naturalistic scene and object images. Encoding models based on task features predict activity in different regions across the whole brain. Features from 3D tasks such as keypoint/edge detection explain greater variance than features from 2D tasks---a pattern observed across the whole brain.
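An encoding model of the kind described maps task-network features to voxel responses, typically with regularized linear regression. The sketch below is a minimal closed-form ridge fit under that assumption; the feature extraction, regularization strength, and data shapes are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def fit_encoding_model(task_features, voxel_responses, alpha=1.0):
    """Fit a ridge-regression encoding model (illustrative sketch).

    task_features:  (n_images, n_features) activations from a task network.
    voxel_responses: (n_images, n_voxels) BOLD responses to the same images.
    Returns W of shape (n_features, n_voxels) such that X @ W ~ Y.
    """
    X, Y = task_features, voxel_responses
    n_features = X.shape[1]
    # Closed-form ridge solution: W = (X^T X + alpha * I)^{-1} X^T Y
    W = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)
    return W
```

Comparing the cross-validated variance explained by models built from different tasks' features, voxel by voxel, is what lets one ask which task representations best account for which brain regions.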

