

A Bayesian method for reducing bias in neural representational similarity analysis

Neural Information Processing Systems

In neuroscience, the similarity matrix of neural activity patterns in response to different sensory stimuli or under different cognitive states reflects the structure of neural representational space. Existing methods derive point estimations of neural activity patterns from noisy neural imaging data, and the similarity is calculated from these point estimations. We show that this approach translates structured noise from estimated patterns into spurious bias structure in the resulting similarity matrix, which is especially severe when signal-to-noise ratio is low and experimental conditions cannot be fully randomized in a cognitive task. We propose an alternative Bayesian framework for computing representational similarity in which we treat the covariance structure of neural activity patterns as a hyper-parameter in a generative model of the neural data, and directly estimate this covariance structure from imaging data while marginalizing over the unknown activity patterns. Converting the estimated covariance structure into a correlation matrix offers a much less biased estimate of neural representational similarity. Our method can also simultaneously estimate a signal-to-noise map that informs where the learned representational structure is supported more strongly, and the learned covariance matrix can be used as a structured prior to constrain Bayesian estimation of neural activity patterns.
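The final step the abstract describes, converting an estimated covariance structure into a correlation matrix, can be sketched as follows. This is a minimal illustration of that single conversion, not the paper's full Bayesian estimation procedure; the matrix `U` is an assumed, made-up stand-in for a covariance structure learned by marginalizing over activity patterns.

```python
import numpy as np

# Hypothetical estimated covariance U of neural activity patterns across
# three conditions (in the paper's framework, U would be learned from data;
# here it is fixed purely to illustrate the covariance -> correlation step).
U = np.array([[2.0, 1.2, 0.3],
              [1.2, 1.5, 0.4],
              [0.3, 0.4, 1.0]])

# Divide each entry U_ij by sigma_i * sigma_j to obtain the correlation
# matrix, which serves as the (less biased) representational similarity.
d = np.sqrt(np.diag(U))
similarity = U / np.outer(d, d)

# Sanity checks: unit diagonal and symmetry.
assert np.allclose(np.diag(similarity), 1.0)
assert np.allclose(similarity, similarity.T)
```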





45c166d697d65080d54501403b433256-AuthorFeedback.pdf

Neural Information Processing Systems

The reviewers acknowledge that the ideas presented in the paper are compelling, sound, and appear to be effective (R3), offering a great addition to the GP literature (R1), which is also supported by a solid and interesting theoretical foundation (R2, R4). Existing multi-output GP models are not applicable to our setting (see lines 79-83) and are thus not comparable to the DAG-GP model. We have further clarified this point in Section 1.2.



0d5a4a5a748611231b945d28436b8ece-Paper.pdf

Neural Information Processing Systems

Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. We seek to build models that 'know what they do not know' by introducing inductive biases in the function space.


Continuous Mean-Covariance Bandits

Neural Information Processing Systems

Specifically, in CMCB, there is a learner who sequentially chooses weight vectors on given options and observes random feedback according to the decisions. The agent's objective is to achieve the best trade-off between reward and risk, measured with option covariance. To capture different reward observation scenarios in practice, we consider three feedback settings, i.e., full-information, semi-bandit, and full-bandit feedback. We propose novel algorithms with optimal regrets (within logarithmic factors), and provide matching lower bounds to validate their optimality. The experimental results also demonstrate the superiority of our algorithms.
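The reward-risk trade-off described above is commonly written as a mean-variance objective. The sketch below, with assumed notation (mean rewards `theta`, option covariance `Sigma`, risk weight `rho`), shows what such an objective looks like for one weight vector; it is not the paper's algorithm, only the quantity a CMCB-style learner would trade off.

```python
import numpy as np

def mean_variance(w, theta, Sigma, rho=1.0):
    """Expected reward of the weighted combination minus rho times its
    variance, computed from the option covariance Sigma."""
    return w @ theta - rho * (w @ Sigma @ w)

# Illustrative two-option instance (all values made up).
theta = np.array([0.5, 0.3])                    # mean rewards
Sigma = np.array([[0.2, 0.05], [0.05, 0.1]])    # option covariance
w = np.array([0.6, 0.4])                        # chosen weight vector

value = mean_variance(w, theta, Sigma)          # 0.42 - 0.112 = 0.308
```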



Multi-Group Quadratic Discriminant Analysis via Projection

Wang, Yuchao, Wang, Tianying

arXiv.org Machine Learning

Multi-group classification arises in many prediction and decision-making problems, including applications in epidemiology, genomics, finance, and image recognition. Although classification methods have advanced considerably, much of the literature focuses on binary problems, and available extensions often provide limited flexibility for multi-group settings. Recent work has extended linear discriminant analysis to multiple groups, but more general methods are still needed to handle complex structures such as nonlinear decision boundaries and group-specific covariance patterns. We develop Multi-Group Quadratic Discriminant Analysis (MGQDA), a method for multi-group classification built on quadratic discriminant analysis. MGQDA projects high-dimensional predictors onto a lower-dimensional subspace, which enables accurate classification while capturing nonlinearity and heterogeneity in group-specific covariance structures. We derive theoretical guarantees, including variable selection consistency, to support the reliability of the procedure. In simulations and a gene-expression application, MGQDA achieves competitive or improved predictive performance compared with existing methods while selecting group-specific informative variables, indicating its practical value for high-dimensional multi-group classification problems. Supplementary materials for this article are available online.
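For readers unfamiliar with the quadratic-discriminant rule that MGQDA builds on, a textbook multi-group QDA classifier can be sketched in a few lines. This is the standard Gaussian discriminant rule with group-specific covariances, not the MGQDA projection procedure itself; all data and names below are illustrative.

```python
import numpy as np

def qda_fit(X, y):
    """Estimate per-group mean, covariance, and prior (textbook QDA)."""
    params = {}
    for g in np.unique(y):
        Xg = X[y == g]
        params[g] = (Xg.mean(axis=0),
                     np.cov(Xg, rowvar=False),
                     len(Xg) / len(X))
    return params

def qda_predict(x, params):
    """Assign x to the group maximizing the quadratic discriminant score."""
    def score(mu, S, prior):
        diff = x - mu
        return (-0.5 * np.linalg.slogdet(S)[1]
                - 0.5 * diff @ np.linalg.solve(S, diff)
                + np.log(prior))
    return max(params, key=lambda g: score(*params[g]))

# Two well-separated synthetic groups with different covariances.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1.0, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
params = qda_fit(X, y)
pred = qda_predict(np.array([3.0, 3.0]), params)  # falls in group 1
```

Note how the per-group covariance in the score is what yields quadratic (rather than linear) decision boundaries, which is the heterogeneity MGQDA exploits in its projected subspace.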