
Collaborating Authors

 Gabrielli, Andrea


Accelerating Discovery in Natural Science Laboratories with AI and Robotics: Perspectives and Challenges from the 2024 IEEE ICRA Workshop, Yokohama, Japan

arXiv.org Artificial Intelligence

Fundamental breakthroughs across many scientific disciplines are becoming increasingly rare (1). At the same time, challenges related to the reproducibility and scalability of experiments, especially in the natural sciences (2,3), remain significant obstacles. For years, automating scientific experiments has been viewed as the key to solving this problem. However, existing solutions are often rigid and complex, designed to address specific experimental tasks with little adaptability to protocol changes. With advancements in robotics and artificial intelligence, new possibilities are emerging to tackle this challenge in a more flexible and human-centric manner.


Noise-cleaning the precision matrix of fMRI time series

arXiv.org Machine Learning

We present a comparison between various algorithms for inferring covariance and precision matrices from small datasets of real vectors, of the typical length and dimension of human brain activity time series retrieved by functional Magnetic Resonance Imaging (fMRI). Assuming a Gaussian model underlying the neural activity, the problem consists of denoising the empirically observed matrices in order to obtain a better estimator of the true precision and covariance matrices. We consider several standard noise-cleaning algorithms and compare them on two types of datasets. The first consists of time series of fMRI brain activity of human subjects at rest; the second of synthetic time series sampled from a generative Gaussian model for which we can vary the ratio of dimensions to samples, q = N/T, and the strength of off-diagonal correlations. The reliability of each algorithm is assessed in terms of test-set likelihood and, in the case of synthetic data, of the distance from the true precision matrix. We observe that the so-called Optimal Rotationally Invariant Estimator, based on Random Matrix Theory, leads to a significantly lower distance from the true precision matrix on synthetic data, and to a higher test likelihood on natural fMRI data. We propose a variant of the Optimal Rotationally Invariant Estimator in which one of its parameters is optimised by cross-validation. In the severe undersampling regime (large q) typical of fMRI series, it outperforms all the other estimators. We furthermore propose a simple algorithm based on iterative likelihood gradient ascent, which provides an accurate estimation for weakly correlated datasets.
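The evaluation pipeline the abstract describes can be sketched in a few lines of numpy: draw synthetic time series from a known Gaussian model at a given ratio q = N/T, then compare a naive empirical precision matrix against a simple noise-cleaning baseline, scoring both by held-out likelihood and by distance from the true precision matrix. The baseline below is linear shrinkage toward the identity with its weight picked on a validation split (in the spirit of the paper's cross-validated parameter); it is not the Optimal Rotationally Invariant Estimator or the authors' gradient-ascent algorithm, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic generative Gaussian model (illustrative parameters).
N, T = 50, 100                   # q = N/T = 0.5: moderate undersampling
rho = 0.3                        # strength of off-diagonal correlations
C_true = (1 - rho) * np.eye(N) + rho * np.ones((N, N))
P_true = np.linalg.inv(C_true)   # true precision matrix

L = np.linalg.cholesky(C_true)
X_train = (L @ rng.standard_normal((N, T))).T    # shape (T, N)
X_test = (L @ rng.standard_normal((N, T))).T

def avg_loglik(X, P):
    """Average per-sample log-likelihood of a zero-mean Gaussian with precision P."""
    _, logdet = np.linalg.slogdet(P)
    quad = np.einsum('ti,ij,tj->t', X, P, X).mean()
    return 0.5 * (logdet - quad - X.shape[1] * np.log(2 * np.pi))

# Naive empirical (sample) estimator.
C_emp = X_train.T @ X_train / T
P_emp = np.linalg.inv(C_emp)

# Baseline cleaner: linear shrinkage toward the identity, with the
# shrinkage weight alpha chosen on a held-out validation split.
X_fit, X_val = X_train[:70], X_train[70:]
C_fit = X_fit.T @ X_fit / len(X_fit)
alphas = np.linspace(0.0, 0.9, 10)
val_ll = [avg_loglik(X_val, np.linalg.inv((1 - a) * C_fit + a * np.eye(N)))
          for a in alphas]
best_a = alphas[int(np.argmax(val_ll))]
P_shrunk = np.linalg.inv((1 - best_a) * C_emp + best_a * np.eye(N))

print(f"chosen alpha       : {best_a:.2f}")
print(f"test loglik (emp)  : {avg_loglik(X_test, P_emp):.2f}")
print(f"test loglik (shrunk): {avg_loglik(X_test, P_shrunk):.2f}")
print(f"||P - P_true||_F   : emp {np.linalg.norm(P_emp - P_true):.2f}, "
      f"shrunk {np.linalg.norm(P_shrunk - P_true):.2f}")
```

Both figures of merit from the abstract appear at the end: out-of-sample likelihood (computable on real data too) and Frobenius distance from the true precision matrix (available only when the generative model is known).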