
Real-time nonlinear inversion of magnetic resonance elastography with operator learning

Rivera, Juampablo E. Heras, Neher, Caitlin M., Kurt, Mehmet

arXiv.org Artificial Intelligence

$\textbf{Purpose:}$ To develop and evaluate an operator learning framework for nonlinear inversion (NLI) of brain magnetic resonance elastography (MRE) data, enabling real-time inversion of elastograms with spatial accuracy comparable to NLI. $\textbf{Materials and Methods:}$ In this retrospective study, 3D MRE data from 61 individuals (mean age, 37.4 years; 34 female) were used to develop the framework. A predictive deep operator learning framework (oNLI) was trained using 10-fold cross-validation, with the complex curl of the measured displacement field as input and NLI-derived reference elastograms as output. A structural prior mechanism, analogous to Soft Prior Regularization in the MRE literature, was incorporated to improve spatial accuracy. Subject-level evaluation metrics included Pearson's correlation coefficient, absolute relative error, and the structural similarity index measure between predicted and reference elastograms, computed across brain regions of different sizes to characterize accuracy at multiple spatial scales. Statistical analyses included paired t-tests comparing the proposed oNLI variants to convolutional neural network (CNN) baselines. $\textbf{Results:}$ Whole-brain absolute percent error was 8.4 $\pm$ 0.5 ($\mu'$) and 10.0 $\pm$ 0.7 ($\mu''$) for oNLI versus 15.8 $\pm$ 0.8 ($\mu'$) and 26.1 $\pm$ 1.1 ($\mu''$) for CNNs. Additionally, oNLI outperformed convolutional architectures in Pearson's correlation coefficient, $r$, in the whole brain and across all subregions for both the storage modulus and loss modulus (p < 0.05). $\textbf{Conclusion:}$ The oNLI framework enables real-time MRE inversion (a 30,000x speedup), outperforming CNN-based approaches while maintaining the fine-grained spatial accuracy achievable with NLI in the brain.
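The subject-level evaluation metrics described in the abstract (Pearson's correlation coefficient and absolute percent error between predicted and reference elastograms) can be sketched in pure Python. This is a minimal illustration with toy voxel values; the variable names and numbers are hypothetical, not from the study, and the actual evaluation would run over masked 3D brain volumes:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two flattened elastogram maps."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def abs_percent_error(pred, ref):
    """Mean absolute relative error (%) against the NLI reference."""
    return 100.0 * sum(abs(p - r) / abs(r) for p, r in zip(pred, ref)) / len(pred)

# Toy stiffness values (kPa), purely illustrative
ref = [2.0, 2.4, 1.8, 3.1, 2.7]
pred = [2.1, 2.3, 1.9, 3.0, 2.8]
print(pearson_r(pred, ref))
print(abs_percent_error(pred, ref))
```

The structural similarity index measure (SSIM) also reported in the study involves local windowed statistics and is typically computed with an imaging library rather than by hand.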


Do psychic cells generate consciousness?

Suzuki, Mototaka, Aru, Jaan

arXiv.org Artificial Intelligence

Technological advances in the past decades have begun to enable neuroscientists to address fundamental questions about consciousness in an unprecedented way. Here we review remarkable recent progress in our understanding of cellular-level mechanisms of conscious processing in the brain. Of particular interest are the cortical pyramidal neurons -- the "psychic cells," as Ramón y Cajal called them more than 100 years ago -- which possess an intriguing cellular mechanism that accounts for the selective disruption of feedback signaling in the brain upon anesthetic-induced loss of consciousness. Importantly, a particular class of metabotropic receptors distributed over the dendrites of pyramidal cells is highlighted as the key cellular mechanism. After all, Cajal's instinct over a century ago may turn out to be correct -- we may have just begun to understand whether and how psychic cells indeed generate and control our consciousness.


Partitioned Memory Storage Inspired Few-Shot Class-Incremental Learning

Zhang, Renye, Yin, Yimin, Zhang, Jinghua

arXiv.org Artificial Intelligence

Current mainstream deep learning techniques exhibit an over-reliance on extensive training data and a lack of adaptability to the dynamic world, marking a considerable disparity from human intelligence. To bridge this gap, Few-Shot Class-Incremental Learning (FSCIL) has emerged, focusing on continual learning of new categories from limited samples without forgetting old knowledge. Existing FSCIL studies typically use a single model to learn knowledge across all sessions, inevitably leading to the stability-plasticity dilemma. Unlike machines, humans store varied knowledge in different cerebral cortices. Inspired by this characteristic, our paper develops a method that learns an independent model for each session, which inherently prevents catastrophic forgetting. During the testing stage, our method integrates Uncertainty Quantification (UQ) for model deployment. Our method provides a fresh viewpoint on FSCIL and demonstrates state-of-the-art performance on the CIFAR-100 and mini-ImageNet datasets.
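One plausible way to integrate per-session models with uncertainty quantification at test time is to route each sample to the session model with the lowest predictive entropy. The paper's exact UQ scheme is not detailed in the abstract, so the sketch below is an assumption-laden illustration; the session names and logits are hypothetical:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy of a probability vector (nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def route_by_uncertainty(session_logits):
    """Pick the session model whose prediction has the lowest entropy,
    a simple uncertainty proxy, and return (session, predicted class)."""
    best_session, best_probs, best_h = None, None, float("inf")
    for session, logits in session_logits.items():
        probs = softmax(logits)
        h = entropy(probs)
        if h < best_h:
            best_session, best_probs, best_h = session, probs, h
    return best_session, best_probs.index(max(best_probs))

# Hypothetical logits from two per-session models for one test image:
# session_1 is confidently predicting class 0, so it wins the routing.
outputs = {"session_0": [0.2, 0.1, 0.3], "session_1": [4.0, 0.1, 0.2]}
print(route_by_uncertainty(outputs))
```

Because each session's model is trained and frozen independently, old-session models never see new data, which is what makes catastrophic forgetting structurally impossible in this design.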


MIT maps how the brain experiences movies

Popular Science

Our brains have to do a lot of work when we watch a movie. There are plots to follow, dialogue to interpret, visuals to take in, and more. Now, scientists have created a detailed map of how the human brain functions during the process. Using data from functional magnetic resonance imaging (fMRI), a team from the Massachusetts Institute of Technology mapped which brain networks activate when subjects watch clips from a range of movies. They also saw how different executive networks in the brain are prioritized when watching easy versus difficult scenes.


Murine AI excels at cats and cheese: Structural differences between human and mouse neurons and their implementation in generative AIs

Saiga, Rino, Shiga, Kaede, Maruta, Yo, Inomoto, Chie, Kajiwara, Hiroshi, Nakamura, Naoya, Kakimoto, Yu, Yamamoto, Yoshiro, Yasutake, Masahiro, Uesugi, Masayuki, Takeuchi, Akihisa, Uesugi, Kentaro, Terada, Yasuko, Suzuki, Yoshio, Nikitin, Viktor, De Andrade, Vincent, De Carlo, Francesco, Yamashita, Yuichi, Itokawa, Masanari, Ide, Soichiro, Ikeda, Kazutaka, Mizutani, Ryuta

arXiv.org Artificial Intelligence

Mouse and human brains have different functions that depend on their neuronal networks. In this study, we analyzed nanometer-scale three-dimensional structures of brain tissues of the mouse medial prefrontal cortex and compared them with structures of the human anterior cingulate cortex. The results indicated that mouse neuronal somata are smaller and neurites are thinner than those of human neurons. These structural features allow mouse neurons to be integrated in the limited space of the brain, though thin neurites should suppress distal connections according to cable theory. We implemented this mouse-mimetic constraint in the convolutional layers of a generative adversarial network (GAN) and a denoising diffusion implicit model (DDIM), which were then subjected to image generation tasks using photo datasets of cat faces, cheese, human faces, and birds. The mouse-mimetic GAN outperformed a standard GAN in the image generation task using the cat faces and cheese photo datasets, but underperformed for human faces and birds. The mouse-mimetic DDIM gave similar results, suggesting that the nature of the datasets affected the outcome. Analyses of the four datasets indicated differences in their image entropy, which should influence the number of parameters required for image generation. The preferences of the mouse-mimetic AIs coincided with the impressions commonly associated with mice. The relationship between the neuronal network and brain function should be investigated by implementing other biological findings in artificial neural networks.
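The image-entropy comparison across datasets can be illustrated with a Shannon entropy over a grayscale intensity histogram. This is a minimal sketch: the pixel values below are toy data, not drawn from the study's photo datasets, and the study's exact entropy definition may differ:

```python
import math

def image_entropy(pixels, levels=256):
    """Shannon entropy (bits) of an 8-bit grayscale intensity histogram.
    Flat, low-texture images concentrate mass in few bins (low entropy);
    highly textured images spread mass across many bins (high entropy)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

# Toy 16-pixel "images", purely illustrative
flat = [0] * 12 + [128] * 4       # two intensity levels: low entropy
textured = list(range(16))        # all values distinct: high entropy
print(image_entropy(flat), image_entropy(textured))
```

Under this measure a dataset of visually simple, homogeneous images (such as close-up cheese photos) would score lower than one with fine-grained, varied detail, which is consistent with the abstract's suggestion that dataset character drives how many parameters the generator needs.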


MRI scans reveal the STUNNING stages of consciousness in the brain

Daily Mail - Science & tech

Enduring questions over which part of the brain helps produce that feeling of being 'awake' have been answered, thanks to stunningly detailed new brain imagery. Researchers' new high-resolution brain scans allowed them to see brain connections at a granular 'submillimeter' level -- meaning down to a tiny 3/100ths of an inch. The images were then used to map a neural network of previously unseen pathways in the brain, called the 'default ascending arousal network' or dAAN, which they now theorize is the core region that helps humans sustain wakeful consciousness. In recent years, neuroscientists studying consciousness have divided the mystery of how the human brain is self-aware into two sub-categories: 'arousal' (wakefulness) and 'awareness' (the subjective experience of being alive). The researchers hope their work exploring the dAAN pathway will help develop new treatments for patients in comas, or with other conditions that hinge on wakefulness.


Delving inside the mind: Incredible graphics reveal what each section of your BRAIN does - with more than 70,000 thoughts processed every single day

Daily Mail - Science & tech

Published in 1909, Korbinian Brodmann's groundbreaking analysis of the brain can still be found in neurology textbooks and on classroom posters to this day. Using a specialized microscope, Brodmann painstakingly analyzed the entire surface of the cerebral cortex based on cellular structure alone. After a decade of effort, he had created the most detailed map of the cerebral cortex yet produced, assigning each region a different number. Over time these areas have been widely used to link brain regions with specific functions, such as area four: the primary motor cortex. This region of the cerebral cortex is believed to control motor movements such as moving the hands and face, as well as breathing and voluntary blinking. Brodmann's areas have also been mapped to functions such as processing numbers, planning, and processing emotions. Of course, the complexity doesn't stop there, as scientists now believe the cortex has at least 180 distinct regions important for language, perception, consciousness, and attention.


Efficacy of MRI data harmonization in the age of machine learning. A multicenter study across 36 datasets

Marzi, Chiara, Giannelli, Marco, Barucci, Andrea, Tessa, Carlo, Mascalchi, Mario, Diciotti, Stefano

arXiv.org Artificial Intelligence

Pooling publicly available MRI data from multiple sites makes it possible to assemble extensive groups of subjects, increase statistical power, and promote data reuse with machine learning techniques. Harmonization of multicenter data is necessary to reduce the confounding effect of non-biological sources of variability in the data. However, when applied to the entire dataset before machine learning, harmonization leads to data leakage, because information from outside the training set may affect model building and potentially falsely overestimate performance. We propose 1) a measurement of the efficacy of data harmonization and 2) a harmonizer transformer, i.e., an implementation of ComBat harmonization that can be encapsulated among the preprocessing steps of a machine learning pipeline, avoiding data leakage. We tested these tools using brain T1-weighted MRI data from 1740 healthy subjects acquired at 36 sites. After harmonization, the site effect was removed or reduced, and we demonstrated the data leakage effect in predicting individual age from MRI data, highlighting that introducing the harmonizer transformer into a machine learning pipeline avoids data leakage.
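The harmonizer-transformer idea (fit harmonization statistics on the training fold only, then apply them unchanged to held-out data) can be sketched with a scikit-learn-style fit/transform API. This is a deliberately simplified location-shift stand-in, not real ComBat, which uses empirical-Bayes shrinkage of site-specific location and scale parameters; the class name, data, and site labels are illustrative:

```python
class SiteHarmonizer:
    """Simplified location-only harmonizer (illustrative stand-in for ComBat).
    Site means are estimated ONLY from training data, so held-out subjects
    never influence the harmonization parameters: no data leakage."""

    def fit(self, X, sites):
        by_site = {}
        for x, s in zip(X, sites):
            by_site.setdefault(s, []).append(x)
        self.site_means = {s: sum(v) / len(v) for s, v in by_site.items()}
        self.grand_mean = sum(X) / len(X)
        return self

    def transform(self, X, sites):
        # Shift each site to the pooled mean; unseen sites pass through.
        return [x - self.site_means.get(s, self.grand_mean) + self.grand_mean
                for x, s in zip(X, sites)]

# Fit on the training fold only, then apply to held-out scans.
train_X, train_sites = [1.0, 1.2, 3.0, 3.2], ["A", "A", "B", "B"]
h = SiteHarmonizer().fit(train_X, train_sites)
test_X = h.transform([1.1, 3.1], ["A", "B"])
print(test_X)
```

Placing such a transformer inside a cross-validation pipeline (e.g., a scikit-learn `Pipeline`) guarantees that the site statistics are re-estimated within each training fold, which is the leakage-avoidance property the paper measures.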