Automated system generates robotic parts for novel tasks

Robohub

An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand. In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators -- devices that mechanically control robotic systems in response to electrical signals -- that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted at an angle when it's activated, however, it portrays the famous Edvard Munch painting "The Scream."


Simple 1-D Convolutional Networks for Resting-State fMRI Based Classification in Autism

arXiv.org Machine Learning

Deep learning methods are increasingly being used with neuroimaging data like structural and functional magnetic resonance imaging (MRI) to predict the diagnosis of neuropsychiatric and neurological disorders. For psychiatric disorders in particular, one of the most promising modalities is believed to be resting-state functional MRI (rsfMRI), which captures the intrinsic connectivity between regions in the brain. Because rsfMRI data points are inherently high-dimensional (~1M), it is impossible to process the entire input in its raw form. In this paper, we propose a very simple transformation of the rsfMRI images that captures all of the temporal dynamics of the signal but sub-samples its spatial extent. As a result, we can use a very simple 1-D convolutional network that is fast to train, requires minimal preprocessing, and performs on par with the state of the art on the classification of Autism spectrum disorders.
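
The abstract does not spell out the transformation, but one common way to keep the temporal dynamics while sub-sampling the spatial extent is to average the 4-D volume into a fixed set of regions and convolve along time. Below is a minimal PyTorch sketch of such a 1-D convolutional classifier; the region count, layer sizes, and two-class output are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class Simple1DCNN(nn.Module):
    """1-D CNN over time: input is (batch, regions, timepoints)."""
    def __init__(self, n_regions=200, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_regions, 64, kernel_size=5, padding=2),  # convolve along time
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # tolerates variable scan lengths
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        h = self.features(x).squeeze(-1)  # (batch, 32)
        return self.classifier(h)

# Toy usage: 8 scans, 200 regions, 150 timepoints.
model = Simple1DCNN()
logits = model(torch.randn(8, 200, 150))
```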


Low-dimensional Embodied Semantics for Music and Language

arXiv.org Machine Learning

Embodied cognition holds that semantics is encoded in the brain as firing patterns of neural circuits, which are learned according to the statistical structure of human multimodal experience. However, each human brain is idiosyncratically biased by its subjective experience history, making this biological semantic machinery noisy with respect to the semantics inherent to media artifacts, such as music and language excerpts. We propose to represent shared semantics using low-dimensional vector embeddings by jointly modeling several brains from human subjects. We show that these efficient unsupervised representations outperform the original high-dimensional fMRI voxel spaces in proxy music genre and language topic classification tasks. We further show that jointly modeling several subjects increases the semantic richness of the learned latent vector spaces.
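
The abstract leaves the embedding model unspecified; as a rough illustration of the evaluation idea (a shared low-dimensional space learned jointly over subjects, compared against a raw voxel space on a proxy classification task), here is a sketch that uses PCA over feature-concatenated subjects as a stand-in for the authors' model. All data and dimensions are toy placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy stand-in data: n_stimuli excerpts, n_voxels features per subject.
rng = np.random.default_rng(0)
n_stimuli, n_voxels, n_subjects = 120, 5000, 3
X_subjects = [rng.standard_normal((n_stimuli, n_voxels)) for _ in range(n_subjects)]
y = rng.integers(0, 4, size=n_stimuli)  # e.g., four genre/topic labels

# "Joint" modeling: concatenate subjects feature-wise so one low-dim
# embedding must explain every brain's response to the same stimulus.
# (PCA is fit on all data here for brevity; a real evaluation would
# fit it inside each cross-validation fold.)
X_joint = np.concatenate(X_subjects, axis=1)
Z = PCA(n_components=20).fit_transform(X_joint)  # low-dim shared embedding

# Proxy classification: shared embedding vs. one subject's raw voxels.
clf = LogisticRegression(max_iter=1000)
print("embedding :", cross_val_score(clf, Z, y, cv=5).mean())
print("raw voxels:", cross_val_score(clf, X_subjects[0], y, cv=5).mean())
```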


From one brain scan, more information for medical artificial intelligence: System helps machine-learning models glean training information for diagnosing and treating brain conditions

#artificialintelligence

An active new area in medicine involves training deep-learning models to detect structural patterns in brain scans associated with neurological diseases and disorders, such as Alzheimer's disease and multiple sclerosis. But collecting the training data is laborious: All anatomical structures in each scan must be separately outlined or hand-labeled by neurological experts. And, in some cases, such as for rare brain conditions in children, only a few scans may be available in the first place. In a paper presented at the recent Conference on Computer Vision and Pattern Recognition, the MIT researchers describe a system that uses a single labeled scan, along with unlabeled scans, to automatically synthesize a massive dataset of distinct training examples. The dataset can be used to better train machine-learning models to find anatomical structures in new scans -- the more training data, the better those predictions.
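
Conceptually, the system learns how unlabeled scans differ from the single labeled scan in both shape and appearance, then applies sampled combinations of those differences to the labeled scan, with the label map following the same spatial warp. The sketch below makes that loop concrete; the randomly generated smooth deformation and intensity fields are toy stand-ins for the transforms the paper learns from unlabeled scans.

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

def warp(volume, flow, order=1):
    """Resample a 3-D volume along a dense displacement field."""
    coords = np.indices(volume.shape).astype(float) + flow
    return map_coordinates(volume, coords, order=order, mode="nearest")

def synthesize(labeled_scan, label_map, spatial_flows, intensity_deltas):
    """Compose sampled spatial and appearance transforms with the one
    labeled scan; the label map follows the same warp, so every
    synthesized scan comes with labels for free."""
    out = []
    for flow, delta in zip(spatial_flows, intensity_deltas):
        scan = warp(labeled_scan + delta, flow, order=1)
        labels = warp(label_map, flow, order=0)  # nearest: keep labels discrete
        out.append((scan, labels))
    return out

# Toy stand-ins for the transforms learned from unlabeled scans:
rng = np.random.default_rng(0)
shape = (32, 32, 32)
scan = rng.standard_normal(shape)
labels = (scan > 0).astype(float)
flows = [gaussian_filter(rng.standard_normal((3, *shape)), sigma=4) * 3
         for _ in range(4)]
deltas = [gaussian_filter(rng.standard_normal(shape), sigma=4) * 0.1
          for _ in range(4)]
dataset = synthesize(scan, labels, flows, deltas)
```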


Researchers develop artificial intelligence tool to help detect brain aneurysms

#artificialintelligence

Doctors could soon get some help from an artificial intelligence tool when diagnosing brain aneurysms -- bulges in blood vessels in the brain that can leak or burst open, potentially leading to stroke, brain damage or death. The AI tool, developed by researchers at Stanford and detailed in a paper published June 7 in JAMA Network Open, highlights areas of a brain scan that are likely to contain an aneurysm. "There's been a lot of concern about how machine learning will actually work within the medical field," said Allison Park, a graduate student in statistics and co-lead author of the paper. "This research is an example of how humans stay involved in the diagnostic process, aided by an artificial intelligence tool." The tool, which is built around an algorithm called HeadXNet, improved clinicians' ability to correctly identify aneurysms at a level equivalent to finding six more aneurysms in 100 scans that contain aneurysms.


Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)

arXiv.org Artificial Intelligence

Neural network models for NLP are typically implemented without the explicit encoding of language rules and yet they are able to break one performance record after another. Despite much work, it is still unclear what the representations learned by these networks correspond to. We propose here a novel approach for interpreting neural networks that relies on the only processing system we have that does understand language: the human brain. We use brain imaging recordings of subjects reading complex natural text to interpret word and sequence embeddings from 4 recent NLP models - ELMo, USE, BERT and Transformer-XL. We study how their representations differ across layer depth, context length, and attention type. Our results reveal differences in the context-related representations across these models. Further, in the transformer models, we find an interaction between layer depth and context length, and between layer depth and attention type. We finally use the insights from the attention experiments to alter BERT: we remove the learned attention at shallow layers, and show that this manipulation improves performance on a wide range of syntactic tasks. Cognitive neuroscientists have already begun using NLP networks to study the brain, and this work closes the loop to allow the interaction between NLP and cognitive neuroscience to be a true cross-pollination.
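
As a concrete starting point for the kind of depth-wise analysis described here, the sketch below pulls per-layer contextual embeddings from BERT with the HuggingFace transformers API. The model name and example sentence are illustrative; the paper's own feature-extraction pipeline may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: a tuple holding the input embeddings plus one tensor
# per layer, each of shape (batch, tokens, hidden). Comparing these
# tensors across depth is the kind of representation the paper aligns
# against brain recordings.
for depth, layer in enumerate(outputs.hidden_states):
    print(f"layer {depth:2d}: {tuple(layer.shape)}")
```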


One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation

arXiv.org Artificial Intelligence

Class imbalance has been one of the major challenges for medical image segmentation. The model cascade (MC) strategy significantly alleviates the class imbalance issue, but in spite of its outstanding performance it leads to undesired system complexity and ignores the relevance among the models. To handle these flaws of MC, we propose a lightweight deep model, the One-pass Multi-task Network (OM-Net), which handles class imbalance better than MC while requiring only one-pass computation for brain tumor segmentation. First, OM-Net integrates the separate segmentation tasks into one deep model. Second, to optimize OM-Net more effectively, we exploit the correlation among tasks to design an online training-data transfer strategy and a curriculum-learning-based training strategy. Third, we propose sharing prediction results between tasks, which enables us to design a cross-task guided attention (CGA) module: with the guidance of the prediction results provided by the previous task, CGA can adaptively recalibrate channel-wise feature responses based on category-specific statistics. Finally, a simple yet effective post-processing method is introduced to refine the segmentation results of the proposed attention network. Extensive experiments demonstrate the effectiveness of the proposed techniques. Most notably, we achieve state-of-the-art performance on the BraTS 2015 and BraTS 2017 datasets, and with the proposed approaches we won joint third place in the BraTS 2018 challenge among 64 participating teams. We will make the code publicly available at https://github.com/chenhong-zhou/OM-Net.
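
As a rough picture of what cross-task guided channel recalibration can look like, here is a squeeze-and-excitation-style stand-in in PyTorch that pools feature statistics under the previous task's predicted foreground and turns them into channel-wise gates. It is an assumption-laden sketch, not the published CGA module.

```python
import torch
import torch.nn as nn

class CrossTaskChannelGate(nn.Module):
    """Sketch of cross-task guided channel attention: features of the
    current task are recalibrated channel-wise using statistics pooled
    under the previous task's predicted foreground mask."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feats, prev_pred):
        # feats: (B, C, D, H, W); prev_pred: (B, 1, D, H, W) in [0, 1]
        mask = prev_pred.clamp_min(1e-6)
        # Category-specific squeeze: average features inside the region
        # the previous task considers foreground.
        stats = (feats * mask).sum(dim=(2, 3, 4)) / mask.sum(dim=(2, 3, 4))
        gate = self.fc(stats)                       # (B, C) channel weights
        return feats * gate[:, :, None, None, None]

# Toy usage:
gate = CrossTaskChannelGate(channels=16)
f = torch.randn(2, 16, 8, 16, 16)
p = torch.rand(2, 1, 8, 16, 16)
out = gate(f, p)  # same shape as f, channel-recalibrated
```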


Data-Efficient Mutual Information Neural Estimator

arXiv.org Machine Learning

Measuring mutual information (MI) between high-dimensional, continuous random variables from observed samples has wide theoretical and practical applications. Recent work, MINE (Belghazi et al. 2018), focused on estimating tight variational lower bounds of MI using neural networks, but assumed an unlimited supply of samples to prevent overfitting. In real-world applications, data is not always available in surplus. In this work, we focus on improving data efficiency and propose a Data-Efficient MINE Estimator (DEMINE), developing a relaxed predictive MI lower bound that can be estimated with orders-of-magnitude higher data efficiency. The predictive MI lower bound also enables a new meta-learning approach using task augmentation, Meta-DEMINE, which improves generalization of the network and further boosts estimation accuracy empirically. With improved data efficiency, our estimators enable statistical testing of dependency at practical dataset sizes. We demonstrate the effectiveness of our estimators on synthetic benchmarks and a real-world fMRI dataset, with an application to inter-subject correlation analysis.
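
For context, the MINE-style bound that DEMINE builds on is the Donsker-Varadhan lower bound, I(X;Y) >= E_{p(x,y)}[T(x,y)] - log E_{p(x)p(y)}[e^{T(x,y)}], maximized over a statistics network T. The sketch below implements that baseline bound in PyTorch on toy correlated Gaussians; DEMINE's relaxed predictive bound and Meta-DEMINE's task augmentation are not shown.

```python
import torch
import torch.nn as nn

class StatNet(nn.Module):
    """Statistics network T(x, y) for the Donsker-Varadhan bound."""
    def __init__(self, dim_x, dim_y, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def dv_lower_bound(T, x, y):
    """I(X;Y) >= E_joint[T] - log E_marginal[exp(T)]."""
    joint = T(x, y).mean()
    y_shuffled = y[torch.randperm(y.size(0))]   # break the pairing
    marginal = T(x, y_shuffled).exp().mean().log()
    return joint - marginal

# Toy usage: correlated Gaussians, maximize the bound w.r.t. T.
x = torch.randn(512, 1)
y = x + 0.5 * torch.randn(512, 1)
T = StatNet(1, 1)
opt = torch.optim.Adam(T.parameters(), lr=1e-3)
for _ in range(200):
    loss = -dv_lower_bound(T, x, y)
    opt.zero_grad(); loss.backward(); opt.step()
print("MI lower-bound estimate:", dv_lower_bound(T, x, y).item())
```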


Tissue segmentation with deep 3D networks and spatial priors

arXiv.org Machine Learning

Conventional automated segmentation of the human head distinguishes different tissues based on image intensities in an MRI volume and prior tissue probability maps (TPM). This works well for normal head anatomies but fails in the presence of unexpected lesions. Deep convolutional neural networks instead leverage volumetric spatial patterns and can be trained to segment lesions, but have thus far not integrated prior probabilities. Here we add to a three-dimensional convolutional network spatial priors with a TPM, morphological priors with conditional random fields, and context with a wider field-of-view at lower resolution. The new architecture, which we call MultiPrior, is a fully trainable three-dimensional convolutional network, so the priors act as learnable spatial memories. When trained on a set of stroke patients and healthy subjects, MultiPrior outperforms state-of-the-art segmentation tools such as DeepMedic and SPM segmentation. The approach is further demonstrated on patients with disorders of consciousness, where we find that cognitive state correlates positively with gray-matter volumes and negatively with the extent of the ventricles. We make the code and trained networks freely available to support future clinical research projects.
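
The central architectural move, feeding the spatial prior to the network, can be sketched as concatenating TPM channels with the MRI volume at the input of a 3-D convolutional network, as below. The layer sizes are illustrative, and the published MultiPrior also includes the CRF-style morphological priors and the low-resolution context pathway, which are omitted here.

```python
import torch
import torch.nn as nn

class PriorAwareSegNet(nn.Module):
    """Sketch: a 3-D conv net whose input is the MRI intensity volume
    concatenated with tissue probability map (TPM) channels, so the
    network can learn how much to trust the spatial prior."""
    def __init__(self, n_tpm=3, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1 + n_tpm, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, n_classes, kernel_size=1),  # per-voxel class scores
        )

    def forward(self, mri, tpm):
        # mri: (B, 1, D, H, W); tpm: (B, n_tpm, D, H, W), aligned to mri
        return self.net(torch.cat([mri, tpm], dim=1))

# Toy usage:
net = PriorAwareSegNet()
logits = net(torch.randn(1, 1, 32, 32, 32), torch.rand(1, 3, 32, 32, 32))
```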


Automated shapeshifting for function recovery in damaged robots

arXiv.org Artificial Intelligence

A robot's mechanical parts routinely wear out from normal functioning and can be lost to injury. For autonomous robots operating in isolated or hostile environments, repair by a human operator is often not possible. Thus, much work has sought to automate damage recovery in robots. However, every case reported in the literature to date has treated the damaged mechanical structure as fixed and focused on learning new ways to control it. Here we show for the first time a robot that automatically recovers from unexpected damage by deforming its resting mechanical structure without changing its control policy. We found that, especially in the case of "deep insult", such as removal of all four of the robot's legs, the damaged machine evolves shape changes that not only recover the original level of function (locomotion) but can in fact surpass the original level of performance (speed). This suggests that, in some cases, shape change may be a better method than control readaptation for recovering function after damage.
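
The outer loop implied here, mutating the resting shape while holding the controller fixed and keeping variants that locomote faster, can be sketched as a simple hill climber. The fitness function below is a deterministic toy stand-in for the soft-robot simulator used in the paper.

```python
import numpy as np

def locomotion_speed(shape, controller):
    """Stand-in fitness: the paper measures this in a soft-robot
    simulator; here it is a fixed random projection, just to make the
    loop runnable. The controller is frozen and unused by this toy."""
    rng = np.random.default_rng(42)          # fixed "physics"
    w = rng.standard_normal(shape.size)
    return float(w @ shape.ravel())

def evolve_shape(initial_shape, controller, generations=200, sigma=0.1):
    """Hill-climb over resting shapes with the control policy frozen."""
    rng = np.random.default_rng(0)
    best, best_fit = initial_shape, locomotion_speed(initial_shape, controller)
    for _ in range(generations):
        candidate = best + sigma * rng.standard_normal(best.shape)
        fit = locomotion_speed(candidate, controller)
        if fit > best_fit:                   # keep shape changes that help
            best, best_fit = candidate, fit
    return best, best_fit

# Toy usage: a damaged robot's resting shape as a small voxel grid.
damaged = np.zeros((4, 4, 4)); damaged[1:3, 1:3, 1:3] = 1.0
shape, speed = evolve_shape(damaged, controller=None)
```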