From voxels to pixels and back: Self-supervision in natural-image reconstruction from fMRI

Neural Information Processing Systems

Reconstructing observed images from fMRI brain recordings is challenging. Unfortunately, acquiring sufficient "labeled" pairs of {Image, fMRI} (i.e., images with their corresponding fMRI responses) to span the huge space of natural images is prohibitive for many reasons. We present a novel approach which, in addition to the scarce labeled data (training pairs), allows fMRI-to-image reconstruction networks to be trained also on "unlabeled" data (i.e., images without fMRI recordings, and fMRI recordings without images). The proposed model utilizes both an Encoder network (image-to-fMRI) and a Decoder network (fMRI-to-image). Concatenating these two networks back-to-back (Encoder-Decoder & Decoder-Encoder) allows augmenting the training data with both types of unlabeled data.
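One way to read the Encoder-Decoder & Decoder-Encoder idea is as two cycle-consistency losses on top of the supervised ones. The sketch below uses linear toy networks and plain gradient descent on simulated data; all dimensions, the learning rate, and the data are made up for illustration (the paper itself uses deep networks):

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_fmri = 16, 8  # toy dimensions, not the paper's

# Linear stand-ins for the Encoder (image -> fMRI) and Decoder (fMRI -> image).
E = rng.normal(scale=0.1, size=(d_fmri, d_img))
D = rng.normal(scale=0.1, size=(d_img, d_fmri))

# Ground-truth linear map, used only to simulate data for this sketch.
W = rng.normal(size=(d_fmri, d_img))
X_pair = rng.normal(size=(d_img, 20))
Y_pair = W @ X_pair                      # scarce labeled {image, fMRI} pairs
X_unl = rng.normal(size=(d_img, 50))     # images without fMRI recordings
Y_unl = W @ rng.normal(size=(d_img, 50)) # fMRI recordings without images

err0 = np.mean((D @ Y_pair - X_pair) ** 2)  # decoding error before training
lr = 1e-3
for _ in range(500):
    # Supervised terms on the labeled pairs.
    gE = (E @ X_pair - Y_pair) @ X_pair.T
    gD = (D @ Y_pair - X_pair) @ Y_pair.T
    # Encoder-Decoder cycle on unlabeled images: D(E(x)) should reproduce x.
    r_img = D @ (E @ X_unl) - X_unl
    gD += r_img @ (E @ X_unl).T
    gE += D.T @ r_img @ X_unl.T
    # Decoder-Encoder cycle on unlabeled fMRI: E(D(y)) should reproduce y.
    r_fmri = E @ (D @ Y_unl) - Y_unl
    gE += r_fmri @ (D @ Y_unl).T
    gD += E.T @ r_fmri @ Y_unl.T
    E -= lr * gE / 70
    D -= lr * gD / 70

err = np.mean((D @ Y_pair - X_pair) ** 2)  # decoding error after training
```

The point of the sketch is that the two cycle losses let the unlabeled images and unlabeled fMRI contribute gradients to both networks, not that this linear toy matches the paper's architecture.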


High resolution neural connectivity from incomplete tracing data using nonnegative spline regression

Neural Information Processing Systems

Whole-brain neural connectivity data are now available from viral tracing experiments, which reveal the connections between a source injection site and elsewhere in the brain. To turn these experiments into a connectivity map, we seek to fit a weighted, nonnegative adjacency matrix among 100 μm brain "voxels" using viral tracer data. Despite a multi-year experimental effort, injections provide incomplete coverage, and the number of voxels in our data is orders of magnitude larger than the number of injections, making the problem severely underdetermined. Furthermore, projection data are missing within the injection site because local connections there are not separable from the injection signal. We use a novel machine-learning algorithm to meet these challenges and develop a spatially explicit, voxel-scale connectivity map of the mouse visual system.
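The flavor of the fitting problem can be sketched as nonnegative least squares with a smoothness penalty: here projected gradient descent with a second-difference operator standing in for the paper's spline regularization. The problem sizes, regularization weight, and step size are all illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_inj = 30, 5  # many voxels, few injections: severely underdetermined

# Simulated tracer data: each injection pattern x yields projections y = W_true . x.
w_true = np.clip(rng.normal(size=n_vox), 0, None)  # nonnegative "connectivity" weights
X = rng.random((n_inj, n_vox))                     # injection coverage patterns
y = X @ w_true

# Second-difference operator penalizes rough weight profiles across neighboring
# voxels, a crude stand-in for the paper's spline smoothness assumption.
L = np.diff(np.eye(n_vox), n=2, axis=0)

lam, lr = 1.0, 1e-3  # hypothetical regularization weight and step size
w = np.zeros(n_vox)
for _ in range(5000):
    grad = X.T @ (X @ w - y) + lam * (L.T @ (L @ w))
    w = np.clip(w - lr * grad, 0, None)  # projection keeps the weights nonnegative

resid = np.linalg.norm(X @ w - y)
```

With far fewer equations than unknowns, it is the combination of the nonnegativity constraint and the smoothness prior that pins down a usable solution.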


Detection of vertebral fractures in CT using 3D Convolutional Neural Networks

#artificialintelligence

Since our task is detection and not segmentation, correctly predicting only a sufficient number of voxels around the vertebra centroid is needed to detect normal or fractured vertebrae in an image. We leverage this observation to construct 3D label images for our training database in a semi-automated fashion. First, radiologist S.R. created a text file with annotations for every vertebra present in the field of view as described in section 2. Next, J.N. enriched these labels with 3D centroid coordinates by manually localizing every vertebra centroid in the image using MeVisLab [8]. This step required an average of less than two minutes per image in our dataset. Finally, we extended the method described by Glocker et al. [6] to automatically generate 3D label images from these sparse annotations.
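The core of turning sparse centroid annotations into dense 3D label images can be sketched as stamping a small ball of labeled voxels around each centroid. This is only an illustration of the idea; the paper's actual method extends Glocker et al. [6], and the radius here is hypothetical:

```python
import numpy as np

def labels_from_centroids(shape, centroids, radius=3):
    """Build a 3D label image by marking a ball of voxels around each
    vertebra centroid (label -> (z, y, x)). Sketch only: the radius and
    the spherical shape are illustrative assumptions."""
    label = np.zeros(shape, dtype=np.int16)
    zz, yy, xx = np.indices(shape)
    for lab, (z, y, x) in centroids.items():
        ball = (zz - z) ** 2 + (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
        label[ball] = lab
    return label

vol_shape = (32, 32, 32)
centroids = {1: (10, 16, 16), 2: (20, 16, 16)}  # toy vertebra centroids
label_img = labels_from_centroids(vol_shape, centroids)
```

Because detection only needs enough correctly labeled voxels near each centroid, such coarse dense labels are sufficient supervision for the 3D CNN.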


The importance of evaluating the complete automated knowledge-based planning pipeline

arXiv.org Artificial Intelligence

We determine how prediction methods combine with optimization methods in two-stage knowledge-based planning (KBP) pipelines to produce radiation therapy treatment plans. We trained two dose prediction methods, a generative adversarial network (GAN) and a random forest (RF), with the same 130 treatment plans. The models were applied to 87 out-of-sample patients to create two sets of predicted dose distributions that were used as input to two optimization models. The first optimization model, inverse planning (IP), estimates weights for dose-objectives from a predicted dose distribution and generates new plans using conventional inverse planning. The second optimization model, dose mimicking (DM), minimizes the sum of one-sided quadratic penalties between the predictions and the generated plans using several dose-objectives. Altogether, four KBP pipelines (GAN-IP, GAN-DM, RF-IP, and RF-DM) were constructed and benchmarked against the corresponding clinical plans using clinical criteria; the error of both prediction methods was also evaluated. The best performing plans were GAN-IP plans, which satisfied the same criteria as their corresponding clinical plans (78%) more often than any other KBP pipeline. However, GAN did not necessarily provide the best prediction for the second-stage optimization models. Specifically, the RF-IP and RF-DM plans satisfied all clinical criteria 25% and 15% more often, respectively, than GAN-DM plans (the worst-performing pipeline). GAN predictions also had a higher mean absolute error (3.9 Gy) than those from RF (3.6 Gy). We find that state-of-the-art prediction methods, when paired with different optimization algorithms, produce treatment plans with considerable variation in quality.
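The one-sided quadratic penalty used in dose mimicking only charges for violations in one direction: overdose relative to the prediction for organ-at-risk-style objectives, underdose for target-style objectives. A minimal sketch (the voxel doses and the split into target/OAR voxels are made up):

```python
import numpy as np

def one_sided_quadratic(plan, pred, side):
    """One-sided quadratic penalty between a plan dose and a predicted dose.
    'upper' penalizes plan dose exceeding the prediction (OAR-style);
    'lower' penalizes plan dose falling short of it (target-style)."""
    diff = plan - pred
    if side == "upper":
        return np.sum(np.clip(diff, 0, None) ** 2)
    return np.sum(np.clip(-diff, 0, None) ** 2)

pred = np.array([60.0, 60.0, 20.0])  # hypothetical predicted voxel doses (Gy)
plan = np.array([58.0, 61.0, 25.0])  # hypothetical candidate plan doses (Gy)

target_pen = one_sided_quadratic(plan[:2], pred[:2], "lower")  # underdose on target
oar_pen = one_sided_quadratic(plan[2:], pred[2:], "upper")     # overdose on OAR
```

DM then minimizes the sum of such penalties over all dose-objectives, so plan quality directly inherits any systematic bias in the predicted dose distribution.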


MIT designs tiny inchworm-like robots to build space settlements on Mars and homes on Earth

Daily Mail - Science & tech

From space settlements to airplanes and homes on Earth -- scientists have developed a new category of robots that could change the way we build high-performance structures. The V-shaped machines, called Bipedal Isotropic Lattice Locomoting Explorers (or BILL-E), have two miniature arms that erect structures piece by piece. These appendages allow the robots to move around like inchworms, opening and closing their bodies in order to travel from one spot to the next. The BILL-E robots were developed by a team at the Massachusetts Institute of Technology, which foresees these tiny robots building everything from space settlements on Mars to airplanes and homes on Earth. Professor Neil Gershenfeld of MIT's Center for Bits and Atoms said: "What's at the heart of this is a new kind of robotics, that we call relative robots."


A Topological "Reading" Lesson: Classification of MNIST using TDA

arXiv.org Machine Learning

We present a way to use Topological Data Analysis (TDA) for machine learning tasks on grayscale images. We apply persistent homology to generate a wide range of topological features using a point cloud obtained from an image, its natural grayscale filtration, and different filtrations defined on the binarized image. We show that this topological machine learning pipeline can serve as a highly relevant dimensionality reduction by applying it to the MNIST digits dataset. We conduct feature selection, study the features' correlations, and provide an intuitive interpretation of their importance, which is relevant in both machine learning and TDA. Finally, we show that we can classify digit images while reducing the size of the feature set by a factor of 5 compared to the grayscale pixel-value features, while maintaining similar accuracy. Introduction: Topological Data Analysis (TDA) [1] applies techniques from algebraic topology to study and extract topological and geometric information about the shape of data. In this paper, we use persistent homology [2], a tool from TDA that extracts features representing the numbers of connected components, cycles, and voids, and their birth and death during an iterative process called a filtration. Each of those features is summarized as a point in a persistence diagram.
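For the 0-dimensional part of such a pipeline, the birth and death of connected components in a grayscale sublevel-set filtration can be computed with a union-find pass over the pixels sorted by intensity. This is a minimal sketch of one ingredient (H0 with 4-connectivity), not the paper's full feature set, which also uses point clouds and binarized filtrations:

```python
import numpy as np

def h0_persistence(img):
    """0-dim persistence pairs of the sublevel-set filtration of a grayscale
    image (4-connectivity), via union-find with the elder rule: when two
    components merge, the one born later dies at the merge value."""
    h, w = img.shape
    order = np.argsort(img, axis=None, kind="stable")  # pixels by increasing value
    parent = -np.ones(h * w, dtype=int)                # -1 = not yet in filtration
    birth = np.full(h * w, np.inf)
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for idx in order:
        v = img.flat[idx]
        parent[idx] = idx
        birth[idx] = v
        r, c = divmod(idx, w)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and parent[nr * w + nc] != -1:
                a, b = find(idx), find(nr * w + nc)
                if a != b:
                    old, young = (a, b) if birth[a] <= birth[b] else (b, a)
                    pairs.append((birth[young], v))  # younger component dies at v
                    parent[young] = old
    for rt in {find(i) for i in range(h * w)}:
        pairs.append((birth[rt], np.inf))  # essential components never die
    return pairs

# Two dark "blobs" (values 0 and 1) that merge through the bright background (5):
img = np.array([[0, 5, 1],
                [5, 5, 5],
                [5, 5, 5]])
pairs = h0_persistence(img)
```

On this toy image the component born at value 1 dies at 5 when the two blobs connect, while the component born at 0 persists forever; these (birth, death) pairs are exactly the points of the H0 persistence diagram.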


Assembler robots make large structures from little pieces

Robohub

Today's commercial aircraft are typically manufactured in sections, often in different locations -- wings at one factory, fuselage sections at another, tail components somewhere else -- and then flown to a central plant in huge cargo planes for final assembly. But what if the final assembly was the only assembly, with the whole plane built out of a large array of tiny identical pieces, all put together by an army of tiny robots? That's the vision that graduate student Benjamin Jenett, working with Professor Neil Gershenfeld in MIT's Center for Bits and Atoms (CBA), has been pursuing as his doctoral thesis work. It's now reached the point that prototype versions of such robots can assemble small structures and even work together as a team to build up larger assemblies. The new work appears in the October issue of IEEE Robotics and Automation Letters, in a paper by Jenett, Gershenfeld, fellow graduate student Amira Abdel-Rahman, and CBA alumnus Kenneth Cheung SM '07, PhD '12, who is now at NASA's Ames Research Center, where he leads the ARMADAS project to design a lunar base that could be built with robotic assembly.



