OpenFab

Communications of the ACM

Three rhinos defined and printed using OpenFab.

State-of-the-art 3D printing hardware is capable of mixing many materials at up to hundreds of dots per inch resolution, using technologies such as photopolymer phase-change inkjet printing. Each layer of the model is ultimately fed to the printer as a full-resolution bitmap in which each "pixel" specifies a single material, and all layers together define on the order of 10⁸ voxels per cubic inch. This poses an enormous computational challenge: large high-resolution prints comprise trillions of voxels and petabytes of data, far too much to directly precompute and store; a single cubic foot at this resolution requires at least 10¹¹ voxels and terabytes of storage. Even for small objects, the computation, memory, and storage demands are large, and simply modeling and describing input with spatially varying material mixtures at this scale is challenging. Existing 3D printing software is insufficient; in particular, most software is designed to support only a few million primitives, with discrete material choices per object. We present OpenFab, a programmable pipeline for synthesis of multimaterial 3D printed objects that is inspired by RenderMan and modern GPU pipelines. The pipeline supports procedural evaluation of geometric detail and material composition, using shader-like fablets, allowing models to be specified easily and efficiently. The pipeline is implemented in a streaming fashion: only a small fraction of the final volume is stored in memory, and output is fed to the printer with little startup delay. We demonstrate it on a variety of multimaterial objects.
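To make the streaming idea concrete, here is a minimal sketch of shader-like, per-voxel material evaluation streamed one layer at a time, so the full volume is never held in memory. All names are illustrative stand-ins, not the actual OpenFab API:

# Minimal sketch of shader-like, per-voxel material evaluation streamed
# layer by layer; only one slice of the volume is in memory at a time.
# All names here are illustrative, not the real OpenFab API.

MATERIALS = ("rigid_white", "rubber_black")

def fablet(x, y, z):
    # A "fablet": procedurally map a voxel center to a material.
    # Here, 0.05-inch stripes along z stand in for real shading logic.
    return MATERIALS[int(z / 0.05) % 2]

def stream_layers(nx, ny, nz, voxel_size, emit_layer):
    # Evaluate the volume one z-slice at a time and hand each finished
    # bitmap to the printer driver; the full volume is never materialized.
    for k in range(nz):
        z = (k + 0.5) * voxel_size
        layer = [[fablet((i + 0.5) * voxel_size, (j + 0.5) * voxel_size, z)
                  for i in range(nx)]
                 for j in range(ny)]
        emit_layer(k, layer)  # e.g., send the bitmap to the printer, then discard it

# Demo: report per-layer material counts instead of driving hardware.
stream_layers(8, 8, 4, 0.025,
              lambda k, layer: print(k, sum(row.count("rigid_white") for row in layer)))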


Automated shapeshifting for function recovery in damaged robots

arXiv.org Artificial Intelligence

A robot's mechanical parts routinely wear out from normal functioning and can be lost to injury. For autonomous robots operating in isolated or hostile environments, repair by a human operator is often not possible. Thus, much work has sought to automate damage recovery in robots. However, every case reported in the literature to date has accepted the damaged mechanical structure as fixed and focused on learning new ways to control it. Here we show for the first time a robot that automatically recovers from unexpected damage by deforming its resting mechanical structure without changing its control policy. We found that, especially in the case of "deep insult", such as removal of all four of the robot's legs, the damaged machine evolves shape changes that not only recover the original level of function (locomotion) but can in fact surpass the original level of performance (speed). This suggests that shape change, rather than control readaptation, may be a better method for recovering function after damage in some cases.
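As a hedged illustration of the idea (searching over the resting shape while the control policy stays frozen), a toy hill-climber might look like the following; evaluate_speed is a stand-in for a physics simulation of the damaged robot, and none of this is the authors' code:

# Toy sketch: recover locomotion after damage by searching over resting
# shape only, with the control policy frozen. evaluate_speed() stands in
# for a physics simulation; it is not the real experimental setup.
import random

def evaluate_speed(shape, controller):
    # Placeholder fitness: how well this resting shape locomotes under
    # the unchanged controller (a simulator would go here in practice).
    return -sum((s - c) ** 2 for s, c in zip(shape, controller))

def evolve_shape(controller, dims=8, generations=200, sigma=0.1):
    shape = [0.0] * dims                     # damaged robot's initial shape
    best = evaluate_speed(shape, controller)
    for _ in range(generations):
        child = [s + random.gauss(0, sigma) for s in shape]
        f = evaluate_speed(child, controller)
        if f >= best:                        # keep shape changes that help
            shape, best = child, f
    return shape, best

controller = [0.5] * 8                       # frozen control policy
shape, speed = evolve_shape(controller)
print(round(speed, 4))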


How morphological development can guide evolution

arXiv.org Artificial Intelligence

Organisms result from adaptive processes interacting across different time scales. One such interaction is that between development and evolution. Models have shown that development sweeps over several traits in a single agent, sometimes exposing promising static traits. Subsequent evolution can then canalize these rare traits. Thus, development can, under the right conditions, increase evolvability. Here, we report on a previously unknown phenomenon when embodied agents are allowed to develop and evolve: evolution discovers body plans that are robust to control changes; these body plans become genetically assimilated, yet the controllers for these agents are not assimilated. This allows evolution to continue climbing fitness gradients by tinkering with the developmental programs for controllers within these permissive body plans. This exposes a previously unknown detail about the Baldwin effect: instead of all useful traits becoming genetically assimilated, only traits that render the agent robust to changes in other traits become assimilated. We refer to this as differential canalization. This finding also has implications for the evolutionary design of artificial and embodied agents such as robots: robots robust to internal changes in their controllers may also be robust to external changes in their environment, such as transferal from simulation to reality or deployment in novel environments.
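A toy model can make the assimilation mechanism concrete: a trait develops (sweeps) from a genetic start value to an end value over the agent's lifetime, lifetime-averaged fitness rewards time spent near a good value, and a simple hill-climber then tends to shrink the developmental window onto that value. This is a sketch under our own assumptions, not the paper's model:

# Toy sketch of development exposing a good trait value that evolution
# then canalizes. A trait sweeps linearly from gene g0 to gene g1 over
# the lifetime; fitness averages performance across the sweep, so once
# the sweep crosses the optimum, selection narrows the window onto it
# (genetic assimilation). Illustrative only; not the paper's model.
import random

OPTIMUM = 0.73

def fitness(g0, g1, steps=20):
    sweep = [g0 + (g1 - g0) * t / (steps - 1) for t in range(steps)]
    return sum(-abs(v - OPTIMUM) for v in sweep) / steps

def evolve(generations=500, sigma=0.05):
    genes = [random.random(), random.random()]  # developmental start/end
    best = fitness(*genes)
    for _ in range(generations):
        child = [g + random.gauss(0, sigma) for g in genes]
        f = fitness(*child)
        if f >= best:
            genes, best = child, f
    return genes

g0, g1 = evolve()
print("start", round(g0, 2), "end", round(g1, 2), "window", round(abs(g1 - g0), 3))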


Identifying Distributed Object Representations in Human Extrastriate Visual Cortex

Neural Information Processing Systems

The category of visual stimuli has been reliably decoded from patterns of neural activity in extrastriate visual cortex [1]. It has yet to be seen whether object identity can be inferred from this activity. We present fMRI data measuring responses in human extrastriate cortex to a set of 12 distinct object images. We use a simple winner-take-all classifier, trained on half the data from each recording session, to evaluate the encoding of object identity across fMRI voxels. Since this approach is sensitive to the inclusion of noisy voxels, we describe two methods for identifying subsets of voxels that optimally distinguish object identity. One method characterizes the reliability of each voxel within subsets of the data, while the other estimates the mutual information of each voxel with the stimulus set. We find that both metrics can identify subsets of the data that reliably encode object identity, even when noisy measurements are artificially added to the data. The mutual information metric is less efficient at this task, likely due to constraints in fMRI data.
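A minimal sketch of this kind of decoding pipeline, with synthetic data standing in for the fMRI responses, might pair split-half reliability selection with a correlation-based winner-take-all classifier (our illustration, not the authors' code):

# Sketch: winner-take-all decoding of object identity from voxel patterns,
# after selecting voxels by split-half reliability. Synthetic data stands
# in for fMRI responses; this is illustrative, not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_obj, n_vox = 12, 200
signal = rng.normal(size=(n_obj, n_vox))
signal[:, 100:] = 0.0                      # half the voxels are pure noise

def acquire(reps):                         # synthetic "fMRI" measurements
    return signal[:, None, :] + rng.normal(scale=1.5, size=(n_obj, reps, n_vox))

train, test = acquire(8), acquire(8)

# Split-half reliability within the training data: keep a voxel if its
# profile across the 12 objects correlates between the two training halves.
h1, h2 = train[:, :4].mean(1), train[:, 4:].mean(1)
rel = np.array([np.corrcoef(h1[:, v], h2[:, v])[0, 1] for v in range(n_vox)])
keep = rel > 0.3

# Winner-take-all: assign each test pattern to the training template
# (per-object mean) with which it correlates most strongly.
templates = train.mean(1)[:, keep]
preds = [np.argmax([np.corrcoef(t, tpl)[0, 1] for tpl in templates])
         for t in test.mean(1)[:, keep]]
print("accuracy:", np.mean(np.array(preds) == np.arange(n_obj)))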


Brain scan algorithm is 1,000 times faster

#artificialintelligence

MIT has published details of "VoxelMorph", a new machine-learning algorithm that is over 1,000 times faster at registering brain scans and other 3-D images. Medical image registration is a common technique that involves overlaying two images, such as magnetic resonance imaging (MRI) scans, to compare and analyse anatomical differences in great detail. If a patient has a brain tumour, for instance, doctors can overlap a brain scan from several months ago onto a more recent scan to analyse small changes in the tumour's progress. Conventional registration is slow, however, because algorithms iteratively optimise the alignment of each pair of images from scratch. In a pair of upcoming conference papers, researchers from the Massachusetts Institute of Technology (MIT) describe how to overcome this problem: their algorithm learns registration from many image pairs, so a new pair can be aligned in a single pass.
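The core of this learning-based approach, as we understand it, is that a network predicts a dense displacement field in one forward pass and a differentiable warp applies it to the moving image; the slow per-pair iterative optimisation disappears. Below is a small 2-D numpy sketch of the warping step only, with a hand-made displacement field (our illustration, not the VoxelMorph code):

# Sketch of the core of learning-based registration: a predicted dense
# displacement field is applied to the moving image by differentiable
# (here, bilinear) resampling. 2-D numpy, fixed field; the learned
# predictor network is the part omitted. Not the VoxelMorph code.
import numpy as np

def warp(moving, flow):
    # Warp a 2-D image by a displacement field via bilinear sampling.
    h, w = moving.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(ys + flow[0], 0, h - 1.001)
    sx = np.clip(xs + flow[1], 0, w - 1.001)
    y0, x0 = sy.astype(int), sx.astype(int)
    wy, wx = sy - y0, sx - x0
    return ((1 - wy) * (1 - wx) * moving[y0, x0]
            + (1 - wy) * wx * moving[y0, x0 + 1]
            + wy * (1 - wx) * moving[y0 + 1, x0]
            + wy * wx * moving[y0 + 1, x0 + 1])

# At training time a network f(moving, fixed) -> flow is fit to minimize
# similarity plus smoothness losses over many pairs; at test time one
# forward pass plus this warp replaces per-pair iterative optimisation.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
flow = np.stack([np.full((32, 32), 2.0), np.full((32, 32), -3.0)])
print(warp(img, flow).sum())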