OpenFab

Communications of the ACM

Figure: three rhinos defined and printed using OpenFab.

Multimaterial 3D printing poses an enormous computational challenge: large high-resolution prints comprise trillions of voxels and petabytes of data, and modeling and describing input with spatially varying material mixtures at this scale is difficult. Existing 3D printing software is insufficient; in particular, most software is designed to support only a few million primitives, with discrete material choices per object. We present OpenFab, a programmable pipeline for the synthesis of multimaterial 3D printed objects that is inspired by RenderMan and modern GPU pipelines. The pipeline supports procedural evaluation of geometric detail and material composition, using shader-like fablets, which allow models to be specified easily and efficiently. The pipeline is implemented in a streaming fashion: only a small fraction of the final volume is stored in memory, and output is fed to the printer with little startup delay. We demonstrate it on a variety of multimaterial objects. State-of-the-art 3D printing hardware is capable of mixing many materials at up to hundreds of dots per inch, using technologies such as photopolymer phase-change inkjet printing. Each layer of the model is ultimately fed to the printer as a full-resolution bitmap in which each "pixel" specifies a single material; all layers together define on the order of 10^8 voxels per cubic inch. The resulting data is far too large to precompute and store directly: a single cubic foot at this resolution requires at least 10^11 voxels and terabytes of storage. Even for small objects, the computation, memory, and storage demands are large.
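
The sketch below illustrates the streaming, procedural idea described above: a per-voxel material procedure is evaluated one printer layer at a time, so only a single bitmap is ever resident in memory. This is a minimal sketch only; the `fablet` function, material indices, and layer loop are hypothetical and are not OpenFab's actual API.

```python
# Minimal sketch (hypothetical names, not OpenFab's API): evaluate a per-voxel
# "fablet" procedure and stream one full-resolution layer at a time.
import numpy as np

MATERIALS = {1: "rigid", 2: "flexible"}          # illustrative material palette

def fablet(x, y, z, bbox):
    """Procedurally choose a material index from the voxel's position.
    Here: rigid bottom half, flexible top half of the bounding box."""
    zmin, zmax = bbox[2]
    t = (z - zmin) / (zmax - zmin)               # normalized height in [0, 1]
    return 1 if t < 0.5 else 2

def stream_layers(bbox, dpi=300):
    """Yield one material bitmap per printer layer instead of precomputing
    the whole (potentially ~10^11 voxel) volume."""
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = bbox
    nx = int((xmax - xmin) * dpi)
    ny = int((ymax - ymin) * dpi)
    nz = int((zmax - zmin) * dpi)
    for k in range(nz):
        z = zmin + (k + 0.5) / dpi
        layer = np.empty((ny, nx), dtype=np.uint8)
        for j in range(ny):
            for i in range(nx):
                x = xmin + (i + 0.5) / dpi
                y = ymin + (j + 0.5) / dpi
                layer[j, i] = fablet(x, y, z, bbox)
        yield layer                              # hand the bitmap to the printer driver

if __name__ == "__main__":
    bbox = ((0.0, 1.0), (0.0, 1.0), (0.0, 1.0))  # one cubic inch
    for layer in stream_layers(bbox, dpi=32):    # coarse resolution for the demo
        pass                                     # a real driver would send each bitmap here
```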


Learning Localized Geometric Features Using 3D-CNN: An Application to Manufacturability Analysis of Drilled Holes

arXiv.org Machine Learning

3D convolutional neural networks (3D-CNNs) have been used for object recognition based on the voxelized shape of an object. In this paper, we present a 3D-CNN based method to learn distinct local geometric features of interest within an object. In this context, a voxelized representation alone may not be sufficient to capture the distinguishing information about such local features. To enable efficient learning, we augment the voxel data with surface normals of the object boundary. We then train a 3D-CNN on this augmented data and identify the local features critical for decision-making using 3D gradient-weighted class activation maps. One application of this feature-identification framework is recognizing difficult-to-manufacture drilled-hole features in complex CAD geometry. The framework can be extended to identify difficult-to-manufacture features at multiple spatial scales, leading to a real-time decision support system for design for manufacturability.
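
As a rough illustration of the input augmentation and network described above, the sketch below assumes a PyTorch implementation on 32^3 grids with four input channels (occupancy plus the three normal components). The layer sizes, channel counts, and class count are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (not the authors' exact network): occupancy voxels augmented
# with surface-normal channels feeding a small 3D CNN classifier.
import torch
import torch.nn as nn

class Voxel3DCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(4, 16, kernel_size=3, padding=1),  # 4 channels: occupancy + (nx, ny, nz)
            nn.ReLU(),
            nn.MaxPool3d(2),                             # 32^3 -> 16^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                             # 16^3 -> 8^3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, 128),              # assumes 32^3 input grids
            nn.ReLU(),
            nn.Linear(128, num_classes),                 # e.g. manufacturable vs. not
        )

    def forward(self, x):                                # x: (batch, 4, 32, 32, 32)
        return self.classifier(self.features(x))

# Occupancy grid plus per-voxel boundary normals stacked as extra channels.
occupancy = torch.zeros(1, 1, 32, 32, 32)
normals = torch.zeros(1, 3, 32, 32, 32)                  # zero away from the boundary
logits = Voxel3DCNN()(torch.cat([occupancy, normals], dim=1))
```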



3D Topology Optimization using Convolutional Neural Networks

arXiv.org Machine Learning

Topology optimization is computationally demanding, requiring the assembly and solution of a finite element problem for each material distribution hypothesis. As a complementary alternative to traditional physics-based topology optimization, we explore a data-driven approach that can quickly generate accurate solutions. To this end, we propose a deep learning approach based on a 3D encoder-decoder convolutional neural network architecture to accelerate 3D topology optimization and to determine the optimal computational strategy for its deployment. Analysis of the iteration-wise progress of the Solid Isotropic Material with Penalization (SIMP) process guides our study of how the early steps of conventional topology optimization can serve as input from which our approach predicts the final optimized structure directly. We conduct a comparative study of multiple strategies for training the neural network and assess the effect of various input combinations for the CNN, in order to select the most accurate strategy for practical deployment. For the best-performing network, we achieve about a 40% reduction in overall computation time while attaining structural accuracies on the order of 96%.
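
The sketch below shows the general shape of a 3D encoder-decoder CNN of the kind described above, assuming PyTorch, a 32^3 design domain, and two illustrative input channels derived from early SIMP iterations. It is an assumed configuration, not the paper's exact architecture or training setup.

```python
# Minimal sketch (illustrative architecture): map intermediate SIMP density
# fields to a predicted final density field with a 3D encoder-decoder CNN.
import torch
import torch.nn as nn

class TopOptEncoderDecoder(nn.Module):
    def __init__(self, in_channels=2):                        # e.g. density + iteration-wise change
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, stride=2, padding=1),   # 32^3 -> 16^3
            nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1),            # 16^3 -> 8^3
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1),   # 8^3 -> 16^3
            nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),    # 16^3 -> 32^3
            nn.Sigmoid(),                                         # densities in [0, 1]
        )

    def forward(self, x):                                         # x: (batch, C, 32, 32, 32)
        return self.decoder(self.encoder(x))

# Early-iteration density (and, e.g., its per-iteration change) as input channels.
x = torch.rand(1, 2, 32, 32, 32)
predicted_final_density = TopOptEncoderDecoder()(x)               # (1, 1, 32, 32, 32)
```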


Multi-Resolution 3D Convolutional Neural Networks for Object Recognition

arXiv.org Machine Learning

Learning from 3D data is a well-explored and widely studied idea in computer vision. It allows one to learn from sparse LiDAR data, point clouds, and 3D objects such as CAD models and surfaces. Most approaches to learning from such data are limited to uniform 3D volume occupancy grids or octree representations. A major challenge in learning from 3D data is choosing an appropriate resolution for the voxel grid, which becomes a bottleneck for the learning algorithms: a fine resolution is important to capture key features of the object, yet the data becomes sparser as the resolution becomes finer. Numerous computer vision applications use a multi-resolution representation instead of a uniform grid in order to remain memory efficient. Though such representations are harder to learn from, they are much more efficient at representing 3D data. In this paper, we explore the challenges of learning from such a representation. In particular, we use a multi-level voxel representation in which a coarse voxel grid records the important (boundary) voxels and multiple fine voxel grids correspond to each significant voxel of the coarse grid. A multi-level voxel representation can capture important features of the 3D data in a more memory-efficient way than an octree representation. Consequently, learning from a 3D object at high resolution, which is paramount for feature recognition, is made efficient.
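
A minimal sketch of the two-level idea described above follows, under assumed block sizes: a coarse grid marks occupied cells, and only partially filled (boundary) cells keep a full-resolution fine block. The function and names are illustrative, not the paper's implementation.

```python
# Minimal sketch (illustrative): a two-level voxel representation storing a coarse
# occupancy grid plus fine sub-grids only for coarse cells on the object boundary.
import numpy as np

def build_multilevel(occupancy_fine, coarse=8, fine=8):
    """occupancy_fine: boolean array of shape (coarse*fine,)*3.
    Returns a coarse grid and a dict mapping boundary coarse cells to fine blocks."""
    n = coarse * fine
    assert occupancy_fine.shape == (n, n, n)
    coarse_grid = np.zeros((coarse,) * 3, dtype=bool)
    fine_blocks = {}
    for i in range(coarse):
        for j in range(coarse):
            for k in range(coarse):
                block = occupancy_fine[i*fine:(i+1)*fine,
                                       j*fine:(j+1)*fine,
                                       k*fine:(k+1)*fine]
                coarse_grid[i, j, k] = block.any()
                # A cell is "significant" (a boundary cell) if it is partially filled;
                # only these cells retain their full-resolution block.
                if block.any() and not block.all():
                    fine_blocks[(i, j, k)] = block.copy()
    return coarse_grid, fine_blocks

# Example: a solid sphere voxelized at 64^3, stored as an 8^3 coarse grid
# plus fine 8^3 blocks only along the surface.
idx = np.indices((64, 64, 64))
sphere = ((idx - 31.5) ** 2).sum(axis=0) <= 24 ** 2
coarse_grid, fine_blocks = build_multilevel(sphere)
print(coarse_grid.sum(), "occupied coarse cells,", len(fine_blocks), "boundary blocks")
```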