Chen, Yu-hsuan
MDDM: A Molecular Dynamics Diffusion Model to Predict Particle Self-Assembly
Ferguson, Kevin, Chen, Yu-hsuan, Kara, Levent Burak
Molecular Dynamics (MD) is a powerful computational tool that lets scientists and engineers study chemical, biological, or material systems at the micro- or nano-scale. In particular, we target a materials science application of molecular self-assembly, in which the goal is to model the dynamics and structure of bulk systems containing many particles that interact with one another via a specified potential energy function. By simulating the motion and interaction of particles in a molecular system, material properties can be measured from the resulting equilibrated particle structures. While MD undoubtedly provides engineers with the capacity to perform high-fidelity material simulations, it is not without its own limitations, namely computational expense. For one, very large systems (i.e., with many particles) are required to emulate the properties of a bulk material as accurately as possible.
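To ground the cost argument above, the sketch below is a generic, illustrative MD step (not the MDDM model, which is not detailed in this abstract): a Lennard-Jones pair potential with a velocity Verlet integrator, where the naive pairwise force evaluation makes each timestep scale as O(N^2) in the number of particles. The potential choice, parameters, and system setup are assumptions for illustration only.

```python
# Illustrative sketch: a naive MD step with a Lennard-Jones pair potential.
# The O(N^2) pairwise force loop is what makes large bulk systems expensive.
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for an (N, 3) array of positions."""
    disp = pos[:, None, :] - pos[None, :, :]          # (N, N, 3) displacements
    r2 = np.sum(disp**2, axis=-1)
    np.fill_diagonal(r2, np.inf)                      # ignore self-interaction
    inv_r6 = (sigma**2 / r2) ** 3
    coeff = 24.0 * epsilon * (2.0 * inv_r6**2 - inv_r6) / r2
    return np.sum(coeff[:, :, None] * disp, axis=1)   # (N, 3) net force per particle

def velocity_verlet(pos, vel, dt=1e-3, steps=1000, mass=1.0):
    """Integrate Newton's equations of motion; returns the final state."""
    f = lj_forces(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * f / mass * dt**2
        f_new = lj_forces(pos)
        vel = vel + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return pos, vel

# Example: 64 particles on a slightly perturbed cubic lattice
rng = np.random.default_rng(0)
pos0 = np.stack(np.meshgrid(*[np.arange(4.0)] * 3), axis=-1).reshape(-1, 3) * 1.2
pos, vel = velocity_verlet(pos0 + 0.01 * rng.standard_normal(pos0.shape),
                           np.zeros_like(pos0))
```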
Topology-Agnostic Graph U-Nets for Scalar Field Prediction on Unstructured Meshes
Ferguson, Kevin, Chen, Yu-hsuan, Chen, Yiming, Gillman, Andrew, Hardin, James, Kara, Levent Burak
Machine-learned surrogate models to accelerate lengthy computer simulations are becoming increasingly important as engineers look to streamline the product design cycle. In many cases, these approaches offer the ability to predict relevant quantities throughout a geometry, but place constraints on the form of the input data. In a world of diverse data types, a preferred approach would not restrict the input to a particular structure. In this paper, we propose Topology-Agnostic Graph U-Net (TAG U-Net), a graph convolutional network that can be trained to accept any mesh or graph structure as input and output a prediction of a target scalar field at each node. The model constructs coarsened versions of each input graph and performs a set of convolution and pooling operations to predict the node-wise outputs on the original graph. By training on a diverse set of shapes, the model can make strong predictions, even for shapes unlike those seen during training. A 3-D additive manufacturing dataset is presented, containing Laser Powder Bed Fusion simulation results for thousands of parts. The model is demonstrated on this dataset, and it performs well, predicting both 2-D and 3-D scalar fields with a median R-squared > 0.85 on test geometries. Code and datasets are available online.
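As a rough illustration of the coarsen-convolve-predict pattern described above (not the authors' released TAG U-Net code), the sketch below implements a tiny graph U-Net-style model in plain PyTorch: one graph convolution on the full graph, a learned top-k pooling step that coarsens the graph, a convolution on the coarse graph, and an unpooling step back to the original nodes to emit a scalar prediction per node. Layer widths, the pooling ratio, and the random example graph are assumptions.

```python
# Minimal sketch of a graph U-Net-style surrogate: conv -> top-k pool -> conv -> unpool.
import torch
import torch.nn as nn

def gcn_step(x, adj_norm, weight):
    """One graph convolution: aggregate neighbor features, then a linear map."""
    return torch.relu(adj_norm @ x @ weight)

class TinyGraphUNet(nn.Module):
    def __init__(self, in_dim, hidden=64, pool_ratio=0.5):
        super().__init__()
        self.w_in = nn.Parameter(torch.randn(in_dim, hidden) * 0.1)
        self.w_coarse = nn.Parameter(torch.randn(hidden, hidden) * 0.1)
        self.score = nn.Linear(hidden, 1)       # learned node scores for top-k pooling
        self.head = nn.Linear(2 * hidden, 1)    # node-wise scalar field prediction
        self.pool_ratio = pool_ratio

    def forward(self, x, adj_norm):
        h = gcn_step(x, adj_norm, self.w_in)                      # conv on full graph
        k = max(1, int(self.pool_ratio * h.shape[0]))
        idx = torch.topk(self.score(h).squeeze(-1), k).indices    # keep top-k nodes
        adj_c = adj_norm[idx][:, idx]                             # coarsened adjacency
        h_c = gcn_step(h[idx], adj_c, self.w_coarse)              # conv on coarse graph
        up = torch.zeros_like(h)                                  # unpool: scatter back
        up[idx] = h_c
        return self.head(torch.cat([h, up], dim=-1)).squeeze(-1)  # scalar per node

# Example: random sparse graph with 100 nodes and 3 input features per node
n = 100
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.T + torch.eye(n)) > 0).float()                  # symmetric + self-loops
deg_inv_sqrt = adj.sum(1).rsqrt()
adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]    # D^-1/2 A D^-1/2
pred = TinyGraphUNet(in_dim=3)(torch.rand(n, 3), adj_norm)        # shape (100,)
```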
Curve-based Neural Style Transfer
Chen, Yu-hsuan, Kara, Levent Burak, Cagan, Jonathan
This research presents a new parametric style transfer framework specifically designed for curve-based design sketches. Traditional challenges faced by neural style transfer methods in handling binary sketch transformations are addressed through parametric shape-editing rules, efficient curve-to-pixel conversion techniques, and the fine-tuning of VGG19 on ImageNet-Sketch, which enhances its role as a feature pyramid network for precise style extraction. By harmonizing intuitive curve-based imagery with rule-based editing, this study holds the potential to significantly enhance design articulation and elevate the practice of style transfer within the realm of product design.
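To make the curve-to-pixel conversion step concrete, the sketch below (an illustration under assumptions, not the paper's implementation) samples cubic Bezier curves and splats the samples onto a raster canvas with soft Gaussian footprints; the resulting image could then be passed to a CNN feature extractor such as VGG19. The resolution, stroke sharpness, and example curve are hypothetical.

```python
# Illustrative curve-to-pixel step: render cubic Bezier strokes to a raster image.
import torch

def rasterize_cubic_beziers(ctrl, res=128, samples=64, sharpness=5000.0):
    """ctrl: (C, 4, 2) control points in [0, 1]^2 -> (res, res) grayscale image."""
    t = torch.linspace(0.0, 1.0, samples).view(1, samples, 1)
    p0, p1, p2, p3 = [ctrl[:, i:i + 1, :] for i in range(4)]        # each (C, 1, 2)
    pts = ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
           + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)               # (C, samples, 2)
    pts = pts.reshape(-1, 2)                                        # all sampled points
    ys, xs = torch.meshgrid(torch.linspace(0, 1, res),
                            torch.linspace(0, 1, res), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)             # (res*res, 2)
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)        # squared distances
    img = torch.exp(-sharpness * d2).max(dim=1).values              # soft "ink" per pixel
    return img.reshape(res, res)

# Example: one S-shaped stroke; the rendered sketch could then be fed to VGG19
curve = torch.tensor([[[0.1, 0.1], [0.9, 0.2], [0.1, 0.8], [0.9, 0.9]]])
sketch = rasterize_cubic_beziers(curve)          # (128, 128) tensor, values in (0, 1]
```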
Automating Style Analysis and Visualization With Explainable AI -- Case Studies on Brand Recognition
Chen, Yu-hsuan, Kara, Levent Burak, Cagan, Jonathan
Incorporating style-related objectives into shape design has been centrally important to maximize product appeal. However, stylistic features such as aesthetics and semantic attributes are hard to codify even for experts. As such, algorithmic style capture and reuse have not fully benefited from automated data-driven methodologies due to the challenging nature of design describability. This paper proposes an AI-driven method to fully automate the discovery of brand-related features. Our approach introduces BIGNet, a two-tier Brand Identification Graph Neural Network (GNN) to classify and analyze scalable vector graphics (SVG). First, to tackle the scarcity of vectorized product images, this research proposes two data acquisition workflows: parametric modeling from small curve-based datasets, and vectorization from large pixel-based datasets. Second, this study constructs a novel hierarchical GNN architecture to learn from both SVG's curve-level and chunk-level parameters. In the first case study, BIGNet not only classifies phone brands but also captures brand-related features across multiple scales, such as the location of the lens, the height-width ratio, and the screen-frame gap, as confirmed by AI evaluation. In the second study, this paper showcases the generalizability of BIGNet by learning from a vectorized car image dataset and validates the consistency and robustness of its predictions across four scenarios. The results match the differences commonly observed between luxury and economy brands in the automobile market. Finally, this paper also visualizes the activation maps generated from a convolutional neural network and shows BIGNet's advantage as a more human-friendly, explainable, and explicit style-capturing agent. Code and dataset can be found on Github: 1. Phone case study: github.com/parksandrecfan/bignet-phone 2. Car case study: github.com/parksandrecfan/bignet-car
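As a rough sketch of the two-tier, curve-level/chunk-level idea described above (not the code released on GitHub), the snippet below passes messages among curve nodes, pools curves into their parent chunks, passes messages among chunk nodes, and classifies the pooled graph embedding into brand logits. The feature dimensions, mean-pooling choice, adjacency matrices, and number of brands are assumptions.

```python
# Minimal sketch of a two-tier hierarchical GNN over SVG-like data.
import torch
import torch.nn as nn

class TwoTierGNN(nn.Module):
    def __init__(self, curve_dim=10, hidden=64, n_brands=5):
        super().__init__()
        self.curve_mp = nn.Linear(curve_dim, hidden)   # curve-level message/update
        self.chunk_mp = nn.Linear(hidden, hidden)      # chunk-level message/update
        self.classifier = nn.Linear(hidden, n_brands)

    def forward(self, curve_feats, curve_adj, chunk_of_curve, chunk_adj):
        # Tier 1: message passing among curves (e.g., Bezier parameters per curve)
        h = torch.relu(curve_adj @ self.curve_mp(curve_feats))
        # Pool curves into their parent chunks by mean aggregation
        n_chunks = chunk_adj.shape[0]
        assign = torch.zeros(n_chunks, h.shape[0])
        assign[chunk_of_curve, torch.arange(h.shape[0])] = 1.0
        assign = assign / assign.sum(dim=1, keepdim=True).clamp(min=1.0)
        c = assign @ h
        # Tier 2: message passing among chunks, then global mean pool + classify
        c = torch.relu(chunk_adj @ self.chunk_mp(c))
        return self.classifier(c.mean(dim=0))

# Example: 30 curves grouped into 6 chunks, uniform adjacency at each tier
curves, chunks = 30, 6
logits = TwoTierGNN()(torch.rand(curves, 10),
                      torch.ones(curves, curves) / curves,
                      torch.randint(0, chunks, (curves,)),
                      torch.ones(chunks, chunks) / chunks)   # (5,) brand logits
```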