geometry


Physics-consistent deep learning for structural topology optimization

#artificialintelligence

Topology optimization has emerged as a popular approach to refine a component's design and increase its performance. However, current state-of-the-art topology optimization frameworks are compute-intensive, mainly due to the multiple finite element analysis iterations required to evaluate the component's performance during the optimization process. Recently, researchers have explored machine learning-based topology optimization methods to alleviate this issue. However, previous approaches have mainly been demonstrated on simple two-dimensional applications with low-resolution geometry. Further, current approaches are based on a single machine learning model for end-to-end prediction, which requires a large dataset for training.


On the emergence of tetrahedral symmetry in the final and penultimate layers of neural network classifiers

arXiv.org Machine Learning

A recent numerical study observed that neural network classifiers enjoy a large degree of symmetry in the penultimate layer. Namely, if $h(x) = Af(x) + b$, where $A$ is a linear map and $f$ is the output of the penultimate layer of the network (after activation), then all data points $x_{i, 1}, \dots, x_{i, N_i}$ in a class $C_i$ are mapped to a single point $y_i$ by $f$, and the points $y_i$ are located at the vertices of a regular $(k-1)$-dimensional tetrahedron in a high-dimensional Euclidean space. We explain this observation analytically in toy models for highly expressive deep neural networks. In complementary examples, we demonstrate rigorously that even the final output of the classifier $h$ is not uniform over data samples from a class $C_i$ if $h$ is a shallow network (or if the deeper layers do not bring the data samples into a convenient geometric configuration).
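
The geometric claim is easy to probe empirically. Below is a minimal Python sketch (not code from the paper) that takes penultimate-layer features and checks how close the centered class means come to the vertices of a regular simplex by comparing their pairwise distances; all function and variable names are illustrative.

```python
# Minimal sketch: test whether class means of penultimate-layer features
# sit at the vertices of a regular simplex, i.e. all pairwise distances
# between the means y_i are (near-)equal.
import numpy as np

def simplex_spread(features, labels):
    """Relative spread of pairwise distances between class means.

    features: (N, d) array of penultimate-layer outputs f(x)
    labels:   (N,) integer class labels
    A value near 0 suggests the means form a regular simplex.
    """
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    means -= means.mean(axis=0)  # center around the barycenter
    k = len(classes)
    dists = np.array([np.linalg.norm(means[i] - means[j])
                      for i in range(k) for j in range(i + 1, k)])
    return (dists.max() - dists.min()) / dists.mean()

# Toy usage with random features (real usage: features from a trained net).
rng = np.random.default_rng(0)
feats = rng.normal(size=(300, 16))
labs = rng.integers(0, 3, size=300)
print(simplex_spread(feats, labs))
```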


A SAT-based Resolution of Lam's Problem

arXiv.org Artificial Intelligence

In 1989, computer searches by Lam, Thiel, and Swiercz experimentally resolved Lam's problem from projective geometry: the long-standing problem of determining whether a projective plane of order ten exists. Both the original search and an independent verification in 2011 discovered no such projective plane. However, these searches were each performed using highly specialized custom-written code and did not produce nonexistence certificates. In this paper, we resolve Lam's problem by translating it into Boolean logic and using satisfiability (SAT) solvers to produce nonexistence certificates that can be verified by a third party. Our work uncovered consistency issues in both previous searches, highlighting the difficulty of relying on special-purpose search code for nonexistence results.
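
To illustrate the translation step the abstract describes, here is a small Python sketch that encodes a basic combinatorial constraint ("exactly one of these variables is true") into CNF clauses in DIMACS form, the standard input for SAT solvers. The paper's actual encoding of the order-ten projective plane is far more involved; this only shows the flavor of the reduction.

```python
# Illustrative "exactly one" CNF encoding in DIMACS form; not the paper's
# encoding of the projective plane of order ten.

def exactly_one(variables):
    """CNF clauses forcing exactly one of the given variables to be true."""
    clauses = [list(variables)]  # at least one
    for i in range(len(variables)):
        for j in range(i + 1, len(variables)):
            clauses.append([-variables[i], -variables[j]])  # at most one
    return clauses

def to_dimacs(clauses, num_vars):
    lines = [f"p cnf {num_vars} {len(clauses)}"]
    lines += [" ".join(map(str, c)) + " 0" for c in clauses]
    return "\n".join(lines)

# Example: variables 1..4 model "point p lies on exactly one of 4 lines".
print(to_dimacs(exactly_one([1, 2, 3, 4]), num_vars=4))
```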


Aligning Hyperbolic Representations: an Optimal Transport-based approach

arXiv.org Machine Learning

Hyperbolic embeddings are state-of-the-art models for learning representations of data with an underlying hierarchical structure [18]. The hyperbolic space serves as a geometric prior for hierarchical structures, tree graphs, and heavy-tailed distributions, e.g., scale-free and power-law distributions [45]. A relevant tool for implementing hyperbolic space algorithms is Möbius gyrovector spaces, or gyrovector spaces [66]. Gyrovector spaces are an algebraic formalism that leads to vector-like operations, i.e., gyrovectors, in the Poincaré model of the hyperbolic space. Thanks to this formalism, we can quickly build estimators that are well suited to end-to-end optimization [6]. Gyrovector spaces are essential for designing the hyperbolic versions of several machine learning algorithms, such as Hyperbolic Neural Networks (HNN) [24], Hyperbolic Graph NN [36], Hyperbolic Graph Convolutional NN [12], learning latent feature representations [41, 46], word embeddings [62, 25], and image embeddings [29]. Modern machine learning algorithms rely on the ability to accumulate large volumes of data, often coming from various sources, e.g., acquisition devices or languages. However, these massive amounts of heterogeneous data can complicate downstream learning tasks, since the data may follow different distributions. Alignment aims at building connections between two or more disparate data sets by aligning their underlying manifolds.
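
For concreteness, here is a minimal Python sketch of the basic gyrovector operation on the Poincaré unit ball: Möbius addition, the hyperbolic analogue of vector addition used by HNN-style models. This is a standard textbook formula (curvature -1), not code from any of the cited works.

```python
# Mobius addition x (+) y on the Poincare unit ball (curvature -1).
import numpy as np

def mobius_add(x, y, eps=1e-9):
    """Mobius addition for points x, y inside the unit ball."""
    xy = np.dot(x, y)
    x2 = np.dot(x, x)
    y2 = np.dot(y, y)
    num = (1 + 2 * xy + y2) * x + (1 - x2) * y
    den = 1 + 2 * xy + x2 * y2
    return num / (den + eps)

a = np.array([0.1, 0.2])
b = np.array([0.3, -0.1])
s = mobius_add(a, b)
print(s, np.linalg.norm(s))  # the result stays inside the unit ball
```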


Consistent Representation Learning for High Dimensional Data Analysis

arXiv.org Machine Learning

High dimensional data analysis for exploration and discovery includes three fundamental tasks: dimensionality reduction, clustering, and visualization. When the three associated tasks are done separately, as is often the case thus far, inconsistencies can occur among the tasks in terms of data geometry and other aspects. This can lead to confusing or misleading data interpretation. In this paper, we propose a novel neural network-based method, called Consistent Representation Learning (CRL), to accomplish the three associated tasks end-to-end and improve consistency. The CRL network consists of two nonlinear dimensionality reduction (NLDR) transformations: (1) one from the input data space to the latent feature space for clustering, and (2) the other from the clustering space to the final 2D or 3D space for visualization. Importantly, the two NLDR transformations are performed so as to best satisfy local geometry preserving (LGP) constraints across the spaces or network layers, improving data consistency along the processing flow. We also propose a novel metric, clustering-visualization inconsistency (CVI), for evaluating the inconsistencies. Extensive comparative results show that the proposed CRL neural network method outperforms the popular t-SNE- and UMAP-based methods and other contemporary clustering and visualization algorithms in terms of evaluation metrics and visualization quality.
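
One plausible reading of a local geometry preserving constraint is a penalty on how much distances to each point's nearest neighbors change between consecutive representation spaces. The Python sketch below illustrates that idea; the loss form, names, and neighborhood size are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of an LGP-style penalty: mean squared change of k-NN
# distances from one representation space to the next.
import numpy as np

def lgp_loss(X_in, X_out, k=5):
    """Penalize changes in distances to each point's k nearest neighbors."""
    n = len(X_in)
    loss = 0.0
    for i in range(n):
        d_in = np.linalg.norm(X_in - X_in[i], axis=1)
        nbrs = np.argsort(d_in)[1:k + 1]  # k nearest neighbors of point i
        d_out = np.linalg.norm(X_out[nbrs] - X_out[i], axis=1)
        loss += np.mean((d_in[nbrs] - d_out) ** 2)
    return loss / n

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                    # input-space points
Z = X[:, :2] + 0.01 * rng.normal(size=(100, 2))   # a toy 2D embedding
print(lgp_loss(X, Z))
```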


A Deep Learning-based Collocation Method for Modeling Unknown PDEs from Sparse Observation

arXiv.org Machine Learning

Deep learning-based modeling of dynamical systems driven by partial differential equations (PDEs) has become quite popular in recent years. However, most existing deep learning-based methods either assume a strong physics prior, depend on specific initial and boundary conditions, or require data on a dense regular grid, making them ill-suited for modeling unknown PDEs from sparsely observed data. This paper presents a deep learning-based collocation method for modeling dynamical systems driven by unknown PDEs when data sites are sparsely distributed. The proposed method is independent of the spatial dimension, geometrically flexible, and learns from sparsely available data, and the learned model does not depend on any specific initial and boundary conditions. We demonstrate our method on the forecasting task for the two-dimensional wave equation and the Burgers-Fisher equation in multiple geometries with different boundary conditions.
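
As a rough illustration of collocation-style learning when the PDE itself is unknown, the Python (PyTorch) sketch below pairs a solution network u(x, t) with a second network F standing in for the unknown PDE operator, and trains both so that u_t = F(u, u_x, u_xx) holds at collocation points while u matches sparse observations. This is one plausible reading of the abstract, not the authors' architecture; all networks and data here are placeholders.

```python
# Hedged sketch: learn a solution net and an unknown-PDE net jointly from
# sparse, scattered observations, with derivatives taken by autograd.
import torch

u_net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 1))
f_net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 1))
opt = torch.optim.Adam(list(u_net.parameters()) + list(f_net.parameters()),
                       lr=1e-3)

x_obs = torch.rand(32, 2)                # sparse, scattered (x, t) sites
u_obs = torch.sin(3.0 * x_obs[:, :1])    # placeholder observations

for step in range(200):
    xt = torch.rand(128, 2, requires_grad=True)  # collocation points
    u = u_net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    residual = u_t - f_net(torch.cat([u, u_x, u_xx], dim=1))
    loss = (residual ** 2).mean() + ((u_net(x_obs) - u_obs) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```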


Fit2Form: 3D Generative Model for Robot Gripper Form Design

arXiv.org Artificial Intelligence

The 3D shape of a robot's end-effector plays a critical role in determining its functionality and overall performance. Many industrial applications rely on task-specific gripper designs to ensure the system's robustness and accuracy. However, manual hardware design is both costly and time-consuming, and the quality of the resulting design depends on the engineer's experience and domain expertise, which can easily be outdated or inaccurate. The goal of this work is to use machine learning algorithms to automate the design of task-specific gripper fingers. We propose Fit2Form, a 3D generative design framework that generates pairs of finger shapes to maximize design objectives (i.e., grasp success, stability, and robustness) for target grasp objects. We model the design objectives by training a Fitness network to predict their values for pairs of gripper fingers and their corresponding grasp objects. This Fitness network then provides supervision to a 3D Generative network that produces a pair of 3D finger geometries for the target grasp object. Our experiments demonstrate that the proposed 3D generative design framework generates parallel-jaw gripper finger shapes that achieve more stable and robust grasps compared to other general-purpose and task-specific gripper design algorithms. A video can be found at https://youtu.be/utKHP3qb1bg.
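
The supervision pattern the abstract describes, a frozen fitness predictor whose gradients drive a generator, can be sketched as follows in Python (PyTorch). Voxel resolutions, network shapes, and the three-objective head are illustrative placeholders, not the paper's implementation.

```python
# Hedged sketch: a frozen Fitness network scores (object, finger pair)
# voxel grids; its gradients train a Generative network to propose
# higher-scoring finger shapes.
import torch

VOX = 16  # toy voxel resolution
fitness = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(3 * VOX**3, 128),
    torch.nn.ReLU(), torch.nn.Linear(128, 3))   # success/stability/robustness
generator = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(VOX**3, 256),
    torch.nn.ReLU(), torch.nn.Linear(256, 2 * VOX**3), torch.nn.Sigmoid())

for p in fitness.parameters():   # fitness net is pretrained and frozen
    p.requires_grad_(False)

opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
obj = torch.rand(8, 1, VOX, VOX, VOX)            # target object voxel grids

fingers = generator(obj).view(8, 2, VOX, VOX, VOX)
pair = torch.cat([obj, fingers], dim=1)          # object plus two fingers
loss = -fitness(pair).mean()                     # maximize predicted objectives
opt.zero_grad()
loss.backward()
opt.step()
```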


Accelerating Grasp Exploration by Leveraging Learned Priors

arXiv.org Artificial Intelligence

The ability of robots to grasp novel objects has industry applications in e-commerce order fulfillment and home service. Data-driven grasping policies have achieved success in learning general strategies for grasping arbitrary objects. However, these approaches can fail to grasp objects which have complex geometry or are significantly outside of the training distribution. We present a Thompson sampling algorithm that learns to grasp a given object with unknown geometry using online experience. The algorithm leverages learned priors from the Dexterity Network robot grasp planner to guide grasp exploration and provide probabilistic estimates of grasp success for each stable pose of the novel object. We find that seeding the policy with the Dex-Net prior allows it to more efficiently find robust grasps on these objects. Experiments suggest that the best learned policy attains an average total reward 64.5% higher than a greedy baseline and achieves within 5.7% of an oracle baseline when evaluated over 300,000 training runs across a set of 3000 object poses.
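
The core mechanism, Thompson sampling seeded with a learned prior, is compact enough to sketch. In the Python example below, Beta distributions over per-grasp success rates are initialized from prior quality estimates (a stand-in for Dex-Net's predictions) and updated from binary grasp outcomes; all numbers are toy values.

```python
# Hedged sketch: Beta-Bernoulli Thompson sampling over candidate grasps,
# with pseudo-counts seeded from a learned prior.
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.2, 0.5, 0.8])       # unknown grasp success rates
prior_q = np.array([0.3, 0.4, 0.7])      # learned prior estimates (stand-in)
strength = 10.0                          # pseudo-counts carried by the prior
alpha = prior_q * strength               # Beta(alpha, beta) per grasp
beta = (1 - prior_q) * strength

for t in range(500):
    theta = rng.beta(alpha, beta)        # sample a success-rate belief
    g = int(np.argmax(theta))            # try the most promising grasp
    reward = rng.random() < true_p[g]    # noisy binary grasp outcome
    alpha[g] += reward                   # posterior update
    beta[g] += 1 - reward

print("posterior means:", alpha / (alpha + beta))
```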


Double Descent Risk and Volume Saturation Effects: A Geometric Perspective

arXiv.org Machine Learning

The appearance of the double-descent risk phenomenon has received growing interest in the machine learning and statistics community, as it challenges well-understood notions behind the U-shaped train-test curves. Motivated by Rissanen's minimum description length (MDL), Balasubramanian's Occam's Razor, and Amari's information geometry, we investigate how the logarithm of the model volume, $\log V$, extends the intuition behind the AIC and BIC model selection criteria. We find that for the particular model classes of isotropic linear regression and statistical lattices, the $\log V$ term may be decomposed into a sum of distinct components, each of which helps explain the appearance of this phenomenon. In particular, they suggest why generalization error does not necessarily continue to grow with increasing model dimensionality.
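
For orientation, here is a hedged LaTeX sketch of where a volume term of this kind sits relative to the BIC penalty in MDL-style model selection; the precise constants and the decomposition studied in the paper will differ.

```latex
% Schematic only, not the paper's derivation: a BIC-style code length
% refined by a model-volume term under the Fisher information metric g.
-\log p(D \mid M) \;\approx\; -\log p\bigl(D \mid \hat{\theta}\bigr)
  \;+\; \frac{d}{2}\log n \;+\; \log V,
\qquad
V = \int_{\Theta} \sqrt{\det g(\theta)}\, d\theta
```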


Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE

arXiv.org Machine Learning

The ability to record the activity of hundreds of neurons simultaneously in the brain has created an increasing demand for statistical techniques appropriate for analyzing such data. Recently, deep generative models have been proposed to fit neural population responses. While these methods are flexible and expressive, the downside is that they can be difficult to interpret and identify. To address this problem, we propose a method that integrates key ingredients from latent variable models and traditional neural encoding models. Our method, pi-VAE, is inspired by recent progress on identifiable variational auto-encoders, which we adapt to make them appropriate for neuroscience applications. Specifically, we propose to construct latent variable models of neural activity while simultaneously modeling the relation between the latent and task variables (non-neural variables, e.g., sensory, motor, and other externally observable states). Incorporating task variables results in models that are not only more constrained, but also show qualitative improvements in interpretability and identifiability. We validate pi-VAE using synthetic data, and apply it to analyze neurophysiological datasets from rat hippocampus and macaque motor cortex. We demonstrate that pi-VAE not only fits the data better, but also provides unexpected novel insights into the structure of the neural codes.
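
The central structural idea, a latent prior conditioned on an observed task variable, can be sketched in a few lines of Python (PyTorch). The example below shows a VAE whose prior p(z | u) depends on the task variable u, with the ELBO's KL term taken against that conditional prior; network shapes and the Gaussian likelihood are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a VAE with a task-conditioned latent prior p(z | u).
import torch

Z, X, U = 2, 50, 3                       # latent, neural, task dimensions
encoder = torch.nn.Linear(X, 2 * Z)      # q(z | x): mean and log-variance
decoder = torch.nn.Linear(Z, X)          # p(x | z): activity readout
prior_net = torch.nn.Linear(U, 2 * Z)    # p(z | u): label-conditioned prior

def elbo(x, u):
    mu_q, logvar_q = encoder(x).chunk(2, dim=1)
    z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
    recon = -((x - decoder(z)) ** 2).sum(dim=1)  # Gaussian log-lik, up to const
    mu_p, logvar_p = prior_net(u).chunk(2, dim=1)
    # KL( q(z|x) || p(z|u) ) between diagonal Gaussians
    kl = 0.5 * (logvar_p - logvar_q
                + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                - 1).sum(dim=1)
    return (recon - kl).mean()

x = torch.randn(16, X)                   # toy neural activity
u = torch.randn(16, U)                   # toy task variables
print(elbo(x, u))
```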