
Supplementary material: Benchmarking Deep Inverse Models over time, and the Neural-Adjoint method

Neural Information Processing Systems

The three MMD kernel bandwidths used were 0.05, 0.2, and 0.9 (Table 2: Ablation Study Experimental Design). The results indicate that adding both steps to NA (i.e., transitioning …). It seems that the left point's failure to find the global minimum … However, if we run the whole experiment one more time, it is a different story. The invertible neural network is specially designed to have hard invertibility (full reconstruction). The conditional invertible neural network uses a structure similar to that of an invertible neural network.
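The "hard invertibility" of invertible neural networks comes from layers whose inverse is exact rather than learned. A minimal sketch, assuming an affine coupling layer as the building block (the specific scale/shift functions `s` and `t` here are illustrative stand-ins, not the paper's architecture):

```python
import math

def coupling_forward(x1, x2, s, t):
    """Forward pass: x1 passes through unchanged; x2 is scaled and shifted
    using functions of x1 only, so the transform can be undone exactly."""
    y1 = x1
    y2 = x2 * math.exp(s(x1)) + t(x1)
    return y1, y2

def coupling_inverse(y1, y2, s, t):
    """Exact inverse: recover (x1, x2) from (y1, y2) with no approximation,
    because s and t are re-evaluated on the unchanged half y1 = x1."""
    x1 = y1
    x2 = (y2 - t(y1)) * math.exp(-s(y1))
    return x1, x2

# Toy scale/shift subnetworks (in a real INN these are small neural nets).
s = lambda a: math.tanh(a)
t = lambda a: 0.5 * a

y1, y2 = coupling_forward(1.3, -0.7, s, t)
x1, x2 = coupling_inverse(y1, y2, s, t)
# x1, x2 match the original inputs up to floating-point error.
```

Because `s` and `t` only ever see the unchanged half of the input, they can be arbitrarily complex networks without affecting invertibility, which is the design choice that gives these models full reconstruction.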


Benchmarking Deep Inverse Models over time, and the Neural-Adjoint method

Neural Information Processing Systems

We consider the task of solving generic inverse problems, where one wishes to determine the hidden parameters of a natural system that will give rise to a particular set of measurements. Recently, many new approaches based upon deep learning have arisen, generating promising results.


Review for NeurIPS paper: Benchmarking Deep Inverse Models over time, and the Neural-Adjoint method

Neural Information Processing Systems

Relation to Prior Work: "Inverse problems" can be framed as optimization, minimizing a loss L(x) that measures the distance between the prediction yhat = f(x) and the observations y. Thus I take issue with the paper's presentation of the "neural adjoint" method relative to previous work. There is a great deal of work on using NNs for model-based or surrogate-based optimization. Sometimes people model an objective function Jhat = f_theta(x) and search (e.g., via gradient descent) for the x* that minimizes Jhat: this is most common in Bayesian optimization. Sometimes people model an output yhat = f_theta(x) and search (e.g., via gradient descent) for the x* that minimizes J(yhat), where J is a known function: this is most common in surrogate-based optimization. The neural-adjoint method is clearly a special case of this latter scenario.


Benchmarking Deep Inverse Models over time, and the Neural-Adjoint method

Neural Information Processing Systems

We consider the task of solving generic inverse problems, where one wishes to determine the hidden parameters of a natural system that will give rise to a particular set of measurements. Recently, many new approaches based upon deep learning have arisen, generating promising results. We conceptualize these models as different schemes for efficiently, but randomly, exploring the space of possible inverse solutions. As a result, the accuracy of each approach should be evaluated as a function of time rather than by a single estimated solution, as is often done now. Using this metric, we compare several state-of-the-art inverse modeling approaches on four benchmark tasks: two existing tasks, a new 2-dimensional sinusoid task, and a challenging modern task of metamaterial design. Finally, inspired by our conception of the inverse problem, we explore a simple solution that uses a deep neural network as a surrogate (i.e., an approximation) for the forward model, and then uses backpropagation with respect to the model input to search for good inverse solutions.
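The surrogate-plus-backpropagation idea can be sketched in a few lines. This is a minimal illustration under simplifying assumptions: the trained surrogate network is replaced by an analytic stand-in f(x) = x² with a hand-written gradient so the example stays self-contained; in practice f would be a trained neural network and the gradient with respect to the input would come from backpropagation:

```python
def f(x):
    """Surrogate forward model (stand-in for a trained neural network)."""
    return x * x

def grad_loss(x, y_target):
    """Gradient of the loss (f(x) - y)^2 with respect to the *input* x:
    d/dx (f(x) - y)^2 = 2 (f(x) - y) * f'(x), with f'(x) = 2x here."""
    return 2.0 * (f(x) - y_target) * 2.0 * x

def neural_adjoint_search(y_target, x0, lr=0.01, steps=2000):
    """Gradient descent on the input x, holding the surrogate's weights fixed,
    to find an x whose predicted measurement matches the target y."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_loss(x, y_target)
    return x

x_star = neural_adjoint_search(y_target=4.0, x0=1.0)
# x_star converges toward 2.0, an inverse solution with f(x_star) close to 4.
```

Note that gradient descent from a single initialization can land in a local minimum; restarting from many random x0 values and keeping the best solution is the natural way to trade additional time for accuracy, which is why evaluating accuracy as a function of time is a meaningful metric here.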


Benchmarking deep inverse models over time, and the neural-adjoint method

Ren, Simiao, Padilla, Willie, Malof, Jordan

arXiv.org Machine Learning

We consider the task of solving generic inverse problems, where one wishes to determine the hidden parameters of a natural system that will give rise to a particular set of measurements. Recently, many new approaches based upon deep learning have arisen, generating impressive results. We conceptualize these models as different schemes for efficiently, but randomly, exploring the space of possible inverse solutions. As a result, the accuracy of each approach should be evaluated as a function of time rather than by a single estimated solution, as is often done now. Using this metric, we compare several state-of-the-art inverse modeling approaches on four benchmark tasks: two existing tasks, one simple task for visualization, and one new task from metamaterial design. Finally, inspired by our conception of the inverse problem, we explore a solution that uses a deep learning model to approximate the forward model, and then uses backpropagation to search for good inverse solutions. This approach, termed the neural-adjoint, achieves the best performance in many scenarios.