Review for NeurIPS paper: Benchmarking Deep Inverse Models over time, and the Neural-Adjoint method
Neural Information Processing Systems
Relation to Prior Work: "Inverse problems" can be framed as optimization: minimize a loss L(x) that measures the distance between the prediction y_hat = f(x) and the observations y. I therefore take issue with how the paper positions the "neural adjoint" method relative to previous work. There is a large body of work on using NNs for model-based or surrogate-based optimization. Sometimes people model an objective function J_hat = f_theta(x) and search (e.g., via gradient descent) for the x* that minimizes J_hat; this is most common in Bayesian optimization. Sometimes people model an output y_hat = f_theta(x) and search (e.g., via gradient descent) for the x* that minimizes J(y_hat), where J is a known function; this is most common in surrogate-based optimization. The neural-adjoint method is clearly a special case of this latter scenario.
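A minimal sketch of this latter scenario (not the paper's implementation): a fixed toy network stands in for a trained surrogate f_theta, its weights are frozen, and gradient descent runs over the input x to minimize J(y_hat) = 0.5 * ||f_theta(x) - y||^2. All weights and hyperparameters here are hypothetical placeholders.

```python
import numpy as np

# Hypothetical "trained" surrogate f_theta: a fixed one-hidden-layer
# tanh network (random weights stand in for a network fit to data).
rng = np.random.default_rng(0)
W1, b1 = 0.5 * rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = 0.5 * rng.normal(size=(1, 8)), np.zeros(1)

def f_theta(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def grad_x(x, y_obs):
    # Chain rule for d/dx of 0.5 * ||f_theta(x) - y_obs||^2,
    # with theta held fixed (only x is optimized).
    h = np.tanh(W1 @ x + b1)
    y_hat = W2 @ h + b2
    dy = y_hat - y_obs                # dJ/dy_hat
    dh = (W2.T @ dy) * (1.0 - h**2)   # back through the tanh layer
    return W1.T @ dh                  # dJ/dx

# Target observation produced by a known input, so a solution exists.
y_obs = f_theta(np.array([0.3, -0.7]))

# Gradient descent over x (the surrogate's weights never change).
x = np.zeros(2)
for _ in range(5000):
    x -= 0.05 * grad_x(x, y_obs)

loss = 0.5 * np.sum((f_theta(x) - y_obs) ** 2)
```

The same loop is what autodiff frameworks perform when one backpropagates through a frozen network to its inputs; the only design choice specific to this family of methods is which of J or f is replaced by the learned model.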