Imaging with super-resolution in changing random media

Christie, Alexander, Leibovich, Matan, Moscoso, Miguel, Novikov, Alexei, Papanicolaou, George, Tsogka, Chrysoula

arXiv.org Artificial Intelligence

High-resolution imaging from array data in unknown inhomogeneous ambient media requires estimating both the medium properties and the object characteristics. For diverse measurements collected from different sources in different, changing media, we introduce in this paper an algorithm that recovers the ambient media properties needed for high-resolution imaging as well as the source locations and strengths that constitute the imaging target. This algorithm extends and improves upon our previous work on imaging through random media using array data. Previously, we addressed imaging through a single unknown random medium, either weakly scattering [ 1 ] or strongly scattering [ 2 ].




6dd16c884345ad63e4708367222410e5-Supplemental-Conference.pdf

Neural Information Processing Systems

We conducted a comparison between the adjoint method, the Green's function method, and a classical Gaussian process on the ordinary differential equation model presented in Section 4.1.


Min-Max Optimization Is Strictly Easier Than Variational Inequalities

Shugart, Henry, Altschuler, Jason M.

arXiv.org Artificial Intelligence

Classically, a mainstream approach for solving a convex-concave min-max problem is to instead solve the variational inequality problem arising from its first-order optimality conditions. Is it possible to solve min-max problems faster by bypassing this reduction? This paper initiates this investigation. We show that the answer is yes in the textbook setting of unconstrained quadratic objectives: the optimal convergence rate for first-order algorithms is strictly better for min-max problems than for the corresponding variational inequalities. The key reason that min-max algorithms can be faster is that they can exploit the asymmetry of the min and max variables--a property that is lost in the reduction to variational inequalities. Central to our analyses are sharp characterizations of optimal convergence rates in terms of extremal polynomials which we compute using Green's functions and conformal mappings.
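A minimal illustration of the setting (not the paper's optimal algorithm): on the bilinear toy problem min_x max_y xy, naive simultaneous gradient descent-ascent spirals away from the saddle point at the origin, while the classical extragradient method, a standard variational-inequality algorithm, converges. The step size and problem are illustrative assumptions.

```python
import numpy as np

# Bilinear toy saddle problem: min_x max_y f(x, y) = x * y.
# Saddle point at (0, 0); gradients are df/dx = y, df/dy = x.
eta = 0.1  # step size (illustrative choice)

def gda_step(x, y):
    # Simultaneous gradient descent-ascent: rotates and expands on bilinear f
    return x - eta * y, y + eta * x

def extragradient_step(x, y):
    # Extragradient: a lookahead half-step damps the rotation and contracts
    xh, yh = x - eta * y, y + eta * x
    return x - eta * yh, y + eta * xh

x1 = y1 = x2 = y2 = 1.0
for _ in range(100):
    x1, y1 = gda_step(x1, y1)
    x2, y2 = extragradient_step(x2, y2)

print(np.hypot(x1, y1), np.hypot(x2, y2))  # GDA grows, extragradient shrinks
```

Each GDA step scales the iterate norm by sqrt(1 + eta^2) > 1, while each extragradient step scales it by sqrt((1 - eta^2)^2 + eta^2) < 1, which is the divergence/convergence contrast the sketch prints.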


Learning Swarm Interaction Dynamics from Density Evolution

Mavridis, Christos, Tirumalai, Amoolya, Baras, John

arXiv.org Artificial Intelligence

We consider the problem of understanding the coordinated movements of biological or artificial swarms. In this regard, we propose a learning scheme to estimate the coordination laws of the interacting agents from observations of the swarm's density over time. We describe the dynamics of the swarm based on pairwise interactions according to a Cucker-Smale flocking model, and express the swarm's density evolution as the solution to a system of mean-field hydrodynamic equations. We propose a new family of parametric functions to model the pairwise interactions, which allows for the mean-field macroscopic system of integro-differential equations to be efficiently solved as an augmented system of PDEs. Finally, we incorporate the augmented system in an iterative optimization scheme to learn the dynamics of the interacting agents from observations of the swarm's density evolution over time. The results of this work can offer an alternative approach to study how animal flocks coordinate, create new control schemes for large networked systems, and serve as a central part of defense mechanisms against adversarial drone attacks.
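A minimal sketch of the Cucker-Smale velocity-alignment dynamics described above, simulated at the agent (microscopic) level with a forward-Euler step. The kernel form and all parameter values are illustrative assumptions, not the paper's learned parametric family.

```python
import numpy as np

def psi(r, K=1.0, beta=0.5):
    # Cucker-Smale communication kernel: influence decays with pairwise distance
    return K / (1.0 + r**2) ** beta

def cucker_smale_step(x, v, dt=0.05):
    # x, v: (N, d) positions and velocities of N agents in d dimensions
    diff = x[:, None, :] - x[None, :, :]            # pairwise displacements (N, N, d)
    dist = np.linalg.norm(diff, axis=-1)            # pairwise distances (N, N)
    w = psi(dist)                                   # interaction weights (N, N)
    # Each agent steers its velocity toward a weighted average of the others'
    dv = (w[:, :, None] * (v[None, :, :] - v[:, None, :])).mean(axis=1)
    return x + dt * v, v + dt * dv

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 2))
v = rng.normal(size=(20, 2))
v0_spread = v.std(axis=0).sum()
for _ in range(200):
    x, v = cucker_smale_step(x, v)
print(v.std(axis=0).sum())  # velocity spread shrinks: flocking alignment
```

The mean-field hydrodynamic system in the abstract describes the density-level limit of exactly this pairwise alignment rule as the number of agents grows.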






Interpretability and Generalization Bounds for Learning Spatial Physics

Queiruga, Alejandro Francisco, Gutman-Solo, Theo, Jiang, Shuai

arXiv.org Machine Learning

While there are many applications of ML to scientific problems that look promising, visuals can be deceiving. For scientific applications, actual quantitative accuracy is crucial. This work applies the rigor of numerical analysis for differential equations to machine learning by specifically quantifying the accuracy of applying different ML techniques to the elementary 1D Poisson differential equation. Beyond the quantity and discretization of data, we identify that the function space of the data is critical to the generalization of the model. We prove generalization bounds and convergence rates under finite data discretizations and restricted training data subspaces by analyzing the training dynamics and deriving optimal parameters for both a white-box differential equation discovery method and a black-box linear model. The analytically derived generalization bounds are replicated empirically. Similar lack of generalization is empirically demonstrated for deep linear models, shallow neural networks, and physics-specific DeepONets and Neural Operators. We theoretically and empirically demonstrate that generalization to the true physical equation is not guaranteed in each explored case. Surprisingly, we find that different classes of models can exhibit opposing generalization behaviors. Based on our theoretical analysis, we also demonstrate a new mechanistic interpretability lens on scientific models whereby Green's function representations can be extracted from the weights of black-box models. Our results inform a new cross-validation technique for measuring generalization in physical systems. We propose applying it to the Poisson equation as an evaluation benchmark of future methods.
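A minimal sketch of the Green's-function lens for the 1D Poisson equation -u'' = f with u(0) = u(1) = 0: a linear model mapping forcing samples f to solution samples u is, at best, the inverse of the discrete Laplacian, and its columns (scaled by the grid spacing) sample the analytic Green's function G(x, y) = min(x, y)(1 - max(x, y)). The finite-difference setup is a standard construction, not the paper's trained models.

```python
import numpy as np

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)  # interior grid points of [0, 1]

# Second-order finite-difference matrix for -u'' with zero Dirichlet boundaries
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# The ideal learned linear map f -> u is A^{-1}; scaling its columns by 1/h
# interprets each column as the response to a discrete delta, i.e. a sampled
# Green's function G(x_i, x_j)
G_disc = np.linalg.inv(A) / h

# Analytic Green's function of -u'' on [0, 1] with Dirichlet boundaries
G_true = np.minimum.outer(x, x) * (1 - np.maximum.outer(x, x))

print(np.abs(G_disc - G_true).max())  # agreement at the grid points
```

For this particular operator the agreement is exact up to round-off, which is what makes comparing a black-box model's weight matrix against G a usable interpretability check.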