PETAL: Physics Emulation Through Averaged Linearizations for Solving Inverse Problems
Inverse problems describe the task of recovering an underlying signal of interest from observables. Typically, the observables are related to the unknown signal through some non-linear forward model. Inverting the non-linear forward model can be computationally expensive, as it often involves computing and inverting a linearization at a series of estimates. Rather than inverting the physics-based model, we instead train a surrogate forward model (emulator) and leverage modern auto-grad libraries to solve for the input within a classical optimization framework. Current emulators are trained in a black-box, supervised machine-learning fashion and fail to take advantage of any existing knowledge of the forward model. In this article, we propose a simple learned weighted-average model that embeds linearizations of the forward model around various reference points into the model itself, explicitly incorporating known physics. Grounding the learned model in physics-based linearizations improves forward-modeling accuracy and provides richer, physics-based gradient information during the inversion process, leading to more accurate signal recovery. We demonstrate the efficacy on an ocean acoustic tomography (OAT) example that aims to recover ocean sound speed profile (SSP) variations from acoustic observations.
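The weighted-average emulator described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the toy forward model, the choice of reference points, and the distance-based softmax weights (which PETAL learns rather than fixes) are all assumptions made for demonstration.

```python
import numpy as np

# Hypothetical toy forward model standing in for the true physics
# (the paper's OAT model maps sound speed profiles to acoustic data).
def forward(x):
    return np.array([np.sin(x[0]) + x[1] ** 2, x[0] * x[1]])

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

# Reference points around which the physics is linearized.
refs = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([-1.0, 0.5])]
lins = [(r, forward(r), jacobian(forward, r)) for r in refs]

def emulator(x, temp=1.0):
    """Weighted average of linearizations. The weights here are a
    softmax over negative distances to the reference points; in the
    paper the weighting is learned."""
    d = np.array([np.linalg.norm(x - r) for r, _, _ in lins])
    w = np.exp(-d / temp)
    w /= w.sum()
    return sum(wi * (fr + J @ (x - r)) for wi, (r, fr, J) in zip(w, lins))

x = np.array([0.5, 0.5])
print(np.linalg.norm(emulator(x) - forward(x)))  # emulation error
```

Because the emulator is a smooth combination of affine maps, auto-grad libraries can differentiate through it cheaply during the inversion loop.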
Sliding Mode Control and Subspace Stabilization Methodology for the Orbital Stabilization of Periodic Trajectories
Surov, Maksim, Freidovich, Leonid
The problem of orbital stabilization of periodic trajectories has been addressed in a series of publications: [1, 2, 3, 4, 5, 6, 7]. Many of these works, e.g., [1, 2, 4, 7], employ the transverse linearization approach, which approximates the dynamics near a reference periodic orbit by a linear time-varying (LTV) system with periodic coefficients. As shown in [2, 8], a feedback designed to stabilize the trivial solution of this auxiliary LTV system can be used to construct a control law that stabilizes the orbit of the original nonlinear system. Under the mild assumption of controllability of the LTV system over one period, the LQR approach can be used to design the feedback. The practical effectiveness of this method was demonstrated in experiments with real robotic systems in [9, 10, 11]. A substantially different stabilization method for the LTV system was proposed in [5], where the authors developed an alternative scheme combining Floquet theory with sliding-mode control. Following this line of work, we show that a specific feedback linearization of the transverse dynamics yields an LTV system endowed with a stable invariant subspace. In this setting, the control objective reduces to driving all trajectories into the stable subspace, which is achieved via sliding-mode-based control. This method does not require solving the computationally demanding periodic LQR problem.
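The sliding-mode idea of driving trajectories onto an attractive surface can be sketched on a hypothetical double integrator. This is only a stand-in: the paper works with an LTV transverse-dynamics system whose stable invariant subspace plays the role that the surface s = 0 plays below.

```python
import numpy as np

# Minimal sliding-mode sketch on a double integrator x1' = x2, x2' = u.
# Sliding variable s = c*x1 + x2; on s = 0 the dynamics reduce to
# x1' = -c*x1, which is stable (the analogue of the stable subspace).
c, k, dt = 1.0, 5.0, 1e-3
x = np.array([1.0, 0.0])            # [position, velocity]
for _ in range(5000):               # 5 seconds of Euler integration
    s = c * x[0] + x[1]             # distance from the sliding surface
    u = -k * np.sign(s)             # discontinuous switching control
    x = x + dt * np.array([x[1], u])
print(abs(c * x[0] + x[1]), abs(x[0]))  # |s| and |x1| are driven small
```

Once the state reaches s = 0 it chatters in a small neighborhood of the surface (of order k*dt under this discretization) while x1 decays, which mirrors the objective of steering all trajectories into the stable subspace without solving a periodic LQR problem.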
Coordinate Descent for Network Linearization
Rakhlin, Vlad, Jevnisek, Amir, Avidan, Shai
ReLU activations are the main bottleneck in private inference based on ResNet networks, because they incur significant inference latency. Reducing the ReLU count is a discrete optimization problem, and there are two common ways to approach it. Most current state-of-the-art methods are based on a smooth approximation that jointly optimizes network accuracy and the ReLU budget at once; however, the final hard-thresholding step of the optimization usually introduces a large performance loss. We take an alternative approach that works directly in the discrete domain by leveraging coordinate descent as our optimization framework. In contrast to previous methods, this yields a sparse solution by design. We demonstrate, through extensive experiments, that our method is state of the art on common benchmarks.
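Coordinate descent over a binary keep/drop mask can be sketched as follows. The per-ReLU utility values and the budget penalty below are invented for illustration; in the paper the objective is actual network accuracy under a ReLU budget, evaluated by the training pipeline rather than a closed-form score.

```python
import numpy as np

# Toy stand-in objective: per-ReLU "accuracy gain" minus a cost per
# kept ReLU. The gains are random placeholders, not real measurements.
rng = np.random.default_rng(0)
gain = rng.normal(0.0, 1.0, size=16)   # hypothetical utility per ReLU
lam = 0.5                              # cost of keeping one ReLU

def objective(mask):
    return float(gain @ mask - lam * mask.sum())

mask = np.ones(16)                     # start with every ReLU kept
improved = True
while improved:                        # coordinate descent in {0,1}^n
    improved = False
    for i in range(mask.size):
        flipped = mask.copy()
        flipped[i] = 1 - flipped[i]    # flip one coordinate at a time
        if objective(flipped) > objective(mask):
            mask, improved = flipped, True
print(int(mask.sum()), round(objective(mask), 3))
```

Each accepted flip strictly improves the objective, so the loop terminates, and the solution is sparse by construction: a ReLU is kept only when its (toy) gain exceeds the per-ReLU cost.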
5b4a2146246bc3a3a941f32225bbb792-AuthorFeedback.pdf
We thank the reviewers for the detailed feedback.

Q: Why can we assume smoothness / differentiability of the expected utility? The paper should address whether there are ways to use unbounded losses (e.g., by switching from utilities directly to inference...).
A: The assumptions on utilities (and hence on losses) arise from the derivation of the optimization objective (Eq. 1). In Section 3.2, we relax these assumptions. In practice, the procedure seems to work well for unbounded losses as well. We believe this could be possible by designing a kind of compound loss...

Q: Why exactly should the utility infimum be 0?
A: We typically want this, but β > 0 can be used for reducing the calibration effect if so desired.