Gradient perturbation: For a parametric function f_θ(x) parameterized by θ and a loss function L(f_θ(x), y), the usual mini-batched first-order optimizers update θ using the mean gradient over a batch of N examples, g_t = (1/N) Σ_{i=1}^{N} ∇_θ L(f_θ(x_i), y_i); gradient perturbation adds noise to g_t before the update is applied.
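A minimal sketch of one such step, assuming a linear model f_θ(x) = x·θ with squared loss and Gaussian noise of a hypothetical scale `noise_std` added to the batch-mean gradient (the function name and parameters are illustrative, not from the original text):

```python
import numpy as np

def perturbed_gradient_step(theta, X, y, lr=0.1, noise_std=0.01, rng=None):
    """One mini-batched first-order update with Gaussian gradient perturbation.

    Sketch only: f_theta(x) = x @ theta with squared loss; noise_std is an
    assumed perturbation scale, not a calibrated privacy parameter.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = X.shape[0]
    residual = X @ theta - y                   # f_theta(x_i) - y_i for each i
    g_t = (X.T @ residual) / N                 # mean gradient over the batch
    g_t = g_t + rng.normal(0.0, noise_std, size=g_t.shape)  # perturb gradient
    return theta - lr * g_t                    # first-order update

# Usage: a few perturbed steps on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta
theta = np.zeros(3)
for _ in range(200):
    theta = perturbed_gradient_step(theta, X, y, rng=rng)
```

With small noise the iterates still settle near the least-squares solution; larger `noise_std` trades accuracy for stronger perturbation.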