How to Build and Iterate Chatbots Effectively

#artificialintelligence

Many businesses are building chatbots to provide personalized services to fans and customers and to build deeper relationships with them. Chatbots allow brands to have one-to-one connections and conversations with their users.


Reducing the variance in online optimization by transporting past gradients

Neural Information Processing Systems

Most stochastic optimization methods use gradients once before discarding them. While variance reduction methods have shown that reusing past gradients can be beneficial when there is a finite number of datapoints, they do not easily extend to the online setting. One issue is the staleness that comes from using past gradients. We propose to correct this staleness using the idea of implicit gradient transport (IGT), which transforms gradients computed at previous iterates into gradients evaluated at the current iterate without using the Hessian explicitly. In addition to reducing the variance and bias of our updates over time, IGT can be used as a drop-in replacement for the gradient estimate in a number of well-understood methods such as heavy ball or Adam.
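
To make the idea above concrete, here is a minimal Python sketch of SGD driven by an IGT-style gradient estimate. It is not code from the paper: the averaging weight gamma_t = t/(t+1), the extrapolated evaluation point, and the function names are assumptions based on the abstract, so the details should be checked against the paper itself.

```python
import numpy as np

def igt_sgd(grad_fn, theta0, lr=0.1, n_steps=100):
    """Sketch of SGD with an IGT-style transported gradient estimate.

    grad_fn(theta) is assumed to return a stochastic gradient at theta.
    Each new gradient is evaluated at an extrapolated point so that the
    running average tracks the gradient at the current iterate without
    forming the Hessian explicitly.
    """
    theta = np.asarray(theta0, dtype=float).copy()
    theta_prev = theta.copy()
    g = np.zeros_like(theta)               # transported gradient estimate
    for t in range(n_steps):
        gamma = t / (t + 1.0)              # assumed averaging schedule
        # gamma / (1 - gamma) = t, so the gradient is evaluated at theta + t * (theta - theta_prev)
        shift = t * (theta - theta_prev)
        g = gamma * g + (1.0 - gamma) * grad_fn(theta + shift)
        theta_prev = theta.copy()
        theta = theta - lr * g             # plain SGD step; g could also feed heavy ball or Adam
    return theta
```

On a noisy quadratic, for instance, grad_fn could return A @ theta - b plus Gaussian noise; the transported estimate then averages many samples while staying aligned with the current iterate.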


Merantix

#artificialintelligence

We build partnerships with leading global organizations to assemble complex datasets and gain insight into their business needs. We target scalable problems faced by many clients. We develop models that deliver actionable insights for our partners. We use state-of-the-art machine learning techniques and infrastructure and apply rigorous measures of success. We iterate to transform models into standalone products.


Nonlinear Acceleration of Stochastic Algorithms

Neural Information Processing Systems

Extrapolation methods use the last few iterates of an optimization algorithm to produce a better estimate of the optimum. They were shown to achieve optimal convergence rates in a deterministic setting using simple gradient iterates. Here, we study extrapolation methods in a stochastic setting, where the iterates are produced by either a simple or an accelerated stochastic gradient algorithm. We first derive convergence bounds for arbitrary, potentially biased perturbations, then produce asymptotic bounds using the ratio between the variance of the noise and the accuracy of the current point. Finally, we apply this acceleration technique to stochastic algorithms such as SGD, SAGA, SVRG and Katyusha in different settings, and show significant performance gains.
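
As a rough illustration of how the last few iterates can be combined, the following Python sketch implements a regularized, Anderson-type extrapolation step in the spirit of the abstract. The residual definition, the regularization parameter lam, and the normalization are assumptions for the sketch, not the exact procedure from the paper.

```python
import numpy as np

def regularized_extrapolation(iterates, lam=1e-8):
    """Sketch of a regularized nonlinear (Anderson-type) extrapolation step.

    iterates: list of k+1 parameter vectors x_0, ..., x_k produced by some
    optimization algorithm. Returns a weighted combination of the iterates
    whose weights sum to one and approximately minimize the norm of the
    combined successive differences, ||R c||^2 + lam * ||c||^2.
    """
    X = np.stack(iterates, axis=1)                 # shape (d, k+1)
    R = X[:, 1:] - X[:, :-1]                       # residuals r_i = x_{i+1} - x_i, shape (d, k)
    k = R.shape[1]
    M = R.T @ R
    M = M / np.linalg.norm(M)                      # rescale before regularizing (assumed)
    z = np.linalg.solve(M + lam * np.eye(k), np.ones(k))
    c = z / z.sum()                                # weights summing to one
    return X[:, :k] @ c                            # extrapolated estimate of the optimum
```

Applied every few iterations to a trailing window of SGD, SAGA, or SVRG iterates, an extrapolation step of this kind is one way to realize the acceleration described above.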


Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems

arXiv.org Machine Learning

In this paper we study the smooth convex-concave saddle point problem. Specifically, we analyze the last iterate convergence properties of the Extragradient (EG) algorithm. It is well known that the ergodic (averaged) iterates of EG converge at a rate of $O(1/T)$ (Nemirovski, 2004). In this paper, we show that the last iterate of EG converges at a rate of $O(1/\sqrt{T})$. To the best of our knowledge, this is the first paper to provide a convergence rate guarantee for the last iterate of EG for the smooth convex-concave saddle point problem. Moreover, we show that this rate is tight by proving a lower bound of $\Omega(1/\sqrt{T})$ for the last iterate. This lower bound therefore shows a quadratic separation of the convergence rates of ergodic and last iterates in smooth convex-concave saddle point problems.
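
For a concrete picture of the last versus averaged iterate, here is a minimal Python sketch of the Extragradient method on an unconstrained bilinear toy problem f(x, y) = x^T A y, returning both the last iterate and the ergodic (time-averaged) iterate so the two behaviors can be compared numerically. The bilinear objective, step size, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def extragradient_bilinear(A, x0, y0, eta=0.1, n_steps=1000):
    """Sketch of Extragradient (EG) for min_x max_y f(x, y) = x^T A y.

    Returns ((x_last, y_last), (x_avg, y_avg)) so that the last iterate
    can be compared with the ergodic average of the iterates.
    """
    x = np.asarray(x0, dtype=float).copy()
    y = np.asarray(y0, dtype=float).copy()
    x_avg, y_avg = np.zeros_like(x), np.zeros_like(y)
    for t in range(1, n_steps + 1):
        # gradients of the bilinear objective: grad_x f = A y, grad_y f = A^T x
        gx, gy = A @ y, A.T @ x
        x_half, y_half = x - eta * gx, y + eta * gy          # exploration (mid-point) step
        gx_half, gy_half = A @ y_half, A.T @ x_half
        x, y = x - eta * gx_half, y + eta * gy_half          # update with the mid-point gradient
        x_avg += (x - x_avg) / t                             # running ergodic averages
        y_avg += (y - y_avg) / t
    return (x, y), (x_avg, y_avg)
```

On this toy problem the unique saddle point is the origin, so the distance of the last and averaged iterates from zero gives a direct empirical read on the two convergence rates discussed above.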