A Dual Augmented Block Minimization Framework for Learning with Limited Memory

Neural Information Processing Systems

In the past few years, several techniques have been proposed for training linear Support Vector Machines (SVMs) in the limited-memory setting, where a dual block-coordinate descent (dual-BCD) method is used to balance the cost spent on I/O and computation. In this paper, we consider the more general setting of regularized \emph{Empirical Risk Minimization (ERM)} when data cannot fit into memory. In particular, we generalize the existing block minimization framework based on strong duality and the \emph{Augmented Lagrangian} technique to achieve global convergence for ERM with an arbitrary convex loss function and regularizer. The block minimization framework is flexible in the sense that, given a solver that works under sufficient memory, one can integrate it with the framework to obtain a solver that is globally convergent under the limited-memory condition. We conduct experiments on L1-regularized classification and regression problems to corroborate our convergence theory and compare the proposed framework to algorithms adapted from the online and distributed settings, showing the superiority of the proposed approach on data ten times larger than the memory capacity.
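The dual-BCD baseline the abstract refers to can be illustrated with a minimal sketch: data is partitioned into blocks (in practice read from disk one block at a time), and each inner pass runs dual coordinate descent only over the block currently in memory while maintaining a shared primal vector. This is a plain dual-BCD sketch for an L2-regularized linear SVM, not the paper's Augmented Lagrangian generalization; the function name and block layout are illustrative assumptions.

```python
import numpy as np

def dual_bcd_svm(blocks, C=1.0, epochs=10):
    """Dual block-coordinate descent for a linear SVM (hinge loss, L2 reg).

    `blocks` is a list of (X, y) chunks standing in for data stored on disk;
    in a real limited-memory setting each block would be loaded from storage
    just before its inner pass, so only one block resides in memory at a time.
    """
    d = blocks[0][0].shape[1]
    w = np.zeros(d)                              # shared primal vector, kept in memory
    alpha = [np.zeros(len(y)) for _, y in blocks]  # dual variables, one array per block
    for _ in range(epochs):
        for b, (X, y) in enumerate(blocks):      # one "I/O pass" per block
            for i in range(len(y)):
                g = y[i] * X[i].dot(w) - 1.0     # gradient of the dual coordinate
                qii = X[i].dot(X[i])
                if qii == 0.0:
                    continue
                # projected Newton step, clipped to the box [0, C]
                a_new = min(max(alpha[b][i] - g / qii, 0.0), C)
                w += (a_new - alpha[b][i]) * y[i] * X[i]
                alpha[b][i] = a_new
    return w
```

Because `w` is the only state shared across blocks, the memory footprint of the inner solver is bounded by the largest single block plus one dense vector, which is the balance between I/O and computation the abstract describes.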

Automatic Machine Learning Frameworks of the Next Generation


Automated Machine Learning (AutoML) is the process of building a complete Machine Learning pipeline automatically, without (or with minimal) human help. The main goal of an AutoML framework is to find the best possible ML pipeline within a selected time budget. To do so, AutoML frameworks train many different ML algorithms and tune their hyper-parameters. Performance can be improved by increasing the number of algorithms and hyper-parameter settings checked, which means longer computation time. What an AutoML analysis should optimize for ultimately depends on the user.
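The budgeted search loop described above can be sketched in a few lines. This is a minimal, hypothetical sketch of the inner loop only: `candidates` stands in for the pipeline configurations a real AutoML framework would generate, and `evaluate` stands in for training a pipeline and scoring it on validation data (higher is better); both names are assumptions, not any particular framework's API.

```python
import random
import time

def automl_search(candidates, evaluate, time_budget_s=1.0, seed=0):
    """Budgeted random search over candidate configurations.

    Shuffles the candidate pool, evaluates configurations one by one,
    and stops as soon as the time budget is spent, returning the best
    configuration seen so far together with its score.
    """
    rng = random.Random(seed)
    deadline = time.monotonic() + time_budget_s
    best_cfg, best_score = None, float("-inf")
    pool = list(candidates)
    rng.shuffle(pool)                  # random search order
    for cfg in pool:
        if time.monotonic() > deadline:  # budget exhausted: return best so far
            break
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

The trade-off the abstract mentions is visible directly: a larger `candidates` pool can only help if the budget allows evaluating more of it.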

The Anatomy of Deep Learning Frameworks


Deep Learning, whether you like it or not, is here to stay, and as with any tech gold rush comes a plethora of options that can seem daunting to newcomers. If you were to start off with deep learning, one of the first questions to ask is: which framework should you learn? I'd say that instead of simple trial and error, trying to understand the building blocks of all these frameworks will help you make an informed decision. Common choices include Theano, TensorFlow, Torch, and Keras. Each of these has its own pros and cons and its own way of doing things.

4 tips for choosing the right deep learning framework


As deep learning (DL) has become more popular over the years, an increasing number of businesses and software engineers have designed frameworks to make DL more usable. Similar to a machine learning framework, a deep learning framework is an application or tool that generates deep learning models quickly and with minimal effort, without diving deep into the underlying algorithms. Frameworks define deep learning models from pre-built and optimized components. Deep learning engineers use a framework to handle most of the work rather than manually coding hundreds of lines. Today, the range of available frameworks is wide enough that a new deep learning user may have difficulty picking one.