Deep Learning Stochastic Gradient Descent

#artificialintelligence

This video will help you understand Stochastic Gradient Descent in deep neural networks in a very simplified manner. Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.


Machine Learning on Volatile Instances

arXiv.org Machine Learning

Due to the massive size of the neural network models and training datasets used in machine learning today, it is imperative to distribute stochastic gradient descent (SGD) by splitting up tasks such as gradient evaluation across multiple worker nodes. However, running distributed SGD can be prohibitively expensive because it may require specialized computing resources such as GPUs for extended periods of time. We propose cost-effective strategies to exploit volatile cloud instances that are cheaper than standard instances, but may be interrupted by higher-priority workloads. To the best of our knowledge, this work is the first to quantify how variations in the number of active worker nodes (as a result of preemption) affect SGD convergence and the time to train the model. By understanding these trade-offs between the preemption probability of the instances, accuracy, and training time, we derive practical strategies for configuring distributed SGD jobs on volatile instances such as Amazon EC2 spot instances and other preemptible cloud instances. Experimental results show that our strategies achieve good training performance at substantially lower cost.
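
To make the setting concrete, below is a minimal numpy sketch (an illustration of preemption's effect on distributed SGD, not the paper's actual strategies): each worker may be preempted independently at every step, so the number of gradients averaged into each update fluctuates.

    import numpy as np

    # Toy distributed-SGD loop for linear regression. All names and constants
    # here are illustrative assumptions: each of n_workers computes a minibatch
    # gradient, but any worker may be preempted at a given step with
    # probability p_preempt, so the effective number of gradients fluctuates.

    rng = np.random.default_rng(0)
    d, n_workers, p_preempt, lr = 10, 8, 0.3, 0.1
    w_true = rng.normal(size=d)
    w = np.zeros(d)

    def worker_gradient(w, batch=32):
        X = rng.normal(size=(batch, d))
        y = X @ w_true + 0.1 * rng.normal(size=batch)
        return X.T @ (X @ w - y) / batch  # least-squares minibatch gradient

    for step in range(200):
        n_active = int((rng.random(n_workers) > p_preempt).sum())  # survivors
        if n_active == 0:
            continue  # every worker preempted: no update this step
        grads = [worker_gradient(w) for _ in range(n_active)]
        w -= lr * np.mean(grads, axis=0)

    print("distance to w_true:", np.linalg.norm(w - w_true))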


Moniqua: Modulo Quantized Communication in Decentralized SGD

arXiv.org Machine Learning

Running Stochastic Gradient Descent (SGD) in a decentralized fashion has shown promising results. In this paper we propose Moniqua, a technique that allows decentralized SGD to use quantized communication. We prove in theory that Moniqua communicates a provably bounded number of bits per iteration, while converging at the same asymptotic rate as the original algorithm does with full-precision communication. Moniqua improves upon prior works in that it (1) requires zero additional memory, (2) works with 1-bit quantization, and (3) is applicable to a variety of decentralized algorithms. We demonstrate empirically that Moniqua converges faster with respect to wall clock time than other quantized decentralized algorithms. We also show that Moniqua is robust to very low bit-budgets, allowing 1-bit-per-parameter communication without compromising validation accuracy when training ResNet20 and ResNet110 on CIFAR10.
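
The modular-arithmetic idea at the heart of the paper can be sketched in a few lines of numpy (a hedged illustration of the modulo trick, not Moniqua's exact quantizer): if the receiver already holds an estimate x_hat whose elementwise error is below B/2, the sender only needs to communicate x mod B, which lies in a bounded range and is therefore cheap to quantize.

    import numpy as np

    # Hedged sketch of the modulo trick; the modulus B and tolerances below
    # are illustrative assumptions, not Moniqua's tuned parameters.

    def mod_encode(x, B):
        return np.mod(x, B)  # values in [0, B): bounded, so cheap to quantize

    def mod_decode(r, x_hat, B):
        # pick the representative of r (mod B) closest to the local estimate
        k = np.round((x_hat - r) / B)
        return r + k * B

    rng = np.random.default_rng(1)
    x = rng.normal(size=5)
    x_hat = x + rng.uniform(-0.2, 0.2, size=5)  # receiver's stale estimate
    B = 1.0                                     # must exceed 2x the estimate error

    sent = mod_encode(x, B)
    recovered = mod_decode(sent, x_hat, B)
    print(np.allclose(recovered, x))  # True whenever |x - x_hat| < B/2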


Towards an Efficient and General Framework of Robust Training for Graph Neural Networks

arXiv.org Machine Learning

Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks. As a result, there is a surge of interest in using these models for making potentially important decisions in high-regret applications. However, despite GNNs' impressive performance, it has been observed that carefully crafted perturbations of graph structures (or node attributes) lead them to make wrong predictions. The presence of these adversarial examples raises serious security concerns. Most existing robust GNN design/training methods are only applicable to white-box settings, where model parameters are known and gradient-based methods can be used by performing convex relaxation of the discrete graph domain. More importantly, these methods are neither efficient nor scalable, which makes them infeasible for time-sensitive tasks and massive graph datasets. To overcome these limitations, we propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and efficient manner. On several applications, we show that the proposed techniques are significantly less computationally expensive and, in some cases, more robust than state-of-the-art methods, making them suitable for large-scale problems that were out of reach for traditional robust training methods.
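
Because the framework leans on zeroth-order optimization, here is a textbook two-point zeroth-order gradient estimator in numpy (generic ZO estimation under Gaussian smoothing, not the paper's full robust-training pipeline):

    import numpy as np

    # Generic two-point zeroth-order gradient estimator: queries only loss
    # values, never gradients, which is what makes ZO methods usable when the
    # model or the discrete graph domain blocks backpropagation.

    def zo_gradient(loss, x, mu=1e-3, n_samples=20, seed=0):
        rng = np.random.default_rng(seed)
        g = np.zeros_like(x)
        for _ in range(n_samples):
            u = rng.normal(size=x.shape)  # random probe direction
            g += (loss(x + mu * u) - loss(x - mu * u)) / (2 * mu) * u
        return g / n_samples

    # usage: estimate the gradient of a quadratic, whose true gradient is x
    f = lambda x: 0.5 * np.sum(x ** 2)
    x0 = np.array([1.0, -2.0, 3.0])
    print(zo_gradient(f, x0))  # roughly [1, -2, 3], up to sampling noise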


Neural Architecture Search Could Tune AI's Algorithmic Heart - InformationWeek

#artificialintelligence

Data science has evolved far beyond its scientific roots. It now represents the heart and soul of many disruptive business applications. Everywhere you look, enterprise data science practices have become industrialized within 24x7 DevOps workflows. Under that trend, automation has come to practically every process in the machine-learning DevOps pipeline that surrounds AI. Modeling is the next and perhaps ultimate milestone in the move toward end-to-end data-science pipeline automation.


Cross-Domain Collaborative Filtering via Translation-based Learning

arXiv.org Machine Learning

With the proliferation of social media platforms and e-commerce sites, several cross-domain collaborative filtering strategies have recently been introduced to transfer knowledge of user preferences across domains. The main challenge of cross-domain recommendation is to weigh and learn users' different behaviors in multiple domains. In this paper, we propose a Cross-Domain collaborative filtering model following a Translation-based strategy, namely CDT. In our model, we learn the embedding space with translation vectors and capture high-order feature interactions in users' multiple preferences across domains. In doing so, we efficiently compute the transitivity between feature latent embeddings; that is, if feature pairs have high interaction weights in the latent space, then feature embeddings with no observed interactions across the domains will be closely related as well. We formulate our objective function as a ranking problem in factorization machines and learn the model's parameters via gradient descent. In addition, to better capture the non-linearity in user preferences across domains, we extend the proposed CDT model using a deep learning strategy, namely DeepCDT. Our experiments on six publicly available cross-domain tasks demonstrate the effectiveness of the proposed models, outperforming other state-of-the-art cross-domain strategies.
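
A toy numpy sketch of the translation idea (the names, shapes, and distance-based score are illustrative assumptions, not the exact CDT objective): a user embedding plus a per-domain translation vector should land near the embeddings of items the user prefers in that domain.

    import numpy as np

    # Hypothetical embedding tables; a real model would learn these by
    # optimizing a ranking loss with gradient descent.
    rng = np.random.default_rng(2)
    k, n_users, n_items, n_domains = 16, 100, 50, 2
    U = rng.normal(scale=0.1, size=(n_users, k))             # user embeddings
    V = rng.normal(scale=0.1, size=(n_domains, n_items, k))  # items per domain
    T = rng.normal(scale=0.1, size=(n_domains, k))           # translations

    def score(u, d, i):
        # higher score = translated user lands closer to the item
        return -np.sum((U[u] + T[d] - V[d, i]) ** 2)

    # rank all items of domain 1 for user 0
    ranking = np.argsort([-score(0, 1, i) for i in range(n_items)])
    print(ranking[:5])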


SaaS: Speed as a Supervisor for Semi-supervised Learning

arXiv.org Machine Learning

We introduce the SaaS Algorithm for semi-supervised learning, which uses learning speed during stochastic gradient descent in a deep neural network to measure the quality of an iterative estimate of the posterior probability of unknown labels. Training speed in supervised learning correlates strongly with the percentage of correct labels, so we use it as an inference criterion for the unknown labels, without attempting to infer the model parameters at first. Despite its simplicity, SaaS achieves state-of-the-art results in semi-supervised learning benchmarks.
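
The underlying signal is easy to reproduce in a toy experiment (an illustration of the speed criterion only, not the SaaS algorithm): logistic regression fitted to heavily mislabeled data loses loss more slowly than the same model fitted to clean labels.

    import numpy as np

    # Compare the training speed (total loss drop over a fixed budget of
    # gradient steps) on clean labels versus 40%-corrupted labels.
    rng = np.random.default_rng(3)
    d, n = 20, 500
    w_true = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y_clean = (X @ w_true > 0).astype(float)
    y_noisy = y_clean.copy()
    flip = rng.random(n) < 0.4
    y_noisy[flip] = 1 - y_noisy[flip]

    def training_speed(y, steps=50, lr=0.5):
        w = np.zeros(d)
        def loss(w):
            p = 1 / (1 + np.exp(-X @ w))
            return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        l0 = loss(w)
        for _ in range(steps):
            p = 1 / (1 + np.exp(-X @ w))
            w -= lr * X.T @ (p - y) / n  # logistic-regression gradient step
        return l0 - loss(w)             # loss drop = "speed"

    print("clean labels: ", training_speed(y_clean))
    print("noisy labels: ", training_speed(y_noisy))  # noticeably smaller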


Learning with Opponent-Learning Awareness

arXiv.org Artificial Intelligence

Multi-agent settings are quickly gathering importance in machine learning. This includes a plethora of recent work on deep multi-agent reinforcement learning, but it also extends to hierarchical RL, generative adversarial networks and decentralised optimisation. In all these settings the presence of multiple learning agents renders the training problem non-stationary and often leads to unstable training or undesired final results. We present Learning with Opponent-Learning Awareness (LOLA), a method in which each agent shapes the anticipated learning of the other agents in the environment. The LOLA learning rule includes an additional term that accounts for the impact of one agent's policy on the anticipated parameter update of the other agents. Preliminary results show that the encounter of two LOLA agents leads to the emergence of tit-for-tat and therefore cooperation in the iterated prisoners' dilemma, while independent learning does not. In this domain, LOLA also receives higher payouts than a naive learner, and is robust against exploitation by higher-order gradient-based methods. Applied to repeated matching pennies, LOLA agents converge to the Nash equilibrium. In a round-robin tournament we show that LOLA agents can successfully shape the learning of a range of multi-agent learning algorithms from the literature, resulting in the highest average returns on the IPD. We also show that the LOLA update rule can be efficiently calculated using an extension of the policy gradient estimator, making the method suitable for model-free RL. The method thus scales to large parameter and input spaces and nonlinear function approximators. We also apply LOLA to a grid-world task with an embedded social dilemma using deep recurrent policies and opponent modelling. Again, by explicitly considering the learning of the other agent, LOLA agents learn to cooperate out of self-interest.
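
A hand-derived miniature of the LOLA correction on a bilinear matching-pennies surrogate (an illustration only, not the paper's policy-gradient estimator): each player differentiates through one anticipated gradient step of its opponent, which turns the outward spiral of naive simultaneous gradient descent into convergence toward the Nash point at the origin.

    # Losses: L1 = a*b for player 1, L2 = -a*b for player 2; Nash point (0, 0).
    # The LOLA gradients below are derived by hand by substituting one
    # anticipated opponent step (with look-ahead rate eta) into each loss.

    eta, lr = 0.5, 0.1

    def naive_grads(a, b):
        return b, -a                      # dL1/da, dL2/db

    def lola_grads(a, b):
        # player 1 minimizes L1(a, b - eta * dL2/db) = a*b + eta*a**2
        # player 2 minimizes L2(a - eta * dL1/da, b) = -a*b + eta*b**2
        return b + 2 * eta * a, -a + 2 * eta * b

    for name, grads in [("naive", naive_grads), ("LOLA", lola_grads)]:
        a, b = 1.0, 1.0
        for _ in range(500):
            ga, gb = grads(a, b)
            a, b = a - lr * ga, b - lr * gb
        # naive spirals outward; LOLA converges to ~(0, 0)
        print(name, "->", round(a, 4), round(b, 4))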


Gradient Descent Quantizes ReLU Network Features

arXiv.org Machine Learning

Deep neural networks are often trained in the over-parametrized regime (i.e., with far more parameters than training examples), and understanding why the training converges to solutions that generalize remains an open problem. Several studies have highlighted the fact that the training procedure, i.e., mini-batch Stochastic Gradient Descent (SGD), leads to solutions that have specific properties in the loss landscape. However, even with plain Gradient Descent (GD) the solutions found in the over-parametrized regime are quite good, and this phenomenon is poorly understood. We propose an analysis of this behavior for feedforward networks with a ReLU activation function under the assumption of small initialization and learning rate, and uncover a quantization effect: the weight vectors tend to concentrate at a small number of directions determined by the input data. As a consequence, we show that for given input data there are only finitely many "simple" functions that can be obtained, independent of the network size. This puts these functions in analogy to linear interpolations (for given input data there are finitely many triangulations, each of which determines a function by linear interpolation). We ask whether this analogy extends to the generalization properties: while the usual distribution-independent generalization property does not hold, it could be that, e.g., for smooth functions with bounded second derivative an approximation property holds which could "explain" generalization of networks (of unbounded size) to unseen inputs.
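
The claimed effect is cheap to probe empirically. The rough numpy sketch below (an informal check under assumed toy dimensions, not the paper's analysis) trains a wide one-hidden-layer ReLU network from a tiny initialization with full-batch gradient descent, then counts the distinct directions onto which the grown weight vectors have collapsed.

    import numpy as np

    # Informal check: with small initialization and learning rate, the
    # incoming weight vectors of a shallow ReLU network should concentrate
    # on only a few data-determined directions. Sizes below are assumptions.

    rng = np.random.default_rng(4)
    n, d, h, lr, steps = 8, 2, 20, 5e-3, 100_000
    X = rng.normal(size=(n, d))
    y = rng.normal(size=n)
    W = 1e-3 * rng.normal(size=(h, d))   # hidden weights, tiny init
    a = 1e-3 * rng.normal(size=h)        # output weights, tiny init

    for _ in range(steps):
        Z = X @ W.T                       # pre-activations, shape (n, h)
        H = np.maximum(Z, 0.0)            # ReLU
        r = H @ a - y                     # residuals
        grad_a = H.T @ r / n
        grad_W = ((np.outer(r, a) * (Z > 0)).T @ X) / n
        a -= lr * grad_a
        W -= lr * grad_W

    norms = np.linalg.norm(W, axis=1)
    dirs = W[norms > 1e-2] / norms[norms > 1e-2, None]
    clusters = []                         # greedy grouping by cosine similarity
    for v in dirs:
        if not any(abs(v @ c) > 0.99 for c in clusters):
            clusters.append(v)
    print("grown units:", len(dirs), "| distinct directions:", len(clusters))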


The Hidden Vulnerability of Distributed Learning in Byzantium

arXiv.org Machine Learning

While machine learning is going through an era of celebrated success, concerns have been raised about the vulnerability of its backbone: stochastic gradient descent (SGD). Recent approaches have been proposed to ensure the robustness of distributed SGD against adversarial (Byzantine) workers sending poisoned gradients during the training phase. Some of these approaches have been proven Byzantine-resilient: they ensure the convergence of SGD despite the presence of a minority of adversarial workers. We show in this paper that convergence is not enough. In high dimension $d \gg 1$, an adversary can exploit the loss function's non-convexity to make SGD converge to ineffective models. More precisely, we bring to light that existing Byzantine-resilient schemes leave a margin of poisoning of $\Omega(f(d))$, where $f(d)$ increases at least like $\sqrt[p]{d}$. Based on this leeway, we build a simple attack and experimentally show its strong effectiveness on CIFAR-10 and MNIST. We introduce Bulyan, and prove it significantly reduces the attacker's leeway to a narrow $O(1/\sqrt{d})$ bound. We empirically show that Bulyan does not suffer the fragility of existing aggregation rules and, at a reasonable cost in terms of required batch size, achieves convergence as if only non-Byzantine gradients had been used to update the model.
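
For intuition, the coordinate-wise trimming that gives Bulyan its narrow leeway can be sketched as follows (a simplified stand-in: the actual rule first runs a Krum-style selection stage, omitted here). Trimming caps how far f poisoned gradients can drag any single coordinate of the aggregate.

    import numpy as np

    def trimmed_mean(grads, f):
        # grads: (n_workers, d). Drop the f largest and f smallest values per
        # coordinate before averaging, so f Byzantine workers cannot move any
        # coordinate arbitrarily far.
        g = np.sort(grads, axis=0)
        return g[f:len(grads) - f].mean(axis=0)

    rng = np.random.default_rng(5)
    honest = rng.normal(loc=1.0, scale=0.1, size=(8, 4))  # near the true gradient
    byzantine = np.full((2, 4), 100.0)                    # poisoned gradients
    grads = np.vstack([honest, byzantine])

    print("plain mean:  ", np.round(grads.mean(axis=0), 2))      # hijacked
    print("trimmed mean:", np.round(trimmed_mean(grads, 2), 2))  # stays near 1.0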