smaller network
- Asia > Singapore (0.04)
- North America > United States (0.04)
- Africa > Ethiopia > Addis Ababa > Addis Ababa (0.04)
- Information Technology > Communications (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.98)
- Information Technology > Artificial Intelligence > Machine Learning > Computational Learning Theory (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
A multilevel approach to accelerate the training of Transformers
Lauga, Guillaume, Chaumette, Maël, Desainte-Maréville, Edgar, Lasalle, Étienne, Lebeurrier, Arthur
In this article, we investigate the potential of multilevel approaches to accelerate the training of transformer architectures. Using an ordinary differential equation (ODE) interpretation of these architectures, we propose an appropriate way of varying the discretization of these ODE Transformers in order to accelerate the training. We validate our approach experimentally by a comparison with the standard training procedure.
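The ODE view mentioned in the abstract treats each residual block as one explicit Euler step of a continuous dynamical system, so "adding layers" corresponds to refining the time discretization. The following is a minimal sketch of that idea, assuming a toy scalar vector field in place of a real attention/MLP block (all names here are illustrative, not the authors' code):

```python
import math

def f(x):
    # Toy "layer" vector field; in a real Transformer this role is played
    # by the attention + feed-forward update on the hidden state.
    return math.tanh(x)

def euler_trajectory(x0, n_steps, T=1.0):
    """Integrate dx/dt = f(x) on [0, T] with n_steps explicit Euler steps.
    Each step plays the role of one residual block: x <- x + h * f(x)."""
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + h * f(x)
    return x

# A multilevel schedule in this picture: a coarse discretization (few
# "layers") approximates the same trajectory as a fine one, so it can serve
# as a cheap warm start before refining to the full-depth model.
coarse = euler_trajectory(0.5, n_steps=4)   # 4-block network
fine = euler_trajectory(0.5, n_steps=32)    # 32-block network
```

Both discretizations follow the same underlying ODE, which is what makes transferring progress between depths plausible.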
Learning to grow machine-learning models

It's no secret that OpenAI's ChatGPT has some incredible capabilities -- for instance, the chatbot can write poetry that resembles Shakespearean sonnets or debug code for a computer program. These abilities are made possible by the massive machine-learning model that ChatGPT is built upon. Researchers have found that when these types of models become large enough, extraordinary capabilities emerge. But bigger models also require more time and money to train. The training process involves showing hundreds of billions of examples to a model.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.40)
- North America > United States > Texas > Travis County > Austin (0.05)
Efficiently Learning Small Policies for Locomotion and Manipulation
Hegde, Shashank, Sukhatme, Gaurav S.
Neural control of memory-constrained, agile robots requires small yet highly performant models. We leverage graph hypernetworks to learn graph hyperpolicies trained with off-policy reinforcement learning, resulting in networks that are two orders of magnitude smaller than commonly used networks yet encode policies comparable to those encoded by much larger networks trained on the same task. We show that our method can be appended to any off-policy reinforcement learning algorithm, without any change in hyperparameters, by presenting results across locomotion and manipulation tasks. Further, we obtain an array of working policies with differing numbers of parameters, allowing us to pick an optimal network for the memory constraints of a system. Training multiple policies with our method is as sample-efficient as training a single policy. Finally, we provide a method to select the best architecture given a constraint on the number of parameters. Project website: https://sites.google.com/usc.edu/graphhyperpolicy
- North America > United States > California (0.14)
- Europe > France (0.04)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
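The core idea of a hypernetwork is that one small generator network emits the weights of many differently-sized policies, from which the one fitting a given memory budget can be chosen. A minimal sketch, with a fixed linear map standing in for the learned generator (all function names here are hypothetical, not from the paper's codebase):

```python
def hypernetwork(z, policy_shape):
    """Hypothetical hypernetwork: maps an embedding z to the flat weight
    vector of a linear policy of shape (in_dim, out_dim). A deterministic
    map stands in for the learned weight generator."""
    in_dim, out_dim = policy_shape
    n_weights = in_dim * out_dim
    return [z * (i + 1) * 0.01 for i in range(n_weights)]

def policy_act(weights, obs, shape):
    """Run the generated linear policy on an observation vector."""
    in_dim, out_dim = shape
    return [sum(weights[o * in_dim + i] * obs[i] for i in range(in_dim))
            for o in range(out_dim)]

# One generator can emit policies of several sizes; at deployment time we
# would pick the smallest one that fits the robot's memory constraints.
for shape in [(4, 2), (8, 2)]:
    w = hypernetwork(z=1.0, policy_shape=shape)
    action = policy_act(w, obs=[0.1] * shape[0], shape=shape)
```

In the paper's setting the generator is conditioned on the policy's graph structure and trained end-to-end with the RL objective; this sketch only shows the weight-generation mechanic.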
Photos to 3D Scenes in Milliseconds
As if taking a picture weren't a challenging enough technological feat, we are now doing the opposite: modeling the world from pictures. I've covered amazing AI-based models that can take images and turn them into high-quality scenes. One challenging task consists of taking a few 2-dimensional images and reconstructing how the object or person would look in the real, three-dimensional world. You can easily see how useful this technology is for many industries like video games, animation movies, or advertising. Take a few pictures and instantly have a realistic model to insert into your product.
How knowledge distillation compresses neural networks
If you've ever used a neural network to solve a complex problem, you know they can be enormous in size, containing millions of parameters. For instance, the famous BERT model has about 110 million. To illustrate the point, consider the number of parameters for the most common architectures in natural language processing (NLP), as summarized in the recent State of AI Report 2020 by Nathan Benaich and Ian Hogarth. In Kaggle competitions, the winning models are often ensembles composed of several predictors. Although they can beat simple models by a large margin in terms of accuracy, their enormous computational costs make them utterly unusable in practice. Is there a way to leverage these powerful but massive models to train state-of-the-art small models, without scaling up the hardware?
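The mechanism distillation relies on is the teacher's full output distribution rather than just its top prediction. A minimal sketch of the key ingredient, the temperature-softened softmax (the logit values below are made up for illustration):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature T; T > 1 softens the distribution so a
    student can see the teacher's relative confidence in wrong classes,
    often called "dark knowledge"."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [8.0, 2.0, 0.5]                  # illustrative logits
hard = softmax(teacher_logits)                    # nearly one-hot
soft = softmax(teacher_logits, temperature=4.0)   # smoother targets
```

At temperature 1 the teacher's output is almost a one-hot vector, carrying little extra information; at a higher temperature the ranking among the non-top classes becomes visible, and that is what the student is trained to match.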
Better Together: Resnet-50 accuracy with $13 \times$ fewer parameters and at $3\times$ speed
Nath, Utkarsh, Kushagra, Shrinu
Recent research on compressing deep neural networks has focused on reducing the number of parameters. Smaller networks are easier to export and deploy on edge devices. We introduce Adjoined networks, a training approach that can regularize and compress any CNN-based neural architecture. Our one-shot learning paradigm trains both the original and the smaller network together; the parameters of the smaller network are shared across both architectures. We prove strong theoretical guarantees on the regularization behavior of the adjoint training paradigm. We complement our theoretical analysis with an extensive empirical evaluation of both the compression and regularization behavior of adjoined networks. For ResNet-50 trained adjointly on ImageNet, we achieve a $13.7\times$ reduction in the number of parameters (for size comparison, we ignore the parameters in the last linear layer, as they vary by dataset and are typically dropped during fine-tuning; otherwise, the reductions are $11.5\times$ and $95\times$ for ImageNet and CIFAR-100, respectively) and a $3\times$ improvement in inference time without any significant drop in accuracy. For the same architecture on CIFAR-100, we achieve a $99.7\times$ reduction in the number of parameters and a $5\times$ improvement in inference time. On both datasets, the original network trained in the adjoint fashion gains about $3\%$ in top-1 accuracy compared to the same network trained in the standard fashion.
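The weight-sharing idea in the abstract can be illustrated in a few lines: the "small" network reuses a prefix slice of the large network's parameters, and both branches are penalized against the same target, so compressing afterwards just means keeping the slice. This is a hypothetical toy sketch with a single scalar linear layer, not the paper's actual architecture:

```python
# Shared parameters of a toy one-layer "network"; in the paper these would
# be convolutional filters shared between the full and slim branches.
weights = [0.5, -0.3, 0.8, 0.1]

def forward(x, n_active):
    # The slim branch uses only the first n_active shared weights.
    return sum(w * x for w in weights[:n_active])

def adjoint_loss(x, target):
    full = forward(x, n_active=len(weights))   # original network
    slim = forward(x, n_active=2)              # adjoined smaller network
    # Both branches see the same target, so gradients on the shared slice
    # come from both the full and the slim objectives.
    return (full - target) ** 2 + (slim - target) ** 2
```

Because the shared prefix must serve both networks at once, it acts as a regularizer on the full network while directly training the compressed one.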
IBM's AI classifies seizures with 98.4% accuracy using EEG data
In a paper published on the preprint server arXiv.org this week, IBM researchers describe SeizureNet, a machine learning framework that learns the features of seizures to classify various types. They say that it achieves state-of-the-art classification accuracy on a popular data set, and that it helps to improve the classification accuracy of smaller networks for applications with low memory and faster inference. If the claims stand up to academic scrutiny, the framework could, for instance, help the more than 3.4 million people with epilepsy better understand the factors that trigger their seizures. The World Health Organization estimates that up to 70% of people living with epilepsy could live seizure-free if properly diagnosed and treated. SeizureNet is a machine learning framework consisting of individual classifiers (specifically convolutional neural networks) that learn the features of electroencephalograms (EEGs) -- i.e., tests that evaluate the electrical activity in the brain -- to predict seizure types.
- Oceania > Australia (0.06)
- Asia > Bangladesh (0.06)
Research Guide: Model Distillation Techniques for Deep Learning
Knowledge distillation is a model compression technique whereby a small network (the student) is taught by a larger trained neural network (the teacher). The smaller network is trained to behave like the large one, which enables the deployment of such models on small devices such as mobile phones or other edge devices. In this guide, we'll look at a couple of papers that attempt to tackle this challenge. In the foundational distillation paper, for example, a small model is trained to generalize in the same way as the larger teacher model.
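The teacher-student training described above typically combines two terms: a soft loss matching the student to the teacher's temperature-softened distribution, and a hard loss against the true label. A minimal sketch, assuming illustrative values for the temperature T and mixing weight alpha (both are tuning choices, not prescribed by this guide):

```python
import math

def softmax(logits, T=1.0):
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(p, q):
    eps = 1e-12  # avoid log(0)
    return -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))

def distillation_loss(student_logits, teacher_logits, true_label,
                      T=4.0, alpha=0.5):
    """Weighted sum of (a) cross-entropy between the temperature-softened
    teacher and student distributions and (b) ordinary cross-entropy
    against the hard label."""
    soft = cross_entropy(softmax(teacher_logits, T),
                         softmax(student_logits, T))
    hard_target = [1.0 if i == true_label else 0.0
                   for i in range(len(student_logits))]
    hard = cross_entropy(hard_target, softmax(student_logits))
    return alpha * soft + (1 - alpha) * hard
```

A student whose logits agree with the teacher's incurs a lower loss than one that contradicts it, which is exactly the pressure that transfers the teacher's behavior into the smaller network.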