9 libraries for parallel & distributed training/inference of deep learning models
In this blog we will cover a few basics of large model training before jumping to the list of libraries. To skip the basics and jump straight to the list of libraries, click here. Large deep learning models require a significant amount of memory to train: memory is needed to store the model weights, intermediate activations, gradients, and optimizer states. Some models can be trained only with a very small batch size on a single GPU, while others may not fit on a single GPU at all.
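To make the memory argument concrete, here is a rough back-of-envelope sketch (the numbers and the helper function are illustrative assumptions, not from any specific library). In full-precision training with the Adam optimizer, each parameter is typically stored four times: the weight itself, its gradient, and Adam's two moment estimates. Activations are excluded here because they depend on batch size and architecture.

```python
def training_memory_gb(num_params: int, bytes_per_param: int = 4) -> float:
    """Estimate memory for weights + gradients + Adam optimizer states.

    Assumes fp32 (4 bytes per value) and 4 copies of the parameters:
    weights, gradients, and Adam's first and second moments.
    Activations are not included.
    """
    copies = 4
    return num_params * bytes_per_param * copies / 1024**3

# A hypothetical 7-billion-parameter model in fp32:
print(f"{training_memory_gb(7_000_000_000):.0f} GB")  # ~104 GB, before activations
```

Even ignoring activations, such a model far exceeds the memory of a single consumer GPU, which is why the parallel and distributed training libraries below exist.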