TensorFlow for R
The tf$distribute$Strategy API provides an abstraction for distributing your training across multiple processing units. The goal is to let users enable distributed training with existing models and training code, with minimal changes. This tutorial uses tf$distribute$MirroredStrategy, which performs in-graph replication with synchronous training on many GPUs on one machine. Each replica computes gradients on its slice of the input; an all-reduce step then combines the gradients from all processors and applies the combined update to every copy of the model. MirroredStrategy is one of several distribution strategies available in TensorFlow core.
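The pattern above can be sketched in R as follows. This is a minimal illustration, assuming the tensorflow and keras R packages are installed with a working TensorFlow backend; the model architecture and layer sizes are placeholders, not part of the original tutorial.

```r
library(tensorflow)
library(keras)

# MirroredStrategy replicates the model onto each available GPU on this machine.
strategy <- tf$distribute$MirroredStrategy()

# Model variables must be created inside the strategy's scope so that
# each replica receives its own synchronized copy.
with(strategy$scope(), {
  model <- keras_model_sequential() %>%
    layer_dense(units = 64, activation = "relu", input_shape = 784) %>%
    layer_dense(units = 10, activation = "softmax")

  model %>% compile(
    loss = "sparse_categorical_crossentropy",
    optimizer = "adam",
    metrics = "accuracy"
  )
})

# Calling fit() then trains synchronously across all replicas; gradients are
# combined with all-reduce and the same update is applied to every copy.
```

With no changes to the training call itself, `fit()` splits each batch across the replicas, which is the "minimal changes" property the strategy API aims for.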
Jan-24-2020, 10:11:28 GMT