Distributed Learning
Learn how the training of deep models can be distributed across multiple machines.
Map Reduce style training can be described in the following steps:

1. Split your training set into batches (e.g. divide by the number of workers on your farm: 4)
2. Give each machine on your farm 1/4th of the data
3. Perform forward/backward propagation on each compute node (all nodes share the same model)
4. Combine the results of each machine and perform gradient descent
5. Update the model version on all nodes
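As a minimal sketch of these steps (the linear model, the `gradient` helper and all names below are illustrative assumptions, not code from any of the frameworks mentioned later):

```python
# A minimal NumPy sketch of the data-parallel steps above.
import numpy as np

def gradient(w, X, y):
    # Gradient of mean squared error for a linear model h(x) = X @ w
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(400, 10)), rng.normal(size=400)
w = np.zeros(10)                      # same model copy on every "machine"
num_workers, lr = 4, 0.1

for step in range(100):
    # 1-2. Split the batch and hand each worker 1/4th of the data
    shards = zip(np.array_split(X, num_workers), np.array_split(y, num_workers))
    # 3. Each worker runs forward/backward on its shard with the shared model
    grads = [gradient(w, Xs, ys) for Xs, ys in shards]
    # 4. Combine the per-worker gradients and take one gradient-descent step
    w -= lr * np.mean(grads, axis=0)
    # 5. In a real cluster the new w would now be broadcast to all workers
```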
Consider the batch gradient descent formula, which is gradient descent applied to the whole training set (for a hypothesis $h_\theta$, learning rate $\alpha$ and $m$ training examples):

$$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)}$$
With 4 machines and $m = 400$ training examples, each machine will deal with 100 elements after splitting the dataset, calculating its partial sum of the gradient:

$$temp_j^{(k)} = \sum_{i \in \text{split}_k} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)}$$

Each machine calculates the back-propagation and error for its own split of the data. Remember that all machines have the same copy of the model. After each machine has calculated its respective $temp_j^{(k)}$, another machine combines those gradients, calculates the new weights

$$\theta_j := \theta_j - \alpha \frac{1}{400} \sum_{k=1}^{4} temp_j^{(k)}$$

and updates the model on all machines. The whole point of this procedure is to check that we can combine the calculations of all nodes and still get the same final result as training on a single machine.
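A quick numerical check of that claim, assuming a simple linear hypothesis $h_\theta(x) = \theta^T x$ (the data and shapes below are made up for illustration): summing the per-machine partial gradients reproduces the full-batch gradient exactly.

```python
# Illustrative check that the per-machine partial sums combine exactly.
import numpy as np

rng = np.random.default_rng(1)
X, y = rng.normal(size=(400, 10)), rng.normal(size=400)
theta = rng.normal(size=10)

full_grad = X.T @ (X @ theta - y)                 # sum over all 400 examples

partial = [Xk.T @ (Xk @ theta - yk)               # temp_j^(k) on each machine
           for Xk, yk in zip(np.array_split(X, 4), np.array_split(y, 4))]
combined = np.sum(partial, axis=0)

print(np.allclose(full_grad, combined))           # True: the split is exact
```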
Examples of frameworks that use this approach:

* Caffe
* Torch (Parallel layer)
This approach has some problems:
* The complete model must fit on every machine
* If the model is too big, it will take time to update all machines with the same model
Another approach was used on Google's DistBelief project, where they use a normal neural network model with the weights separated between multiple machines.
In this approach only the weights (thick edges) that cross machine boundaries need to be synchronized between the workers. This technique can only be used on fully connected layers. If you mix both techniques (as referenced in the AlexNet paper), you share the fully connected processing across machines (just a matrix multiplication), and then for the convolutional part each machine's convolution layers get one part of the batch.
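A rough sketch of the fully connected part of that idea, assuming a single layer split column-wise across two hypothetical machines (the shapes are arbitrary): each machine holds only its slice of the weight matrix, and only the resulting activations cross the machine boundary.

```python
# Model parallelism on a fully connected layer: the weight matrix is split
# across two "machines" and only activations are exchanged.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(32, 512))            # one batch of input activations
W = rng.normal(size=(512, 1024))          # full fully connected weight matrix

# Each "machine" stores only half of the output neurons (half of the columns).
W0, W1 = W[:, :512], W[:, 512:]

# Both machines receive the same input x and compute their half of the matmul...
y0 = x @ W0          # machine 0
y1 = x @ W1          # machine 1

# ...and only the resulting activations cross the machine boundary.
y = np.concatenate([y0, y1], axis=1)

print(np.allclose(y, x @ W))              # True: same result as one machine
```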
Here each model replica is trained independently on its own piece of the data, and a parameter server synchronizes the parameters between the workers.
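A toy sketch of the parameter-server idea, under the simplifying assumptions of a synchronous loop and a linear model (the `ParameterServer` class and `worker_gradient` helper are hypothetical, not DistBelief's actual API; DistBelief applies the updates asynchronously):

```python
# Each replica trains on its own piece of the data and pushes gradients to a
# central server that keeps the canonical parameters.
import numpy as np

class ParameterServer:
    def __init__(self, dim, lr=0.1):
        self.theta, self.lr = np.zeros(dim), lr

    def push(self, grad):          # a worker sends its gradient
        self.theta -= self.lr * grad

    def pull(self):                # a worker fetches the latest parameters
        return self.theta.copy()

def worker_gradient(theta, X, y):
    # Linear-model gradient on this replica's shard of the data
    return X.T @ (X @ theta - y) / len(y)

rng = np.random.default_rng(3)
X, y = rng.normal(size=(400, 10)), rng.normal(size=400)
server = ParameterServer(dim=10)
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

for step in range(100):
    for Xk, yk in shards:                        # each iteration = one replica
        theta = server.pull()                    # get the current parameters
        server.push(worker_gradient(theta, Xk, yk))  # send the gradient back
```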
Google now offers in TensorFlow some automation for choosing which distribution strategy to follow, depending on your workload.
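For example, with TensorFlow's `tf.distribute` API a `MirroredStrategy` replicates the model over the local GPUs and averages the gradients for you, while other strategies (e.g. `MultiWorkerMirroredStrategy` or the parameter-server strategy) cover multi-machine setups. The tiny Keras model below is only a placeholder to show the pattern.

```python
import tensorflow as tf

# MirroredStrategy: synchronous data parallelism across the GPUs of one machine.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():                      # variables created here are mirrored
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# model.fit(...) now splits each batch across the replicas automatically.
```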