PyTorch
Introduction
PyTorch is another deep learning library. It takes the define-by-run design of Chainer (a deep learning library written entirely in Python) and combines it with the capabilities of Torch; basically, it is Facebook's solution for merging Torch with Python.
Some advantages
* Easy to debug and understand the code
* Has as many types of layers as Torch (unpooling, Conv1/2/3D, LSTMs, GRUs)
* Lots of loss functions
* Can be considered a NumPy extension with GPU support
* Faster than other "define-by-run" libraries, like Chainer and DyNet
* Allows building networks whose structure depends on the computation itself (useful for reinforcement learning)
PyTorch Components
| Package | Description |
| --- | --- |
| torch | NumPy-like tensor library with GPU support |
| torch.autograd | Automatic differentiation support for all torch operations |
| torch.nn | Neural network library integrated with autograd |
| torch.optim | Optimizers for torch.nn (Adam, SGD, RMSprop, etc.) |
| torch.multiprocessing | Memory sharing between tensors across processes |
| torch.utils | DataLoader, training, and other utility functions |
| torch.legacy | Old code ported from Torch |
How it differs from TensorFlow/Theano
The major difference from TensorFlow is that PyTorch follows a "define-by-run" methodology while TensorFlow is "define-and-run": in PyTorch you can, for instance, change your model at run time and debug it with any Python debugger, whereas TensorFlow always requires a graph definition/build step before anything runs. You can consider TensorFlow more of a production tool and PyTorch more of a research tool.
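As a minimal sketch of what define-by-run allows (the layer sizes and step counts here are arbitrary, chosen only for illustration), the forward pass below uses an ordinary Python loop whose length can change on every call, something a static graph cannot express directly:

```python
import torch
import torch.nn.functional as F
from torch.autograd import Variable

# A single linear layer reused a data-dependent number of times
linear = torch.nn.Linear(10, 10)

def forward(x, n_steps):
    h = x
    # Plain Python control flow: the graph for this call is built
    # while the loop runs, so n_steps can differ on every call
    for _ in range(n_steps):
        h = F.relu(linear(h))
    return h

x = Variable(torch.randn(1, 10))
out_a = forward(x, n_steps=3)  # one run applies the layer 3 times...
out_b = forward(x, n_steps=7)  # ...the next 7 times, with no graph rebuild
```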
The Basics
Here we will see how to create tensors and do some basic manipulation:
Create tensors filled with some value:
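A minimal sketch of tensor creation and manipulation (the shapes and values are arbitrary):

```python
import torch

a = torch.zeros(2, 3)            # 2x3 tensor filled with zeros
b = torch.ones(2, 3)             # 2x3 tensor filled with ones
c = torch.rand(2, 3)             # uniform random values in [0, 1)
d = torch.Tensor(2, 3).fill_(5)  # uninitialized tensor, filled in-place with 5

# NumPy-style manipulation
e = a + b                        # element-wise addition
f = c.t()                        # transpose, now 3x2
g = torch.Tensor([[1, 2], [3, 4]])
print(g[0, 1])                   # indexing, prints 2
```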
Now we will do some computation on the GPU:
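A sketch of moving tensors to the GPU with the .cuda() call (assumes a CUDA-capable device is available; the sizes are arbitrary):

```python
import torch

a = torch.rand(1000, 1000)
b = torch.rand(1000, 1000)

if torch.cuda.is_available():
    # Copies the tensors to GPU memory; subsequent ops run on the GPU
    a = a.cuda()
    b = b.cuda()

# Matrix multiplication runs on whichever device the inputs live on
c = torch.mm(a, b)

# Bring the result back to the CPU (e.g. to convert it to NumPy)
c = c.cpu()
```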
Autograd and variables
Autograd is the PyTorch component responsible for backpropagation: as in TensorFlow, you only need to define the forward propagation. PyTorch's autograd looks a lot like TensorFlow's: in both frameworks we define a computational graph and use automatic differentiation to compute gradients.
We just need to wrap tensors in Variable objects; a Variable represents a node in the computational graph. Variables are not like TensorFlow placeholders: in PyTorch you put the values directly into the model. Again, to include a tensor in the graph, wrap it in a Variable.
Consider the following simple graph:
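A minimal sketch of such a graph (the expression is made up for illustration; note that in PyTorch 0.4 and later, Variable was merged into Tensor):

```python
import torch
from torch.autograd import Variable

# Wrap tensors in Variables; requires_grad=True marks them as graph
# leaves whose gradients we want computed
x = Variable(torch.ones(2, 2), requires_grad=True)
y = Variable(torch.ones(2, 2) * 3, requires_grad=True)

# Forward pass: the graph is recorded as these operations execute
z = 2 * (x * x) + 5 * y
out = z.sum()

# Backward pass: autograd walks the graph and fills in .grad
out.backward()
print(x.grad)  # d(out)/dx = 4*x -> all 4s
print(y.grad)  # d(out)/dy = 5   -> all 5s
```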
Complete example
Here we mix all these concepts and show how to train a CNN on the MNIST dataset:
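A self-contained sketch of such a training loop, in the style of the official pytorch/examples MNIST script (assumes torchvision is installed; the architecture and hyperparameters are illustrative choices, not the only possibility):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# MNIST digits, normalized with the dataset's usual mean/std
train_loader = DataLoader(
    datasets.MNIST('./data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=64, shuffle=True)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 28x28 -> 24x24
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)  # 12x12 -> 8x8
        self.fc1 = nn.Linear(320, 50)                  # 20 * 4 * 4 = 320
        self.fc2 = nn.Linear(50, 10)                   # 10 digit classes

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        x = x.view(-1, 320)                            # flatten feature maps
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)

model = Net()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

for epoch in range(2):
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)  # pairs with log_softmax above
        loss.backward()                    # autograd computes all gradients
        optimizer.step()                   # SGD updates the parameters
        if batch_idx % 100 == 0:
            # loss.data[0] on old PyTorch; loss.item() on >= 0.4
            print('epoch {} batch {} loss {:.4f}'.format(
                epoch, batch_idx, loss.data[0]))
```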