Transfer Learning


In this chapter we will learn about two scenarios of transfer learning, each sketched in code below:

  • Initialize the network with a set of weights trained in a previous session, instead of initializing it with random values.

  • Load a pre-trained network, freeze its weights up to a certain point, and re-train the rest, normally on a smaller dataset.
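
A minimal sketch of the first scenario, assuming a torchvision ResNet-18 and a hypothetical checkpoint file saved in an earlier training session; the saved weights are loaded into the freshly built model instead of keeping its random initialization:

```python
import torch
import torchvision.models as models

# Build the architecture; by default its weights are randomly initialized.
model = models.resnet18()

# "checkpoint.pth" is a hypothetical path to a state_dict saved earlier with
# torch.save(model.state_dict(), "checkpoint.pth").
state_dict = torch.load("checkpoint.pth", map_location="cpu")

# Overwrite the random initialization with the weights from the previous session.
model.load_state_dict(state_dict)
```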

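For the second scenario, the PyTorch transfer learning tutorial referenced below freezes an ImageNet-pretrained backbone and re-trains only a new final layer. A minimal sketch along those lines (the number of classes is a hypothetical value for the smaller dataset):

```python
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

# Start from an ImageNet-pretrained network.
model = models.resnet18(pretrained=True)

# Freeze every existing parameter so the backbone is not updated during training.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer. Newly created parameters have
# requires_grad=True, so only this layer will be trained.
num_classes = 2  # hypothetical number of classes in the smaller dataset
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optimize only the parameters of the new layer.
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
```
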
References

https://discuss.pytorch.org/t/discussion-about-datasets-and-dataloaders/296
http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
https://medium.com/towards-data-science/transfer-learning-using-pytorch-4c3475f4495
https://medium.com/towards-data-science/transfer-learning-using-pytorch-part-2-9c5b18e15551