Learning PyTorch

20 Oct 2020

Complete the 60-minute blitz on the official PyTorch documentation website! (I find this assumes you have a basic understanding of how a neural network is constructed.)

https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html

There’s literally only one cell that requires a GPU; everything else runs fine on a CPU. Below is a minimal sketch (my own, following the pattern that GPU cell uses) of how to pick a device and fall back to the CPU when no GPU is available:
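
```python
import torch

# Device-agnostic pattern: use CUDA when available, otherwise the CPU,
# so the same code runs with or without a GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.ones(2, 2)
y = x.to(device)      # move the tensor to the chosen device
print(y.device)       # "cuda:0" on a GPU machine, "cpu" otherwise
```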

You can also use Google Colab if you don’t have a GPU (click on “Run in Google Colab” and follow the steps here to enable a GPU on Colab: https://stackoverflow.com/questions/50560395/how-to-install-cuda-in-google-colab-gpus)


Quick pointers that might be helpful:

1) Autograd

“the autograd class is just a Jacobian-vector product computing engine. A Jacobian matrix in very simple words is a matrix representing all the possible partial derivatives of two vectors.”

“Partial derivatives: a partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant”
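
To make this concrete, here is a minimal sketch (my own example, not from the blitz) of the vector-Jacobian product autograd computes: when y is non-scalar, y.backward(v) accumulates vᵀJ into x.grad.

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x ** 2                         # elementwise, so the Jacobian is diag(2 * x)

# For a non-scalar y, backward() needs a vector v and computes vᵀJ.
v = torch.tensor([1.0, 1.0, 1.0])
y.backward(v)

print(x.grad)                      # tensor([2., 4., 6.]) -- i.e. vᵀJ = 2 * x
```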

2) Normalize

torchvision.transforms.Normalize(mean, std, inplace=False) normalizes a tensor image with mean and standard deviation. Given mean: (mean[1], …, mean[n]) and std: (std[1], …, std[n]) for n channels, this transform will normalize each channel of the input torch.*Tensor, i.e., output[channel] = (input[channel] - mean[channel]) / std[channel].
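
A small sketch of this in action: the blitz’s CIFAR-10 loader uses mean = std = 0.5 per channel, which maps ToTensor()’s [0, 1] range to [-1, 1] (the random tensor below is just a stand-in for a real image).

```python
import torch
from torchvision import transforms

normalize = transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))

img = torch.rand(3, 32, 32)        # stand-in for a 3-channel image in [0, 1]
out = normalize(img)               # (input - 0.5) / 0.5, per channel
print(out.min().item(), out.max().item())   # now roughly within [-1, 1]
```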



Other notes that I found helpful when working through the 60-minute blitz:

Understanding PyTorch’s autograd: https://towardsdatascience.com/pytorch-autograd-understanding-the-heart-of-pytorchs-magic-2686cd94ec95

Quick backpropagation revision: https://ml-cheatsheet.readthedocs.io/en/latest/backpropagation.html


A further deep dive into debugging learning algorithms (by understanding what goes on under the hood!): https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/