Commit

Merge branch 'main' into colab_merge
ma595 committed Jul 10, 2024
2 parents 7573362 + 88fdb65 commit 591aa84
Showing 2 changed files with 166 additions and 16 deletions.
128 changes: 114 additions & 14 deletions exercises/01_penguin_classification.ipynb
@@ -284,7 +284,8 @@
"metadata": {},
"outputs": [],
"source": [
"# Apply transforms to the data. See Task 4 exercise comments above.\n",
"# Apply transforms we need to PenguinDataset to convert input data and target class to tensors. \n",
"# See Task 4 exercise comments above.\n",
"\n",
"# Create train_set\n",
"\n",
@@ -327,8 +328,11 @@
" - The ``DataLoader`` object allows us to put our inputs and targets in mini-batches, which makes for more efficient training.\n",
" - Note: rather than supplying one input-target pair to the model at a time, we supply \"mini-batches\" of these data at once (typically a small power of 2, like 16 or 32).\n",
" - The number of items we supply at once is called the batch size.\n",
" - The ``DataLoader`` can also randomly shuffle the data each epoch (when training).\n",
" - It allows us to load different mini-batches in parallel, which can be very useful for larger datasets and images that can't all fit in memory at once.\n",
" - Q. What number should we choose for the batch size?\n",
" - The ``DataLoader`` can also randomly shuffle the data each epoch (when training). This avoids accidental patterns in the data harming the fitting process. Consider supplying many examples of the positive class followed by many of the negative class:\n",
"the network will simply learn to say yes all the time. We therefore need to intersperse positives and negatives.\n",
"\n",
" - The ``DataLoader`` also allows us to load different mini-batches in parallel, which can be very useful for larger datasets and images that can't all fit in memory at once.\n",
"\n",
"\n",
"Note: we are going to use batch normalisation layers in our network, which don't work if the batch size is one. This can happen on the last batch, if we don't choose a batch size that evenly divides the number of items in the data set. To avoid this, we can set the ``drop_last`` argument to ``True``. The last batch, which will be of size ``len(data_set) % batch_size`` gets dropped, and the data are reshuffled. This is only relevant during the training process - validation will use population statistics."
@@ -353,23 +357,48 @@
"\n",
"Here we will create our neural network in PyTorch, and have a general discussion on clean and messy ways of going about it.\n",
"\n",
"  The module `torch.nn` contains different classes that help you build neural network models. All models in PyTorch inherit from the subclass `nn.Module`, which has useful methods like `parameters()`, `__call__()` and others.\n",
"\n",
"  `torch.nn` also has various layers that you can use to build your neural network. For example, we will use `nn.Linear` in our code below, which constructs a fully connected layer. `torch.nn.Linear` is a subclass of `torch.nn.Module`. \n",
"\n",
"  What exactly is a \"layer\"? It is essentially a step in the neural network computation. i.e. The `nn.Linear` layer computes the linear transformation of the input vector `$x$`: `$y$ = $W^T x + b$`. Where `W` is the matrix of tunable parameters and `b` is a bias vector.\n",
"\n",
"We can also think of the ReLU activation as a \"layer\". However, there are no tunable parameters associated with the ReLU activation function.\n",
"\n",
"  The `__init__()` method is where we typically define the attributes of a class. In our case, all the \"sub-components\" of our model should be defined here.\n",
"\n",
"  The `forward` method is called when we use the neural network to make a prediction. Another term for \"making a prediction\" is running the forward pass, because information flows forward from the input through the hidden layers to the output. This builds a computational graph. To compute parameter updates, we run the backward pass by calling the function `loss.backward()`. During the backward pass, `autograd` traverses this graph to compute the gradients, which are then used to update the model's parameters.\n",
"\n",
"  The `forward` method is called from the `__call__()` function of `nn.Module`, so that when we run `model(batch)`, the `forward` method is called. \n",
"- First, we will create quite an ugly network to highlight how to make a neural network in PyTorch on a very basic level.\n",
"- We will then discuss a trick for making the print-out nicer.\n",
"- We will then utilise `torch.nn.Sequential` as a neater approach.\n",
"- Finally, we will discuss how the best approach would be to write a class where various parameters (e.g. number of layers, dropout probabilities, etc.) are passed as arguments."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from torch.nn import Module\n",
"from torch.nn import BatchNorm1d, Linear, ReLU, Dropout\n",
"from torch import Tensor\n",
"\n",
"\n",
"class FCNet(Module):\n",
" \"\"\"Fully-connected neural network.\"\"\""
" \"\"\"Fully-connected neural network.\"\"\"\n",
"\n",
" # define __init__ function - model defined here.\n",
" def __init__(self):\n",
" pass\n",
"\n",
" # define the forward function, which runs the network on a batch\n",
" def forward(self, batch: Tensor) -> Tensor:\n",
" pass\n",
"\n",
"\n",
"# define a model and print and test (try with torch.rand() function)"
]
},
{
@@ -400,7 +429,9 @@
"\n",
"While we talked about stochastic gradient descent in the slides, most people use the so-called [Adam optimiser](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html).\n",
"\n",
"You can think of it as a more complex and improved implementation of SGD."
"You can think of it as a more complex and improved implementation of SGD.\n",
"\n",
"Here we will tell the optimiser what parameters to fit in order to minimise the loss. "
]
},
{
@@ -413,20 +444,58 @@
"from torch.optim import Adam"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Have a go at importing the model weights for a large model like ResNet50"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Task 9: Writing basic training and validation loops\n",
"\n",
"- Before we jump in and write these loops, we must first choose an activation function to apply to the model's outputs.\n",
" - Here we are going to use the softmax activation function: see [the PyTorch docs](https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html).\n",
" - For those of you who've studied physics, you may be remininded of the partition function in thermodynamics.\n",
" - This activation function is good for classifcation when the result is one of ``A or B or C``.\n",
" - It's bad if you even want to assign two classification to one images—say a photo of a dog _and_ a cat.\n",
"- Before we jump in and write these loops, we must first choose an activation function to apply to the model's outputs so that they compared to our targets i.e. `[0, 0, 1]`. We chose not to include this in the network itself.\n",
" - Here we are going to use the softmax activation function: see [the PyTorch docs](https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html). It can be seen as a generalization of both the logits and sigmoid functions to handle multi-class classification tasks\n",
" - For those of you who've studied physics, you may be reminded of the partition function in thermodynamics.\n",
" - This activation function is good for classification when the result is one of ``A or B or C``.\n",
" - It's bad if you want to assign two classifications to a single image—say a photo of a dog _and_ a cat.\n",
" - It turns the raw outputs, or logits, into \"pseudo-probabilities\", and we take our prediction to be the most probable class.\n",
"\n",
"- We will write the training loop together, then you can go ahead and write the (simpler) validation loop."
"- Have a go at writing these loops. Read the comments below for help.\n",
"\n",
"TIPS:\n",
"\n",
"- The model needs to be configured for training and validation.\n",
"- We need to tell the softmax function over what dimension we should sum the probabilities over in order to equal 1. This should be along the column axis. \n",
"- The automatic behaviour of the optimiser is to accumulate gradients during training.\n",
"\n",
"- Extracting metrics: \n",
" - Define a dictionary `metrics = {\"loss\": [], \"accuracy\": []}`\n",
" - Append `loss.item()`, which extracts the value of the scalar loss tensor as a plain Python number. We do not need gradients for this.\n",
" - Get the accuracy by writing a function `get_batch_accuracy(preds: Tensor, targets: Tensor)`.\n",
" - A decision can be computed as follows: `decision = preds.argmax(dim=1)`\n",
" - We need to report the metrics as means over each epoch.\n",
" - The metrics should be a dictionary containing \"loss\" and \"accuracy\" as keys and lists as values, which we append to at each iteration. We can then use a dictionary comprehension to get epoch statistics.\n",
" ```\n",
" metrics = {\"loss\": [1.0, 2.0, 3.0], \"accuracy\": [0.7, 0.8, 0.9]}\n",
" return {k: mean(v) for k, v in metrics.items()}\n",
" ```\n",
" - If the validation performance gets really poor, this is a sign that we have possibly overfit.\n",
"\n",
"- Utilise `@no_grad` where possible. It temporarily disables gradient calculation, which is beneficial during evaluation phases when gradient updates are not required. \n",
"\n",
"\n",
"NOTE: In PyTorch, `requires_grad=True` is set automatically for the parameters of layers defined using `torch.nn.Module` subclasses. Examine the following example:\n",
"```\n",
"x = ones(10, requires_grad=True)\n",
"y = 2*x.exp()\n",
"print(y)\n",
"```\n",
"- Why use BCELoss?\n",
" - It may seem odd to be using BCELoss for a multi-class classification problem. In this case, BCELoss treats each element of the prediction vector as an independent binary classification problem. For each class, it compares the predicted probability against the target and computes the loss. It might be better to use `CrossEntropyLoss` instead (the ground truth does not need to be one-hot encoded); `CrossEntropyLoss` combines a log-softmax with the negative log-likelihood loss.\n",
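"\n",
"A sketch of the accuracy helper described in the tips above (this assumes one-hot encoded targets, as produced by the Task 4 transforms):\n",
"\n",
"```\n",
"from torch import Tensor\n",
"\n",
"\n",
"def get_batch_accuracy(preds: Tensor, targets: Tensor) -> float:\n",
"    \"\"\"Return the fraction of correct predictions in the batch.\"\"\"\n",
"    decision = preds.argmax(dim=1)      # most probable class per item\n",
"    true_class = targets.argmax(dim=1)  # assumes one-hot encoded targets\n",
"    return float((decision == true_class).float().mean())\n",
"```"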
]
},
{
@@ -464,6 +533,27 @@
"\n",
" \"\"\"\n",
"\n",
" # setup the model for training. IMPORTANT!\n",
"\n",
" # setup loss and accuracy metrics dictionary\n",
"\n",
" # iterate over the batch, targets in the train_loader\n",
" for batch, targets in train_loader:\n",
" pass\n",
"\n",
" # zero the gradients (otherwise gradients accumulate)\n",
"\n",
" # run the model forward and compute pseudo-probabilities over dimension 1 (the columns of the tensor).\n",
"\n",
" # compute loss\n",
" # e.g. pred = [0.2, 0.7, 0.1] and target = [0, 1, 0]\n",
"\n",
" # compute gradients\n",
"\n",
" # nudge parameters in the direction of steepest descent\n",
"\n",
" # append metrics\n",
"\n",
"\n",
"def validate_one_epoch(\n",
" model: Module,\n",
@@ -486,7 +576,10 @@
" Dict[str, float]\n",
" Metrics of interest.\n",
"\n",
" \"\"\""
" \"\"\"\n",
"\n",
" for batch, targets in valid_loader:\n",
" pass"
]
},
{
@@ -514,7 +607,14 @@
"source": [
"epochs = 3\n",
"\n",
"# define train_metrics and valid_metrics lists. \n",
"\n",
"for _ in range(epochs):\n",
"\n",
" # append output of train_one_epoch() to train_metrics\n",
"\n",
" # append output of valid_one_epoch() to valid_metrics\n",
"\n",
" pass"
]
},
54 changes: 52 additions & 2 deletions worked-solutions/01_penguin_classification_solutions.ipynb
@@ -252,7 +252,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [
{
@@ -358,6 +358,56 @@
]
}
],
"source": [
"# Apply transforms we need to PenguinDataset to convert input data and target class to tensors. \n",
"# See Task 4 exercise comments above.\n",
"\n",
"\n",
"# Create train_set\n",
"train_set = PenguinDataset(\n",
" input_keys=features,\n",
" target_keys=[\"species\"],\n",
" train=True,\n",
")\n",
"\n",
"# Create valid_set\n",
"valid_set = PenguinDataset(\n",
" input_keys=features,\n",
" target_keys=[\"species\"],\n",
" train=False,\n",
")\n",
"\n",
"\n",
"for _, (input_feats, target) in zip(range(5), train_set):\n",
" print(input_feats, target)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### (Optional) Task 4b: \n",
"\n",
"Apply the `torchvision.transforms.Compose` transformations instead of hardcoding as above. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([ 42.9000, 13.1000, 5000.0000, 215.0000, 0.0000]) tensor([0., 0., 1.])\n",
"tensor([ 46.1000, 13.2000, 4500.0000, 211.0000, 0.0000]) tensor([0., 0., 1.])\n",
"tensor([ 44.9000, 13.3000, 5100.0000, 213.0000, 0.0000]) tensor([0., 0., 1.])\n",
"tensor([ 43.3000, 13.4000, 4400.0000, 209.0000, 0.0000]) tensor([0., 0., 1.])\n",
"tensor([ 42.0000, 13.5000, 4150.0000, 210.0000, 0.0000]) tensor([0., 0., 1.])\n"
]
}
],
"source": [
"# Apply the transforms we need to the PenguinDataset to get out input\n",
"# targets as Tensors. See Task 4 exercise comments above.\n",
@@ -774,7 +824,7 @@
" and to instead use the stats it has built up from the training set.\n",
" The model should not \"remember\" anything from the validation set.\n",
" - We also protect this function with ``torch.no_grad()``, because having\n",
" gradients enable while validating is a pointless waste of\n",
" gradients enabled while validating is a pointless waste of\n",
" resources — they are only needed for training.\n",
"\n",
" \"\"\"\n",
