
Google Colab Setup Guide


Google Colab


Google Colaboratory offers free access to a single GPU (a 12GB NVIDIA Tesla K80, a Tesla T4, or a Tesla P100-PCIE-16GB) for machine learning projects. There are some limitations, however, and you can learn more about them here: https://research.google.com/colaboratory/faq.html

Setup

Quick Start

  1. Go to this link: https://gist.github.com/ProGamerGov/a0dcd317c301ef2edca552319477d7d8

  2. Click on the "Open in Colab" button at the top.

  3. Then follow the instructions in the setup cell's comments (basically, enable the GPU hardware accelerator) before running it.

  4. You can now use your own parameters, or experiment with the example parameters (a minimal example cell is sketched after this list). Just make sure that you only run the cell with your parameters.

  5. Refer to the visual guide for more information on how to upload, download, and delete files.

  6. You can terminate (stop running) the instance at "Runtime > Manage sessions > Terminate" in Colab, or it will automatically terminate after a set period of time if your browser is no longer connected to it.
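
As a minimal sketch for step 4, a parameter cell might look like this (style.jpg and content.jpg are placeholder names for images you have uploaded yourself):

!python3 neural_style.py -style_image style.jpg -content_image content.jpg -gpu 0 -backend cudnn -output_image out.png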

Github

Go to https://colab.research.google.com and create a new Python 3 notebook.

Colab lets you run shell commands by prefixing them with the ! character.

To enable GPU usage, navigate to "Edit > Notebook settings" or "Runtime > Change runtime type" and select GPU as your hardware accelerator. Note that you will have to reinstall neural-style-pt if you change your hardware accelerator.
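
To confirm that the GPU is active, you can run a quick check in a code cell (PyTorch comes preinstalled on Colab):

import torch
print(torch.cuda.is_available())  # Should print True when a GPU accelerator is enabled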

Create a new code cell, add the following code to it, then click the play button on the left side of the code cell:

!git clone https://github.com/ProGamerGov/neural-style-pt

!mv neural-style-pt/* .

!rm -rf neural-style-pt

!python3 models/download_models.py

!wget https://raw.githubusercontent.com/ProGamerGov/Neural-Tools/master/linear-color-transfer.py

If successful, then you should see an output similar to this:

Cloning into 'neural-style-pt'...
remote: Enumerating objects: 428, done.
remote: Total 428 (delta 0), reused 0 (delta 0), pack-reused 428
Receiving objects: 100% (428/428), 36.21 MiB | 45.27 MiB/s, done.
Resolving deltas: 100% (222/222), done.
Downloading the VGG-19 model
Downloading: "https://s3-us-west-2.amazonaws.com/jcjohns-models/vgg19-d01eb7cb.pth" to /root/.cache/torch/checkpoints/vgg19-d01eb7cb.pth
100% 548M/548M [00:16<00:00, 35.4MB/s]
Downloading the VGG-16 model
Downloading: "https://s3-us-west-2.amazonaws.com/jcjohns-models/vgg16-00b39a1b.pth" to /root/.cache/torch/checkpoints/vgg16-00b39a1b.pth
100% 528M/528M [00:16<00:00, 33.0MB/s]
Downloading the NIN model
All models have been successfully downloaded

If successful, remove the commands from the code cell so that you don't accidentally run them again.

After that's complete, test that your neural-style-pt installation works with the appropriate command based on your chosen hardware:

CPU:

!python3 neural_style.py -gpu c -backend mkl -image_size 64

GPU:

!python3 neural_style.py -gpu 0 -backend cudnn

To see what files exist on your Colab instance, click the arrow on the left side and select "Files". You can then download, delete, or rename any of the files that you see.

Multiscale Generation (Multires)

Code Cell

Instead of using a bash script, you can simply place all of the commands in the same code cell, or in different code cells. If you use multiple code cells, you will have to run each cell manually, one after the other, unless you navigate to "Runtime > Run all":

!python3 neural_style.py -output_image out1.png -image_size 512

!python3 neural_style.py -output_image out2.png -init image -init_image out1.png -image_size 720

!python3 neural_style.py -output_image out3.png -init image -init_image out2.png -image_size 1024

!python3 neural_style.py -output_image out4.png -init image -init_image out3.png -image_size 1536
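
If you prefer a single cell, the same four-pass chain can be driven from Python instead; this is just a sketch that mirrors the commands above:

import subprocess

# Each pass is initialized with the previous pass's output, at a larger size
sizes = [512, 720, 1024, 1536]
prev = None
for i, size in enumerate(sizes, start=1):
    cmd = ["python3", "neural_style.py", "-output_image", f"out{i}.png",
           "-image_size", str(size)]
    if prev is not None:
        cmd += ["-init", "image", "-init_image", prev]
    subprocess.run(cmd, check=True)  # check=True stops the chain if a pass fails
    prev = f"out{i}.png"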

Bash:

You can download scripts to your Colab instance with:

!wget <fileurl/script_name.sh>

Or you can simply upload them via the file browser, either with the upload option or by dragging the files onto it.

Then fix the permissions with:

!chmod u+x ./<script_name.sh>

And finally you can run the script with:

!./<script_name.sh>
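
For example, with a script named multires.sh (a hypothetical name; substitute your own URL and filename), the full sequence would be:

!wget https://example.com/multires.sh
!chmod u+x ./multires.sh
!./multires.sh

The script itself would just contain the same neural_style.py commands shown in the code cell example above, one per line and without the ! prefix.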

You can mount your Google Drive to your Colab instance by adding the following to a code cell:

from google.colab import drive
drive.mount('/content/drive')
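
Once mounted, your Drive appears under /content/drive. You can, for example, copy an output image into it so that it persists after the instance terminates:

!cp out.png "/content/drive/My Drive/"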

To check which GPU your instance is using (e.g. a Tesla K80 or a Tesla T4), add the following code to a cell and then run it:

import torch
print(torch.cuda.device_count())      # Number of GPUs visible to PyTorch
print(torch.cuda.get_device_name(0))  # Name of the first GPU, e.g. "Tesla T4"

To check what backends are available:

import torch
# Prints the full build configuration, including available backends like cuDNN and MKL
print(torch.__config__.show())
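
You can also query the two backends that neural-style-pt uses directly; both calls are part of PyTorch's public API:

import torch
print(torch.backends.cudnn.is_available())  # cudnn backend (GPU)
print(torch.backends.mkl.is_available())    # mkl backend (CPU)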

Visual Guide

Here's what your Python 3 notebook will look like before you start editing it:

And zoomed in:

The file browser:

Create a new code cell:

You can change a code cell's position with the arrows on the right, or delete the cell with the delete option:

Running the code:

You can add the folder containing all the models from Alternative Neural Models to your own Google Drive, for easy access to them:


You can display an image in a code cell with the following code:

from IPython.display import Image
# Pass the filename of any image on the instance to Image() to display it
Image("out.png")
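
If the image is too large for the notebook view, IPython's Image also accepts a width argument:

Image("out.png", width=512)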

Other useful commands:

!ls   # List the files in the current directory

!rm -rf <filename>   # Delete the specified file or directory

!wget <fileurl>   # Download the specified file

!mv <oldpath> <newpath>   # Move a file or folder from one location to another

!cp -r <oldpath> <newpath>   # Copy a file or folder from one location to another
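
These can be combined in a single cell. For instance, to gather the multires outputs from the example above into one folder:

!mkdir outputs
!mv out*.png outputs/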

Speed


  • Colab instances will use a Tesla K80, a Tesla T4, or a Tesla P100. You can find information about the speed of a Tesla K80 in the neural-style-pt README.

Here are the times for running 500 iterations with -image_size 512 on a Tesla T4 with different settings:

  • -backend nn -optimizer lbfgs: 72 seconds
  • -backend nn -optimizer adam: 66 seconds
  • -backend cudnn -optimizer lbfgs: 48 seconds
  • -backend cudnn -optimizer adam: 40 seconds
  • -backend cudnn -cudnn_autotune -optimizer lbfgs: 51 seconds
  • -backend cudnn -cudnn_autotune -optimizer adam: 43 seconds

Here are the times for running 500 iterations with -image_size 512 on a Tesla P100-PCIE-16GB with different settings:

  • -backend nn -optimizer lbfgs: 61 seconds
  • -backend nn -optimizer adam: 47 seconds
  • -backend cudnn -optimizer lbfgs: 37 seconds
  • -backend cudnn -optimizer adam: 23 seconds
  • -backend cudnn -cudnn_autotune -optimizer lbfgs: 39 seconds
  • -backend cudnn -cudnn_autotune -optimizer adam: 25 seconds