Free GPU for fast.ai on Google Colab

In this tutorial, I will guide you through using Google Colab for the fast.ai lessons.

Google Colab is a tool that provides a free GPU machine continuously for up to 12 hours. You can even reconnect to a different GPU machine after the 12 hours are up.

Here are the simple steps for running the fast.ai notebooks on Google Colab.

  1. Download the fast.ai lesson notebooks from https://github.com/fastai/fastai/tree/master/courses/dl1
  2. Log in to your Google (Gmail) account in a browser.
  3. Go to Colaboratory at https://research.google.com/colaboratory/unregistered.html
  4. A pop-up window will appear; close that window.
  5. On Colab, upload whichever lesson notebook you want to work on from your downloaded notebook files (go to File -> Upload Notebook).
  6. Change your runtime to a GPU machine and choose the version of Python (Python 2 or Python 3) you are going to use by clicking Runtime -> Change runtime type.
  7. You can check whether the GPU is running by executing a short code cell; it should come up with the output ‘/device:GPU:0’. A minimal check is sketched below.
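
    The check cell itself is not shown in the post; a minimal sketch (using TensorFlow, which comes pre-installed on Colab) that produces this output is:

        # Run in a Colab code cell. The cell's output should be
        # '/device:GPU:0' when a GPU runtime is attached, and an empty
        # string when it is not.
        import tensorflow as tf
        tf.test.gpu_device_name()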

Now install the following libraries in your notebook by inserting code cells (Insert -> Code cell); example install cells are sketched after this list:

  1. Install PyTorch
  2. Install fast.ai
  3. Install libSM
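
The exact install commands are not preserved here; the cells below are only a sketch of typical installs for the fast.ai v0.7 (dl1) course on Colab, so the package versions and the apt package name are assumptions. The leading ‘!’ runs a shell command from a notebook cell.

    # 1. PyTorch and torchvision, which fast.ai is built on.
    !pip install torch torchvision

    # 2. The fast.ai library used by the dl1 course notebooks.
    !pip install fastai==0.7.0

    # 3. libSM is a system library (needed by OpenCV); install it with apt.
    !apt-get -qq install -y libsm6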

Download your dataset using bash commands; as an example, the dogs vs. cats dataset is used below.
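
The bash commands themselves are missing from the post; a sketch using the dataset archive the fast.ai course pointed to at the time (the URL and target paths are assumptions) is:

    # Fetch and unpack the dogs vs. cats dataset into data/dogscats/.
    !mkdir -p data
    !wget -q http://files.fast.ai/data/dogscats.zip -P data/
    !unzip -q data/dogscats.zip -d data/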

Now you are ready to use fast.ai on Google Colab.

Enjoy!!!

9 thoughts on “Free GPU for fast.ai on Google Colab”

    1. Afshin Mokhtari

      Actually there are issues. I find that Google apparently scrubs your directory after you close your .ipynb file, so the dogscats data has to be re-downloaded every session; the same goes for installing torch, fast.ai, … Although I get messages indicating those libraries have already been installed, the notebook fails without that series of commands run in order.

      Also, I often hit a code block that gives an error complaining about the JavaScript widget still loading: ‘Failed to display Jupyter Widget of type HBox’. Not long after running the model, the environment gave me a running-out-of-memory warning.

      And now around the Fine-tuning and differential learning rate annealing section, I’m getting an error telling me np is not defined anymore. Hmmm, seems like I have to do this on Paperspace or something after all 🙁

      1. theailea

        Once you are logged in to Google Colab with your Google account and assigned a GPU machine, you can access the same machine continuously for 12 hours using the same account, so you do not need to run the same commands again and again.

        If you get an error about “running out of memory”, please ignore it and proceed, since the machine has 13 GB of RAM. It will work for lesson 1, but for other computer vision problems you will hit out-of-memory errors, so it is better to switch to Paperspace or something else.

        But since it is free of cost and you get a 13 GB GPU machine, you can run plenty of other code on it, which is much better than running on CPU only.

        1. Stas

          Do you consistently get 100% of the GPU? I get 5% of it 99% of the times I connect to it. I have only seen 100% of the GPU RAM about two times over many, many attempts. I have just retested it – getting only 5% of the GPU RAM.
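
          (For reference, and not something from the original post: a quick, generic way to see how much GPU memory the runtime actually exposes is to run nvidia-smi from a cell.)

              # Prints the attached GPU model plus its total and used memory.
              !nvidia-smi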

          Please see: https://stackoverflow.com/questions/48750199/google-colaboratory-misleading-information-about-its-gpu-only-5-ram-available

          Perhaps they do a different allocation depending on where you connect from? I connect from Canada.

          Based on the comments to my post on stackoverflow it seems to be an issue for many users.

          Do you connect from the US?

          Thank you.

          1. theailea

            I am connecting from India.
            I am facing the same issue of not getting 100% of the GPU memory all the time, but I have completed lesson 1 of fast.ai dl1 without any memory issues.
            I am still figuring out how Google distributes GPU memory to users. Will update soon.
            Thanks!

