Tag Archives: Deep Learning

Snake Game Using Tensorflow Object Detection API – Part III

In the previous blogs, we saw how to generate data for object detection and how to convert it into TFRecord format. In this blog, we will learn how to use this data to train the model.

To train the model, we will take a pre-trained model and use transfer learning to fine-tune it on our dataset. I have used the MobileNet pre-trained model. Here is the MobileNet model. For its configuration file, you can go to models -> research -> object_detection -> samples -> configs -> ssd_mobilenet_v1_pets.config.
The configuration file that we downloaded needs to be edited as per our requirements. In the configuration file, we change the number of classes, the number of training steps, the path to the model checkpoint, and the paths to the pbtxt and record files, as shown below.
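The relevant edits look roughly like this (the step count and all paths here are assumptions; point them at your own files):

    num_classes: 4

    fine_tune_checkpoint: "images/ssd_mobilenet_v1_coco_11_06_2017/model.ckpt"
    num_steps: 20000

    train_input_reader: {
      tf_record_input_reader {
        input_path: "images/training_data/data/train.record"
      }
      label_map_path: "images/training_data/data/object-detection.pbtxt"
    }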

For the object-detection.pbtxt file, create a pbtxt file and put the following text inside it to specify the labels for our problem.
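With the four gesture classes used in this series, the label map looks like this (the id numbering is an assumption, but it must match the label mapping used when creating the TFRecords):

    item {
      id: 1
      name: 'up'
    }
    item {
      id: 2
      name: 'down'
    }
    item {
      id: 3
      name: 'right'
    }
    item {
      id: 4
      name: 'left'
    }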

Now go to models -> research -> object_detection -> legacy and copy the train.py file to the models -> research folder.

Then create a folder named images inside the models -> research folder. Put your MobileNet model, the configuration file, the train and test image folders, and the train and test CSV label files inside it. Inside a training_data folder, create a folder named data and put your train and test TFRecord files there. The hierarchy will look like this:
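Something along these lines (this layout is my reconstruction from the description above; adjust the names to your own setup):

    models/
      research/
        train.py
        images/
          ssd_mobilenet_v1_coco_11_06_2017/   <- pre-trained MobileNet model
          ssd_mobilenet_v1_pets.config
          train/                              <- training images with XML files
          test/                               <- test images with XML files
          train_labels.csv
          test_labels.csv
          training_data/
            data/
              train.record
              test.record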

Also create a training folder inside the images folder where the model will save its checkpoints. Now run the following command from the models -> research folder to train the model.
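Something like the following, assuming the layout sketched above (adjust --train_dir and --pipeline_config_path to your own paths):

    python train.py --logtostderr --train_dir=images/training --pipeline_config_path=images/ssd_mobilenet_v1_pets.config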

The training time will depend on your machine configuration and the number of steps you mentioned in the configuration file.

Now we have our trained model, and its checkpoints are saved inside the models/research/images/training folder. In order to test this model and use it to detect objects, we need to export the inference graph.

To do this, first copy models/research/object_detection/export_inference_graph.py to the models/research/ folder. Then, inside the models/research folder, create a folder named "snake" which will hold the inference graph. From the models -> research folder, run the following command:
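A command along these lines should work; the checkpoint number (20000 here) is an assumption and should match the latest checkpoint saved in your training folder:

    python export_inference_graph.py --input_type image_tensor --pipeline_config_path images/ssd_mobilenet_v1_pets.config --trained_checkpoint_prefix images/training/model.ckpt-20000 --output_directory snake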

Now we have frozen_inference_graph.pb inside the models/research/snake folder, which will be used to detect objects with the trained model.

This is all for training the model and saving the inference graph. In the next blog, we will see how to use this inference graph for object detection and how to run our snake game using this trained object detection model.

Next Blog: Snake Game Using Tensorflow Object Detection API – Part IV

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Snake Game Using Tensorflow Object Detection API – Part II

In the previous blog, we did two things. First, we created a dataset, and second, we split it into training and test sets. In this blog, we will learn how to convert this dataset into TFRecord format for training.

Before creating the TFRecord files, we just need to do one more step. In the last blog, we generated XML files using LabelImg. To get labels for the training and test datasets, we need to convert these XML files into CSV format. To do this, we will use the following code, which has been taken from this repository.
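A sketch of that conversion script (adapted from a widely used open-source conversion script; the train/test folder paths in main() are assumptions):

    import os
    import glob
    import pandas as pd
    import xml.etree.ElementTree as ET

    def xml_to_csv(path):
        # Collect one row per bounding box from every XML file in the folder
        xml_list = []
        for xml_file in glob.glob(path + '/*.xml'):
            tree = ET.parse(xml_file)
            root = tree.getroot()
            for member in root.findall('object'):
                value = (root.find('filename').text,
                         int(root.find('size')[0].text),
                         int(root.find('size')[1].text),
                         member[0].text,
                         int(member[4][0].text),
                         int(member[4][1].text),
                         int(member[4][2].text),
                         int(member[4][3].text))
                xml_list.append(value)
        column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
        return pd.DataFrame(xml_list, columns=column_name)

    def main():
        # Assumed layout: images/train and images/test hold the images and XML files
        for folder in ['train', 'test']:
            xml_df = xml_to_csv(os.path.join('images', folder))
            xml_df.to_csv('images/{}_labels.csv'.format(folder), index=None)

    main()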

In the main function above, you should specify the paths to your XML files for both the train and test folders. The generated CSV files will contain the filename, width and height of each image, the output label, and the coordinates of the annotated rectangular box, as shown in the figure below.

Once you have your train and test images with labels in CSV format, let's convert the data into TFRecord format.

A TFRecord file stores your data as a sequence of binary strings. It has many advantages over normal data formats. To do this, we will use the following code, which has been taken from this repository. According to your requirements, you need to change the condition for labels in the label-mapping function shown in the sketch below.
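An abridged sketch of that script, written against the TF1-era object detection API used in this series (I have swapped the label mapping to the four gesture classes; everything else follows the common open-source version):

    import io
    import os
    import tensorflow as tf
    from PIL import Image
    from collections import namedtuple
    from object_detection.utils import dataset_util

    # Change this mapping according to your own labels
    def class_text_to_int(row_label):
        labels = {'up': 1, 'down': 2, 'right': 3, 'left': 4}
        return labels.get(row_label, None)

    def split(df, group):
        # Group the CSV rows so that all boxes of one image go into one example
        data = namedtuple('data', ['filename', 'object'])
        gb = df.groupby(group)
        return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]

    def create_tf_example(group, path):
        # Read the image and encode its bounding boxes as a tf.train.Example
        with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
            encoded_jpg = fid.read()
        image = Image.open(io.BytesIO(encoded_jpg))
        width, height = image.size
        filename = group.filename.encode('utf8')
        xmins, xmaxs, ymins, ymaxs, classes_text, classes = [], [], [], [], [], []
        for index, row in group.object.iterrows():
            xmins.append(row['xmin'] / width)
            xmaxs.append(row['xmax'] / width)
            ymins.append(row['ymin'] / height)
            ymaxs.append(row['ymax'] / height)
            classes_text.append(row['class'].encode('utf8'))
            classes.append(class_text_to_int(row['class']))
        return tf.train.Example(features=tf.train.Features(feature={
            'image/height': dataset_util.int64_feature(height),
            'image/width': dataset_util.int64_feature(width),
            'image/filename': dataset_util.bytes_feature(filename),
            'image/source_id': dataset_util.bytes_feature(filename),
            'image/encoded': dataset_util.bytes_feature(encoded_jpg),
            'image/format': dataset_util.bytes_feature(b'jpg'),
            'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
            'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
            'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
            'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
            'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
            'image/object/class/label': dataset_util.int64_list_feature(classes),
        }))

The full script wraps this in a main() that reads the CSV with pandas, opens a tf.python_io.TFRecordWriter, and writes each example; it is driven by the --csv_input and --output_path flags used further below.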

Save this code in a file named generate_tfrecord.py. Now, in order to use this code, we first need to clone the TensorFlow object detection API. For that, do the following:
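The models repository is cloned with:

    git clone https://github.com/tensorflow/models.git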

Then we need to do the following steps to avoid getting protoc errors:

  1. Go to this release link and download protobuf according to your operating system.
  2. Extract the downloaded file and go to the bin folder inside it.
  3. Copy the protoc.exe file and put it in the models -> research -> object_detection -> protos folder.
  4. Run the following command (shown after this list) to compile the .proto files.
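From the models -> research folder, the usual form of that command is shown below (if you run protoc from inside the protos folder as described above, replace the path with *.proto):

    protoc object_detection/protos/*.proto --python_out=.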

After cloning this repository, copy generate_tfrecord.py into the models -> research folder and run the following commands.
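Assuming the CSV files generated earlier sit in an images folder (these paths are assumptions; match them to your own layout):

    python generate_tfrecord.py --csv_input=images/train_labels.csv --output_path=train.record
    python generate_tfrecord.py --csv_input=images/test_labels.csv --output_path=test.record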

The above commands will generate two files named train.record and test.record, which will be used for training the model.

This is all for generating the TFRecord files. In the next blog, we will perform training and testing of the object detection model.

Next Blog: Snake Game Using Tensorflow Object Detection API – Part III

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Snake Game Using Tensorflow Object Detection API

Here, we will learn how to use the TensorFlow object detection API with the computer's webcam to play a snake game. We will use hand gestures instead of the keyboard.

We will use following steps to play snake game using tensorflow object detection API:

  1. Generate a dataset.
  2. Convert the train and test datasets into TFRecord format.
  3. Train a pre-trained model on the generated data.
  4. Integrate the trained model with the snake game.
  5. Play the snake game using your own hand gestures.

In this blog, we will cover only the first step, i.e. how to create your own training data; the remaining steps will be covered in the subsequent blogs.

You can find code here.

Generate Dataset

A snake game generally has four directions to move in: up, down, right, and left. We need to generate at least 100 images for each of the four directions. You can use your phone or laptop camera to do this. Try to generate images with different backgrounds for better generalization. Below are some examples of images of hand gestures.

Hand Gestures

Now we have our captured images of hand gestures. The next thing is to annotate these images according to their classes, which means we need to create rectangular boxes around the hand gestures and label them appropriately. Don't worry, there is a tool named LabelImg which is highly helpful for annotating images to create training and test datasets. To get started with LabelImg, you can follow its GitHub link. The start screen of LabelImg would look like this.

At the left side of the screen, you can find various options. Click on Open dir and choose the input image folder. Then click on Change save dir and select the output folder where the generated XML files will be saved. Each XML file will contain the coordinates of the rectangular boxes you draw in the corresponding image, something like this.
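LabelImg writes annotations in Pascal VOC format; an abridged file for one image with a single box looks roughly like this (the filename, sizes, and coordinates here are made-up placeholders):

    <annotation>
      <folder>train</folder>
      <filename>up_001.jpg</filename>
      <size>
        <width>640</width>
        <height>480</height>
        <depth>3</depth>
      </size>
      <object>
        <name>up</name>
        <bndbox>
          <xmin>182</xmin>
          <ymin>101</ymin>
          <xmax>347</xmax>
          <ymax>312</ymax>
        </bndbox>
      </object>
    </annotation>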

To create a rectangular box in an image using LabelImg, you just need to press 'W', then draw the box and save it. You can create one or multiple boxes in one image, as shown in the figure below. Repeat this for all the images.

Now we have the images and their corresponding XML files. Next, we will separate this dataset into training and test sets in a 90/10 ratio: put 90% of the images of each class ('up', 'right', 'left' and 'down') and their corresponding XML files in one folder, and the other 10% in another folder.

That's all for creating the dataset. In the next blog, we will see how to create TFRecord files from these datasets, which will be used for training the model.

Next Blog: Snake Game Using Tensorflow Object Detection API – Part II

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Densely Connected Convolutional Networks – DenseNet

When we see a machine learning problem related to images, the first thing that comes to mind is a CNN (convolutional neural network). Different convolutional networks like LeNet, AlexNet, VGG16, VGG19, ResNet, etc. are used to solve different problems, whether supervised (classification) or unsupervised (image generation). Over the years, deeper and deeper CNN architectures have been used: as problems become more complex, deeper convolutional networks are preferred. But with deeper networks, the problem of vanishing gradients arises.

To solve this problem, Gao Huang et al. introduced Densely Connected Convolutional Networks (DenseNets). DenseNets have several compelling advantages:

  1. alleviate the vanishing-gradient problem,
  2. strengthen feature propagation,
  3. encourage feature reuse, and
  4. substantially reduce the number of parameters.

How does DenseNet work?

Recent architectures like ResNet also try to solve the problem of vanishing gradients. ResNet passes information from one layer to another via identity connections; features are combined through summation before being passed into the next layer.

DenseNet, in contrast, connects each layer to all its subsequent layers in a feed-forward fashion (as shown in the figure below). These connections combine features through concatenation, not summation.

source: DenseNet

The ResNet architecture preserves information explicitly through identity connections, and recent variations of ResNet show that many layers contribute very little and can in fact be randomly dropped during training. The DenseNet architecture explicitly differentiates between information that is added to the network and information that is preserved.

In DenseNet, each layer has direct access to the gradients from the loss function and to the original input signal, leading to an improved flow of information and gradients throughout the network. DenseNets also have a regularizing effect, which reduces overfitting on tasks with smaller training set sizes.

An important difference between DenseNet and existing network architectures is that DenseNet can have very narrow layers, e.g. k = 12. The paper refers to the hyperparameter k as the growth rate of the network. It means each layer in a dense block will only produce k feature-maps, and these k feature-maps will be concatenated with the previous layers' feature-maps and given as input to the next layer.

DenseNet Architecture

The best way to illustrate any architecture is with code. So, I have implemented the DenseNet architecture in Keras using the MNIST dataset.

A DenseNet consists of dense blocks, and each dense block consists of convolution layers. After a dense block, a transition layer is added to proceed to the next dense block (as shown in the figure below).

Every layer in a dense block is directly connected to all its subsequent layers. Consequently, each layer receives the feature-maps of all preceding layers.

Each convolution layer consists of three consecutive operations: batch normalization (BN), followed by a rectified linear unit (ReLU) and a 3 × 3 convolution (Conv). Dropout can also be added, depending on your architecture requirements.
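A minimal Keras sketch of such a composite layer and a dense block built from it (my illustrative version, not the exact code from this post; the growth rate and layer count are parameters you choose):

    from tensorflow.keras import layers

    def conv_layer(x, growth_rate, dropout_rate=0.0):
        # BN -> ReLU -> 3x3 Conv, producing k new feature-maps
        y = layers.BatchNormalization()(x)
        y = layers.Activation('relu')(y)
        y = layers.Conv2D(growth_rate, (3, 3), padding='same')(y)
        if dropout_rate > 0.0:
            y = layers.Dropout(dropout_rate)(y)
        return y

    def dense_block(x, num_layers, growth_rate):
        # Each new layer sees the concatenation of all previous feature-maps
        for _ in range(num_layers):
            y = conv_layer(x, growth_rate)
            x = layers.Concatenate()([x, y])
        return x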

An essential part of convolutional networks is down-sampling layers, which change the size of feature-maps. To facilitate down-sampling, the DenseNet architecture divides the network into multiple densely connected dense blocks (as shown in the figure earlier).

The layers between blocks are transition layers, which perform convolution and pooling. A transition layer consists of a batch normalization layer and a 1×1 convolutional layer followed by a 2×2 average pooling layer.
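Continuing the sketch above, a transition layer looks like this (halving the number of feature-maps via a compression factor follows the DenseNet-BC variant and is an assumption here):

    def transition_layer(x, compression=0.5):
        # BN -> 1x1 Conv (optionally compressing channels) -> 2x2 average pooling
        channels = int(int(x.shape[-1]) * compression)
        y = layers.BatchNormalization()(x)
        y = layers.Conv2D(channels, (1, 1), padding='same')(y)
        y = layers.AveragePooling2D((2, 2), strides=2)(y)
        return y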

DenseNets can scale naturally to hundreds of layers while exhibiting no optimization difficulties. Because of their compact internal representations and reduced feature redundancy, DenseNets may be good feature extractors for various computer vision tasks that build on convolutional features.

The full code can be found here.

Referenced research paper: Densely Connected Convolutional Networks

Hope you enjoy reading. If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Variational Autoencoders

Variational autoencoders are an extension of autoencoders and are used as generative models. You can generate data like text, images and even music with the help of variational autoencoders.

Autoencoders are neural networks used to reconstruct their original input. To know more about autoencoders, please go through this blog. They have certain applications like denoising and dimensionality reduction for data visualization, but apart from that, they are fairly limited.

To overcome this limitation, variational autoencoders come into the picture. A common autoencoder learns a function which is not trained to generate images from a particular distribution. Also, if you try to build a generative model using an autoencoder, you do not want to reproduce data identical to the input; you want outputs with some variation that mostly look like the input data.

Variational Autoencoder Model

A variational autoencoder has an encoder and a decoder, mostly the same as an autoencoder. The difference is that instead of producing a single compact representation from its encoder, it learns a latent variable model: the latent variables define a probability distribution from which the input to the decoder is sampled. Another difference is that instead of a mean squared or cross-entropy loss function (as in autoencoders), it has its own loss function.

I will not go further into the mathematics behind it; let's jump into the code, which will give more understanding about variational autoencoders. To know more about the mathematics behind it, please go through this tutorial.

I have implemented a variational autoencoder in Keras using the MNIST dataset. So let's first download the data.
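A minimal sketch of that step (normalizing pixels and flattening each 28x28 image to a 784-dimensional vector is my assumed preprocessing; the exact code may differ):

    import numpy as np
    from tensorflow.keras.datasets import mnist

    (x_train, _), (x_test, _) = mnist.load_data()
    x_train = x_train.astype('float32') / 255.0
    x_test = x_test.astype('float32') / 255.0
    x_train = x_train.reshape((-1, 784))   # flatten 28x28 images
    x_test = x_test.reshape((-1, 784))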

Now create an encoder model, as is done in autoencoders.
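For example, a small dense encoder (the layer sizes here are assumptions):

    from tensorflow.keras import layers
    from tensorflow.keras.models import Model

    inputs = layers.Input(shape=(784,))
    h = layers.Dense(256, activation='relu')(inputs)
    h = layers.Dense(128, activation='relu')(h)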

Latent Distribution Parameters and Function

Now encode the output of the encoder into latent distribution parameters. Here, I have created two parameters, mu and sigma, which represent the mean and standard deviation of the distribution.
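Continuing the sketch, two dense layers produce the parameters; predicting the log of sigma for numerical stability is a common choice and an assumption here:

    latent_dim = 2

    mu = layers.Dense(latent_dim)(h)         # mean of the latent distribution
    log_sigma = layers.Dense(latent_dim)(h)  # log standard deviation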

Here I have taken the latent space dimension equal to 2. This is the bottleneck, which means we are squeezing our entire dataset through two variables. If we increase the latent space dimension to 5, 10 or higher, we can get better results in the output, but the bottleneck then carries more data.

Now create a Gaussian sampling function with mean zero and standard deviation 1. This distribution will give variation in the input to the decoder, which helps to get variation in the output. The decoder will then predict the output using this distribution.
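A sketch of the sampling step (the reparameterization trick) and a small decoder:

    from tensorflow.keras import backend as K

    def sample_z(args):
        # z = mu + sigma * epsilon, with epsilon drawn from N(0, 1)
        mu, log_sigma = args
        eps = K.random_normal(shape=(K.shape(mu)[0], latent_dim))
        return mu + K.exp(log_sigma) * eps

    z = layers.Lambda(sample_z)([mu, log_sigma])

    # Decoder: mirror of the encoder, ending in a sigmoid over the 784 pixels
    decoder_hidden = layers.Dense(128, activation='relu')
    decoder_out = layers.Dense(784, activation='sigmoid')
    outputs = decoder_out(decoder_hidden(z))

    vae = Model(inputs, outputs)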

Loss Function

For the loss function, a variational autoencoder uses the sum of two losses: one is the generative loss, a binary cross-entropy loss that measures how accurately the image is reconstructed; the other is the latent loss, a KL divergence loss that measures how closely the latent variables match a unit Gaussian distribution. This KL divergence makes sure that the distribution generated by the encoder does not drift away from the origin. Then train the model.
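One way to wire this up in Keras is with the add_loss mechanism (a sketch, assuming the mu, log_sigma and vae tensors defined above):

    from tensorflow.keras.losses import binary_crossentropy

    # Generative loss: pixel-wise reconstruction error
    recon_loss = 784 * binary_crossentropy(inputs, outputs)
    # Latent loss: KL divergence between N(mu, sigma) and N(0, 1)
    kl_loss = -0.5 * K.sum(1 + 2 * log_sigma - K.square(mu) - K.exp(2 * log_sigma), axis=-1)

    vae.add_loss(K.mean(recon_loss + kl_loss))
    vae.compile(optimizer='adam')
    vae.fit(x_train, epochs=10, batch_size=128)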

Our model is ready, and we can generate images from it very easily. All we need to do is sample a latent variable from the distribution and pass it to the decoder. Let's test it with the following code:
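A sketch, reusing the trained decoder layers as a standalone generator:

    import matplotlib.pyplot as plt

    z_in = layers.Input(shape=(latent_dim,))
    generator = Model(z_in, decoder_out(decoder_hidden(z_in)))

    # Sample a latent point from N(0, 1) and decode it into an image
    z_sample = np.random.normal(size=(1, latent_dim))
    digit = generator.predict(z_sample).reshape(28, 28)
    plt.imshow(digit, cmap='gray')
    plt.show()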

Here is the output generated from the sampled distribution in the above code.

The full code can be found here.

Hope you understand the basics of variational autoencoders. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Referenced papers: Auto-Encoding Variational Bayes; Tutorial on Variational Autoencoders

Denoising Autoencoders

In my previous blog, we discussed what an autoencoder is, its applications, and a simple implementation in Keras. In this blog, we will see a variant of autoencoders: denoising autoencoders.

A denoising autoencoder is an extension of autoencoders. An autoencoder tries to learn the identity function (output equals input), which puts it at risk of not learning useful features. One method to overcome this problem is to use denoising autoencoders.

To train a denoising autoencoder, we need to use noisy input data. For that, we add some noise to the original images. The amount of corruption depends on the amount of information present in the data; usually, 25-30% of the data is corrupted, and this can be higher if your data contains less information. Let's see how you can add noise to data in code:
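A common way to do this for MNIST-style images is to add Gaussian noise and clip back to the valid pixel range (the noise factor here is an assumption, and x_train/x_test are the normalized arrays from the previous blog):

    import numpy as np

    noise_factor = 0.5  # controls how strongly the images are corrupted

    x_train_noisy = x_train + noise_factor * np.random.normal(size=x_train.shape)
    x_test_noisy = x_test + noise_factor * np.random.normal(size=x_test.shape)

    # Keep pixel values in [0, 1] after adding noise
    x_train_noisy = np.clip(x_train_noisy, 0.0, 1.0)
    x_test_noisy = np.clip(x_test_noisy, 0.0, 1.0)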

To calculate the loss, the output of the denoising autoencoder is compared to the original input instead of the corrupted one. Such a loss function trains the model to learn interesting features rather than the identity function.

I have implemented a denoising autoencoder in Keras using MNIST data, which will give you an overview of how a denoising autoencoder works.
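The key difference from a plain autoencoder is visible in the training call: noisy images go in, clean images are the targets (a sketch, assuming an autoencoder model like the one built in the previous blog):

    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    autoencoder.fit(x_train_noisy, x_train,
                    epochs=20, batch_size=128,
                    validation_data=(x_test_noisy, x_test))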

Following is the result of the denoising autoencoder.

The full code can be found here.

Hope you understand the usefulness of denoising autoencoders. In the next blog, we will cover variational autoencoders. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Autoencoders

Let's start with a simple definition of autoencoders: 'Autoencoders are neural networks trained to reconstruct their original input.'

Now, you might be thinking: what's the use of reconstructing the same data? Let me give you an example. If you want to transfer gigabytes of data, and you can somehow compress it into megabytes and then reconstruct it back to its original size, isn't that a better way to transfer the data? This is one of the applications of autoencoders.

Autoencoders generally consist of two parts: an encoder and a decoder. The encoder downscales the data to a smaller number of features, and the decoder upscales the extracted features back to the original.

There are some practical applications of autoencoders:

  1. Dimensionality reduction for data visualization
  2. Image Denoising
  3. Generative Models

Visualizing a 10-dimensional vector is difficult. To overcome this problem, we need to reduce that 10-dimensional vector to 2-D or 3-D. One famous algorithm, PCA (Principal Component Analysis), tries to solve this problem. PCA uses linear transformations, while autoencoders can use both linear and non-linear transformations for dimensionality reduction, which allows autoencoders to generate more complex and interesting features than PCA.

Autoencoders can be used to remove the noise present in the image. It can also be used to generate new images required for a specific task. We will see more about these two applications in the next blog.

Now, let's start with a simple implementation of autoencoders in Keras using MNIST data. First, let's download the MNIST training and test data and reshape it.
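A sketch of that step, reshaping each image into a single-channel 28x28x1 tensor for the convolutional layers below (an assumed preprocessing; the exact code may differ):

    import numpy as np
    from tensorflow.keras.datasets import mnist

    (x_train, _), (x_test, _) = mnist.load_data()
    x_train = x_train.astype('float32') / 255.0
    x_test = x_test.astype('float32') / 255.0
    x_train = x_train.reshape((-1, 28, 28, 1))
    x_test = x_test.reshape((-1, 28, 28, 1))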

Encoder

MNIST data consists of images of digits, so it is better to use a convolutional neural network in our encoder and decoder. In the encoder, I have used conv and max-pooling layers to extract the compressed representation, then flattened the encoder output to 32 features, which will be the input to the decoder.
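A sketch of such an encoder (the filter counts are assumptions):

    from tensorflow.keras import layers
    from tensorflow.keras.models import Model

    inputs = layers.Input(shape=(28, 28, 1))
    x = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(inputs)
    x = layers.MaxPooling2D((2, 2))(x)          # 28x28 -> 14x14
    x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    x = layers.MaxPooling2D((2, 2))(x)          # 14x14 -> 7x7
    x = layers.Flatten()(x)
    encoded = layers.Dense(32, activation='relu')(x)   # compressed 32-feature representation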

Decoder

In the decoder, we need to upsample the extracted 32 features back to the original size of the image. To achieve this, I have used the Conv2DTranspose function from Keras. The final layer of the decoder gives the reconstructed output, which should be similar to the original input.
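And a matching decoder sketch, using Conv2DTranspose to undo the downsampling:

    x = layers.Dense(7 * 7 * 8, activation='relu')(encoded)
    x = layers.Reshape((7, 7, 8))(x)
    x = layers.Conv2DTranspose(8, (3, 3), strides=2, activation='relu', padding='same')(x)   # 7x7 -> 14x14
    x = layers.Conv2DTranspose(16, (3, 3), strides=2, activation='relu', padding='same')(x)  # 14x14 -> 28x28
    decoded = layers.Conv2DTranspose(1, (3, 3), activation='sigmoid', padding='same')(x)     # reconstructed image

    autoencoder = Model(inputs, decoded)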

To minimize the reconstruction loss, we train the network on a large dataset and update the weights. Now that our model is created, the next thing is to compile and train it.
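For example (the optimizer and epoch count are assumptions):

    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    autoencoder.fit(x_train, x_train,
                    epochs=20, batch_size=128,
                    validation_data=(x_test, x_test))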

Below are the results from autoencoder trained above. The first line of digits shows the original input (test images) while the second line represents the reconstructed inputs from the model.

The full code can be found here.

Hope you understand the basics of autoencoders, where they can be used, and how a simple autoencoder can be implemented. In the next blog, we will see how to denoise an image using autoencoders. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Referenced Research Paper: http://proceedings.mlr.press/v27/baldi12a/baldi12a.pdf

Neural Arithmetic Logic Units

In this tutorial, you will learn about neural arithmetic logic units (NALU).

You can see full TensorFlow implementation of neural arithmetic logic units in my GitHub Repository.

At present, neural networks have a wide range of applications, from simple classification problems to complex self-driving cars, and they are doing very well in these fields. But would you believe that a neural network can't count? Even animals as simple as bees can do that.

The problem is that a neural network cannot perform numerical extrapolation outside its training data; it will not be able to learn even a scalar identity function outside its training set. Recently, DeepMind researchers released a paper in which they propose modules that try to solve this problem.

Failure of Neural Networks in Learning a Scalar Identity Function

The problem of neural nets not being able to learn identity relations is not new, but in the paper they show it using an example.

They used an autoencoder of 3 layers, each of 8 units, and tried to predict identity relations. As an example, if the input is 4, then the output should also be 4. They used different non-linear activation functions in this network, like sigmoid and tanh, but all of them failed to extrapolate the identity relation outside the training dataset.

They also saw that some highly linear activations like PReLU are able to reduce the error. So even though neural networks contain functions capable of extrapolation, they fail to learn them.

To solve this problem they proposed two models:

  1. NAC (Neural Accumulator)
  2. NALU (Neural Arithmetic Logic Units)

NAC (Neural Accumulator)

The neural accumulator is able to solve the problems of addition and subtraction.

A NAC is a special layer whose transformation matrix W consists only of the values {-1, 0, 1}. This makes the output of W an addition or subtraction of rows of the input vector, rather than the arbitrary rescaling produced by non-linear functions. As an example, if our input layer consists of X1 and X2, then the output of the NAC will be linear combinations of the input vectors. This keeps the represented numbers consistent throughout the model, no matter how many operations are applied.

Since W has the hard constraint that every element should be one of {-1, 0, 1}, learning becomes difficult: hard constraints create difficulty in updating weights during backpropagation. To solve this, they proposed a continuous and differentiable parameterization of W:

    W = tanh(W_hat) * sigmoid(M_hat)

Here W_hat and M_hat are randomly initialized weights that are convenient to learn with gradient descent. This parameterization guarantees that W will be in the range (-1, 1) and biased to be close to {-1, 0, 1}. Here "*" means element-wise multiplication.

NALU (Neural Arithmetic Logic Units)

The NAC is able to solve addition/subtraction, but to also solve multiplication/division, they came up with the NALU, which consists of two NAC sub-cells: one capable of addition/subtraction and the other of multiplication/division.

It consists of these five equations:
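    W = tanh(W_hat) * sigmoid(M_hat)
    a = matmul(x, W)
    m = exp(matmul(log(|x| + ϵ), W))
    g = sigmoid(matmul(x, G))
    y = g * a + (1 - g) * m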

Where,

  1. W_hat, M_hat and G are randomly initialized weights,
  2. ϵ is used to avoid the problem of log(0),
  3. x and y are the input and output, respectively,
  4. g is the gate, which lies between 0 and 1.

Here the concept of a gate is added as the variable g: when g = 1 (on), the add/subtract sub-cell is used and the multiply/divide sub-cell is 0 (off), and vice versa.

The addition/subtraction path (a = matmul(x, W)) is identical to the original NAC, while the multiply/divide sub-cell operates in log space and is capable of learning to multiply and divide (m = exp(matmul(log(|x| + ϵ), W))).

So, this NALU is capable of both extrapolation and interpolation.
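To make this concrete, here is a minimal NALU layer written as a TensorFlow Keras layer, following the five equations above (an illustrative sketch, not the exact code from my repository):

    import tensorflow as tf

    class NALU(tf.keras.layers.Layer):
        def __init__(self, units, **kwargs):
            super().__init__(**kwargs)
            self.units = units

        def build(self, input_shape):
            dim = int(input_shape[-1])
            self.W_hat = self.add_weight(name='W_hat', shape=(dim, self.units), initializer='glorot_uniform')
            self.M_hat = self.add_weight(name='M_hat', shape=(dim, self.units), initializer='glorot_uniform')
            self.G = self.add_weight(name='G', shape=(dim, self.units), initializer='glorot_uniform')

        def call(self, x):
            eps = 1e-7
            W = tf.tanh(self.W_hat) * tf.sigmoid(self.M_hat)         # W biased towards {-1, 0, 1}
            a = tf.matmul(x, W)                                      # add/subtract sub-cell
            m = tf.exp(tf.matmul(tf.math.log(tf.abs(x) + eps), W))   # multiply/divide sub-cell in log space
            g = tf.sigmoid(tf.matmul(x, self.G))                     # learned gate between the two
            return g * a + (1 - g) * m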

Experiments performed with NAC and NALU models

In the paper, they also applied these concepts to different tasks to test the ability of NAC and NALU. They found NALU to be very useful in different problems like:

  1. Learning tasks using different arithmetic functions (x+y, x-y, x-y+x, x*y, etc.)
  2. A counting task using a recurrent network, in which images of different digits are fed to the model and the output should count the number of each type of digit.
  3. A language-to-number translation task, in which an expression like "five hundred fifteen" is fed to the network and the output should return "515". Here NALU is applied with an LSTM model in the output layer.
  4. NALU is also used with reinforcement learning to track time in a grid-world environment.

Summary

We have seen that NAC and NALU can be applied to overcome the failure of neural networks to generalize numerical representations outside the range observed in the training dataset. If you have gone through this blog, you have seen that the NAC and NALU concepts are very easy to grasp and apply. However, it cannot be said that NALU will be perfect for every task, so we have to see where it gives good results.

Referenced Research Paper: Neural Arithmetic Logic Units

Free GPU for fast.ai on Google Colab

In this tutorial, I will guide you through using Google Colab for the fast.ai lessons.

Google Colab is a tool which provides a free GPU machine continuously for up to 12 hours. You can even reconnect to a different GPU machine after 12 hours.

Here are the simple steps for running fast.ai notebooks on Google Colab.

  1. Download the fast.ai lesson notebooks from https://github.com/fastai/fastai/tree/master/courses/dl1
  2. Log in to your Google (Gmail) account in a browser.
  3. Go to Colaboratory at the URL https://research.google.com/colaboratory/unregistered.html
  4. A pop-up window will appear; close that window.
  5. Upload a notebook from your downloaded notebook files to Colab (go to File -> Upload Notebook) for whichever lesson you want to work on.
  6. Now change your runtime to a GPU machine and choose the type of Python (Python 2 or Python 3) you are going to use by clicking on Runtime -> Change runtime type.
  7. Check whether the GPU is running by executing the following code; it should come up with the output '/device:GPU:0'.
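A minimal check using TensorFlow:

    import tensorflow as tf
    tf.test.gpu_device_name()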

Now install the following libraries in your notebook by inserting code cells (Insert -> code cell):

  1. Install PyTorch
  2. Install fast.ai
  3. Install libSM

The commands are sketched after this list.
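One possible set of commands (these exact invocations are assumptions; the fast.ai course of that era pinned specific versions, so check the current instructions):

    # 1. PyTorch
    !pip install torch torchvision
    # 2. fast.ai
    !pip install fastai
    # 3. libSM (a system library needed by OpenCV, which fast.ai uses)
    !apt-get -qq install -y libsm6 libxext6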

Download a dataset using bash commands; as an example, for the dogs vs cats dataset:
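Something along these lines (the dogscats URL is the one fast.ai distributed at the time; treat it as an assumption):

    !wget http://files.fast.ai/data/dogscats.zip
    !unzip -q dogscats.zip -d data/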

Now you are ready to use fast.ai on google colab.

Enjoy!!!