
Snake Game Using Tensorflow Object Detection API – Part IV

In the last blog, we trained the model and saved the inference graph. In this blog, we will learn how to use this inference graph for object detection and how to run our snake game using the trained object detection model.

To play the snake game using this trained model, you first need the snake game itself. But don’t worry, you don’t need to develop it from scratch: you can clone this repository. And if you want to know the algorithm behind the code, you can follow this blog.

Now that we have our snake game, the next thing is to use the object detection model to play it. To do this, we need to run the snake game file and the following script simultaneously, both from the models/research folder.
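Below is a minimal sketch of such a script. It assumes TensorFlow 1.x, OpenCV, and pyautogui, and that it is run from the models/research folder; the paths, score threshold, and window handling are our own choices, not fixed by the API.

    import cv2
    import numpy as np
    import pyautogui
    import tensorflow as tf

    from object_detection.utils import label_map_util

    PATH_TO_CKPT = 'snake/frozen_inference_graph.pb'  # exported inference graph
    PATH_TO_LABELS = 'object-detection.pbtxt'         # label map file
    NUM_CLASSES = 4                                   # up, down, left, right

    # Load the frozen inference graph into memory.
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
            od_graph_def.ParseFromString(fid.read())
            tf.import_graph_def(od_graph_def, name='')

    # Map class ids (1..4) to class names ('up', 'down', 'left', 'right').
    label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
    categories = label_map_util.convert_label_map_to_categories(
        label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
    category_index = label_map_util.create_category_index(categories)

    cap = cv2.VideoCapture(0)
    with detection_graph.as_default(), tf.Session(graph=detection_graph) as sess:
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
        scores = detection_graph.get_tensor_by_name('detection_scores:0')
        classes = detection_graph.get_tensor_by_name('detection_classes:0')

        while True:
            ret, frame = cap.read()
            if not ret:
                break
            # The model expects a batch of RGB images.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            (b, s, c) = sess.run(
                [boxes, scores, classes],
                feed_dict={image_tensor: np.expand_dims(rgb, axis=0)})
            # If the top detection is confident enough, press its arrow key.
            if s[0][0] > 0.6:
                direction = category_index[int(c[0][0])]['name']
                pyautogui.press(direction)  # 'up', 'down', 'left' or 'right'
            cv2.imshow('detection', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

    cap.release()
    cv2.destroyAllWindows()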

In the above code, we need to specify the path to our inference graph using the PATH_TO_CKPT variable and set the PATH_TO_LABELS variable to the path of the object-detection.pbtxt file. Then specify the number of classes, i.e. 4 in our case.

In the above script, we have used pyautogui to press the corresponding arrow key whenever the hand gesture for a particular direction is detected.

Finally, you can play the snake game using your hand gestures. Let’s see some of the results.

Pretty good, yeah? This is all for playing the snake game using the TensorFlow Object Detection API. Hope you enjoy reading.

If you have any doubt/suggestion, please feel free to ask, and I will do my best to help or improve myself. Good-bye until next time.

Snake Game Using Tensorflow Object Detection API – Part III

In the previous blogs, we saw how to generate data for object detection and how to convert it into TFRecord format. In this blog, we will learn how to use this data to train the model.

To train the model, we will take a pre-trained model and use transfer learning to fine-tune it on our dataset. I have used the SSD MobileNet pre-trained model; here is the MobileNet model. For its configuration file, you can go to models -> research -> object_detection -> samples -> configs -> ssd_mobilenet_v1_pets.config.
The configuration file that we have downloaded needs to be edited as per our requirements. In the configuration file, we have changed the number of classes, the number of training steps, the path to the model checkpoint, and the paths to the pbtxt and record files, as shown below.
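The snippet below shows only the edited fields; the checkpoint folder name and step count are from our setup, so adjust them to yours.

    model {
      ssd {
        num_classes: 4  # up, down, left, right
        # (other model fields unchanged)
      }
    }
    train_config {
      fine_tune_checkpoint: "images/ssd_mobilenet_v1_coco_2017_11_17/model.ckpt"
      num_steps: 10000  # example value; set as per your needs
      # (other training fields unchanged)
    }
    train_input_reader {
      tf_record_input_reader {
        input_path: "images/data/train.record"
      }
      label_map_path: "images/object-detection.pbtxt"
    }
    eval_input_reader {
      tf_record_input_reader {
        input_path: "images/data/test.record"
      }
      label_map_path: "images/object-detection.pbtxt"
      # (other eval fields unchanged)
    }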

For the object-detection.pbtxt file, create a pbtxt file and put the following text inside it to specify the labels for our problem.
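The label map simply assigns an integer id to each gesture class. The id ordering below is our own choice; it just has to match the ids used when the TFRecord files were generated.

    item {
      id: 1
      name: 'up'
    }
    item {
      id: 2
      name: 'down'
    }
    item {
      id: 3
      name: 'left'
    }
    item {
      id: 4
      name: 'right'
    }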

Now go to models -> research -> object_detection -> legacy and copy the train.py file to the models -> research folder.

Then create a folder named images inside the models -> research folder. Put your MobileNet model, the configuration file, the train and test image folders, and the train and test CSV label files inside it. Inside the images folder, create a folder named data and put your train and test TFRecord files there. The hierarchy will look like this:
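(The model and file names below are from our setup; yours may differ.)

    models
    └── research
        ├── train.py
        └── images
            ├── ssd_mobilenet_v1_coco_2017_11_17/   # pre-trained MobileNet model
            ├── ssd_mobilenet_v1_pets.config
            ├── object-detection.pbtxt
            ├── train/              # training images and XML files
            ├── test/               # test images and XML files
            ├── train_labels.csv
            ├── test_labels.csv
            ├── data/
            │   ├── train.record
            │   └── test.record
            └── training/           # checkpoints will be saved here (created next)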

Also create a training folder inside the images folder, where the model will save its checkpoints. Now run the following command from the models -> research folder to train the model.
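This is the standard invocation of the legacy train.py script; the paths assume the folder layout shown above.

    python train.py --logtostderr \
                    --train_dir=images/training/ \
                    --pipeline_config_path=images/ssd_mobilenet_v1_pets.config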

The training time will depend on your machine configuration and the number of steps that you have specified in the configuration file.

Now we have our trained model, and its checkpoints are saved inside the models/research/images/training folder. To test this model and use it to detect objects, we need to export the inference graph.

To do this, first copy models/research/object_detection/export_inference_graph.py to the models/research/ folder. Then, inside the models/research folder, create a folder named “snake” which will hold the inference graph. From the models -> research folder, run the following command:
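The flags below are the standard ones for export_inference_graph.py; replace the checkpoint number with that of your latest checkpoint in images/training.

    python export_inference_graph.py \
        --input_type image_tensor \
        --pipeline_config_path images/ssd_mobilenet_v1_pets.config \
        --trained_checkpoint_prefix images/training/model.ckpt-10000 \
        --output_directory snake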

Now we have frozen_inference_graph.pb inside the models/research/snake folder, which will be used to detect objects with the trained model.

This is all for training the model and saving the inference graph. In the next blog, we will see how to use this inference graph for object detection and how to run our snake game using the trained object detection model.

Next Blog: Snake Game Using Tensorflow Object Detection API – Part IV

Hope you enjoy reading.

If you have any doubt/suggestion, please feel free to ask, and I will do my best to help or improve myself. Good-bye until next time.

Snake Game Using Tensorflow Object Detection API

Here, we will learn how to use the TensorFlow Object Detection API with the computer’s webcam to play a snake game. We will use hand gestures instead of the keyboard.

We will use the following steps to play the snake game using the TensorFlow Object Detection API:

  1. Generate dataset.
  2. Convert train and test datasets into TFRecord format.
  3. Train a pre-trained model using generated data.
  4. Integrate trained model with snake game.
  5. Play the snake game using your own hand gestures.

In this blog, we will cover only the first step, i.e. how to create your own training data; the remaining steps will be covered in the subsequent blogs.

You can find the code here.

Generate Dataset

The snake game generally involves four directions of movement, i.e. up, down, right, and left, and we need to generate at least 100 images per direction. You can use your phone or laptop camera to do this (a small capture script is sketched after the examples below). Try to generate images with different backgrounds for better generalization. Below are some example images of hand gestures.

Hand Gestures
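If you are capturing with a laptop webcam, a minimal OpenCV sketch like the one below can speed things up (the folder name, key bindings, and image count are our own choices):

    import os
    import cv2

    label = 'up'                       # change this per gesture class
    os.makedirs(label, exist_ok=True)
    cap = cv2.VideoCapture(0)
    count = 0
    while count < 100:
        ret, frame = cap.read()
        if not ret:
            break
        cv2.imshow('capture', frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord('s'):            # press 's' to save the current frame
            cv2.imwrite(os.path.join(label, '%s_%03d.jpg' % (label, count)), frame)
            count += 1
        elif key == ord('q'):          # press 'q' to quit early
            break
    cap.release()
    cv2.destroyAllWindows()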

Now we have our captured images of hand gestures. The next thing is to annotate these images according to their classes, which means we need to draw rectangular boxes around the hand gestures and label them appropriately. Don’t worry, there is a tool named LabelImg which is highly helpful for annotating images to create training and test datasets. To get started with LabelImg, you can follow their GitHub link. The start screen of LabelImg looks like this.

On the left side of the screen, you can find various options. Click on Open Dir and choose the input image folder. Then click on Change Save Dir and select the output folder where the generated XML files will be saved. Each XML file will contain the coordinates of the rectangular boxes you draw in the corresponding image, something like this:
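Here is a representative annotation in the Pascal VOC format that LabelImg produces (the file name, image size, and box coordinates are illustrative):

    <annotation>
        <folder>train</folder>
        <filename>up_001.jpg</filename>
        <path>/home/user/images/train/up_001.jpg</path>
        <size>
            <width>640</width>
            <height>480</height>
            <depth>3</depth>
        </size>
        <object>
            <name>up</name>
            <bndbox>
                <xmin>182</xmin>
                <ymin>94</ymin>
                <xmax>391</xmax>
                <ymax>310</ymax>
            </bndbox>
        </object>
    </annotation>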

To create a rectangular box in an image using LabelImg, you just need to press ‘W’, then draw the box and save it. You can create one or multiple boxes in one image, as shown in the figure below. Repeat this for all the images.

Now we have the images and their corresponding XML files. Next, we split this dataset into training and testing sets in a 90/10 ratio: put 90% of the images of each class (‘up’, ‘right’, ‘left’ and ‘down’) and their corresponding XML files in one folder, and the remaining 10% in another. A minimal split script is sketched below.
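This sketch shows one way to do the split (the folder names are assumptions; run it once per class folder so the 90/10 ratio holds for each gesture):

    import os
    import random
    import shutil

    SRC = 'up'                 # folder holding one class's images and XML files
    TRAIN, TEST = 'train', 'test'
    os.makedirs(TRAIN, exist_ok=True)
    os.makedirs(TEST, exist_ok=True)

    images = [f for f in os.listdir(SRC) if f.endswith('.jpg')]
    random.shuffle(images)
    split = int(0.9 * len(images))   # 90% train, 10% test

    for i, name in enumerate(images):
        dest = TRAIN if i < split else TEST
        xml = os.path.splitext(name)[0] + '.xml'
        shutil.copy(os.path.join(SRC, name), dest)  # copy the image
        shutil.copy(os.path.join(SRC, xml), dest)   # copy its annotation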

That’s all for creating the dataset. In the next blog, we will see how to create TFRecord files from these datasets, which will be used for training the model.

Next Blog: Snake Game Using Tensorflow Object Detection API – Part II

Hope you enjoy reading.

If you have any doubt/suggestion, please feel free to ask, and I will do my best to help or improve myself. Good-bye until next time.