Tag Archives: python

Show current DateTime on live video using OpenCV-Python

Have you seen security camera output where the DateTime continuously keeps updating? In this blog, we will do the same using OpenCV-Python, i.e. we will put the current DateTime on the live webcam feed. So, let’s get started.

For fetching the current DateTime, we will be using Python’s datetime module. The following code shows how to get the current DateTime:
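A minimal sketch:

    import datetime

    # Get the current date and time as a datetime object
    now = datetime.datetime.now()

    # Format it as a readable string, e.g. "Mon, 01 July 2019 10:30:45"
    print(now.strftime("%a, %d %B %Y %H:%M:%S"))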

To put the DateTime on the live video, we will be using cv2.putText() on each frame as shown below
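A sketch of the call (the position, font, scale and colors below are illustrative choices, not the post’s exact values):

    import cv2
    import numpy as np
    import datetime

    # A dummy black frame; with the webcam this would come from cap.read()
    frame = np.zeros((480, 640, 3), dtype=np.uint8)

    # Draw the current DateTime near the bottom-left corner of the frame
    text = str(datetime.datetime.now())
    cv2.putText(frame, text, (10, 450), cv2.FONT_HERSHEY_SIMPLEX,
                0.7, (0, 255, 255), 2, cv2.LINE_AA)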

To know more about cv2.putText(), refer to this blog.

Above are the two things that we will need for this task. I hope you understand them. Now, let’s get started.

Steps:

  • Open the camera using cv2.VideoCapture()
  • While the camera is open
    • Grab each frame using cap.read()
    • Put the current DateTime on each frame using cv2.putText() as discussed above
    • Display each frame using cv2.imshow()
  • On termination, release the webcam and destroy all windows using cap.release() and cv2.destroyAllWindows() respectively.

Code:
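A sketch that puts the steps above together (using ‘q’ as the quit key is my choice):

    import cv2
    import datetime

    cap = cv2.VideoCapture(0)

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # Overlay the current DateTime on each frame
        text = str(datetime.datetime.now())
        cv2.putText(frame, text, (10, 450), cv2.FONT_HERSHEY_SIMPLEX,
                    0.7, (0, 255, 255), 2, cv2.LINE_AA)

        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()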

A snapshot of the output looks like this:

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Scraping Video Information from YouTube

Web scraping is a way to extract information from the internet in an automated fashion. We all know that YouTube is a huge resource of data, having tons of videos with their related information like views, comments, etc. In this blog, we will learn how to use web scraping in Python to extract video information from a YouTube search. As the video information, we will extract the number of views and the video heading that appear in the search results.

To get started with this, we first need to install two important libraries: “requests”, to get the response from a YouTube search result, and “Beautiful Soup”, to parse the HTML content of this response.
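Both can be installed with pip:

    pip install requests beautifulsoup4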

Now that we have installed the required libraries, let’s get started.

  • Import the libraries
  • Whenever you search on YouTube, it takes a base search URL and appends your search query to complete it. Let’s say we search for “theailearner” on YouTube. The base search URL and query can be defined as in the sketch after this list.
  • Now, we will fetch the data from this URL using the “requests” library.
  • Once we have fetched the data, we will parse it into HTML using Beautiful Soup and find all the video information contained in the search results. To extract a particular piece of information, we target its particular class in the HTML data.
  • The soup.find_all() function used above gives the required data, but to make it easily readable we run a simple Python script over it.
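Putting the steps together, here is a sketch. Note that YouTube’s markup changes over time and modern result pages are rendered with JavaScript, so the class names below (“yt-lockup-title”, “yt-lockup-meta-info”) are assumptions based on the old server-rendered search page and may need updating:

    import requests
    from bs4 import BeautifulSoup

    # YouTube's base search URL; the query is appended to complete it
    base = "https://www.youtube.com/results?search_query="
    query = "theailearner"

    # Get the response for the search results page
    response = requests.get(base + query)

    # Parse the response into HTML
    soup = BeautifulSoup(response.text, "html.parser")

    # Video headings lived under this class on the old page (an assumption;
    # inspect the page source to confirm the current class names)
    for heading in soup.find_all("h3", class_="yt-lockup-title"):
        link = heading.find("a")
        if link is not None:
            print(link.get("title"), link.get("href"))

    # View counts lived under this class (also an assumption)
    for meta in soup.find_all("ul", class_="yt-lockup-meta-info"):
        print(meta.get_text())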


Now you should have some feeling for how to scrape data from YouTube. We can also scrape other data from YouTube, like video information from a channel, comments on a video, likes and dislikes, etc.

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Creating a Snake Game using OpenCV-Python

Isn’t it interesting to create a snake game using OpenCV-Python? And what if I tell you that you’re only going to need

  • cv2.imshow()
  • cv2.waitKey()
  • cv2.putText()
  • cv2.rectangle()

So, let’s get started.

Import Libraries

For this we only need four libraries
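The original import list is not shown above; a plausible set of four (an assumption on my part) is:

    import cv2
    import numpy as np
    import random
    import time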

Displaying Game Objects

  • Game Window: Here, I have used a 500×500 image as my game window.
  • Snake and Apple: I have used green squares for displaying a snake and a red square for an apple. Each square has a size of 10 units.

Game Rules

Now, let’s define some game rules

  • Collision with boundaries: If the snake collides with the boundaries, it dies.
  • Collision with self: If the snake collides with itself, it should die. For this, we only need to check whether the snake’s head is in the snake’s body or not.
  • Collision with apple: If the snake collides with the apple, the score is increased and the apple is moved to a new location.

Also, on eating an apple, the snake’s length should increase. Otherwise, the snake moves as it is.

  • Snake game has a fixed time for a keypress. If you press any button in that time, the snake moves in that direction; otherwise it continues moving in the previous direction. Sadly, with OpenCV’s cv2.waitKey() function, if you hold down a direction button, the snake starts moving faster in that direction. So, to make the snake’s movement uniform, I did something like this.
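A sketch of the idea (the time per move is an illustrative value):

    import time
    import cv2

    # Wait a fixed amount of time per move, and keep only the FIRST key
    # pressed in that window, so holding a key down doesn't speed things up
    k = -1
    t_end = time.time() + 0.15
    while time.time() < t_end:
        if k == -1:
            k = cv2.waitKey(1)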

Because cv2.waitKey() returns -1 when no key is pressed, ‘k’ stores the first key pressed in that window. And because the while loop runs for a fixed time, it doesn’t matter how quickly you press a key; every move still takes the same fixed time.

  • Snake cannot move backward: Here, I have used the w, a, s, d controls for moving the snake. If the snake was moving right and we press the left button, it will continue moving right; in short, the snake cannot directly move backwards.

After seeing which direction button is pressed, we change our head position
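A sketch, assuming the head is stored as an [x, y] pair; the variable names (head, direction, prev_direction) are illustrative:

    # Assumed state: the head is an [x, y] point on the 500x500 window,
    # and every move shifts it by the square size of 10 units
    head = [250, 250]
    direction = prev_direction = 'right'
    k = ord('w')                      # the key captured by the loop above

    # Change direction only if it doesn't reverse the current one
    if k == ord('a') and prev_direction != 'right':
        direction = 'left'
    elif k == ord('d') and prev_direction != 'left':
        direction = 'right'
    elif k == ord('w') and prev_direction != 'down':
        direction = 'up'
    elif k == ord('s') and prev_direction != 'up':
        direction = 'down'

    # Move the head one square in the chosen direction
    if direction == 'left':
        head[0] -= 10
    elif direction == 'right':
        head[0] += 10
    elif direction == 'up':
        head[1] -= 10
    elif direction == 'down':
        head[1] += 10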

Displaying the final Score

For displaying the final score, I have used the cv2.putText() function.
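Something like this (the message text and placement are illustrative):

    import cv2
    import numpy as np

    score = 10                                  # illustrative value
    img = np.zeros((500, 500, 3), dtype=np.uint8)
    cv2.putText(img, 'Your score is: {}'.format(score), (100, 250),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.imshow('Game Over', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()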

Finally, our snake game is ready and looks like this

The full code can be found here.

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Snake Game Using Tensorflow Object Detection API – Part IV

In the last blog, we trained the model and saved the inference graph. In this blog, we will learn how to use this inference graph for object detection and how to run our snake game using this trained object detection model.

To play the snake game using this trained model, you first need to develop a snake game. But don’t worry, you need not develop it from scratch; you can clone this repository. And if you want to know the algorithm behind this code, you can follow this blog.

Now that we have our snake game, the next thing is to use the object detection model to play it. To do this, we need to run both the snake game file and the following script from the models/research folder simultaneously.

In the above code, we need to specify the path to our inference graph using the “PATH_TO_CKPT” variable. We also need to set the “PATH_TO_LABELS” variable to the path of the object-detection.pbtxt file, and then specify the number of classes, i.e. 4 in our case.

In the above script, we have used “pyautogui” to press a key whenever the hand gesture for a particular direction is detected.
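The gist of that part, as a sketch (the class names follow our labels, and the w/a/s/d keys match the snake game’s controls from the earlier blog):

    import pyautogui

    # Map each detected gesture class to the snake game's w/a/s/d controls
    keys = {'up': 'w', 'down': 's', 'left': 'a', 'right': 'd'}

    def press_for(detected_class):
        # Press the corresponding key so the focused snake game receives it
        pyautogui.press(keys[detected_class])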

Finally, you can play the snake game using your hand gestures. Let’s see some of the results.

Pretty good, right? This is all for playing the snake game using the TensorFlow Object Detection API. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Snake Game Using Tensorflow Object Detection API – Part III

In the previous blogs, we have seen how to generate data for object detection and convert it into TFRecord format for training. In this blog, we will learn how to use this data to train the model.

To train the model, we will take a pre-trained model and train it on our dataset using transfer learning. I have used the pre-trained MobileNet model; you can find the MobileNet model here. For its configuration file, go to models -> research -> object_detection -> samples -> configs -> ssd_mobilenet_v1_pets.config.

The configuration file we downloaded needs to be edited as per our requirements. In the configuration file, we have changed the number of classes, the number of training steps, the path to the model checkpoint, and the paths to the pbtxt and record files, as shown below.

For the object-detection.pbtxt file, create a pbtxt file and put the following text inside it to specify the labels for our problem.
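With our four gesture classes it looks like this (the id-to-name mapping must match the one used in generate_tfrecord.py):

    item {
      id: 1
      name: 'up'
    }
    item {
      id: 2
      name: 'down'
    }
    item {
      id: 3
      name: 'left'
    }
    item {
      id: 4
      name: 'right'
    }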

Now go to models -> research -> object_detection -> legacy and copy the train.py file to the models -> research folder.

Then create a folder named images inside the models -> research folder and put your MobileNet model, the configuration file, the train and test image folders, and the train and test CSV label files inside it. Inside the training_data folder, create a folder named data and put your train and test TFRecord files there. The hierarchy will look like this:

Also create a training folder inside the images folder, where the model will save its checkpoints. Now run the following command from the models -> research folder to train the model.
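For the legacy train.py the command takes this general form (adjust the paths to wherever you put your configuration file and training folder):

    python train.py --logtostderr --train_dir=images/training --pipeline_config_path=images/ssd_mobilenet_v1_pets.config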

The time taken to train your model will depend on your machine’s configuration and the number of steps you specified in the configuration file.

Now we have our trained model, and its checkpoints are saved inside the models/research/images/training folder. In order to test this model and use it to detect objects, we need to export the inference graph.

To do this, first copy models/research/object_detection/export_inference_graph.py to the models/research folder. Then, inside the models/research folder, create a folder named “snake” which will hold the inference graph. From the models -> research folder, run the following command:
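A typical invocation looks like this (replace XXXX with the step number of your latest checkpoint; the paths are assumptions based on the layout above):

    python export_inference_graph.py --input_type image_tensor --pipeline_config_path images/ssd_mobilenet_v1_pets.config --trained_checkpoint_prefix images/training/model.ckpt-XXXX --output_directory snake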

Now we have frozen_inference_graph.pb inside the models/research/snake folder, which will be used to detect objects with the trained model.

This is all for training the model and saving the inference graph. In the next blog, we will see how to use this inference graph for object detection and how to run our snake game using this trained object detection model.

Next Blog: Snake Game Using Tensorflow Object Detection API – Part IV

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Snake Game Using Tensorflow Object Detection API – Part II

In the previous blog, we did two things: first, we created a dataset, and second, we split it into training and test sets. In this blog, we will learn how to convert this dataset into TFRecord format for training.

Before creating the TFRecord files, we just need to do one more step. In the last blog, we generated XML files using LabelImg. To get labels for the training and test datasets, we need to convert these XML files into CSV format. To do this, we will use the following code, which has been taken from this repository.
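A sketch in the spirit of the commonly used xml_to_csv.py script (the folder paths are assumptions):

    import glob
    import pandas as pd
    import xml.etree.ElementTree as ET

    def xml_to_csv(path):
        # One row per annotated box: filename, image size, class label
        # and the box coordinates
        rows = []
        for xml_file in glob.glob(path + '/*.xml'):
            root = ET.parse(xml_file).getroot()
            for member in root.findall('object'):
                rows.append((
                    root.find('filename').text,
                    int(root.find('size/width').text),
                    int(root.find('size/height').text),
                    member.find('name').text,
                    int(member.find('bndbox/xmin').text),
                    int(member.find('bndbox/ymin').text),
                    int(member.find('bndbox/xmax').text),
                    int(member.find('bndbox/ymax').text),
                ))
        columns = ['filename', 'width', 'height', 'class',
                   'xmin', 'ymin', 'xmax', 'ymax']
        return pd.DataFrame(rows, columns=columns)

    def main():
        # Point these at your own train and test XML folders
        for folder in ['train', 'test']:
            df = xml_to_csv('images/' + folder)
            df.to_csv('images/{}_labels.csv'.format(folder), index=None)

    main()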

In the main function above, you should specify the paths to your XML files for both the train and test folders. The generated CSV files will contain the filename, the width and height of each image, the output label, and the coordinates of the annotated rectangular box, as shown in the figure below.

Once you have your train and test images with their labels in CSV format, let’s convert the data into TFRecord format.

A TFRecord file stores your data as a sequence of binary strings, which has many advantages over plain data formats. To do this, we will use the following code, which has been taken from this repository. According to your requirements, you need to change the condition on the labels at line 31 below.

Save this code in a file named generate_tfrecord.py. Now, in order to use this code, we first need to clone the TensorFlow Object Detection API. For that, do the following:
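That is, clone the TensorFlow models repository, which contains the Object Detection API:

    git clone https://github.com/tensorflow/models.git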

Then we need to do the following steps to avoid getting a protoc error:

  1. Go to this release link and download protobuf according to your operating system.
  2. Extract the downloaded file and go to bin folder inside it.
  3. Copy protoc.exe file and put in models -> research -> object_detection -> protos folder.
  4. In protos folder run the following command for .proto files.
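One common form of that command, run from the models/research folder (adjust the path if you run it from inside the protos folder), is:

    protoc object_detection/protos/*.proto --python_out=.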

After cloning this repository, copy generate_tfrecord.py inside the models -> research folder and run the following commands.
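With the commonly used version of generate_tfrecord.py, the commands take this general form (the CSV and output paths are assumptions based on the layout above):

    python generate_tfrecord.py --csv_input=images/train_labels.csv --output_path=data/train.record
    python generate_tfrecord.py --csv_input=images/test_labels.csv --output_path=data/test.record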

The above commands will generate two files, named train.record and test.record, which will be used for training the model.

This is all for generating the TFRecord files; in the next blog, we will perform training and testing of the object detection model.

Next Blog: Snake Game Using Tensorflow Object Detection API – Part III

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Snake Game Using Tensorflow Object Detection API

Here, we will learn how to use tensorflow object detection API with the computer’s webcam to play a snake game. We will use hand gestures instead of the keyboard.

We will use the following steps to play the snake game using the TensorFlow Object Detection API:

  1. Generate dataset.
  2. Convert train and test datasets into tfrecord format.
  3. Train a pre-trained model using generated data.
  4. Integrate trained model with snake game.
  5. Play the snake game using your own hand gestures.

In this blog, we will cover only the first step, i.e. how to create your own training data; the remaining steps will be covered in the subsequent blogs.

You can find the code here.

Generate Dataset

A snake game generally involves four directions of movement: up, down, right and left. For each of the four directions, we need to generate at least 100 images. You can use your phone or laptop camera to do this. Try to capture the images against different backgrounds for better generalization. Below are some example images of hand gestures.

Hand Gestures

Now we have our captured images of hand gestures. The next thing is to annotate these images according to their classes, which means drawing rectangular boxes around the hand gestures and labeling them appropriately. Don’t worry, there is a tool named LabelImg which is highly helpful for annotating images to create training and test datasets. To get started with LabelImg, you can follow their GitHub link. The start screen of LabelImg looks like this.

On the left side of the screen, you can find various options. Click on Open Dir and choose the input image folder. Then click on Change Save Dir and select the output folder where the generated XML files will be saved. Each XML file contains the coordinates of the rectangular boxes you draw in that image, something like this.

To create a rectangular box in an image using LabelImg, you just need to press ‘W’, then draw the box and save it. You can create one or multiple boxes in one image, as shown in the figure below. Repeat this for all the images.

Now we have the images and their corresponding XML files. We will then split this dataset into training and test sets in a 90/10 ratio: put 90% of the images of each class (‘up’, ‘right’, ‘left’ and ‘down’) and their corresponding XML files in one folder, and the other 10% in another folder.

That’s all for creating the dataset; in the next blog, we will see how to create TFRecord files from these datasets, which will be used for training the model.

Next Blog: Snake Game Using Tensorflow Object Detection API – Part II

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

PEPs

PEP stands for Python Enhancement Proposal. According to Python.org:
 “A PEP is a design document providing information to the Python community, or describing a new feature for Python or its processes or environment. The PEP should provide a concise technical specification of the feature and a rationale for the feature.”

Anyone can submit their own PEP, which will then be thoroughly peer-reviewed by the community.

PEP numbers like PEP 0, PEP 8, etc. are assigned by the PEP editors and, once assigned, are never changed. (See here for the complete PEP list.)

According to PEP 1, there are three different types of PEPs:

  • Standards Track: Describes a new feature or implementation.
  • Informational: Provides general guidelines or information to the community but doesn’t propose a new feature.
  • Process: Describes a process surrounding Python, such as procedures and guidelines. Unlike informational PEPs, you are not free to ignore them.

There are a few PEPs which are worth reading, like

  • PEP 8: the style guide for Python.
  • PEP 20: The Zen of Python (a list of 19 aphorisms that briefly explain the philosophy behind Python).
  • PEP 257: Docstring Conventions.

So, if you see any discrepancy, write your own PEP and wait for its review. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Saving and Loading models in Keras

Generally, a deep learning model takes a large amount of time to train, so it’s better to know how to save a trained model. In this blog, we will learn how to save a whole Keras model, i.e. its architecture, weights and optimizer state.

Let’s first create a model in Keras. This is a simple autoencoder model. If you need to know more about autoencoders, please refer to this blog.
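A minimal dense autoencoder in the same spirit might look like this (the layer sizes are illustrative):

    from keras.layers import Input, Dense
    from keras.models import Model

    # Encoder: compress the 784-dim input down to a 32-dim code
    input_img = Input(shape=(784,))
    encoded = Dense(32, activation='relu')(input_img)

    # Decoder: reconstruct the 784-dim input from the code
    decoded = Dense(784, activation='sigmoid')(encoded)

    autoencoder = Model(input_img, decoded)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')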

Above, we have created a Keras model named “autoencoder”. Now let’s see how to save this model.

Saving and loading only architecture of a model

In Keras, you can save and load the architecture of a model in two formats: JSON or YAML. Models saved in these two formats are human-readable and can be edited if needed.
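A sketch for the JSON format (YAML works analogously via to_yaml() and model_from_yaml(); the file name is my choice):

    from keras.models import model_from_json

    # Serialize only the architecture to a JSON string and save it
    with open('autoencoder_architecture.json', 'w') as f:
        f.write(autoencoder.to_json())

    # Later: rebuild the (untrained) model from the saved architecture
    with open('autoencoder_architecture.json') as f:
        model = model_from_json(f.read())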

Saving and Loading Weights of a Keras Model

Along with the model architecture, you will also need the model weights to predict outputs from the trained model.
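A sketch (the file name is my choice):

    # Save only the learned weights to an HDF5 file
    autoencoder.save_weights('autoencoder_weights.h5')

    # Later: rebuild the same architecture, then load the weights into it
    autoencoder.load_weights('autoencoder_weights.h5')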

Saving and Loading Both Architecture and Weights in one File
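In Keras this is done with model.save() and load_model():

    from keras.models import load_model

    # Save architecture + weights + optimizer state in a single HDF5 file
    autoencoder.save('autoencoder_model.h5')

    # Later: restore everything with one call; no need to rebuild the model
    model = load_model('autoencoder_model.h5')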

This will save the following four things in the “autoencoder_model.h5” file:

  1. Model Architecture
  2. Model Weights
  3. Loss and Optimizer
  4. State of the optimizer, allowing you to resume training where you left off.

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Sparse Autoencoders

In the last blog we have seen autoencoders and its applications. In this blog we will learn one of its variant, sparse autoencoders.

In every autoencoder, we try to learn a compressed representation of the input. Let’s take the example of a simple autoencoder with an input vector of dimension 1000, compressed into 500 hidden units and reconstructed back into 1000 outputs. The hidden units will learn the correlated features present in the input. But what if the input features are completely random? Then it will be difficult for the hidden units to learn any interesting structure present in the data. In that situation, what we can do is increase the number of hidden units and add some sparsity constraints. Now the question is, what are sparsity constraints?

When a sparsity constraint is added to a hidden layer, only some units (those with large activation values) are activated and the rest are pushed to zero. So, even if we have a large number of hidden units (as in the above example), only some of them will fire, and they will learn the useful structure present in the data.

The simplest implementation of sparsity constraints can be done in Keras. You can simply add an activity_regularizer to a layer and it will do the rest, as in the sketch below.
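A sketch (the layer sizes and the L1 penalty weight are illustrative):

    from keras import regularizers
    from keras.layers import Input, Dense
    from keras.models import Model

    input_img = Input(shape=(784,))

    # The L1 activity regularizer penalizes large activations, pushing
    # most hidden units towards zero and thereby enforcing sparsity
    encoded = Dense(1000, activation='relu',
                    activity_regularizer=regularizers.l1(1e-5))(input_img)

    decoded = Dense(784, activation='sigmoid')(encoded)
    sparse_autoencoder = Model(input_img, decoded)
    sparse_autoencoder.compile(optimizer='adam', loss='binary_crossentropy')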

But if you want to add sparsity constraints by writing your own function, you can follow the reference given below.

References: Sparse Autoencoders

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.