In this blog, we will discuss how to find different features of contours such as area, centroid, orientation, etc. With the help of these features/statistics, we can do some sort of recognition. So, in this blog, we will refer to a very old fundamental work in computer vision known as Image moments that helps us to calculate these statistics. So, let’s first discuss what are image moments and how to calculate them.
In simple terms, image moments are a set of statistical parameters that measure the distribution of pixel locations and intensities. Mathematically, the image moment Mij of order (i,j) for a greyscale image with pixel intensities I(x,y) is calculated as

Mij = Σx Σy x^i · y^j · I(x,y)
Here, x and y refer to the column and row indices respectively, and I(x,y) refers to the intensity at location (x,y). Now, let’s discuss how simple image properties are calculated from image moments.
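The definition above can be sketched directly in NumPy. This is a minimal illustration, not the OpenCV implementation; the 3×3 patch is made up for the example.

```python
import numpy as np

def raw_moment(image, i, j):
    """Raw image moment M_ij = sum over x, y of x^i * y^j * I(x, y).
    Here x indexes columns and y indexes rows."""
    y, x = np.indices(image.shape)   # row (y) and column (x) index grids
    return np.sum((x ** i) * (y ** j) * image)

# A tiny made-up 3x3 greyscale patch
I = np.array([[0, 1, 0],
              [2, 3, 2],
              [0, 1, 0]], dtype=np.float64)

M00 = raw_moment(I, 0, 0)   # sum of all intensities -> 9.0
M10 = raw_moment(I, 1, 0)   # intensity-weighted sum of x-coordinates -> 9.0
```

Since the patch is symmetric about its centre pixel, M10/M00 and M01/M00 both come out to 1.0, the middle of the patch.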
Area:
For a binary image, the zeroth-order moment corresponds to the area. Let’s discuss how.

Using the above formula, the zeroth-order moment (M00) is given by

M00 = Σx Σy I(x,y)

For a binary image, this amounts to counting all the non-zero pixels, which is exactly the area. For a greyscale image, it corresponds to the sum of the pixel intensity values.
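As a quick check of the binary-image case, here is a minimal NumPy sketch with a made-up 0/1 mask:

```python
import numpy as np

# A binary mask: 1 = foreground, 0 = background
mask = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]], dtype=np.uint8)

# Zeroth-order moment: every pixel contributes its intensity, so for a
# 0/1 image this is just the number of foreground pixels, i.e. the area.
M00 = np.sum(mask)

print(M00, np.count_nonzero(mask))   # both give 4
```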
Centroid:
Centroid is simply the arithmetic mean position of all the points. In terms of image moments, the centroid (x̄, ȳ) is given by the relation

x̄ = M10/M00,  ȳ = M01/M00
This is simple to understand. For instance, for a binary image, M10 is the sum of the x-coordinates of all non-zero pixels and M00 is the total number of non-zero pixels, so their ratio is the average x-coordinate, which is exactly the centroid's x-coordinate.
Let’s take a simple example to understand how to calculate image moments for a given image.
Below are the area and centroid calculations for the above image.
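The example image from the post isn't reproduced here, so as a stand-in, here is a small made-up binary image worked through with the formulas above:

```python
import numpy as np

# Stand-in example image: a 2x3 block of ones inside a 5x5 frame
img = np.zeros((5, 5), dtype=np.uint8)
img[1:3, 1:4] = 1    # foreground occupies rows 1-2, columns 1-3

y, x = np.indices(img.shape)   # row (y) and column (x) index grids
M00 = np.sum(img)              # area = 6 foreground pixels
M10 = np.sum(x * img)          # sum of x-coordinates of foreground pixels
M01 = np.sum(y * img)          # sum of y-coordinates of foreground pixels

cx = M10 / M00                 # centroid x = (1+2+3)*2 / 6 = 2.0
cy = M01 / M00                 # centroid y = (1+2)*3 / 6 = 1.5
```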
OpenCV-Python
OpenCV provides a function cv2.moments() that outputs a dictionary containing all the moment values up to 3rd order.
output = cv2.moments(input[, binaryImage])
# input: image (single channel) or array of 2D points; should be either np.int32 or np.float32.
# binaryImage: only used if input is an image. If True, all non-zero pixels are treated as 1's.
Below is the sample code that shows how to use cv2.moments().
import cv2

# Read the image as greyscale
img = cv2.imread('star.jpg', 0)
# Binarize the image
ret, thresh = cv2.threshold(img, 127, 255, 0)
# Find the contours
contours, hierarchy = cv2.findContours(thresh, 1, 2)
cnt = contours[0]
# Calculate the moments
M = cv2.moments(cnt)
From this moments dictionary, we can easily extract the useful features such as area, centroid etc. as shown below.
# Calculate area
area = M['m00']
# Calculate centroid
cx = int(M['m10'] / M['m00'])
cy = int(M['m01'] / M['m00'])
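The introduction also mentions orientation. That comes from the second-order central moments, which cv2.moments reports as mu20, mu11 and mu02: the major-axis angle is 0.5·atan2(2·mu11, mu20 − mu02). Here is a hedged pure-NumPy sketch on a synthetic diagonal blob (the blob shape is made up for illustration):

```python
import numpy as np

# A synthetic blob elongated along the line y = x (about 45 degrees)
img = np.zeros((20, 20), dtype=np.float64)
for t in range(5, 15):
    img[t, t] = 1.0
    img[t, t - 1] = 1.0   # thicken the diagonal slightly

y, x = np.indices(img.shape)
M00 = img.sum()
cx, cy = (x * img).sum() / M00, (y * img).sum() / M00

# Second-order central moments (cv2.moments reports these as mu20, mu11, mu02)
mu20 = ((x - cx) ** 2 * img).sum()
mu11 = ((x - cx) * (y - cy) * img).sum()
mu02 = ((y - cy) ** 2 * img).sum()

# Orientation of the major axis, in radians
theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
print(np.degrees(theta))   # close to 45 for this blob
```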
That’s all about image moments. Hope you enjoy reading.
If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.
For a grayscale image, does calculating the centroid of the blob using the above method hold true? Also, I would like to calculate the radius of the blob. How could I do so?
Thanks! This was incredibly useful; I really struggled to understand image moments until I found this.
When you compute the x-coordinate of the centroid, why do you use the y-coordinates in the numerator, and vice versa for the y-coordinate of the centroid? The result is the same in this case, but shouldn’t it be switched?