In the previous blog, we discussed various first-order derivative filters. In this blog, we will discuss the Laplacian of Gaussian (LoG), a second-order derivative filter. So, let's get started.
Mathematically, the Laplacian of an image f(x, y) is defined as the sum of its second-order partial derivatives:

∇²f = ∂²f/∂x² + ∂²f/∂y²
Unlike first-order filters, which detect edges at local maxima or minima of the gradient, the Laplacian detects edges at zero crossings, i.e. where the response changes from negative to positive or vice-versa.
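To make the zero-crossing idea concrete, here is a small illustrative sketch (not from the original post) on a 1-D intensity ramp, using the finite-difference second derivative f(x−1) − 2f(x) + f(x+1):

```python
import numpy as np

# Hypothetical 1-D intensity profile with a ramp edge in the middle
signal = np.array([10, 10, 20, 60, 90, 90], dtype=float)

# Second derivative via the finite-difference kernel [1, -2, 1]
second_deriv = np.convolve(signal, [1, -2, 1], mode='valid')
print(second_deriv.tolist())  # → [10.0, 30.0, -10.0, -30.0]

# A zero crossing occurs where consecutive responses change sign;
# it lands at the middle of the edge, not at its start or end
signs = np.sign(second_deriv)
crossings = np.where(np.diff(signs) != 0)[0]
print(crossings.tolist())  # → [1]
```

The sign change between the second and third responses marks the edge centre, which is exactly the localization property discussed below.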
Let's obtain kernels for the Laplacian, similar to how we obtained kernels using finite-difference approximations for the first-order derivative. The second derivative in the x-direction is approximated by f(x+1, y) − 2f(x, y) + f(x−1, y), and analogously in the y-direction.
Adding these two kernels together we obtain the Laplacian kernel as shown below
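As a quick check (an illustrative sketch, not part of the original post), the two second-derivative kernels can be summed in NumPy to recover the Laplacian kernel:

```python
import numpy as np

# Finite-difference approximations of the second derivatives,
# written as 3x3 kernels
d2_dx2 = np.array([[0, 0, 0],
                   [1, -2, 1],
                   [0, 0, 0]])
d2_dy2 = np.array([[0, 1, 0],
                   [0, -2, 0],
                   [0, 1, 0]])

# Their sum is the (negative) Laplacian kernel
laplacian_kernel = d2_dx2 + d2_dy2
print(laplacian_kernel)
# [[ 0  1  0]
#  [ 1 -4  1]
#  [ 0  1  0]]
```

Note that the kernel elements sum to zero, matching the zero-response property mentioned below.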
This is called a negative Laplacian because the central peak is negative. Other variants of the Laplacian kernel can be obtained by also weighting the pixels in the diagonal directions. Make sure that the sum of all kernel elements is zero, so that the filter gives zero response in homogeneous regions.
Let’s now discuss some properties of the Laplacian
- Unlike first-order filters, which require two masks to find edges, the Laplacian uses only one mask; however, the edge orientation information is lost.
- The Laplacian gives better edge localization than first-order filters.
- Unlike first-order filters, the Laplacian is an isotropic filter, i.e. it produces a uniform edge response in all directions.
- Like first-order filters, the Laplacian is very sensitive to noise.
To reduce the effect of noise, the image is first smoothed with a Gaussian filter, and then the zero crossings are found using the Laplacian. This two-step process is called the Laplacian of Gaussian (LoG) operation.
But this can also be performed in one step. Instead of first smoothing an image with a Gaussian kernel and then taking its Laplacian, we can take the Laplacian of the Gaussian kernel itself and then convolve that with the image. Because differentiation commutes with convolution, ∇²(f ∗ g) = f ∗ (∇²g), where f is the image and g is the Gaussian kernel.
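This equivalence rests on the associativity of convolution, which can be verified numerically. The sketch below (an illustrative check, not from the original post) uses random data as a stand-in image and a 3×3 binomial filter as a stand-in Gaussian:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
f = rng.random((8, 8))  # stand-in for the image

# A small binomial approximation of a Gaussian kernel,
# and the Laplacian kernel
g = np.outer([1, 2, 1], [1, 2, 1]) / 16.0
lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

# Smooth first, then take the Laplacian ...
two_step = convolve2d(convolve2d(f, g, mode='full'), lap, mode='full')

# ... or convolve once with the Laplacian of the kernel
one_step = convolve2d(f, convolve2d(g, lap, mode='full'), mode='full')

print(np.allclose(two_step, one_step))  # → True
```

Up to floating-point rounding, both orderings give identical results, which is why the one-step LoG kernel works.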
Now, let's see how to obtain the LoG kernel. Mathematically, LoG can be written as

LoG(x, y) = −(1/(πσ⁴)) · (1 − (x² + y²)/(2σ²)) · e^(−(x² + y²)/(2σ²))
The LoG kernel weights can be sampled from the above equation for a given standard deviation, just as we did in Gaussian Blurring. Just convolve the kernel with the image to obtain the desired result, as easy as that.
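A minimal sketch of sampling such a kernel, assuming the standard LoG formula −(1/(πσ⁴))·(1 − r²/(2σ²))·e^(−r²/(2σ²)); the size and sigma values here are arbitrary examples, and the zero-sum correction at the end is one common convention:

```python
import numpy as np

def log_kernel(size, sigma):
    """Sample the LoG formula on a size x size grid (size must be odd)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    kernel = (-1.0 / (np.pi * sigma**4)
              * (1 - r2 / (2 * sigma**2))
              * np.exp(-r2 / (2 * sigma**2)))
    # Shift so the weights sum to zero, giving zero response
    # in homogeneous regions (truncation makes the raw sum nonzero)
    return kernel - kernel.mean()

k = log_kernel(9, 1.4)
print(k.shape)  # → (9, 9)
```

The sampled kernel can then be applied with, e.g., `cv2.filter2D(img, cv2.CV_64F, k)`.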
Select the size of the Gaussian kernel carefully. If LoG is used with a small Gaussian kernel, the result can be noisy; with a large Gaussian kernel, you may get poor edge localization.
Now, let's see how to do this using OpenCV-Python.
OpenCV-Python
OpenCV provides a built-in function, cv2.Laplacian(), that calculates the Laplacian of an image. Below is its basic syntax:
```python
cv2.Laplacian(src, ddepth[, ksize[, scale[, delta[, borderType]]]])
# src    - input image
# ddepth - desired depth of the destination image
# ksize  - kernel size
```
Steps for LoG:
- Apply LoG to the image. This can be done in two ways:
- First, apply Gaussian and then Laplacian or
- Convolve the image with LoG kernel directly
- Find the zero crossings in the image
- Threshold the zero crossings to extract only the strong edges.
Let's understand each step through code.
```python
import cv2
import numpy as np

# Load the image in greyscale
img = cv2.imread('D:/downloads/clouds.jpg', 0)

# Apply Gaussian Blur
blur = cv2.GaussianBlur(img, (3, 3), 0)

# Apply Laplacian operator in some higher datatype
laplacian = cv2.Laplacian(blur, cv2.CV_64F)
```
Since a zero crossing is a change from negative to positive or vice-versa, an approximate way to find the zero crossings is to clip the negative values.
```python
# But this tends to localize the edge towards the brighter side.
laplacian1 = laplacian / laplacian.max()
cv2.imshow('a7', laplacian1)
cv2.waitKey(0)
```
Another way is to check each pixel for a zero crossing, as shown below.
```python
def Zero_crossing(image):
    z_c_image = np.zeros(image.shape)

    # For each pixel, count the number of positive
    # and negative pixels in the neighborhood
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            negative_count = 0
            positive_count = 0
            neighbour = [image[i+1, j-1], image[i+1, j], image[i+1, j+1],
                         image[i, j-1], image[i, j+1],
                         image[i-1, j-1], image[i-1, j], image[i-1, j+1]]
            d = max(neighbour)
            e = min(neighbour)
            for h in neighbour:
                if h > 0:
                    positive_count += 1
                elif h < 0:
                    negative_count += 1

            # If both negative and positive values exist in
            # the pixel neighborhood, then that pixel is a
            # potential zero crossing
            z_c = (negative_count > 0) and (positive_count > 0)

            # Change the pixel value with the maximum neighborhood
            # difference with the pixel
            if z_c:
                if image[i, j] > 0:
                    z_c_image[i, j] = image[i, j] + np.abs(e)
                elif image[i, j] < 0:
                    z_c_image[i, j] = np.abs(image[i, j]) + d

    # Normalize and change datatype to 'uint8' (optional)
    z_c_norm = z_c_image / z_c_image.max() * 255
    z_c_image = np.uint8(z_c_norm)

    return z_c_image
```
Depending upon the image, you may need to apply thresholding and median blurring to suppress the noise.
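As a sketch of that optional cleanup (using a random stand-in array instead of a real zero-crossing image, and an arbitrary example threshold of 40):

```python
import numpy as np
from scipy.ndimage import median_filter

# Stand-in for the uint8 output of Zero_crossing() above
rng = np.random.default_rng(0)
edges = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

# 1. Threshold: keep only the strong zero crossings
strong = np.where(edges > 40, 255, 0).astype(np.uint8)

# 2. Median blur to suppress isolated noisy pixels
#    (cv2.medianBlur(strong, 3) does the same in OpenCV)
clean = median_filter(strong, size=3)

print(np.unique(strong))  # only 0 and 255 remain after thresholding
```

The threshold value and the median filter size are both parameters you would tune per image.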
Hope you enjoy reading.
If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.