
Laplacian of Gaussian (LoG)

In the previous blog, we discussed various first-order derivative filters. In this blog, we will discuss the Laplacian of Gaussian (LoG), a second-order derivative filter. So, let's get started.

Mathematically, the Laplacian of an image f(x, y) is defined as the sum of its second-order partial derivatives:

∇²f = ∂²f/∂x² + ∂²f/∂y²

Unlike first-order filters, which detect edges based on local maxima or minima, the Laplacian detects edges at zero crossings, i.e. where the value changes from negative to positive and vice-versa.

Let's obtain kernels for the Laplacian, similar to how we obtained kernels for the first-order derivative using finite difference approximations. The second-order central difference along x is f(x+1, y) − 2f(x, y) + f(x−1, y), which corresponds to the 1-D kernel [1, −2, 1]; the approximation along y is analogous.

Adding these two kernels together we obtain the Laplacian kernel as shown below
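
For reference, here is a minimal NumPy sketch that reconstructs these kernels from the finite-difference approximations above (the variable names are just for illustration):

    import numpy as np

    # Second-derivative (finite difference) kernels along x and y
    d2x = np.array([[0, 0, 0],
                    [1, -2, 1],
                    [0, 0, 0]])
    d2y = np.array([[0, 1, 0],
                    [0, -2, 0],
                    [0, 1, 0]])

    # Adding them gives the Laplacian kernel
    laplacian = d2x + d2y
    print(laplacian)
    # [[ 0  1  0]
    #  [ 1 -4  1]
    #  [ 0  1  0]]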

This is called a negative Laplacian because the central peak is negative. Other variants of the Laplacian can be obtained by also weighting the pixels in the diagonal directions. Make sure that the sum of all kernel elements is zero so that the filter gives zero response in homogeneous regions.

Let’s now discuss some properties of the Laplacian

  • Unlike first-order filters, which require two masks (one per direction) for finding edges, the Laplacian uses a single mask, but the edge orientation information is lost.
  • The Laplacian gives better edge localization as compared to first-order filters.
  • Unlike first-order filters, the Laplacian is an isotropic filter, i.e. it produces a uniform edge response in all directions.
  • Similar to first-order filters, the Laplacian is also very sensitive to noise.

To reduce the noise effect, the image is first smoothed with a Gaussian filter and then we find the zero crossings using the Laplacian. This two-step process is called the Laplacian of Gaussian (LoG) operation.

But this can also be performed in one step. Instead of first smoothing the image with a Gaussian kernel and then taking its Laplacian, we can take the Laplacian of the Gaussian kernel itself and then convolve the result with the image. This works because differentiation commutes with convolution: ∇²(f ∗ g) = f ∗ ∇²g, where f is the image and g is the Gaussian kernel.

Now, let's see how to obtain the LoG kernel. Mathematically, LoG can be written as

LoG(x, y) = ∇²G(x, y) = −(1/(πσ⁴)) · [1 − (x² + y²)/(2σ²)] · e^(−(x² + y²)/(2σ²))

The LoG kernel weights can be sampled from the above equation for a given standard deviation, just as we did in Gaussian Blurring. Then simply convolve the kernel with the image to obtain the desired result; it is as easy as that.
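
As an illustration, here is a minimal sketch of sampling a LoG kernel with NumPy; the helper name log_kernel and the default kernel-size rule are assumptions made for this example, not part of any library:

    import numpy as np

    def log_kernel(sigma, ksize=None):
        """Sample the Laplacian of Gaussian on a ksize x ksize grid."""
        if ksize is None:
            ksize = 2 * int(np.ceil(3 * sigma)) + 1   # cover +/- 3 sigma
        r = ksize // 2
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        s = (x**2 + y**2) / (2.0 * sigma**2)
        kernel = -(1.0 / (np.pi * sigma**4)) * (1 - s) * np.exp(-s)
        return kernel - kernel.mean()                 # force zero sum so flat regions give zero response

    print(log_kernel(sigma=1.4, ksize=9).round(3))

The resulting kernel can then be applied to an image with cv2.filter2D().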

Select the size of the Gaussian kernel carefully. If LoG is used with a small Gaussian kernel, the result can be noisy. If you use a large Gaussian kernel, you may get poor edge localization.

Now, let’s see how to do this using OpenCV-Python

OpenCV-Python

OpenCV provides a built-in function, cv2.Laplacian(), that calculates the Laplacian of an image. You can find its documentation here. Below is the basic syntax of this function.
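
Its basic syntax looks like this (only the first two arguments are required; the remaining ones are shown with their default values):

    dst = cv2.Laplacian(src, ddepth, ksize=1, scale=1, delta=0,
                        borderType=cv2.BORDER_DEFAULT)

Here, ddepth is the desired depth of the output image; use a signed or float type such as cv2.CV_64F so that negative values are not clipped.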

Steps for LoG:

  • Apply LoG on the image. This can be done in two ways:
    • First, apply Gaussian and then Laplacian or
    • Convolve the image with LoG kernel directly
  • Find the zero crossings in the image
  • Threshold the zero crossings to extract only the strong edges.

Let’s understand each step through code

Since a zero crossing is a change from negative to positive and vice-versa, an approximate way to find the zero crossings is to clip the negative values of the LoG output to zero.
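
A minimal sketch of this approximate approach, assuming a grayscale input file named 'image.jpg' and arbitrarily chosen parameters:

    import cv2
    import numpy as np

    img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

    # Step 1: smooth with a Gaussian, then apply the Laplacian
    # (use a float depth so that negative values are preserved)
    blur = cv2.GaussianBlur(img, (3, 3), 0)
    lap = cv2.Laplacian(blur, cv2.CV_64F)

    # Approximate the zero crossings by clipping the negative values
    clipped = np.clip(lap, 0, lap.max())
    result = np.uint8(255 * clipped / (clipped.max() + 1e-8))   # scale to 0-255 for display

    cv2.imshow('LoG (clipped)', result)
    cv2.waitKey(0)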

Another way is to check each pixel for zero crossing as shown below
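
Here is a simple (and admittedly slow) sketch of an explicit per-pixel check, reusing the LoG output lap from the previous snippet; treating a pixel as a zero crossing when its 3×3 neighborhood contains both positive and negative values is just one of several reasonable conventions:

    # A pixel is marked as an edge if its 3x3 neighborhood contains
    # both positive and negative LoG values
    rows, cols = lap.shape
    zero_cross = np.zeros_like(lap, dtype=np.uint8)

    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            patch = lap[i - 1:i + 2, j - 1:j + 2]
            if patch.min() < 0 and patch.max() > 0:
                zero_cross[i, j] = 255

    cv2.imshow('Zero crossings', zero_cross)
    cv2.waitKey(0)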

Depending upon the image, you may need to apply thresholding and median blurring to suppress the noise.

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Bilateral Filtering

Till now, we have discussed various smoothing filters like Averaging, Median, Gaussian, etc. All these filters are effective in removing different types of noise but, at the same time, produce an undesirable side effect of blurring the edges as well. So wouldn't it be nice if we could somehow prevent averaging across edges while still smoothing the other regions? This is exactly what Bilateral filtering does.

Let’s first refresh some basic concepts which will be needed to understand Bilateral filtering.

I hope you are all familiar with the domain and range of a function. If not, let's refresh these concepts. The domain and range are the sets of all possible values that the independent and dependent variables can take, respectively. We already know that an image is also a function (a 2-D light intensity function F(x, y)). Thus, for an image, the domain is the set of all possible pixel locations and the range corresponds to all possible intensity values.

Now, let’s use these concepts to understand Bilateral filtering.

All the filters we have discussed till now, like Median, Gaussian, etc., were domain filters. This means that the filter weights are assigned using spatial closeness (i.e. the domain). This has an issue: it will blur the edges as well. Let's take an example to see how.

Below is a small 3×3 patch extracted from a larger image containing a diagonal edge. Because domain filters assign weights according to spatial closeness, more weight is given to the nearer pixels than to the distant ones. This leads to edge blurring. See how the central pixel value changed from 10 to 4.

Thus, domain filters don't consider whether a pixel is an edge pixel or not. They just assign weights according to spatial closeness, and this leads to edge blurring.

Now, let's see what happens if we use range filters instead. In range filters, we assign weights according to the intensity difference. This ensures that only pixels with an intensity similar to that of the central pixel are considered for blurring. But range filtering takes no account of the spatial relationship. So, pixels of similar intensity that are far away from the central pixel can affect its final value more than nearby, only approximately similar pixels. This makes no sense.

Thus, range filtering alone also doesn’t solve the problem of edge blurring.

Now, what if we combine both domain and range filtering? That solves our problem. First, the domain filter makes sure that only nearby pixels (say, a 3×3 window) are considered for blurring, and then the range filter makes sure that the weights within this window are assigned according to the intensity difference with respect to the center pixel. This way the edges are preserved. This is known as Bilateral filtering ('bi' because it combines both domain and range filtering).
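
To make this concrete, here is a minimal sketch of how the combined weight could be computed for a single neighbor; the function and parameter names (sigma_d for the domain sigma, sigma_r for the range sigma) are illustrative, not from any library:

    import numpy as np

    def bilateral_weight(di, dj, intensity_diff, sigma_d, sigma_r):
        """Combined bilateral weight for a neighbor at offset (di, dj)."""
        domain = np.exp(-(di**2 + dj**2) / (2 * sigma_d**2))    # spatial closeness
        rng = np.exp(-(intensity_diff**2) / (2 * sigma_r**2))   # intensity similarity
        return domain * rng

The output pixel is then the weighted average of its neighbors using these weights (normalized by their sum).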

I hope you understood Bilateral filtering. Now, let’s see how to do this using OpenCV-Python

OpenCV-Python

OpenCV provides an inbuilt function, cv2.bilateralFilter(), for bilateral filtering, as shown below. You can read more about it here, but a short description is given below.
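
Its basic syntax is:

    dst = cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace)

where d is the diameter of each pixel neighborhood, sigmaColor is the filter sigma in the intensity (range) domain, and sigmaSpace is the filter sigma in the coordinate (spatial) domain.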

  • If the sigma values are small (< 10), the filter will not have much effect, whereas if they are large (> 150), they will have a very strong effect, making the image look “cartoonish”.
  • Large filters (d > 5) are very slow, so it is recommended to use d=5 for real-time applications, and perhaps d=9 for offline applications that need heavy noise filtering.

Let’s take an example to understand this
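
A minimal sketch comparing Gaussian and bilateral smoothing; the file name and parameter values below are arbitrary choices:

    import cv2

    img = cv2.imread('image.jpg')

    gaussian = cv2.GaussianBlur(img, (5, 5), 0)
    bilateral = cv2.bilateralFilter(img, 9, 75, 75)   # d=9, sigmaColor=75, sigmaSpace=75

    cv2.imshow('Gaussian', gaussian)
    cv2.imshow('Bilateral', bilateral)
    cv2.waitKey(0)
    cv2.destroyAllWindows()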

There exist several extensions to this filter, like the guided filter, which deals with the artifacts generated by bilateral filtering. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Gaussian Blurring

In the previous blog, we discussed smoothing filters. In this article, we will discuss another smoothing technique known as Gaussian Blurring, which uses a low pass filter whose weights are derived from a Gaussian function. This is perhaps the most frequently used low pass filter in computer vision applications. We will also discuss various properties of the Gaussian filter that make the algorithm more efficient. So, let's get started with a basic background introduction.

We already know that a digital image is obtained by sampling and quantizing a continuous signal. Thus, if we were to interpolate a pixel value, chances are that it resembles the values of its neighboring pixels more than those of distant pixels. Similarly, while smoothing an image, it makes more sense to take a weighted average instead of just averaging the values under the mask (like we did in Averaging).

So, we should look for a distribution/function that assigns more weights to the nearest pixels as compared to the distant pixels. This is the motivation for using Gaussian distribution.

A 2-d Gaussian function is obtained by multiplying two 1-d Gaussian functions (one for each direction) as shown below

G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))

2-d Gaussian function with mean = 0 and standard deviation σ

Now, just convolve the 2-d Gaussian function with the image to get the output. But for that, we need to produce a discrete approximation to the Gaussian function. Here comes the problem.

Because the Gaussian function has infinite support (meaning it is non-zero everywhere), the approximation would require an infinitely large convolution kernel. In other words, for each pixel calculation, we will need the entire image. So, we need to truncate or limit the kernel size.

For a Gaussian, we know that about 99.7% of the distribution falls within 3 standard deviations of the mean, after which the values are effectively close to zero. So, we limit the kernel size to contain only values within 3σ from the mean (a common choice is a kernel size of 2⌈3σ⌉ + 1). This approximation generally yields a result sufficiently close to that obtained with the entire Gaussian distribution.

Note: The approximated kernel weights will not sum exactly to 1, so normalize the weights by the overall kernel sum. Otherwise, this will cause darkening or brightening of the image.

A normalized 3×3 Gaussian filter is shown below (see the weight distribution). Assuming the standard binomial approximation, its weights are (1/16) × [[1, 2, 1], [2, 4, 2], [1, 2, 1]].

Later we will see how to obtain different Gaussian kernels. Now, let’s see some interesting properties of the Gaussian filter that makes it efficient.

Properties

  • First, the Gaussian kernel is linearly separable. This means we can break the 2-d filter into two 1-d filters (one horizontal, one vertical). Because of this, the computational complexity per pixel is reduced from O(n²) to O(n) for an n×n kernel. Let's see an example after this list.
  • Applying multiple successive Gaussian blurs is equivalent to applying a single, larger Gaussian blur whose radius is the square root of the sum of the squares of the individual kernels' radii. Using this property, we can approximate a non-separable filter by a combination of multiple separable filters.
  • The 1-D Gaussian kernel weights can be obtained quickly using Pascal's Triangle. See how the third row (1, 2, 1) corresponds to the 3×3 filter we used above.
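
Here is a small NumPy sketch of the separability property mentioned in the first point, building the 3×3 kernel from the 1-D binomial weights [1, 2, 1]:

    import numpy as np

    # 1-D binomial (Pascal's Triangle) weights, normalized to sum to 1
    k1d = np.array([1, 2, 1], dtype=np.float64)
    k1d /= k1d.sum()            # [0.25, 0.5, 0.25]

    # The outer product of the two 1-D kernels gives the 2-D Gaussian kernel
    k2d = np.outer(k1d, k1d)
    print(k2d * 16)
    # [[1. 2. 1.]
    #  [2. 4. 2.]
    #  [1. 2. 1.]]

Convolving with k1d along the rows and then along the columns gives the same result as convolving once with k2d, but with fewer multiplications.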

Because of these properties, Gaussian Blurring is one of the most efficient and widely used smoothing algorithms. Now, let's see some applications.

Applications

  • Computer Graphics
  • Before edge detection (Canny Edge Detector)
  • Before down-sampling an image, to reduce aliasing artifacts

Now let’s see how to do this using OpenCV-Python

OpenCV-Python

OpenCV provides inbuilt functions both for creating a Gaussian kernel and for applying Gaussian blurring. Let's see them one by one.

To create a Gaussian kernel of your choice, you can use cv2.getGaussianKernel().
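
Its syntax is:

    # Returns a ksize x 1 column vector of Gaussian weights, normalized to sum to 1.
    # If sigma is non-positive, it is computed automatically from ksize.
    kernel = cv2.getGaussianKernel(ksize, sigma)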

To apply Gaussian blurring directly, use cv2.GaussianBlur().
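
Its syntax is:

    # ksize is a (width, height) tuple; both values must be positive and odd
    # (or zero, in which case they are computed from sigma).
    # If sigmaY is omitted or zero, it is taken to be equal to sigmaX.
    dst = cv2.GaussianBlur(src, ksize, sigmaX)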

This first creates a Gaussian kernel and then convolves it with the image.

Now, let's take an example to implement these two functions. First, use cv2.getGaussianKernel() to create a 1-D kernel. Then use cv2.sepFilter2D() to apply this kernel along both directions of the input image.
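
A minimal sketch of this first method; the file name and the kernel parameters (size 5, sigma 1) are arbitrary choices:

    import cv2

    img = cv2.imread('image.jpg')

    # Create a 1-D Gaussian kernel (a 5x1 column vector, sigma = 1)
    kernel = cv2.getGaussianKernel(5, 1)

    # Apply it separably: once along x and once along y
    # (-1 keeps the output depth the same as the input)
    blurred = cv2.sepFilter2D(img, -1, kernel, kernel)

    cv2.imshow('Separable Gaussian blur', blurred)
    cv2.waitKey(0)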

The second method is quite easy to use. It is just one line, as shown below.
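
Using the same img and the same kernel parameters as above:

    blurred = cv2.GaussianBlur(img, (5, 5), 1)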

Both these methods produce the same result, but the second one is easier to use. Try this on different types of noise and compare the results with the other techniques.

That’s all about Gaussian blurring. Hope you enjoy reading. In the next blog, we will discuss Bilateral filtering, another smoothing technique that preserves edges also.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.