Spatial Filtering

In the previous blogs, we discussed Intensity Transformation, a point processing technique for image enhancement. In this blog, we will discuss another image enhancement method known as Spatial Filtering, which transforms the intensity of a pixel according to the intensities of its neighboring pixels.

First, let's discuss what a spatial filter is.

A spatial filter is a window with some width and height, usually much smaller than those of the image; 3×3, 5×5, and 7×7 filters are the most common. The values in the filter are called coefficients or weights. A filter is also referred to as a mask, kernel, template, or window. A 3×3 spatial filter is shown below

Now, let’s see the mechanism of Spatial Filtering.

Spatial filtering can be characterized as a 'shift-and-multiply' operation. First, we place the filter over a portion of the image. Then we multiply each filter weight (or coefficient) by the corresponding image pixel value and sum up the products. The image pixel under the center of the filter is then replaced with this result. Finally, we shift the filter to the next location and repeat the process.
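As a quick illustration of a single step, here is one filter position worked out with a 3×3 averaging filter (the pixel values are made up):

```python
import numpy as np

# 3x3 neighbourhood of the image around the current pixel (made-up values)
region = np.array([[10, 20, 30],
                   [40, 50, 60],
                   [70, 80, 90]])

# 3x3 averaging filter: every weight is 1/9
weights = np.ones((3, 3)) / 9.0

# multiply element-wise, sum the products -> new value of the centre pixel
new_value = np.sum(region * weights)   # (10 + 20 + ... + 90) / 9 = 50.0
```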

For pixels near the image border, where the filter extends beyond the image, we pad the image with 0's. The whole process is shown below, where a 3×3 filter is convolved with a 5×5 input image (blue color below), zero-padded on every side, to produce a 7×7 output image.

Strictly speaking, this process is known as "correlation", but here we refer to it as the "convolution" operation. It should not be confused with convolution in the mathematical sense.

Note: Mathematical convolution is the same as correlation except that the mask is first flipped both horizontally and vertically (i.e., rotated by 180°).

Mathematically, the result of convolving a filter mask w of size m×n with an image f of size M×N is given by the expression
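In the standard correlation form (with the filter centered at pixel (x, y)), this expression is

$$ g(x, y) = \sum_{s=-a}^{a} \sum_{t=-b}^{b} w(s, t)\, f(x+s,\, y+t) $$

where g denotes the output (filtered) image.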

Here, we assume that the filter is of odd size; thus m = 2a+1 and n = 2b+1, where a and b are positive integers.

Let’s see how to do this using Python

Python Code
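Below is a minimal sketch of such a filtering function, assuming a 2D grayscale NumPy array and 'full' zero padding; the function name spatial_filter is just a placeholder.

```python
import numpy as np

def spatial_filter(image, kernel):
    """Correlate a 2D grayscale image with a kernel, using 'full' zero padding."""
    m, n = kernel.shape                          # kernel size (assumed odd)
    pad_h, pad_w = m - 1, n - 1                  # zero padding on each side
    padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)), mode='constant')

    out_h = image.shape[0] + m - 1               # 5x5 image, 3x3 kernel -> 7x7 output
    out_w = image.shape[1] + n - 1
    output = np.zeros((out_h, out_w), dtype=np.float64)

    # slide the kernel over every position, multiply element-wise and sum
    for i in range(out_h):
        for j in range(out_w):
            output[i, j] = np.sum(padded[i:i + m, j:j + n] * kernel)
    return output
```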

Again, remember that this function actually computes correlation, not convolution. If you need a true convolution, flip the kernel both horizontally and vertically and then apply the above function.
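For example, continuing with the spatial_filter sketch above (the kernel here is just an arbitrary example):

```python
import numpy as np

image = np.arange(25, dtype=np.float64).reshape(5, 5)   # toy 5x5 image
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=np.float64)       # example 3x3 kernel

# flip the kernel both horizontally and vertically (a 180-degree rotation)
flipped_kernel = np.flip(kernel)                        # same as kernel[::-1, ::-1]

# applying the correlation function to the flipped kernel gives a true convolution
result = spatial_filter(image, flipped_kernel)
```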

If you want the output image to be of the same size as the input, you must change the padding as shown below
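One way to do this (again just a sketch, assuming an odd-sized kernel) is to pad by half the kernel size on each side instead:

```python
import numpy as np

def spatial_filter_same(image, kernel):
    """Correlate a 2D grayscale image with a kernel; output has the same size as the input."""
    m, n = kernel.shape
    pad_h, pad_w = (m - 1) // 2, (n - 1) // 2    # half the kernel size on each side
    padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)), mode='constant')

    output = np.zeros(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            output[i, j] = np.sum(padded[i:i + m, j:j + n] * kernel)
    return output
```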

You can also do this using scipy or other libraries.
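For instance, scipy.signal provides correlate2d and convolve2d, which handle the zero padding and (for convolve2d) the kernel flip for you; the image and kernel below are just placeholders:

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

image = np.arange(25, dtype=np.float64).reshape(5, 5)   # toy 5x5 image
kernel = np.ones((3, 3)) / 9.0                          # 3x3 averaging filter

full = correlate2d(image, kernel, mode='full', boundary='fill', fillvalue=0)   # 7x7 output
same = correlate2d(image, kernel, mode='same', boundary='fill', fillvalue=0)   # 5x5 output

# convolve2d flips the kernel first, i.e. a true mathematical convolution
conv = convolve2d(image, kernel, mode='same', boundary='fill', fillvalue=0)
```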

OpenCV

OpenCV has a built-in function, cv2.filter2D(), to convolve a kernel with an image. Its arguments are

  • src: input image
  • ddepth: desired depth of the output image. If it is negative, it will be the same as that of the input image.
  • kernel: the filter kernel, a single-channel floating-point matrix.
  • borderType: pixel extrapolation method.

This returns an output image of the same size and the same number of channels as the input image. Depending on the border type, you may get different outputs near the image borders.
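A minimal usage sketch (the image path and the 3×3 averaging kernel are just placeholders):

```python
import cv2
import numpy as np

img = cv2.imread('image.jpg')                  # placeholder path
kernel = np.ones((3, 3), np.float32) / 9.0     # 3x3 averaging kernel

# ddepth = -1 -> output has the same depth as the input.
# Note: like our function above, cv2.filter2D computes correlation, not convolution;
# flip the kernel with cv2.flip(kernel, -1) if you need a true convolution.
dst = cv2.filter2D(img, -1, kernel, borderType=cv2.BORDER_CONSTANT)
```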

Hope you enjoy reading. In the next blog, we will learn how to do image smoothing or blurring by just changing the filter weights.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.
