Category Archives: Image Processing

Creating Subplots in OpenCV-Python

In this blog, we will learn how to create subplots using OpenCV-Python. We know that cv2.imshow() shows only one image at a time. Displaying images side by side helps greatly in analyzing the result. Unlike Matlab, OpenCV has no direct function for creating subplots. But since OpenCV reads images as arrays, we can concatenate arrays using the inbuilt cv2.hconcat() and cv2.vconcat() functions and then display the concatenated image using cv2.imshow().

cv2.hconcat([img1, img2]) returns the horizontally concatenated image as output. The same holds for cv2.vconcat(), which concatenates vertically.

Below is a sample where I display 2 gamma-corrected images using this method.
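A minimal sketch of the idea (the file name and gamma values are placeholders; a grayscale input is assumed):

    import cv2
    import numpy as np

    # Placeholder file name; use your own image
    img = cv2.imread('cameraman.jpg', cv2.IMREAD_GRAYSCALE)

    def gamma_correct(image, gamma):
        # Normalize to [0, 1], apply the power law, rescale to [0, 255]
        return np.uint8(255 * ((image / 255.0) ** gamma))

    img1 = gamma_correct(img, 0.5)   # brightens the image
    img2 = gamma_correct(img, 2.0)   # darkens the image

    # Horizontally concatenate the two results and show them in one window
    cv2.imshow('Gamma 0.5 | Gamma 2.0', cv2.hconcat([img1, img2]))
    cv2.waitKey(0)
    cv2.destroyAllWindows()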

The output looks like this

To put text on the images, use cv2.putText(), and if you want to leave spacing between the images shown, use cv2.copyMakeBorder(). You can play around with many other OpenCV functions.
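For example, a sketch of labeling one image and adding a white strip on its right, reusing img1 from the snippet above (the label text, position, font, and border width are arbitrary choices):

    # Draw a label onto the image and pad 10 white pixels on the right
    labeled = cv2.putText(img1.copy(), 'gamma = 0.5', (10, 30),
                          cv2.FONT_HERSHEY_SIMPLEX, 1, 255, 2)
    spaced = cv2.copyMakeBorder(labeled, 0, 0, 0, 10,
                                cv2.BORDER_CONSTANT, value=255)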

Note: Array dimensions must match when using cv2.hconcat(). This means you cannot display a color and a grayscale image side by side using this method.

I hope this information will help you. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Bit-plane Slicing

You probably know that everything on a computer is stored as strings of bits. In bit-plane slicing, we take advantage of this fact to perform various image operations. Let's see how.

I hope you have a basic understanding of the relationship between binary and decimal numbers.

For an 8-bit image, a pixel value of 0 is represented as 00000000 in binary form and 255 is encoded as 11111111. Here, the leftmost bit is known as the most significant bit (MSB), as it contributes the most to the pixel value: e.g. if the MSB of 11111111 is changed to 0 (i.e. 01111111), the value changes from 255 to 127. Similarly, the rightmost bit is known as the least significant bit (LSB).

In bit-plane slicing, we divide the image into bit planes. This is done by first converting the pixel values into binary form and then separating each bit position into its own plane. Let's see an example.

For simplicity, let's take a 3×3, 3-bit image as shown below. We know that 3-bit pixel values can range from 0 to 7. e.g. a pixel of value 5 (binary 101) contributes 1 to the MSB plane, 0 to the middle plane, and 1 to the LSB plane.

Bit Plane Slicing

I hope you understand what bit-plane slicing is and how it is performed. The next question that comes to mind is: what's the benefit of doing this?

Pros:

  • Image compression (we will see later how we can reconstruct a near-original image using fewer bits).
  • Converting a gray-level image to a binary image. In general, an image reconstructed from bit planes is similar to applying some intensity transformation function to the original image, e.g. the image reconstructed from the MSB alone is the same as applying a thresholding function to the original image. We will validate this in the code below.
  • Through this, we can analyze the relative importance of each bit in the image, which helps in determining the number of bits needed to quantize the image.

Let’s see how we can do this using OpenCV-Python

Code
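A minimal sketch using NumPy bit operations (the file name is a placeholder; each plane is scaled to 0/255 so it is visible when displayed):

    import cv2
    import numpy as np

    # Placeholder file name; use your own image
    img = cv2.imread('cameraman.jpg', cv2.IMREAD_GRAYSCALE)

    # Plane k holds bit k of every pixel (k = 7 is the MSB)
    planes = [np.uint8(((img >> k) & 1) * 255) for k in range(8)]

    # Arrange from MSB to LSB: top row shows bit planes 8,7,6,5
    top = cv2.hconcat(planes[7:3:-1])
    bottom = cv2.hconcat(planes[3::-1])
    cv2.imshow('Bit planes', cv2.vconcat([top, bottom]))
    cv2.waitKey(0)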

The output looks like this

Original Image
8 bit planes (Top row – 8,7,6,5 ; bottom – 4,3,2,1 bit planes)

Clearly from the above figure, the last 4 bit planes do not seem to have much information in them.

Now, if we combine the 8,7,6,5 bit planes, we will get approximately the original image as shown below.

Image using 4 bit planes (8,7,6,5)

This can be done by the following code
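A sketch of this reconstruction, reusing img from the snippet above (keeping only bits 7 down to 4 is equivalent to masking with 0b11110000):

    # Keep only the top 4 bit planes (bits 7..4) and zero the rest
    reconstructed = np.uint8(img & 0b11110000)
    cv2.imshow('Reconstructed from 4 planes', reconstructed)
    cv2.waitKey(0)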

Clearly, storing these 4 planes instead of the original 8-bit image requires less space. Thus, bit-plane slicing is used in image compression.

I hope you understand Bit plane slicing. If you find any other application of this, please let me know. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Log Transformation

Log transformation means replacing each pixel value with its logarithm. The general form of the log transformation function is

s = T(r) = c*log(1+r)

where 's' and 'r' are the output and input pixel values, and c is a scaling constant given by the following expression (for 8-bit)

c = 255/(log(1 + max_input_pixel_value))

The value of c is chosen so that the maximum output value corresponds to the bit size used, e.g. for an 8-bit image, c is chosen such that the maximum output equals 255.

For an 8-bit image, log transformation looks like this

Clearly, the low intensity values in the input image are mapped to a wider range of output levels. The opposite is true for the higher values.

Applications:

  • Expands the dark pixels in the image while compressing the brighter pixels
  • Compresses the dynamic range (e.g. for displaying the Fourier transform).

Dynamic range refers to the ratio of the maximum and minimum intensity values. When the dynamic range of the image is greater than that of the displaying device (as with the Fourier transform), the lower values are suppressed. To overcome this issue, we use the log transform: it first compresses the dynamic range and then upscales the image to the dynamic range of the display device. In this way, lower values are enhanced and the image shows significantly more detail.

The code below shows how to apply the log transform using OpenCV-Python.
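A minimal sketch (the file name is a placeholder; a grayscale input is assumed):

    import cv2
    import numpy as np

    # Placeholder file name; use your own image
    img = cv2.imread('cameraman.jpg', cv2.IMREAD_GRAYSCALE)

    # c is chosen so that the maximum input value maps to 255
    c = 255 / np.log(1 + np.max(img))

    # s = c * log(1 + r), computed in float and cast back for display
    log_transformed = np.uint8(c * np.log(1 + img.astype(np.float32)))

    cv2.imshow('Log transform', log_transformed)
    cv2.waitKey(0)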

Thus, a logarithmic transform is appropriate when we want to enhance the low pixel values at the expense of loss of information in the high pixel values.

Be careful: if most of the details are present in the high pixel values, applying the log transform results in a loss of information, as shown below.

Before
After

In the next blog, we will discuss Power law or Gamma transformation. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Image Negatives or inverting images using OpenCV

Most of you might have heard the term "image negative": in the good old days, negatives were used to produce photographs. Film photography has not yet become obsolete, as some wedding photographers are still shooting film, but because one has to pay for film rolls and processing fees, most people have switched to digital.

I recently heard of the Foveon X3 direct image sensor, which claims to combine the power of a digital sensor with the essence of film. (Check here)

Image negative is produced by subtracting each pixel from the maximum intensity value, e.g. for an 8-bit image, the max intensity value is 2^8 - 1 = 255, thus each pixel is subtracted from 255 to produce the output image.

Thus, the transformation function used in image negative is

s = T(r) = L – 1 – r

where L-1 is the max intensity value, and s and r are the output and input pixel values respectively.

For grayscale images, light areas appear dark and vice versa. For color images, colors are replaced by their complementary colors. Thus, red areas appear cyan, greens appear magenta, and blues appear yellow, and vice versa.
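Method 1

A direct way is plain array arithmetic: subtract every pixel from 255. A minimal sketch (the file name is a placeholder):

    import cv2

    # Placeholder file name; use your own image
    img = cv2.imread('lena.jpg')

    # Subtract every pixel from the maximum intensity value (255 for 8-bit)
    negative = 255 - img

    cv2.imshow('Negative', negative)
    cv2.waitKey(0)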

The output looks like this

Method 2

OpenCV provides a built-in function, cv2.bitwise_not(), that inverts every bit of an array. It takes the original image as input and outputs the inverted image. Below is the code for this.
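A sketch, reusing img from above:

    # cv2.bitwise_not() flips every bit; for uint8 this equals 255 - pixel
    negative = cv2.bitwise_not(img)
    cv2.imshow('Negative (bitwise_not)', negative)
    cv2.waitKey(0)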

There is a long debate about whether black-on-white or white-on-black is better. To my knowledge, the image negative favors black-on-white, so it is suited for enhancing white or gray information embedded in the dark regions of an image, especially when the black areas are dominant in size.

Application: In grayscale images, when the background is black, the foreground gray levels are not clearly visible. By converting the background to white, the gray levels become more visible.

In the next blog, we will discuss Log transformations in detail. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Intensity Transformation

In intensity transformation, as the name suggests, we transform pixel intensity values using some transformation function or mathematical expression.

An intensity transformation operation is usually represented in the form

s = T(r)

where r and s denote the pixel value before and after processing, and T is the transformation that maps pixel value r into s.

Basic types of transformation functions used for image enhancement are

  • Linear (Negative and Identity Transformation)
  • Logarithmic (log and inverse-log transformation)
  • Power law transformation

The figure below summarizes these functions. Here, L denotes the number of intensity levels, so pixel values lie in [0, L-1] (for 8-bit, [0, 255]).


source: R. C. Gonzalez and R. E. Woods, Digital Image Processing

This is a spatial domain technique, which means that all the operations are done directly on the pixels. It is also known as a point processing technique (the output depends only on a single pixel), as opposed to neighborhood processing techniques (like filtering), which we will discuss later.

Applications:

  • To increase the contrast between certain intensity values or image regions.
  • For image thresholding or segmentation

In the next blog, we will discuss these different transformation functions in detail. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Image Processing – Bicubic Interpolation

In the last blog, we discussed what Bi-linear interpolation is and how it is performed on images. In this blog, we will learn Bi-cubic interpolation in detail.

Note: We will be using some concepts from the Nearest Neighbour and Bi-linear interpolation blogs. Check them first before moving forward.

Difference between Bi-linear and Bi-cubic:

  1. Bi-linear uses 4 nearest neighbors to determine the output, while Bi-cubic uses 16 (4×4 neighbourhood).
  2. Weight distribution is done differently.

So, the only thing we need to know is how the weights are distributed; the rest is the same as Bi-linear.

In OpenCV, weights are distributed according to the following code (whole code can be found here)
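As a rough Python translation of that C++ code (OpenCV's resize uses the constant A = -0.75), the four weights for a fractional offset x in [0, 1] look like this:

    # Python sketch of OpenCV's cubic weight computation (A = -0.75)
    def cubic_coeffs(x, A=-0.75):
        c0 = ((A*(x + 1) - 5*A)*(x + 1) + 8*A)*(x + 1) - 4*A
        c1 = ((A + 2)*x - (A + 3))*x*x + 1
        c2 = ((A + 2)*(1 - x) - (A + 3))*(1 - x)*(1 - x) + 1
        c3 = 1 - c0 - c1 - c2          # the weights sum to 1
        return [c0, c1, c2, c3]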

The x used in the above code is calculated from the code below, where x = fx.

Similarly, for y, replace x with fy; fy is obtained by replacing dx and scale_x in that code with dy and scale_y respectively (explained in the previous blog).
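Putting both pieces together for this example (a sketch, reusing cubic_coeffs from above):

    import math

    # Fractional offset of output pixel d projected into the input grid
    def get_fraction(d, scale):
        f = (d + 0.5) * scale - 0.5
        return f - math.floor(f)

    # For P2: dx = 1, dy = 0 and scale_x = scale_y = 2/4 = 0.5
    fx = get_fraction(1, 0.5)    # 0.25
    fy = get_fraction(0, 0.5)    # 0.75
    print(cubic_coeffs(fx))      # [-0.1055, 0.8789, 0.2617, -0.0351] (rounded)
    print(cubic_coeffs(fy))      # [-0.0351, 0.2617, 0.8789, -0.1055] (rounded)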

Note: For Matlab, use A = -0.50.

Let’s see an example. We take the same 2×2 image from the previous blog and want to upscale it by a factor of 2 as shown below

Steps:

  • In the last blog, we calculated P1. This time let's take 'P2'. First, we find the position of P2 in the input image as we did before: its coordinates are (0.75,0.25), with dx = 1 and dy = 0.
  • Because cubic interpolation needs 4 pixels (2 on the left and 2 on the right), we pad the input image.
  • OpenCV has different methods to add borders, which you can check here. Here, I used the cv2.BORDER_REPLICATE method, but you can use any of them. After padding, the input image looks like this
After padding (the blue square is the original input image)
  • To find the value of P2, let’s first visualize where P2 is in the image. Yellow is the input image before padding. We take the blue 4×4 neighborhood as shown below
  • For P2, using dx and dy, we calculate fx and fy from the code above: fx = 0.25 and fy = 0.75.
  • Now, we substitute fx and fy into the weight code to calculate the four coefficients. For fy = 0.75 we get coefficients = [-0.0351, 0.2617, 0.8789, -0.1055], and for fx = 0.25 we get coefficients = [-0.1055, 0.8789, 0.2617, -0.0351].
  • First, we perform cubic interpolation along the rows (as shown inside the blue box in the figure above) with the weights calculated above for fx:
    -0.1055 *10 + 0.8789*10 + 0.2617*20 -0.0351*20 = 12.265625
    -0.1055 *10 + 0.8789*10 + 0.2617*20 -0.0351*20 = 12.265625
    -0.1055 *10 + 0.8789*10 + 0.2617*20 -0.0351*20 = 12.265625
    -0.1055 *30 + 0.8789*30 + 0.2617*40 -0.0351*40 = 32.265625
  • Now, using the 4 values calculated above, we interpolate along the columns using the weights for fy:
    -0.0351*12.2656 + 0.2617*12.2656 + 0.8789*12.2656 - 0.1055*32.2656 ≈ 10.15625
  • Similarly, repeat for other pixels.
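These hand calculations can be cross-checked against OpenCV itself; cv2.resize clamps samples at the border (effectively replication), so the values should agree:

    import cv2
    import numpy as np

    img = np.array([[10, 20],
                    [30, 40]], dtype=np.float32)

    out = cv2.resize(img, (4, 4), interpolation=cv2.INTER_CUBIC)
    print(out[0, 1])   # P2; should be close to 10.15625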

The final result we get is shown below:

This produces noticeably sharper images than the previous two methods and balances processing time and output quality. That's why it is widely used (e.g. in Adobe Photoshop).

In the next blog, we will see these interpolation methods using OpenCV functions on real images. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Image Processing – Bilinear Interpolation

In the previous blog, we learned how to find the pixel coordinate in the input image and then we discussed nearest neighbour algorithm. In this blog, we will discuss Bi-linear interpolation method in detail.

Bi-linear interpolation means applying linear interpolation in two directions. Thus, it uses the 4 nearest neighbors and takes their weighted average to produce the output.

So, let's first discuss what linear interpolation is and how it is performed.

Linear interpolation means we estimate the value using linear polynomials. Suppose we have 2 points with values 10 and 20 and we want to estimate the values in between. Simple linear interpolation looks like this

More weight is given to the nearest value (see the 1/3 and 2/3 in the above figure). For 2D (e.g. images), we have to perform this operation twice: once along rows and then along columns. That is why it is known as Bi-linear interpolation.

Algorithm for Bi-linear Interpolation:

Suppose we have 4 pixels located at (0,0), (1,0), (0,1) and (1,1) and we want to find value at (0.3,0.4).

  1. First, find the values along the rows, i.e. at positions A:(0,0.4) and B:(1,0.4), by linear interpolation.
  2. After getting the values at A and B, apply linear interpolation for the point (0.3,0.4) between A and B; this is the final result.

Let’s see how to do this for images. We take the same 2×2 image from the previous blog and want to upscale it by a factor of 2 as shown below

We make the same assumptions as in the last blog: each pixel is of unit size and is represented by its center.

  • Let’s take ‘P1’. First, we find the position of P1 in the input image. By projecting the 4×4 image on the input 2×2 image we get the coordinates of P1 as (0.25,0.25). (For more details, See here)
  • Since P1 is a border pixel and has no values to its left, OpenCV replicates the border pixel. This means the row or column at the very edge of the original is replicated into the extra border (padding). OpenCV has different methods to add borders, which you can check here.
  • So, now our input image (after border replication) looks like this. Note that the values in red show the original input image.
  • To find the value of P1, let’s first visualize where P1 is in the input image (previous step image). Below figure shows the upper left 2×2 input image region and the location of P1 in that.
Image-1
  • Before applying Bi-linear interpolation let’s see how weights are distributed.

Matlab and OpenCV yield different results for interpolation because they distribute the weights differently. Here, I will only explain OpenCV's method.

In OpenCV, weights are distributed according to this equation:

fx = (dx + 0.5)*scale_x - 0.5, keeping only the fractional part (fx = fx - floor(fx))

Here, dx is the column index of the unknown pixel, fx is the weight assigned to the right pixel, and 1-fx is given to the left pixel. scale_x is the ratio of the input width to the output width. Similarly, for y, dy is the row index and scale_y is the ratio of the heights.
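A small sketch of this computation (consistent with the value fx = 0.75 used below):

    import math

    # Weight for the right/bottom pixel; 1 - f goes to the left/top pixel
    def get_weight(d, scale):
        f = (d + 0.5) * scale - 0.5
        return f - math.floor(f)

    # For P1: dx = dy = 0 and scale_x = scale_y = 2/4 = 0.5
    print(get_weight(0, 0.5))   # 0.75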

After knowing how weights are calculated let’s get back to the problem again.

  • For P1, both the row and column index are 0, i.e. dx = dy = 0, so fx = 0.75 and fy = 0.75.
  • We apply linear interpolation with weight fx for both A and B (see Image-1): 0.75*10 (right) + 0.25*10 (left) = 10 (explained in the algorithm above).
  • Now, for P1, apply linear interpolation between A and B with weight fy: 0.75*10 (B) + 0.25*10 (A) = 10.
  • So, we get P1 = 10. Similarly, repeat for the other pixels.
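These hand calculations can be cross-checked with cv2.resize:

    import cv2
    import numpy as np

    img = np.array([[10, 20],
                    [30, 40]], dtype=np.float32)

    out = cv2.resize(img, (4, 4), interpolation=cv2.INTER_LINEAR)
    print(out[0, 0])   # P1; should be 10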

The final result we get is shown below:

This produces smoother results than nearest neighbour interpolation, but the results for sharp transitions like edges are not ideal.

In the next blog, we will discuss Bi-cubic interpolation. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Image Processing – Nearest Neighbour Interpolation

In the previous blog, we discussed image interpolation, its types and why we need interpolation. In this blog, we will discuss the Nearest Neighbour, a non-adaptive interpolation method in detail.

Algorithm: We assign the unknown pixel to the nearest known pixel.

Let’s see how this works. Suppose, we have a 2×2 image and let’s say we want to upscale this by a factor of 2 as shown below.

Let’s pick up the first pixel (denoted by ‘P1’) in the unknown image. To assign it a value, we must find its nearest pixel in the input 2×2 image. Let’s first see some facts and assumptions used in this.

Assumption: a pixel is always represented by its center value. Each pixel in our input 2×2 image is of unit length and width.

Indexing in OpenCV starts from 0, while in Matlab it starts from 1. But for the sake of simplicity, we will place pixel centers starting at 0.5, which means our first pixel is at 0.5, the next at 1.5, and so on, as shown below.

So for the above example, the location of each pixel in input image is {’10’:(0.5,0.5), ’20’:(1.5,0.5), ’30’:(0.5,1.5), ’40’:(1.5,1.5)}.

After finding the location of each pixel in the input image, follow these 2 steps

  1. First, find the position of each pixel of the unknown image in the input image. This is done by projecting the 4×4 image onto the 2×2 image. So, we can easily find the coordinates of each unknown pixel, e.g. the location of 'P1' in the input image is (0.25,0.25), for 'P2' it is (0.75,0.25), and so on.
  2. Now, compare the calculated coordinates of each unknown pixel with the input image pixels to find the nearest one, e.g. 'P1' (0.25,0.25) is nearest to 10 (0.5,0.5), so we assign 'P1' the value 10. Similarly, we can find the nearest pixel for every other pixel.

The final result we get is shown in figure below:

This is the fastest interpolation method, as it involves little calculation, but it results in a pixelated or blocky image: it simply makes each pixel bigger.

Application: To resize bar-codes.

Shortcut: Simply duplicate the rows and columns to get the interpolated or zoomed image, e.g. for 2x, duplicate each row and each column 2 times, as sketched below.
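A sketch of this shortcut with NumPy, cross-checked against OpenCV (for a clean 2x upscale the two should agree):

    import cv2
    import numpy as np

    img = np.array([[10, 20],
                    [30, 40]], dtype=np.uint8)

    # Duplicate each row and each column 2 times
    zoomed = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

    # Compare with OpenCV's nearest neighbour resize
    out = cv2.resize(img, (4, 4), interpolation=cv2.INTER_NEAREST)
    print(np.array_equal(zoomed, out))   # expected: True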

In the next blog, we will discuss Bi-linear interpolation method. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Arithmetic Operations for Image Enhancement

In this blog, we will learn how simple arithmetic operations like addition, subtraction, etc. can be used for image enhancement. First, let's start with image addition, also known as image averaging.

Image Averaging

This is based on the assumption that the noise present in the image is purely random (uncorrelated) and thus has zero average value. So, if we average n noisy images of the same scene, the noise will cancel out and what we get is approximately the original image.

Applicability conditions: Images should be taken under identical conditions with the same camera settings, as in the field of astronomy.

Advantages: Reduces noise without compromising image details, unlike most other operations such as filtering.

Disadvantages: Increases time and storage, as one now needs to take multiple photos of the same object. It is only applicable to random noise and must satisfy the applicability conditions above.

Below is the code where I first generate 20 images by adding random noise to the original image and then average them to recover an approximation of the original.
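A sketch of that procedure (the file name is a placeholder and the noise level is arbitrary):

    import cv2
    import numpy as np

    # Placeholder file name; use your own image
    img = cv2.imread('cameraman.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float32)

    n = 20
    acc = np.zeros_like(img)
    for _ in range(n):
        noise = np.empty_like(img)
        cv2.randn(noise, 0, 20)      # zero-mean Gaussian noise, sigma = 20
        acc += img + noise           # one noisy observation of the scene

    # Averaging n images cancels most of the zero-mean noise
    averaged = np.uint8(np.clip(acc / n, 0, 255))
    cv2.imshow('Averaged', averaged)
    cv2.waitKey(0)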

cv2.randn(image, mean, standard deviation) fills the image with normally distributed random numbers with specified mean and standard deviation.

Noisy
Averaged

Image Subtraction

This is mainly used to enhance the difference between images. It is used for background subtraction to detect moving objects, and in medical science for detecting blockages in veins, a field known as mask mode radiography. In this, we take 2 images, one before injecting a contrast medium and the other after. Then we subtract these 2 images to see how the medium propagated and whether there is any blockage.
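A minimal sketch of the idea (the file names are placeholders; the two images must be aligned):

    import cv2

    # Placeholder file names; the images must be registered/aligned
    before = cv2.imread('before.jpg', cv2.IMREAD_GRAYSCALE)
    after = cv2.imread('after.jpg', cv2.IMREAD_GRAYSCALE)

    # absdiff avoids the uint8 underflow that plain subtraction can cause
    diff = cv2.absdiff(after, before)
    cv2.imshow('Difference', diff)
    cv2.waitKey(0)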

Image Multiplication

This can be used to extract a region of interest (ROI) from an image. We simply create a mask and multiply the image with the mask to get the area of interest, as sketched below. Another application is shading correction, which we will discuss in detail in the next blogs.
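A sketch with a rectangular mask (the file name and ROI coordinates are placeholders):

    import cv2
    import numpy as np

    # Placeholder file name; use your own image
    img = cv2.imread('cameraman.jpg', cv2.IMREAD_GRAYSCALE)

    # Mask is 1 inside the region of interest and 0 elsewhere
    mask = np.zeros_like(img)
    mask[100:200, 100:200] = 1

    roi = img * mask             # multiplication keeps only the masked area
    cv2.imshow('ROI', roi)
    cv2.waitKey(0)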

In the next blog, we will discuss intensity transformation, a spatial domain image enhancement technique. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Image Enhancement

Till now, we have learned the basics of an image. From now onwards, we will learn what is actually known as image processing. In this blog, we will learn what image enhancement is and the different methods to perform it, and then we will see how to perform it on real images.

According to MathWorks, Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis. It is basically a preprocessing step.

Image enhancement can be done either in the spatial domain or the transform domain. Spatial domain means we perform all operations directly on pixels, while in the transform domain we first transform an image into another domain (like frequency), do the processing there, and convert it back to the spatial domain by some inverse operation. We will discuss these in detail in the next blogs.

Both the spatial and transform domains have their own importance, which we will discuss later. Generally, operations in the spatial domain are more computationally efficient.

Processing in the spatial domain can be divided into two main categories: one that operates on single pixels, known as intensity transformation, and another, known as spatial filtering, that works on the neighborhood of every pixel.

The following example gives a taste of what we are going to study in the next few blogs.

Before Contrast Enhancement
After Contrast Enhancement

In the next blog, we will discuss how basic arithmetic operations like addition, subtraction etc can be used for image enhancement. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.