
Intensity-level Slicing

Intensity level slicing means highlighting a specific range of intensities in an image. In other words, we segment certain gray level regions from the rest of the image.

Suppose that in an image, your region of interest always takes values between, say, 80 and 150. Intensity level slicing highlights this range, so instead of looking at the whole image, one can focus on the highlighted region of interest.

Since one can think of it as a piecewise linear transformation function, this can be implemented in several ways. Here, we will discuss the two basic types of slicing that are most often used.

  • In the first type, we display the desired range of intensities in white and suppress all other intensities to black, or vice versa. This results in a binary image. The transformation function for both cases is shown below.
  • In the second type, we brighten or darken the desired range of intensities (a to b as shown below) and leave the other intensities unchanged, or vice versa. The transformation function for both cases, first where the desired range is changed and second where it is unchanged, is shown below.

Let's see how to do intensity level slicing using OpenCV-Python.
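Below is a minimal sketch covering both types discussed above. It assumes a grayscale input and uses the 80 to 150 range from the earlier example; the file names and the brightened value chosen for type 2 are just placeholders.

import cv2
import numpy as np

# Hypothetical file name; any 8-bit grayscale image works.
img = cv2.imread('xray.png', cv2.IMREAD_GRAYSCALE)

lower, upper = 80, 150                      # region of interest from the example above
mask = (img >= lower) & (img <= upper)

# Type 1: intensities inside the range become white, everything else black (binary image)
type1 = np.where(mask, 255, 0).astype(np.uint8)

# Type 2: brighten the desired range to a fixed value, leave other intensities unchanged
type2 = img.copy()
type2[mask] = 200

cv2.imwrite('sliced_type1.png', type1)
cv2.imwrite('sliced_type2.png', type2)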

For a color image, either convert it to grayscale first or specify the minimum and maximum of the range as a list of BGR values.
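For the BGR case, cv2.inRange performs the range check per channel. A small sketch, where the file name and the BGR bounds are illustrative assumptions:

import cv2

# Hypothetical file name; any BGR color image works.
img = cv2.imread('satellite.png')

# Pixels whose B, G and R values all fall inside the given bounds become 255, the rest 0
mask = cv2.inRange(img, (80, 80, 80), (150, 150, 150))

cv2.imwrite('color_sliced.png', mask)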

Applications: Mostly used for enhancing features in satellite and X-ray images.

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Arithmetic Operations for Image Enhancement

In this blog, we will learn how simple arithmetic operations like addition and subtraction can be used for image enhancement. First, let's start with image addition, also known as image averaging.

Image Averaging

This is based on the assumption that the noise present in the image is purely random (uncorrelated) and thus has a zero average value. So, if we average n noisy images of the same source, the noise tends to cancel out and what we get is approximately the original image. For independent noise, averaging n images reduces the noise standard deviation by a factor of 1/√n.

Applicability Conditions: Images should be taken under identical conditions with the same camera settings, as is done in the field of astronomy.

Advantages: Reduces noise without compromising image details, unlike most other operations such as filtering.

Disadvantages: Increases acquisition time and storage, since one now needs to take multiple photos of the same object. It is only applicable to random noise, and the above applicability conditions must hold.

Below, we first generate 20 noisy images by adding random noise to the original image and then average them to get back an approximation of the original.

cv2.randn(image, mean, standard_deviation) fills the image with normally distributed random numbers with the specified mean and standard deviation.
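Here is a minimal sketch of the averaging, assuming a grayscale input; the file names, the noise standard deviation and the count of 20 images are illustrative.

import cv2
import numpy as np

# Hypothetical file name; any grayscale image works.
img = cv2.imread('original.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)

n = 20
acc = np.zeros_like(img)

for _ in range(n):
    noise = np.zeros(img.shape, np.float64)
    cv2.randn(noise, 0, 20)            # zero-mean Gaussian noise, sigma = 20
    acc += img + noise                 # accumulate one noisy observation of the same scene

averaged = acc / n                     # the zero-mean noise averages towards zero
averaged = np.clip(averaged, 0, 255).astype(np.uint8)

cv2.imwrite('averaged.png', averaged)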

(Result images: noisy vs. averaged.)

Image Subtraction

This is mainly used to enhance the difference between images. It is used for background subtraction to detect moving objects, and in medical imaging to detect blockages in veins, a technique known as mask mode radiography. In mask mode radiography, we take two images, one before injecting a contrast medium and one after injecting it. We then subtract the two images to see how the medium propagated and whether there is any blockage.
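A minimal sketch of the background-subtraction use case; the frame file names and the threshold value are assumptions.

import cv2

# Hypothetical file names: two frames of the same scene,
# e.g. a background frame and a frame containing a moving object.
before = cv2.imread('background.png', cv2.IMREAD_GRAYSCALE)
after = cv2.imread('current_frame.png', cv2.IMREAD_GRAYSCALE)

# Absolute difference highlights only the pixels that changed between the two images
diff = cv2.absdiff(after, before)

# Threshold the difference to keep only the significant changes
_, changed = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

cv2.imwrite('difference.png', changed)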

Image Multiplication

This can be used to extract a region of interest (ROI) from an image. We simply create a mask and multiply the image by the mask to keep only the area of interest. Another application is shading correction, which we will discuss in detail in the next blogs.
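A small sketch of masking by multiplication; the file name and the ROI coordinates are arbitrary.

import cv2
import numpy as np

# Hypothetical file name; any grayscale image works.
img = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)

# Binary mask: 1 inside a rectangular region of interest, 0 elsewhere
mask = np.zeros_like(img)
mask[100:300, 150:400] = 1          # arbitrary ROI coordinates for illustration

# Multiplying keeps the pixels under the mask and zeroes out everything else
roi = cv2.multiply(img, mask)

cv2.imwrite('roi.png', roi)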

In the next blog, we will discuss intensity transformation, a spatial domain image enhancement technique. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Image Enhancement

Till now, we have learned the basics of an image. From now on, we will learn what is actually known as image processing. In this blog, we will learn what image enhancement is, the different methods to perform it, and how we can apply it to real images.

According to MathWorks, Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis. It is basically a preprocessing step.

Image enhancement can be done either in the spatial domain or the transform domain. Spatial domain means we perform all operations directly on the pixels, while in the transform domain we first transform the image into another domain (like frequency), do the processing there, and convert it back to the spatial domain by some inverse operation. We will discuss these in detail in the next blogs.

Both the spatial and transform domains have their own importance, which we will discuss later. Generally, operations in the spatial domain are more computationally efficient.

Processing in the spatial domain can be divided into two main categories: one that operates on single pixels, known as intensity transformation, and another that works on the neighborhood of every pixel, known as spatial filtering.

The following example will give you a sense of what we are going to study in the next few blogs.

(Example images: before and after contrast enhancement.)

In the next blog, we will discuss how basic arithmetic operations like addition, subtraction etc can be used for image enhancement. Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.