Multi-Label Classification

In the previous blogs, we discussed binary and multi-class classification problems. The two are quite similar: the basic assumption underlying both is that each image contains only one class. For instance, for the dogs vs cats classification, it was assumed that an image contains either a cat or a dog, but not both. So, in this blog, we will discuss the case where more than one class can be present in a single image. This type of classification is known as multi-label classification. The picture below illustrates this concept nicely.

Source: cse-iitk

Some of the most common techniques for solving multi-label classification problems are

  • Problem Transformation
  • Adapted Algorithm
  • Ensemble approaches

Here, we will only discuss Binary Relevance, a method that falls under the Problem Transformation category. If you are curious about other methods, you can read this amazing review paper.

In binary relevance, we break the problem into a number of binary classification problems: for each available class, we ask whether it is present in the image or not. As we already know, binary classification uses ‘sigmoid‘ as the last-layer activation function and ‘binary_crossentropy‘ as the loss function, so we will use the same here. Everything else stays the same.
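As a minimal sketch of this idea (the pooling layer below is just a cheap stand-in for a real convolutional base, and the input size matches the (400, 300, 3) posters used later in this post): the network ends in one sigmoid unit per class and is trained with binary_crossentropy, so every class becomes its own yes/no question.

```python
import numpy as np
from tensorflow.keras import layers, models

num_classes = 25  # one independent yes/no question per class

model = models.Sequential([
    layers.Input(shape=(400, 300, 3)),
    layers.GlobalAveragePooling2D(),                   # stand-in for a real conv base
    layers.Dense(num_classes, activation='sigmoid'),   # sigmoid, not softmax
])

# binary_crossentropy treats every output unit as its own binary problem
model.compile(optimizer='adam', loss='binary_crossentropy')

# Each prediction is a vector of independent probabilities, so a poster can
# score high for both 'Comedy' and 'Romance' at the same time.
probs = model.predict(np.zeros((1, 400, 300, 3)))
print(probs.shape)  # (1, 25)
```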

Now, let’s take a dataset and see how to implement multi-label classification.

Problem Definition

Here, we will take the most common example: movie genre classification based on poster images. Because a movie can belong to more than one genre, for instance, comedy and romance, this is a multi-label classification problem.

Dataset

You can download the original dataset from here. This contains two files.

  • Movie_Poster_Dataset.zip – The poster images
  • Movie_Poster_Metadata.zip – Metadata of each poster image like ID, genres, box office, etc.

To prepare the dataset, we need images and corresponding genre information. For this, we need to extract the genre information from the Movie_Poster_Metadata.zip file corresponding to each poster image. Let’s see how to do this.

Note: This dataset contains some missing items. For instance, check the “1982” folder in Movie_Poster_Dataset.zip and Movie_Poster_Metadata.zip: the number of poster images does not match the metadata, and the genre information is missing for some movies. So, we need to perform EDA and remove these files.

Steps to perform EDA:

  1. First, we will extract the movie name and corresponding genre information from the Movie_Poster_Metadata.zip file and create a Pandas dataframe using these.
  2. Then we will loop over the poster images in the Movie_Poster_Dataset.zip file and check whether each one has an entry in the dataframe created above. Any movie in the dataframe without a matching poster image is removed.

These two steps will ensure that we are only left with movies that have poster images and genre information. Below is the code for this.

Because the encoding of some files is different, the code uses two for loops, one for each encoding. Below are the steps performed in the code, followed by a sketch of it.

  • First, open the metadata file
  • Read line by line
  • Extract the information corresponding to the ‘Genre’ and ‘imdbID’
  • Append them to a list and create a dataframe
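Here is a minimal sketch of this first step. It assumes the metadata text files sit under a folder named Movie_Poster_Metadata/groundtruth and that each record stores its fields on lines starting with "imdbID" and "Genre"; the exact folder layout and field formatting in your extracted copy may differ. Instead of two separate for loops, this sketch falls back to a second encoding only when the first one fails.

```python
import glob
import os

import pandas as pd

meta_dir = 'Movie_Poster_Metadata/groundtruth'  # adjust to your extracted path

def extract_records(path, encoding):
    """Return (imdbID, [genres]) pairs found in one metadata file."""
    with open(path, encoding=encoding) as f:
        text = f.read()                      # decoding errors surface here
    records, movie_id, genre = [], None, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith('"imdbID"'):
            movie_id = line.split(':', 1)[1].strip(' ",')
        elif line.startswith('"Genre"'):
            genre = line.split(':', 1)[1].strip(' ",')
        if movie_id and genre:
            records.append((movie_id, [g.strip() for g in genre.split(',')]))
            movie_id, genre = None, None
    return records

all_records = []
for path in glob.glob(os.path.join(meta_dir, '*.txt')):
    try:
        all_records.extend(extract_records(path, encoding='utf-8'))
    except UnicodeDecodeError:               # some yearly files use another encoding
        all_records.extend(extract_records(path, encoding='utf-16'))

df = pd.DataFrame(all_records, columns=['Id', 'Genre'])
print(df.head())
```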

Now, for the second step, we first collect all the poster image filenames in a list.

Then we keep only the dataframe rows whose movie ID appears in this list of filenames; rows without a matching poster are dropped, either by removing them or by creating a new, filtered dataframe.

Finally, make sure there are no duplicates in the dataframe. A sketch of this second step is shown below.
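This sketch assumes the posters are still in the per-year folders of Movie_Poster_Dataset and that the dataframe built above is called df with an 'Id' column:

```python
import glob
import os

# Collect every poster filename (without the extension) across the year folders
poster_ids = {os.path.splitext(os.path.basename(p))[0]
              for p in glob.glob('Movie_Poster_Dataset/*/*.jpg')}

# Keep only the movies that actually have a poster image...
df_clean = df[df['Id'].isin(poster_ids)].copy()

# ...and make sure each movie appears only once
df_clean = df_clean.drop_duplicates(subset='Id').reset_index(drop=True)

print(len(df_clean))  # 8052 after cleaning, as reported below
```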

So, finally, we are ready with our cleaned dataset of 8052 images spanning 25 classes in total. The dataframe is shown below.

Format 1

One can also convert this dataframe into the common format as shown below

Format 2

This can be done using the following code.
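One way to do the conversion, assuming Format 2 means one 0/1 column per genre (the genre names come straight from the cleaned dataframe above):

```python
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

# Turn the list-of-genres column into one 0/1 column per genre
mlb = MultiLabelBinarizer()
onehot = mlb.fit_transform(df_clean['Genre'])

df_format2 = pd.concat(
    [df_clean[['Id']].reset_index(drop=True),
     pd.DataFrame(onehot, columns=mlb.classes_)],
    axis=1,
)
print(df_format2.head())
```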

In this post, we will be using Format 1, but you can use either. Here, we will be using the Keras flow_from_dataframe method. For this, we need to place all the images under one directory. Currently, the images are in separate folders such as 1980, 1981, etc. Below is the code that places all the poster images in a single folder, ‘original_train‘.
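A sketch of that step, assuming the extracted posters live under Movie_Poster_Dataset/<year>/ and copying (rather than moving) them into original_train:

```python
import glob
import os
import shutil

dst_dir = 'original_train'
os.makedirs(dst_dir, exist_ok=True)

# Copy every poster out of its year folder (1980, 1981, ...) into one place
for path in glob.glob('Movie_Poster_Dataset/*/*.jpg'):
    shutil.copy(path, os.path.join(dst_dir, os.path.basename(path)))
```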

Model Architecture

Since this is a sparse multi-label classification problem (most images have only a few of the 25 labels), accuracy is not a good metric. The reason is shown below.

If the predicted output was [0, 0, 0, 0, 0, 1] and the correct output was [0, 0, 0, 0, 0, 0], the accuracy would still be 5/6.

So, you can use other metrics like precision, recall, F1 score, Hamming loss, top_k_categorical_accuracy, etc.

Here, I’ve used both accuracy and top_k_categorical_accuracy to show how accuracy jumps above 90 right from the first epoch and is therefore not a suitable metric.
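The exact architecture from the original post is not reproduced here; below is a plain CNN sketch that matches the setup described so far (400x300 inputs, 25 sigmoid outputs, binary_crossentropy loss) and tracks both metrics.

```python
from tensorflow.keras import layers, models

num_classes = 25

model = models.Sequential([
    layers.Input(shape=(400, 300, 3)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(2),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D(2),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation='sigmoid'),
])

# Plain accuracy looks great almost immediately on sparse multi-label targets,
# while top_k_categorical_accuracy gives a more honest picture.
model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['accuracy', 'top_k_categorical_accuracy'],
)
model.summary()
```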

flow_from_dataframe()

Here, I split the data into training and validation sets using the validation_split argument of ImageDataGenerator. You can read more about the ImageDataGenerator here.
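A sketch of the generator setup, continuing from the cleaned Format 1 dataframe df_clean (columns 'Id' and 'Genre', with 'Genre' holding a list of strings) and the original_train folder created above; the 20% validation split is just an example value:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# The generator locates images by filename, so build one from the movie ID
df_clean['filename'] = df_clean['Id'] + '.jpg'

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_generator = datagen.flow_from_dataframe(
    dataframe=df_clean,
    directory='original_train',
    x_col='filename',
    y_col='Genre',             # list-valued column -> multi-hot label vectors
    target_size=(400, 300),
    class_mode='categorical',
    batch_size=32,
    subset='training',
)

val_generator = datagen.flow_from_dataframe(
    dataframe=df_clean,
    directory='original_train',
    x_col='filename',
    y_col='Genre',
    target_size=(400, 300),
    class_mode='categorical',
    batch_size=32,
    subset='validation',
)
```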

Below are some of the poster images, all resized to (400, 300, 3).

You can also check which labels are assigned to which class using the following code.
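Assuming the training generator built above is called train_generator, the mapping is stored in its class_indices attribute:

```python
print(train_generator.class_indices)
# e.g. {'Action': 0, 'Adventure': 1, 'Animation': 2, ...}
```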

This prints a dictionary containing class names as keys and label indices as values.

Let’s start training…
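A sketch of the training call, using the generators defined above (the epoch count here is arbitrary):

```python
history = model.fit(
    train_generator,
    validation_data=val_generator,
    steps_per_epoch=train_generator.samples // train_generator.batch_size,
    validation_steps=val_generator.samples // val_generator.batch_size,
    epochs=20,
)
```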

See how accuracy reaches 90+ within a few epochs. As stated earlier, this is not a good evaluation metric for multi-label classification. On the other hand, top_k_categorical_accuracy shows us the true picture.

Clearly, we are doing a pretty decent job, considering that the training data is small and the problem is complex (25 classes). Moreover, some classes, such as comedy, dominate the training data. Play with the model architecture and other hyperparameters and check how the accuracy varies.

Prediction time

For each image, let’s predict the top three classes. Below is the code for this.
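A sketch of the prediction step; the filename below is a hypothetical placeholder, and model/train_generator are the objects built above.

```python
import os

import numpy as np
from tensorflow.keras.preprocessing import image

# Invert class_indices so we can map an output index back to a genre name
idx_to_class = {v: k for k, v in train_generator.class_indices.items()}

img_name = 'example_poster.jpg'  # hypothetical: use any poster from original_train
img = image.load_img(os.path.join('original_train', img_name),
                     target_size=(400, 300))
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)

probs = model.predict(x)[0]
top3 = np.argsort(probs)[::-1][:3]        # indices of the three highest scores
print([idx_to_class[i] for i in top3])
```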

The actual labels for this image can be looked up as follows.
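For instance, with the 'Id'/'Genre' columns used in the sketches above:

```python
movie_id = os.path.splitext(img_name)[0]   # strip the .jpg extension
print(df_clean.loc[df_clean['Id'] == movie_id, 'Genre'].values)
```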

You can see that our model is doing a decent job, considering the complexity of the problem.

Let’s try another example, “tt0465602.jpg“. For this, the predicted labels are

By looking at the poster, most of us would predict the same labels as our algorithm did. In fact, these are pretty close to the true labels, which are [Action, Comedy, Crime].

That’s all for the multi-label classification problem. Hope you enjoyed reading.

If you have any doubts or suggestions, please feel free to ask, and I will do my best to help or improve myself. Good-bye until next time.
