BLEU Score – Bilingual Evaluation Understudy

Introduction

The BLEU score, which stands for Bilingual Evaluation Understudy, is a metric commonly used to evaluate the quality of machine-generated translations against human translations. It measures the similarity between the machine-generated translation and one or more reference translations, assigning a numerical score between 0 and 1. The higher the BLEU score, the closer the machine translation is to the references, indicating better translation quality. The BLEU score takes into account factors such as n-gram precision and a brevity penalty, providing a useful quantitative measure for comparing different translation systems or tracking improvements in machine translation over time. Don’t worry, we will discuss these terms as we go along.

Precision

Input Sentence: “Hay un tigre en el bosque”
Human Reference: “There is a tiger in the woods”

Let’s assume the machine-translated output is: “the the the the the”

The accuracy of the machine-generated translation relative to the reference can be measured with precision. Standard precision checks, for each word in the generated output, whether it appears anywhere in the reference sentence. In the example above, every “the” is found in the reference, so precision is 5/5 — a high value even though the output is nowhere near the reference sentence. This is where modified precision comes in: each word’s count is clipped to the maximum number of times that word appears in the reference sentence. Since “the” occurs only once in the reference, the modified precision works out to 1/5. This was the unigram case (one word at a time); the same calculation is done for higher-order n-grams.
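The clipping step above can be sketched in a few lines of Python. This is a minimal illustration of modified unigram precision, not the full BLEU implementation discussed later; the function name `modified_precision` is my own:

```python
from collections import Counter

def modified_precision(candidate_tokens, reference_tokens):
    """Clip each candidate word's count by its maximum count in the reference."""
    cand_counts = Counter(candidate_tokens)
    ref_counts = Counter(reference_tokens)
    clipped = sum(min(count, ref_counts[word]) for word, count in cand_counts.items())
    return clipped / len(candidate_tokens)

candidate = "the the the the the".split()
reference = "there is a tiger in the woods".split()
print(modified_precision(candidate, reference))  # 0.2, i.e. 1/5
```

Without the `min(...)` clipping, the same function would return 5/5, which is exactly the failure mode plain precision has on repetitive output.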

Formula

The formula for BLEU score with brevity penalty is as follows:

BLEU = BP * exp(sum(w_n * log(p_n)))

where the sum runs over n = 1 to N, and:

  • BP (Brevity Penalty) is a penalty term that adjusts the BLEU score based on the brevity of the machine-generated translation compared to the reference translations.
  • p_n is the modified n-gram precision: the clipped count of n-gram matches between the machine-generated and reference translations, divided by the total count of n-grams in the machine-generated translation.
  • w_n is the weight assigned to each n-gram order, typically uniform (w_n = 1/N).
  • N is the maximum n-gram order considered in the calculation (typically 4).

The brevity penalty term BP is calculated as:

BP = 1, if c > r
BP = exp(1 – r/c), if c ≤ r

Where:

  • c is the length (in words) of the machine generated translation.
  • r is the length (in words) of the reference translation whose length is closest to the candidate’s (the effective reference length).

In this formula, the brevity penalty adjusts the BLEU score based on the difference in length between the candidate and reference translations. If the candidate is shorter than the reference, BP drops below 1 and pulls the score down, discouraging overly short translations. Candidates longer than the reference need no extra penalty (BP stays at 1), because the extra words already lower the n-gram precision terms.
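The two-case definition above translates directly into code. A minimal sketch (assuming the candidate is non-empty, so we never divide by zero):

```python
import math

def brevity_penalty(c, r):
    """c: candidate length in words, r: effective reference length in words."""
    if c > r:
        return 1.0
    return math.exp(1 - r / c)

print(brevity_penalty(8, 7))  # 1.0 — candidate longer than reference, no penalty
print(brevity_penalty(5, 7))  # exp(1 - 7/5) ≈ 0.67 — short candidate is penalized
```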

Implementation

Here’s a breakdown of the code:

  1. Tokenization:
    • The tokenize function splits a given sentence into individual words or tokens.
  2. N-gram Calculation:
    • The calculate_ngram function takes a list of tokens (words) and an integer n as input, and it returns a list of n-grams (contiguous sequences of n tokens) from the input list.
  3. Precision Calculation:
    • The calculate_precision function computes the precision score for a given candidate sentence in comparison to one or more reference sentences. It uses n-grams for this calculation.
    • It counts the occurrences of n-grams in both the candidate and reference sentences and computes a precision value.
  4. BLEU Calculation:
    • The calculate_bleu function takes a candidate sentence, a list of reference sentences, and a list of weights as input.
    • It tokenizes the input sentences, calculates precision for different n-gram sizes, and combines them using a weighted geometric mean.
    • The BLEU score is a combination of precision values for different n-gram sizes, and the weights are used to assign importance to each n-gram size.
  5. Example Usage:
    • An example is provided at the end, where a candidate sentence (“The cat is on the mat”) is compared to two reference sentences (“There is a cat on the mat” and “The mat has a cat”).
    • The weights for different n-gram sizes are set to equal values (0.25 each), and the BLEU score is calculated using the calculate_bleu function.
    • The final BLEU score is printed out.
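Since the code itself is not reproduced above, here is a minimal sketch of what it might look like, using the function names from the breakdown (`tokenize`, `calculate_ngram`, `calculate_precision`, `calculate_bleu`). Details such as lowercasing and the tie-break for the closest reference length are my assumptions, not necessarily the original author’s choices:

```python
import math
from collections import Counter

def tokenize(sentence):
    """Split a sentence into lowercase word tokens."""
    return sentence.lower().split()

def calculate_ngram(tokens, n):
    """Return all contiguous n-grams (as tuples) from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def calculate_precision(candidate_tokens, references_tokens, n):
    """Modified n-gram precision: candidate counts clipped by the max reference count."""
    cand_counts = Counter(calculate_ngram(candidate_tokens, n))
    max_ref_counts = Counter()
    for ref in references_tokens:
        for ngram, count in Counter(calculate_ngram(ref, n)).items():
            max_ref_counts[ngram] = max(max_ref_counts[ngram], count)
    clipped = sum(min(count, max_ref_counts[ngram]) for ngram, count in cand_counts.items())
    total = sum(cand_counts.values())
    return clipped / total if total else 0.0

def calculate_bleu(candidate, references, weights):
    cand = tokenize(candidate)
    refs = [tokenize(r) for r in references]
    # Modified precision for each n-gram order 1..N
    precisions = [calculate_precision(cand, refs, n) for n in range(1, len(weights) + 1)]
    if min(precisions) == 0:
        return 0.0  # geometric mean is zero if any order has no matches
    log_mean = sum(w * math.log(p) for w, p in zip(weights, precisions))
    # Brevity penalty: r is the reference length closest to the candidate length
    c = len(cand)
    r = min((len(ref) for ref in refs), key=lambda length: (abs(length - c), length))
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(log_mean)

candidate = "The cat is on the mat"
references = ["There is a cat on the mat", "The mat has a cat"]
print(calculate_bleu(candidate, references, [0.25, 0.25, 0.25, 0.25]))
```

Note that for this particular example the strict score comes out as 0.0, because no 4-gram of the candidate appears in either reference and the geometric mean collapses. Practical toolkits (e.g. NLTK’s `sentence_bleu` with a `SmoothingFunction`) apply smoothing to avoid exactly this on short sentences.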

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.