Introduction
The BLEU score, which stands for Bilingual Evaluation Understudy, is a metric commonly used to evaluate the quality of machine-generated translations against human translations. It measures the similarity between the machine-generated translation and one or more reference translations, assigning a numerical score between 0 and 1. The higher the BLEU score, the closer the machine translation is to the reference translations, indicating better translation quality. The BLEU score takes into account factors such as n-gram precision and a brevity penalty, providing a useful quantitative measure for comparing different translation systems or assessing improvements in machine translation over time. Don’t worry, we will discuss these terms as we go along in the blog.
Precision
Input Sentence: “Hay un tigre en el bosque”
Human Reference: “There is a tiger in the woods”
Let’s assume the machine-translated output is: “the the the the the”
The accuracy of the machine-generated translation compared to the reference translation can be measured using precision. Precision checks, for each word in the generated output, whether it appears in the reference sentence. In the example above that gives 5/5, because “the” does appear in the reference, so plain precision can be high even when the machine-translated output is far from the reference sentence. This is where modified precision comes in: the count of each word is clipped to the maximum number of times it occurs in the reference sentence, so “the” is counted only once and the score becomes 1/5. This was the unigram case (one word at a time); the same calculation is carried out for higher-order n-grams. A small sketch of the clipping is shown below.
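To make the clipping concrete, here is a minimal sketch of unigram modified precision for the example above, using Python’s collections.Counter (the variable names are my own, purely illustrative):

import math
from collections import Counter

candidate = "the the the the the".split()
reference = "There is a tiger in the woods".split()

cand_counts = Counter(candidate)   # {'the': 5}
ref_counts = Counter(reference)    # 'the' occurs once in the reference

# Clip each candidate count to its maximum count in the reference
clipped = {w: min(c, ref_counts[w]) for w, c in cand_counts.items()}

modified_precision = sum(clipped.values()) / sum(cand_counts.values())
print(modified_precision)  # 0.2, i.e. 1/5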
Formula
The formula for BLEU score with brevity penalty is as follows:
BLEU = BP * exp( sum( w_n * log(p_n) ) ), with the sum running over n = 1 … N
Where:
- BP (Brevity Penalty) is a penalty term that adjusts the BLEU score based on the brevity of the machine-generated translation compared to the reference translations.
- p_n is the modified n-gram precision: the clipped count of n-gram (contiguous sequence of n words) matches between the machine-generated and reference translations divided by the total count of n-grams in the machine-generated translation.
- w_n is the weight given to each n-gram order; with the usual uniform choice w_n = 1/N, the exponential term is simply the geometric mean of the n-gram precisions.
- N is the maximum n-gram order considered in the calculation (typically 4).
The brevity penalty term BP is calculated as:
BP = 1, if c > r
BP = exp(1 - r/c), if c ≤ r
Where:
- c is the length (in words) of the machine generated translation.
- r is the length (in words) of the closest reference translation.
In this formula, the brevity penalty adjusts the BLEU score based on the difference in length between the candidate and reference translations. If the candidate translation is shorter than the reference, the penalty lowers the score, discouraging translations that achieve high precision simply by being very short; if the candidate is at least as long as the reference, no penalty is applied, since overly long candidates are already punished through lower precision. A small worked example follows below.
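As a quick sanity check on the formula, here is a minimal sketch that plugs hypothetical n-gram precisions and lengths (made-up numbers, not from any benchmark) into the equations above:

import math

# Hypothetical clipped n-gram precisions p_1 .. p_4 (assumed values)
precisions = [0.75, 0.5, 0.4, 0.3]
N = len(precisions)

c = 6  # candidate length in words (assumed)
r = 7  # closest reference length in words (assumed)

# Brevity penalty: 1 if c > r, else exp(1 - r/c)
bp = 1.0 if c > r else math.exp(1 - r / c)

# Geometric mean of the precisions, uniform weights 1/N
geo_mean = math.exp(sum(math.log(p) for p in precisions) / N)

bleu = bp * geo_mean
print(round(bleu, 2))  # ~0.39 with these made-up numbers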
Implementation
import math
from collections import Counter

import nltk

nltk.download('punkt')


def tokenize(sentence):
    # Split a sentence into word tokens
    return nltk.word_tokenize(sentence)


def calculate_ngram(tokens, n):
    # Return all contiguous n-grams of the token list as tuples
    ngrams = []
    for i in range(len(tokens) - n + 1):
        ngrams.append(tuple(tokens[i:i + n]))
    return ngrams


def calculate_precision(candidate, references, n):
    # Modified (clipped) n-gram precision of the candidate against the references
    candidate_ngrams = calculate_ngram(candidate, n)
    reference_ngrams = [calculate_ngram(ref, n) for ref in references]

    candidate_counter = Counter(candidate_ngrams)
    reference_counters = [Counter(ref) for ref in reference_ngrams]

    clipped_counts = {}
    for ngram, count in candidate_counter.items():
        # Clip each candidate count to its maximum count in any reference
        max_reference_count = max(ref_counter[ngram] for ref_counter in reference_counters)
        clipped_counts[ngram] = min(count, max_reference_count)

    numerator = sum(clipped_counts.values())
    denominator = max(1, sum(candidate_counter.values()))
    return numerator / denominator


def calculate_bleu(candidate, references, weights):
    candidate_tokens = tokenize(candidate)
    reference_tokens = [tokenize(ref) for ref in references]

    precisions = []
    for n in range(1, len(weights) + 1):
        precisions.append(calculate_precision(candidate_tokens, reference_tokens, n))

    # Replace zero precisions with a tiny value so the log below is defined
    precisions = [p if p > 0.0 else 1e-10 for p in precisions]

    # Weighted geometric mean of the n-gram precisions
    geo_mean = math.exp(sum(w * math.log(p) for w, p in zip(weights, precisions)))

    # Brevity penalty: compare candidate length c with the closest reference length r
    c = len(candidate_tokens)
    r = min((len(ref) for ref in reference_tokens), key=lambda ref_len: abs(ref_len - c))
    brevity_penalty = 1.0 if c > r else math.exp(1 - r / c)

    return brevity_penalty * geo_mean


# Example usage
candidate = "The cat is on the mat"
references = ["There is a cat on the mat", "The mat has a cat"]
weights = [0.25, 0.25, 0.25, 0.25]

bleu_score = calculate_bleu(candidate, references, weights)
print("BLEU score:", bleu_score)
Here’s a breakdown of the code:
- Tokenization:
  - The tokenize function splits a given sentence into individual words or tokens.
- N-gram Calculation:
  - The calculate_ngram function takes a list of tokens (words) and an integer n as input, and it returns a list of n-grams (contiguous sequences of n tokens) from the input list.
- Precision Calculation:
  - The calculate_precision function computes the modified precision score for a given candidate sentence in comparison to one or more reference sentences, using n-grams for the calculation.
  - It counts the occurrences of n-grams in both the candidate and reference sentences, clips the candidate counts to the maximum reference counts, and computes a precision value.
- BLEU Calculation:
  - The calculate_bleu function takes a candidate sentence, a list of reference sentences, and a list of weights as input.
  - It tokenizes the input sentences, calculates precision for different n-gram sizes, combines them using a weighted geometric mean, and applies the brevity penalty.
  - The BLEU score is a combination of precision values for different n-gram sizes, and the weights are used to assign importance to each n-gram size.
- Example Usage:
  - An example is provided at the end, where a candidate sentence (“The cat is on the mat”) is compared to two reference sentences (“There is a cat on the mat” and “The mat has a cat”).
  - The weights for different n-gram sizes are set to equal values (0.25 each), and the BLEU score is calculated using the calculate_bleu function.
  - The final BLEU score is printed out.
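As a quick cross-check of the hand-rolled implementation, NLTK’s built-in sentence_bleu can be run on the same example. The exact value may differ slightly from the code above, since NLTK has its own smoothing options and brevity-penalty details:

import nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

nltk.download('punkt')

candidate = nltk.word_tokenize("The cat is on the mat")
references = [nltk.word_tokenize("There is a cat on the mat"),
              nltk.word_tokenize("The mat has a cat")]

# method1 smoothing adds a small count to zero n-gram matches so that
# short sentences do not collapse the geometric mean to zero
smoothing = SmoothingFunction().method1

score = sentence_bleu(references, candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=smoothing)
print("NLTK BLEU score:", score)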
If you have any doubts or suggestions, please feel free to ask, and I will do my best to help or improve. Good-bye until next time.