Bleu#
- class ignite.metrics.Bleu(ngram=4, smooth='no_smooth', output_transform=<function Bleu.<lambda>>, device=device(type='cpu'), average='macro')[source]#
Calculates the BLEU score.
$\text{BLEU} = b_{p} \cdot \exp \left( \sum_{n=1}^{N} w_{n} \: \log p_{n} \right)$, where $N$ is the order of n-grams, $b_{p}$ is the sentence brevity penalty, $w_{n}$ are positive weights summing to one, and $p_{n}$ are the modified n-gram precisions.
More details can be found in Papineni et al. 2002.
In addition, a review of smoothing techniques can be found in Chen et al. 2014.
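To make the formula concrete, the modified n-gram precisions $p_n$ and the brevity penalty $b_p$ can be sketched in plain Python. This is an illustrative, unsmoothed sketch with uniform weights $w_n = 1/N$; the function names are hypothetical and are not part of ignite's API:

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    # All contiguous n-grams of a token list
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(hypothesis, references, n):
    # Clip each hypothesis n-gram count by its maximum count in any reference
    hyp_counts = Counter(ngrams(hypothesis, n))
    max_ref = Counter()
    for ref in references:
        for g, c in Counter(ngrams(ref, n)).items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in hyp_counts.items())
    return clipped / max(sum(hyp_counts.values()), 1)

def bleu(hypothesis, references, N=4):
    # Uniform weights w_n = 1/N; no smoothing, so any zero precision gives BLEU = 0
    precisions = [modified_precision(hypothesis, references, n) for n in range(1, N + 1)]
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty: penalize hypotheses shorter than the closest reference length
    ref_len = min((len(r) for r in references),
                  key=lambda l: (abs(l - len(hypothesis)), l))
    bp = 1.0 if len(hypothesis) > ref_len else exp(1 - ref_len / len(hypothesis))
    return bp * exp(sum(log(p) / N for p in precisions))
```

For the degenerate hypothesis "the the the the the the the" used later in the Examples section, every bigram precision is zero, so the unsmoothed score collapses to 0; the smoothing options exist precisely to soften such cases.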
update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.
y_pred (list(list(str))) – a list of hypothesis sentences.
y (list(list(list(str)))) – a corpus of lists of reference sentences w.r.t. the hypotheses.
Remark: This implementation is inspired by nltk.
- Parameters
ngram (int) – order of n-grams.
smooth (str) – enable smoothing. Valid options are no_smooth, smooth1, nltk_smooth2 or smooth2. Default: no_smooth.
output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs. By default, metrics require the output as (y_pred, y) or {'y_pred': y_pred, 'y': y}.
device (Union[str, torch.device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
average (str) – specifies which type of averaging to use (macro or micro); for more details refer to https://www.nltk.org/_modules/nltk/translate/bleu_score.html. Default: “macro”.
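The difference between the two averaging modes can be illustrated with a simplified, unigram-only sketch (a hypothetical helper, not ignite's implementation): macro averaging scores each sentence separately and averages the scores, so every sentence carries equal weight, while micro averaging pools the clipped counts over the whole corpus first, so longer sentences weigh more.

```python
from collections import Counter

def clipped_unigram_stats(hypothesis, reference):
    # Clipped match count and hypothesis length for one sentence pair
    hyp, ref = Counter(hypothesis), Counter(reference)
    matches = sum(min(c, ref[w]) for w, c in hyp.items())
    return matches, sum(hyp.values())

corpus = [
    ("the cat sat".split(), "the cat sat".split()),  # perfect, short
    ("a a a a a a a a".split(), "a b c d".split()),  # poor, long
]
per_sentence = [clipped_unigram_stats(h, r) for h, r in corpus]

# Macro: score each sentence, then average the scores equally
macro = sum(m / t for m, t in per_sentence) / len(per_sentence)

# Micro: pool counts across the corpus, then take one ratio
micro = sum(m for m, _ in per_sentence) / sum(t for _, t in per_sentence)
```

Here macro gives (1.0 + 0.125) / 2 = 0.5625 while micro gives 4/11 ≈ 0.364, because the long low-quality hypothesis dominates the pooled counts.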
Examples
from ignite.metrics.nlp import Bleu

m = Bleu(ngram=4, smooth="smooth1")
y_pred = "the the the the the the the"
y = ["the cat is on the mat", "there is a cat on the mat"]
m.update(([y_pred.split()], [[_y.split() for _y in y]]))
print(m.compute())
New in version 0.4.5.
Changed in version 0.4.7: update method has changed and now works on batches of inputs; added average option to handle micro and macro averaging modes.
Methods
Computes the metric based on its accumulated state.
Resets the metric to its initial state.
Updates the metric's state using the passed batch output.
- compute()[source]#
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.