Bleu#
- class ignite.metrics.Bleu(ngram=4, smooth='no_smooth', output_transform=<function Bleu.<lambda>>, device=device(type='cpu'))[source]#
Calculates the BLEU score.
$\text{BLEU} = b_{p} \cdot \exp \left( \sum_{n=1}^{N} w_{n} \: \log p_{n} \right)$, where $N$ is the order of n-grams, $b_{p}$ is a sentence brevity penalty, $w_{n}$ are positive weights summing to one, and $p_{n}$ are modified n-gram precisions.
More details can be found in Papineni et al. 2002.
In addition, a review of smoothing techniques can be found in Chen et al. 2014.
Remark:
This implementation is inspired by nltk.
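The formula can be illustrated with a minimal pure-Python sketch, assuming uniform weights $w_{n} = 1/N$ and no smoothing. The helper names `modified_precision` and `bleu` are illustrative only and are not part of ignite's API:

```python
from collections import Counter
from math import exp, log


def modified_precision(candidate, references, n):
    """Modified n-gram precision p_n: each candidate n-gram count is
    clipped by its maximum count in any single reference."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    max_ref = Counter()
    for ref in references:
        ref_counts = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        for ng, c in ref_counts.items():
            max_ref[ng] = max(max_ref[ng], c)
    clipped = sum(min(c, max_ref[ng]) for ng, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0


def bleu(candidate, references, ngram=4):
    """BLEU = b_p * exp(sum_n w_n log p_n) with w_n = 1/N and no smoothing."""
    precisions = [modified_precision(candidate, references, n)
                  for n in range(1, ngram + 1)]
    if min(precisions) == 0.0:
        # Without smoothing, any zero precision drives the score to 0.
        return 0.0
    c = len(candidate)
    # Brevity penalty uses the reference length closest to the candidate length.
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else exp(1 - r / c)
    return bp * exp(sum(log(p) / ngram for p in precisions))


candidate = "the the the the the the the".split()
references = [r.split() for r in
              ["the cat is on the mat", "there is a cat on the mat"]]
print(bleu(candidate, references, ngram=1))  # ≈ 0.2857 (p_1 = 2/7, b_p = 1)
print(bleu(candidate, references, ngram=4))  # 0.0 (no matching bigrams)
```

This reproduces the classic clipping example from Papineni et al. 2002: the seven-token candidate "the the the …" earns unigram credit for at most two occurrences of "the", since no reference contains it more than twice.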
- Parameters
ngram (int) – order of n-grams.
smooth (str) – enable smoothing. Valid options are no_smooth, smooth1, nltk_smooth2 or smooth2. Default: no_smooth.
output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs. By default, metrics require the output as (y_pred, y) or {'y_pred': y_pred, 'y': y}.
device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
Example:
from ignite.metrics.nlp import Bleu

m = Bleu(ngram=4, smooth="smooth1")

y_pred = "the the the the the the the"
y = ["the cat is on the mat", "there is a cat on the mat"]

m.update((y_pred.split(), [_y.split() for _y in y]))

print(m.compute())
New in version 0.4.5.
Methods
Computes the metric based on its accumulated state.
Resets the metric to its initial state.
Updates the metric's state using the passed batch output.
- compute()[source]#
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.