RougeN
- class ignite.metrics.RougeN(ngram=4, multiref='average', alpha=0, output_transform=<function RougeN.<lambda>>, device=device(type='cpu'))
Calculates the Rouge-N score.
The Rouge-N is based on the ngram co-occurrences of candidates and references.
More details can be found in Lin 2004.
update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.
y_pred (list(list(str))) must be a sequence of tokenized candidates, each candidate a sequence of tokens.
y (list(list(list(str)))) must be a sequence of reference lists, each reference a sequence of tokens.
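A minimal sketch of this nesting, reusing the sentences from the example further below (the names candidate and references are only illustrative):

candidate = "the cat is not there".split()   # one tokenized candidate
references = [                               # its tokenized references
    "the cat is on the mat".split(),
    "there is a cat on the mat".split(),
]
y_pred = [candidate]     # sequence of candidates
y = [references]         # sequence of reference lists, aligned with y_pred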
- Parameters
ngram (int) – ngram order (default: 4).
multiref (str) – how scores are reduced over multiple references. Valid values are “best” and “average” (default: “average”).
alpha (float) – controls the importance between recall and precision (alpha -> 0: recall is more important, alpha -> 1: precision is more important); see the sketch after this parameter list.
output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs.
device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
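The exact weighting used internally is not spelled out here; the sketch below is one form consistent with the behavior described for alpha and with the example output further down (where alpha=0 makes the F-score equal to recall). Treat the helper rouge_f_score as an illustration, not the library’s implementation.

def rouge_f_score(precision, recall, alpha=0.0):
    # Hypothetical helper: interpolates between recall (alpha -> 0)
    # and precision (alpha -> 1).
    if precision * recall == 0:
        return 0.0
    return precision * recall / ((1 - alpha) * precision + alpha * recall)

# With the example values below (P=0.5, R=0.4) and the default alpha=0,
# this reproduces Rouge-2-F == 0.4.
print(rouge_f_score(0.5, 0.4, alpha=0.0))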
Examples
For more information on how the metric works with Engine, visit Attach Engine API.

from ignite.metrics import RougeN

m = RougeN(ngram=2, multiref="best")

candidate = "the cat is not there".split()
references = [
    "the cat is on the mat".split(),
    "there is a cat on the mat".split()
]

m.update(([candidate], [references]))

print(m.compute())
{'Rouge-2-P': 0.5, 'Rouge-2-R': 0.4, 'Rouge-2-F': 0.4}
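The metric can also be attached to an Engine via the standard attach API. The sketch below is an illustration under assumptions: the eval_step function and the "rouge-2" name are made up for the example, and each batch is assumed to already be a tokenized ([candidates], [reference lists]) pair in the update format described above.

from ignite.engine import Engine
from ignite.metrics import RougeN

def eval_step(engine, batch):
    # Batch is already in the (y_pred, y) form expected by update.
    return batch

evaluator = Engine(eval_step)
RougeN(ngram=2, multiref="best").attach(evaluator, "rouge-2")

candidate = "the cat is not there".split()
references = [
    "the cat is on the mat".split(),
    "there is a cat on the mat".split(),
]

state = evaluator.run([([candidate], [references])])
print(state.metrics["rouge-2"])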
New in version 0.4.5.
Methods