Fbeta
- ignite.metrics.Fbeta(beta, average=True, precision=None, recall=None, output_transform=None, device=device(type='cpu'))
Calculates the F-beta score:

\[F_\beta = \left(1 + \beta^2\right) \frac{\mathrm{precision} \cdot \mathrm{recall}}{\beta^2 \cdot \mathrm{precision} + \mathrm{recall}}\]

where \(\beta\) is a positive real factor.
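For example, with \(\beta = 1\) this reduces to the familiar F1 score, the harmonic mean of precision and recall:

\[F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}\]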
update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

y_pred must be in the shape (batch_size, num_categories, …) or (batch_size, …).
y must be in the shape (batch_size, …).
- Parameters
beta (float) – weight of precision in the harmonic mean
average (bool) – if True, the F-beta score is computed as the unweighted average across all classes (in the multiclass case); otherwise, a tensor with the F-beta score for each class is returned
precision (Optional[Precision]) – a Precision metric object with average=False, used to compute the F-beta score
recall (Optional[Recall]) – a Recall metric object with average=False, used to compute the F-beta score
output_transform (Optional[Callable]) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. It is used only if precision or recall are not provided.
device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
- Returns
MetricsLambda, F-beta metric
- Return type
MetricsLambda
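Because the returned metric is a MetricsLambda, it attaches and composes like any other metric. A minimal sketch of standalone use, assuming the documented behaviour that Fbeta builds the Precision and Recall metrics internally when none are passed:

```python
import torch
from ignite.engine import Engine
from ignite.metrics import Fbeta

def eval_step(engine, batch):
    return batch

evaluator = Engine(eval_step)

# With no precision/recall objects passed, Fbeta creates them internally;
# beta=2.0 weights recall more heavily than precision.
f2 = Fbeta(beta=2.0)  # average=True by default
f2.attach(evaluator, "f2")

y_true = torch.tensor([1, 0, 1, 1, 0, 1])
y_pred = torch.tensor([1, 0, 1, 0, 1, 1])
state = evaluator.run([[y_pred, y_true]])
# precision = recall = 0.75 on these toy tensors, so every F-beta equals 0.75
print(state.metrics["f2"])
```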
Examples
For more information on how the metric works with Engine, visit Attach Engine API.

```python
from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests
def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests
param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`
def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests
default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
```
Binary case
```python
P = Precision(average=False)
R = Recall(average=False)
metric = Fbeta(beta=1.0, precision=P, recall=R)
metric.attach(default_evaluator, "f-beta")
y_true = torch.tensor([1, 0, 1, 1, 0, 1])
y_pred = torch.tensor([1, 0, 1, 0, 1, 1])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics["f-beta"])
```

0.7499...
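To see where this number comes from: these tensors give 3 true positives, 1 false positive (index 4) and 1 false negative (index 3), so precision = recall = 3/4, and for \(\beta = 1\):

\[F_1 = \frac{2 \cdot 0.75 \cdot 0.75}{0.75 + 0.75} = 0.75\]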
Multiclass case
```python
P = Precision(average=False)
R = Recall(average=False)
metric = Fbeta(beta=1.0, precision=P, recall=R)
metric.attach(default_evaluator, "f-beta")
y_true = torch.tensor([2, 0, 2, 1, 0, 1])
y_pred = torch.tensor([
    [0.0266, 0.1719, 0.3055],
    [0.6886, 0.3978, 0.8176],
    [0.9230, 0.0197, 0.8395],
    [0.1785, 0.2670, 0.6084],
    [0.8448, 0.7177, 0.7288],
    [0.7748, 0.9542, 0.8573],
])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics["f-beta"])
```

0.5222...
The F-beta score can be computed for each class as shown below:
```python
P = Precision(average=False)
R = Recall(average=False)
metric = Fbeta(beta=1.0, average=False, precision=P, recall=R)
metric.attach(default_evaluator, "f-beta")
y_true = torch.tensor([2, 0, 2, 1, 0, 1])
y_pred = torch.tensor([
    [0.0266, 0.1719, 0.3055],
    [0.6886, 0.3978, 0.8176],
    [0.9230, 0.0197, 0.8395],
    [0.1785, 0.2670, 0.6084],
    [0.8448, 0.7177, 0.7288],
    [0.7748, 0.9542, 0.8573],
])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics["f-beta"])
```

tensor([0.5000, 0.6667, 0.4000], dtype=torch.float64)
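These per-class values tie back to the averaged multiclass result above: with average=True the score is their unweighted mean,

\[\frac{0.5000 + 0.6667 + 0.4000}{3} \approx 0.5222\]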
The elements of y and y_pred should be 0 or 1. Predictions can be thresholded as shown below:
```python
def thresholded_output_transform(output):
    y_pred, y = output
    y_pred = torch.round(y_pred)
    return y_pred, y

P = Precision(average=False, output_transform=thresholded_output_transform)
R = Recall(average=False, output_transform=thresholded_output_transform)
metric = Fbeta(beta=1.0, precision=P, recall=R)
metric.attach(default_evaluator, "f-beta")
y_true = torch.tensor([1, 0, 1, 1, 0, 1])
y_pred = torch.tensor([0.6, 0.2, 0.9, 0.4, 0.7, 0.65])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics["f-beta"])
```

0.7499...
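torch.round in the transform above thresholds at 0.5, but the transform can apply any cutoff. A minimal sketch with a stricter threshold, assuming the same default_evaluator setup; the 0.7 cutoff and the names strict_output_transform and "f-beta-strict" are illustration-only, not part of the API:

```python
def strict_output_transform(output):
    y_pred, y = output
    # 0.7 is an arbitrary cutoff chosen for illustration
    y_pred = (y_pred > 0.7).long()
    return y_pred, y

P = Precision(average=False, output_transform=strict_output_transform)
R = Recall(average=False, output_transform=strict_output_transform)
metric = Fbeta(beta=1.0, precision=P, recall=R)
metric.attach(default_evaluator, "f-beta-strict")
y_true = torch.tensor([1, 0, 1, 1, 0, 1])
y_pred = torch.tensor([0.6, 0.2, 0.9, 0.4, 0.7, 0.65])
state = default_evaluator.run([[y_pred, y_true]])
# only index 2 (0.9) clears the cutoff: precision = 1.0, recall = 0.25, F1 = 0.4
print(state.metrics["f-beta-strict"])
```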