Frequency#
- class ignite.metrics.Frequency(output_transform=<function Frequency.<lambda>>, device=device(type='cpu'), skip_unrolling=False)[source]#
Provides metrics for the number of examples processed per second.
- Parameters
output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs.
device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
skip_unrolling (bool) – specifies whether the output should be unrolled before being fed to the update method. Should be true for multi-output models, for example, if y_pred contains multi-output as (y_pred_a, y_pred_b). Alternatively, output_transform can be used to handle this.
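As a minimal sketch of how these arguments fit together (the 'nexamples' key and the eps_metric name are hypothetical, not part of the library):

```python
import torch
from ignite.metrics import Frequency

# Hypothetical engine output: a dict holding the per-iteration count under
# 'nexamples'; output_transform extracts the value that Frequency accumulates.
eps_metric = Frequency(
    output_transform=lambda output: output['nexamples'],
    device=torch.device('cpu'),  # accumulate on CPU (the default)
)
```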
Examples
For more information on how the metric works with Engine, visit Attach Engine API.

```python
# Compute number of tokens processed
wps_metric = Frequency(output_transform=lambda x: x['ntokens'])
wps_metric.attach(trainer, name='wps')
# Logging with TQDM
ProgressBar(persist=True).attach(trainer, metric_names=['wps'])
# Progress bar will look like
# Epoch [2/10]: [12/24] 50%|█████ , wps=400 [00:17<1:23]
```
To compute examples processed per second every 50th iteration:
```python
# Compute number of tokens processed
wps_metric = Frequency(output_transform=lambda x: x['ntokens'])
wps_metric.attach(trainer, name='wps', event_name=Events.ITERATION_COMPLETED(every=50))
# Logging with TQDM
ProgressBar(persist=True).attach(trainer, metric_names=['wps'])
# Progress bar will look like
# Epoch [2/10]: [50/100] 50%|█████ , wps=400 [00:17<00:35]
```
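The snippets above assume an existing trainer. A self-contained sketch along the same lines, with a dummy processing step standing in for real work (the step function and batch contents are placeholders):

```python
import time
from ignite.engine import Engine, Events
from ignite.metrics import Frequency

def step(engine, batch):
    time.sleep(0.01)                  # stand-in for real processing
    return {'ntokens': len(batch)}    # items handled this iteration

trainer = Engine(step)

wps_metric = Frequency(output_transform=lambda x: x['ntokens'])
wps_metric.attach(trainer, name='wps')

@trainer.on(Events.ITERATION_COMPLETED)
def log_wps(engine):
    print(f"iter {engine.state.iteration}: wps={engine.state.metrics['wps']}")

data = [list(range(32))] * 10         # 10 dummy batches of 32 "tokens"
trainer.run(data, max_epochs=1)
```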
Changed in version 0.5.1: skip_unrolling argument is added.

Methods
attach – Attaches current metric to provided engine.
completed – Helper method to compute metric’s value and put into the engine.
compute – Computes the metric based on its accumulated state.
reset – Resets the metric to its initial state.
update – Updates the metric’s state using the passed batch output.
- attach(engine, name, event_name=Events.ITERATION_COMPLETED)[source]#
Attaches current metric to provided engine. At the end of the engine’s run, the engine.state.metrics dictionary will contain the computed metric’s value under the provided name.
- Parameters
engine (Engine) – the engine to which the metric must be attached
name (str) – the name of the metric to attach
usage – the usage of the metric. Valid string values should be ignite.metrics.metric.EpochWise.usage_name (default) or ignite.metrics.metric.BatchWise.usage_name.
event_name (Events) –
- Return type
None
Examples
```python
metric = ...
metric.attach(engine, "mymetric")
assert "mymetric" in engine.run(data).metrics
assert metric.is_attached(engine)
```
Example with usage:
```python
metric = ...
metric.attach(engine, "mymetric", usage=BatchWise.usage_name)
assert "mymetric" in engine.run(data).metrics
assert metric.is_attached(engine, usage=BatchWise.usage_name)
```
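Because Frequency overrides attach() with an event_name argument (see the signature above), a hedged variant in the same placeholder style (engine and data are assumed to exist, and the run is assumed to last at least 50 iterations so the handler fires) could be:

```python
from ignite.engine import Events
from ignite.metrics import Frequency

metric = Frequency(output_transform=lambda x: x['ntokens'])
metric.attach(engine, "wps", event_name=Events.ITERATION_COMPLETED(every=50))
# After the run, the value computed at the last firing of the event remains
# stored under the given name.
assert "wps" in engine.run(data).metrics
```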
- completed(engine, name)[source]#
Helper method to compute metric’s value and put into the engine. It is automatically attached to the engine with attach(). If the metric’s value is a torch tensor, it is explicitly sent to CPU device.
- Parameters
engine (Engine) – the engine to which the metric is attached
name (str) – the name of the metric used as key in dict engine.state.metrics
- Return type
None
Changed in version 0.4.3: Added dict in metrics results.
Changed in version 0.4.5: metric’s value is put on CPU if torch tensor.
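For illustration, a sketch of the effect of completed() when invoked by hand (normally the attached event triggers it; a trainer that has already run and updated the metric is assumed):

```python
# Assuming `trainer` has run so the metric holds accumulated counts,
# completed() computes the value and stores it under the given name.
wps_metric.completed(trainer, "wps")
print(trainer.state.metrics["wps"])   # examples (tokens) per second
```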
- compute()[source]#
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
- the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.
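The reset / update / compute cycle listed under Methods can also be driven without an engine. A minimal sketch, assuming update is passed the per-iteration count directly (as the default output_transform would supply):

```python
import time
from ignite.metrics import Frequency

freq = Frequency()      # identity output_transform: update receives the count itself
freq.reset()
for _ in range(5):
    time.sleep(0.1)     # stand-in for real work
    freq.update(64)     # 64 examples processed this iteration
print(freq.compute())   # approximate examples per second (~640 here)
```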