GpuInfo#
- class ignite.metrics.GpuInfo[source]#
Provides GPU information: a) used memory percentage, b) GPU utilization percentage, as Metric values at each iteration. This metric requires the pynvml package, version <12.
Note
If GPU utilization reports “N/A” for a given GPU, the corresponding metric value is not set.
Examples
# Default GPU measurements
GpuInfo().attach(trainer, name='gpu')  # metric names are 'gpu:X mem(%)', 'gpu:X util(%)'

# Logging with TQDM
ProgressBar(persist=True).attach(trainer, metric_names=['gpu:0 mem(%)', 'gpu:0 util(%)'])
# Progress bar will look like
# Epoch [2/10]: [12/24] 50%|█████ , gpu:0 mem(%)=79, gpu:0 util(%)=59 [00:17<1:23]

# Logging with Tensorboard
tb_logger.attach(trainer,
                 log_handler=OutputHandler(tag="training", metric_names='all'),
                 event_name=Events.ITERATION_COMPLETED)
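As an additional sketch (not from the original documentation), GPU stats can also be sampled on a coarser event via the event_name argument of attach(); the trainer below is a placeholder Engine, and the snippet assumes pynvml <12 and a visible CUDA device:

from ignite.engine import Engine, Events
from ignite.metrics import GpuInfo

# Placeholder training step; replace with a real training update function.
trainer = Engine(lambda engine, batch: batch)

# Sample GPU memory / utilization once per epoch instead of every iteration
# (assumes pynvml < 12 is installed and at least one CUDA device is visible).
GpuInfo().attach(trainer, name='gpu', event_name=Events.EPOCH_COMPLETED)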
Methods
attach – Attaches current metric to provided engine.
completed – Helper method to compute the metric's value and put it into the engine's state.
compute – Computes the metric based on its accumulated state.
reset – Resets the metric to its initial state.
update – Updates the metric's state using the passed batch output.
- attach(engine, name='gpu', event_name=Events.ITERATION_COMPLETED)[source]#
Attaches current metric to provided engine. At the end of the engine’s run, the engine.state.metrics dictionary will contain the computed metric’s value under the provided name.
- Parameters
engine (Engine) – the engine to which the metric must be attached
name (str) – the name of the metric to attach
usage – the usage of the metric. Valid string values should be ignite.metrics.metric.EpochWise.usage_name (default) or ignite.metrics.metric.BatchWise.usage_name.
- Return type
None
Examples
metric = ...
metric.attach(engine, "mymetric")
assert "mymetric" in engine.run(data).metrics
assert metric.is_attached(engine)
Example with usage:
metric = ...
metric.attach(engine, "mymetric", usage=BatchWise.usage_name)
assert "mymetric" in engine.run(data).metrics
assert metric.is_attached(engine, usage=BatchWise.usage_name)
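As a concrete, runnable variant of the generic snippets above (a sketch: ignite.metrics.Accuracy and a trivial evaluation step stand in for metric = ... and engine):

import torch
from ignite.engine import Engine
from ignite.metrics import Accuracy

# Trivial evaluation step returning (y_pred, y), the output format Accuracy expects.
def eval_step(engine, batch):
    y_pred, y = batch
    return y_pred, y

evaluator = Engine(eval_step)

metric = Accuracy()
metric.attach(evaluator, "accuracy")

data = [(torch.tensor([[0.9, 0.1], [0.2, 0.8]]), torch.tensor([0, 1]))]
state = evaluator.run(data)
assert "accuracy" in state.metrics
assert metric.is_attached(evaluator)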
- completed(engine, name)[source]#
Helper method to compute the metric’s value and put it into engine.state.metrics. It is automatically attached to the engine with attach(). If the metric’s value is a torch tensor, it is explicitly moved to the CPU device.
- Parameters
engine (Engine) – the engine to which the metric must be attached
name (str) – the name of the metric used as key in engine.state.metrics
- Return type
None
Changed in version 0.4.3: Added support for dict values in metric results.
Changed in version 0.4.5: The metric’s value is moved to CPU if it is a torch tensor.
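A minimal sketch of what completed() does; it is normally registered for you by attach(), so calling it by hand is rarely needed (Accuracy is used purely for illustration):

import torch
from ignite.engine import Engine
from ignite.metrics import Accuracy

engine = Engine(lambda engine, batch: batch)

metric = Accuracy()
metric.reset()
metric.update((torch.tensor([[0.1, 0.9]]), torch.tensor([1])))

# completed() calls compute() and stores the result in engine.state.metrics
# under the given name.
metric.completed(engine, "accuracy")
assert engine.state.metrics["accuracy"] == 1.0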
- compute()[source]#
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.
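To illustrate the Mapping case (a sketch with a toy custom metric; the MinMax class and its keys are invented for this example):

from ignite.engine import Engine
from ignite.metrics import Metric

class MinMax(Metric):
    """Toy metric tracking the min and max of scalar engine outputs."""

    def reset(self):
        self._min = float("inf")
        self._max = float("-inf")

    def update(self, output):
        value = float(output)
        self._min = min(self._min, value)
        self._max = max(self._max, value)

    def compute(self):
        # Returning a Mapping: completed() shallow-flattens it into engine.state.metrics.
        return {"min": self._min, "max": self._max}

engine = Engine(lambda engine, batch: batch)
MinMax().attach(engine, "stats")

state = engine.run([3.0, 1.0, 2.0])
print(state.metrics)  # the mapping's entries are flattened into the metrics dict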