supervised_evaluation_step_amp
- ignite.engine.supervised_evaluation_step_amp(model, device=None, non_blocking=False, prepare_batch=<function _prepare_batch>, output_transform=<function <lambda>>)
Factory function for supervised evaluation using torch.cuda.amp.
- Parameters
model (Module) – the model to evaluate.
device (Optional[Union[str, device]]) – device type specification (default: None). Applies to batches after starting the engine. Model will not be moved.
non_blocking (bool) – if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
prepare_batch (Callable) – function that receives batch, device, non_blocking and outputs a tuple of tensors (batch_x, batch_y).
output_transform (Callable) – function that receives x, y, y_pred and returns the value to be assigned to the engine’s state.output after each iteration. Default is returning (y_pred, y), which fits the output expected by metrics. If you change it, you should also use output_transform in the metrics.
- Returns
Inference function.
- Return type
Callable
Note
engine.state.output for this engine is defined by the output_transform parameter and is a tuple of (batch_pred, batch_y) by default.
Warning
The internal use of device has changed. device will now only be used to move the input data to the correct device. The model should be moved by the user before creating an optimizer.
New in version 0.4.5.
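For illustration, a minimal sketch of wiring the returned step function into an Engine for evaluation. The toy model, data loader, and metric below are assumptions for the example, not part of the documented API; replace them with your own.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

from ignite.engine import Engine, supervised_evaluation_step_amp
from ignite.metrics import Accuracy

# Illustrative model and validation data; replace with your own.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(4, 2).to(device)  # move the model yourself (see the Warning above)
val_loader = DataLoader(
    TensorDataset(torch.randn(64, 4), torch.randint(0, 2, (64,))),
    batch_size=16,
)

# Build the AMP evaluation step and wrap it in an Engine.
step = supervised_evaluation_step_amp(model, device=device, non_blocking=True)
evaluator = Engine(step)

# The default output_transform returns (y_pred, y), which is the format
# metrics expect, so Accuracy can be attached without extra transforms.
Accuracy().attach(evaluator, "accuracy")

state = evaluator.run(val_loader)
print(state.metrics["accuracy"])
```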