ignite.utils

Module with helper methods.

convert_tensor

Move tensors to the given device.

apply_to_tensor

Apply a function to a tensor, or to a mapping or sequence of tensors.

apply_to_type

Apply a function to an object of input_type, or to a mapping or sequence of objects of input_type.

to_onehot

Convert a tensor of indices of any shape (N, ...) to a tensor of one-hot indicators of shape (N, num_classes, ...) and of type uint8.

setup_logger

Sets up a logger: name, level, format, etc.

manual_seed

Set up the random state from a seed for torch, random, and optionally numpy (if it can be imported).

hash_checkpoint

Hash the checkpoint file, renaming it to <filename>-<hash>.<ext>, so it can be used with the check_hash option of torch.hub.load_state_dict_from_url().

ignite.utils.apply_to_tensor(x, func)[source]

Apply a function to a tensor, or to a mapping or sequence of tensors.

Parameters
  • x (Union[Tensor, Sequence, Mapping, str, bytes]) – input tensor or mapping, or sequence of tensors.

  • func (Callable) – the function to apply on x.

Return type

Union[Tensor, Sequence, Mapping, str, bytes]
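
A minimal usage sketch (the nested batch structure below is made up for illustration): the function is applied to every tensor found in the container, and the container structure is preserved.

import torch
from ignite.utils import apply_to_tensor

batch = {"image": torch.rand(2, 3), "targets": [torch.tensor([1.0, -2.0])]}
# torch.abs is applied to each tensor; the dict/list structure is kept.
result = apply_to_tensor(batch, torch.abs)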

ignite.utils.apply_to_type(x, input_type, func)[source]

Apply a function to an object of input_type, or to a mapping or sequence of objects of input_type.

Parameters
  • x (Union[Any, Sequence, Mapping, str, bytes]) – object or mapping or sequence.

  • input_type (Union[Type, Tuple[Type[Any], Any]]) – data type of x.

  • func (Callable) – the function to apply on x.

Return type

Union[Any, Sequence, Mapping, str, bytes]
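
A minimal sketch with a made-up config mapping: values of input_type are transformed, while str values pass through unchanged (note str and bytes in the return type above).

from ignite.utils import apply_to_type

config = {"lr": 0.01, "momentum": 0.9, "optimizer": "sgd"}
# Doubles every float in the mapping; the str value is returned as-is.
result = apply_to_type(config, float, lambda v: v * 2)
# {'lr': 0.02, 'momentum': 1.8, 'optimizer': 'sgd'}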

ignite.utils.convert_tensor(x, device=None, non_blocking=False)[source]

Move tensors to the given device.

Parameters
  • x (Union[Tensor, Sequence, Mapping, str, bytes]) – input tensor or mapping, or sequence of tensors.

  • device (Optional[Union[str, device]]) – the device to move x to.

  • non_blocking (bool) – if True, convert a CPU Tensor with pinned memory to a CUDA Tensor asynchronously with respect to the host, if possible.

Return type

Union[Tensor, Sequence, Mapping, str, bytes]
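
A minimal usage sketch (the batch structure is made up for illustration):

import torch
from ignite.utils import convert_tensor

device = "cuda" if torch.cuda.is_available() else "cpu"
batch = {"x": torch.rand(4, 3), "y": torch.tensor([0, 1, 2, 3])}
# Each tensor in the mapping is moved to the device; non_blocking=True
# allows an asynchronous copy when the source tensor is in pinned memory.
batch = convert_tensor(batch, device=device, non_blocking=True)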

ignite.utils.hash_checkpoint(checkpoint_path, output_dir)[source]

Hash the checkpoint file, renaming it to <filename>-<hash>.<ext>, so it can be used with the check_hash option of torch.hub.load_state_dict_from_url().

Parameters
  • checkpoint_path (Union[str, Path]) – Path to the checkpoint file.

  • output_dir (Union[str, Path]) – Output directory to store the hashed checkpoint file (will be created if it does not exist).

Returns

Path to the hashed checkpoint file and the first 8 digits of its SHA256 hash.

Return type

Tuple[Path, str]
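
A minimal sketch of the call; the paths and the example hash below are hypothetical:

from ignite.utils import hash_checkpoint

hashed_path, sha_prefix = hash_checkpoint("model.pt", output_dir="./hashed")
# e.g. hashed_path == Path('hashed/model-1a2b3c4d.pt'), sha_prefix == '1a2b3c4d'

The renamed file can then be verified on download via torch.hub.load_state_dict_from_url(..., check_hash=True), which compares the hash prefix embedded in the filename against the downloaded file's SHA256 digest.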

New in version 0.4.8.

ignite.utils.manual_seed(seed)[source]

Set up the random state from a seed for torch, random, and optionally numpy (if it can be imported).

Parameters

seed (int) – Random state seed

Return type

None
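
A one-line usage sketch:

from ignite.utils import manual_seed

# Seeds torch (including all CUDA devices, per the 0.4.3 note below),
# the stdlib random module, and numpy when it is importable.
manual_seed(42)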

Changed in version 0.4.3: Added torch.cuda.manual_seed_all(seed).

Changed in version 0.4.5: Added torch_xla.core.xla_model.set_rng_state(seed).

ignite.utils.setup_logger(name='ignite', level=20, stream=None, format='%(asctime)s %(name)s %(levelname)s: %(message)s', filepath=None, distributed_rank=None, reset=False)[source]

Sets up a logger: name, level, format, etc.

Parameters
  • name (Optional[str]) – new name for the logger. If None, the standard logger is used.

  • level (int) – logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG.

  • stream (Optional[TextIO]) – logging stream. If None, the standard stream is used (sys.stderr).

  • format (str) – logging format. By default, %(asctime)s %(name)s %(levelname)s: %(message)s.

  • filepath (Optional[str]) – Optional logging file path. If not None, logs are written to the file.

  • distributed_rank (Optional[int]) – Optional, rank in distributed configuration to avoid logger setup for workers. If None, distributed_rank is initialized to the rank of the current process.

  • reset (bool) – if True, reset an existing logger rather than keeping its format, handlers, and level.

Returns

logging.Logger

Return type

Logger

Examples

Improve log readability when training with a trainer and an evaluator:

from ignite.utils import setup_logger

trainer = ...
evaluator = ...

trainer.logger = setup_logger("trainer")
evaluator.logger = setup_logger("evaluator")

trainer.run(data, max_epochs=10)

# Logs will look like
# 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=10.
# 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:05:23
# 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.
# 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. Time taken: 00:01:02
# ...

Any existing logger can be reset if needed:

logger = setup_logger(name="my-logger", format="=== %(name)s %(message)s")
logger.info("first message")
setup_logger(name="my-logger", format="+++ %(name)s %(message)s", reset=True)
logger.info("second message")

# Logs will look like
# === my-logger first message
# +++ my-logger second message

Change the level of an existing internal logger:

import logging

setup_logger(
    name="ignite.distributed.launcher.Parallel",
    level=logging.WARNING
)

Changed in version 0.4.3: Added stream parameter.

Changed in version 0.4.5: Added reset parameter.

ignite.utils.to_onehot(indices, num_classes)[source]

Convert a tensor of indices of any shape (N, …) to a tensor of one-hot indicators of shape (N, num_classes, …) and of type uint8. The output's device is equal to the input's device.

Parameters
  • indices (Tensor) – input tensor to convert.

  • num_classes (int) – number of classes for one-hot tensor.

Return type

Tensor
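
A minimal sketch for the 1-D case (shape (N,) in, shape (N, num_classes) out):

import torch
from ignite.utils import to_onehot

indices = torch.tensor([0, 2, 1])
onehot = to_onehot(indices, num_classes=3)
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]], dtype=torch.uint8)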

Changed in version 0.4.3: This function is now torchscriptable.