ignite.utils#
Module with helper methods

- convert_tensor: Move tensors to the relevant device.
- apply_to_tensor: Apply a function to a tensor, or to a mapping or sequence of tensors.
- apply_to_type: Apply a function to an object of input_type, or to a mapping or sequence of objects of input_type.
- to_onehot: Convert a tensor of indices of any shape (N, ...) to a tensor of one-hot indicators of shape (N, num_classes, ...) and of type uint8.
- setup_logger: Set up a logger: name, level, format, etc.
- manual_seed: Set up the random state from a seed for torch, random and, optionally, numpy (if it can be imported).
- ignite.utils.apply_to_tensor(x, func)[source]#
Apply a function to a tensor, or to a mapping or sequence of tensors.
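A minimal usage sketch (the nested batch below and the dtype conversion are illustrative assumptions, not part of the API):

import torch

from ignite.utils import apply_to_tensor

# Hypothetical nested batch: the function is applied to every tensor
# found in the mapping, and the mapping structure is preserved
batch = {"image": torch.randint(0, 256, (2, 3)), "label": torch.tensor([0, 1])}
batch = apply_to_tensor(batch, lambda t: t.float())
print(batch["image"].dtype)  # torch.float32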
- ignite.utils.apply_to_type(x, input_type, func)[source]#
Apply a function to an object of input_type, or to a mapping or sequence of objects of input_type.
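A minimal sketch, assuming a nested structure whose leaves are plain Python floats (the metrics structure is hypothetical):

from ignite.utils import apply_to_type

# Every float leaf is passed through the function; the dict/list
# containers are rebuilt around the results
metrics = {"loss": 0.12345, "scores": [0.5678, 0.9123]}
rounded = apply_to_type(metrics, float, lambda v: round(v, 2))
print(rounded)  # {'loss': 0.12, 'scores': [0.57, 0.91]}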
- ignite.utils.convert_tensor(x, device=None, non_blocking=False)[source]#
Move tensors to the relevant device.
- Parameters
x (Union[Tensor, Sequence, Mapping, str, bytes]) – input tensor or mapping, or sequence of input tensors.
device (Optional[Union[str, torch.device]]) – device type to move x to.
non_blocking (bool) – convert a CPU Tensor with pinned memory to a CUDA Tensor asynchronously (if possible).
- Return type
Union[Tensor, Sequence, Mapping, str, bytes]
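A minimal sketch of moving a batch to a target device (the batch contents are illustrative assumptions; the CUDA fallback is there for machines without a GPU):

import torch

from ignite.utils import convert_tensor

# Pick a device; fall back to CPU when CUDA is unavailable
device = "cuda" if torch.cuda.is_available() else "cpu"

batch = {"x": torch.rand(4, 3), "y": torch.tensor([0, 1, 2, 3])}
batch = convert_tensor(batch, device=device, non_blocking=True)
print(batch["x"].device)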
- ignite.utils.manual_seed(seed)[source]#
Set up the random state from a seed for torch, random and, optionally, numpy (if it can be imported).
- Parameters
seed (int) – Random state seed
- Return type
None
Changed in version 0.4.3: Added torch.cuda.manual_seed_all(seed).
Changed in version 0.4.5: Added torch_xla.core.xla_model.set_rng_state(seed).
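A minimal sketch showing that re-seeding reproduces the same random draws (the seed value 42 is arbitrary):

import torch

from ignite.utils import manual_seed

manual_seed(42)
a = torch.rand(2)

manual_seed(42)
b = torch.rand(2)

assert torch.equal(a, b)  # same seed, same draws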
- ignite.utils.setup_logger(name='ignite', level=20, stream=None, format='%(asctime)s %(name)s %(levelname)s: %(message)s', filepath=None, distributed_rank=None, reset=False)[source]#
Set up a logger: name, level, format, etc.
- Parameters
name (Optional[str]) – new name for the logger. If None, the standard logger is used.
level (int) – logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG.
stream (Optional[TextIO]) – logging stream. If None, the standard stream is used (sys.stderr).
format (str) – logging format. By default, %(asctime)s %(name)s %(levelname)s: %(message)s.
filepath (Optional[str]) – Optional logging file path. If not None, logs are written to the file.
distributed_rank (Optional[int]) – Optional rank in a distributed configuration, used to avoid setting up the logger for workers. If None, distributed_rank is initialized to the rank of the current process.
reset (bool) – if True, reset an existing logger rather than keeping its format, handlers, and level.
- Return type
logging.Logger
For example, to improve log readability when training with a trainer and an evaluator:
from ignite.utils import setup_logger

trainer = ...
evaluator = ...

trainer.logger = setup_logger("trainer")
evaluator.logger = setup_logger("evaluator")

trainer.run(data, max_epochs=10)

# Logs will look like
# 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=10.
# 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:05:23
# 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.
# 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. Time taken: 00:01:02
# ...
Any existing logger can be reset if needed:
logger = setup_logger(name="my-logger", format="=== %(name)s %(message)s")
logger.info("first message")

setup_logger(name="my-logger", format="+++ %(name)s %(message)s", reset=True)
logger.info("second message")

# Logs will look like
# === my-logger first message
# +++ my-logger second message
For example, to change the level of an existing internal logger:
import logging

setup_logger(
    name="ignite.distributed.launcher.Parallel",
    level=logging.WARNING,
)
Changed in version 0.4.3: Added stream parameter.
Changed in version 0.4.5: Added reset parameter.
- ignite.utils.to_onehot(indices, num_classes)[source]#
Convert a tensor of indices of any shape (N, ...) to a tensor of one-hot indicators of shape (N, num_classes, ...) and of type uint8. The output's device is equal to the input's device.
- Parameters
indices (torch.Tensor) – input tensor to convert.
num_classes (int) – number of classes for the one-hot tensor.
- Return type
torch.Tensor
Changed in version 0.4.3: This function is now torchscriptable.
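A minimal sketch with class indices of shape (N,), which yields an (N, num_classes) uint8 tensor (the index values are arbitrary examples):

import torch

from ignite.utils import to_onehot

indices = torch.tensor([0, 2, 1])
onehot = to_onehot(indices, num_classes=3)
print(onehot)
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]], dtype=torch.uint8)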