
Conv3dNet

class torchrl.modules.Conv3dNet(in_features: ~typing.Optional[int] = None, depth: ~typing.Optional[int] = None, num_cells: ~typing.Optional[~typing.Union[~typing.Sequence[int], int]] = None, kernel_sizes: ~typing.Union[~typing.Sequence[int], int] = 3, strides: ~typing.Union[~typing.Sequence[int], int] = 1, paddings: ~typing.Union[~typing.Sequence[int], int] = 0, activation_class: ~typing.Union[~typing.Type[~torch.nn.modules.module.Module], ~typing.Callable] = <class 'torch.nn.modules.activation.ELU'>, activation_kwargs: ~typing.Optional[~typing.Union[dict, ~typing.List[dict]]] = None, norm_class: ~typing.Optional[~typing.Union[~typing.Type[~torch.nn.modules.module.Module], ~typing.Callable]] = None, norm_kwargs: ~typing.Optional[~typing.Union[dict, ~typing.List[dict]]] = None, bias_last_layer: bool = True, aggregator_class: ~typing.Optional[~typing.Union[~typing.Type[~torch.nn.modules.module.Module], ~typing.Callable]] = <class 'torchrl.modules.models.utils.SquashDims'>, aggregator_kwargs: ~typing.Optional[dict] = None, squeeze_output: bool = False, device: ~typing.Optional[~typing.Union[~torch.device, str, int]] = None)[source]

A 3D-convolutional neural network.

Parameters:
  • in_features (int, optional) – number of input features. A lazy implementation that automatically retrieves the input size will be used if none is provided.

  • depth (int, optional) – depth of the network. A depth of 1 will produce a single linear layer network with the desired input size, and with an output size equal to the last element of the num_cells argument. If no depth is indicated, the depth information should be contained in the num_cells argument (see below). If num_cells is an iterable and depth is indicated, both should match: len(num_cells) must be equal to the depth.

  • num_cells (int or sequence of int, optional) – number of cells of every layer in between the input and output. If an integer is provided, every layer will have the same number of cells and the depth will be retrieved from depth. If an iterable is provided, the linear layers out_features will match the content of num_cells. Defaults to [32, 32, 32], or [32] * depth if depth is not None.

  • kernel_sizes (int or sequence of int, optional) – Kernel size(s) of the conv network. If iterable, the length must match the depth, defined by the num_cells or depth arguments. Defaults to 3.

  • strides (int or sequence of int) – Stride(s) of the conv network. If iterable, the length must match the depth, defined by the num_cells or depth arguments. Defaults to 1.

  • paddings (int or sequence of int) – Padding(s) of the conv network. If iterable, the length must match the depth, defined by the num_cells or depth arguments. Defaults to 0.

  • activation_class (Type[nn.Module] or callable) – activation class or constructor to be used. Defaults to ELU.

  • activation_kwargs (dict or list of dicts, optional) – kwargs to be used with the activation class. A list of kwargs of length depth with one element per layer can also be provided.

  • norm_class (Type or callable, optional) – normalization class, if any.

  • norm_kwargs (dict or list of dicts, optional) – kwargs to be used with the normalization layers. A list of kwargs of length depth with one element per layer can also be provided.

  • bias_last_layer (bool) – if True, the last Linear layer will have a bias parameter. Defaults to True.

  • aggregator_class (Type[nn.Module] or callable) – aggregator class or constructor to use at the end of the chain. Defaults to SquashDims.

  • aggregator_kwargs (dict, optional) – kwargs for the aggregator_class constructor.

  • squeeze_output (bool) – whether the output should be squeezed of its singleton dimensions. Defaults to False.

  • device (torch.device, optional) – device to create the module on.

Examples

>>> # All of the following examples provide valid, working Conv3dNets
>>> cnet = Conv3dNet(in_features=3, depth=1, num_cells=[32,])
>>> print(cnet)
Conv3dNet(
    (0): Conv3d(3, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
    (1): ELU(alpha=1.0)
    (2): SquashDims()
)
>>> cnet = Conv3dNet(in_features=3, depth=4, num_cells=32)
>>> print(cnet)
Conv3dNet(
    (0): Conv3d(3, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
    (1): ELU(alpha=1.0)
    (2): Conv3d(32, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
    (3): ELU(alpha=1.0)
    (4): Conv3d(32, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
    (5): ELU(alpha=1.0)
    (6): Conv3d(32, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
    (7): ELU(alpha=1.0)
    (8): SquashDims()
)
>>> cnet = Conv3dNet(in_features=3, num_cells=[32, 33, 34, 35])  # defines the depth by the num_cells arg
>>> print(cnet)
Conv3dNet(
    (0): Conv3d(3, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
    (1): ELU(alpha=1.0)
    (2): Conv3d(32, 33, kernel_size=(3, 3, 3), stride=(1, 1, 1))
    (3): ELU(alpha=1.0)
    (4): Conv3d(33, 34, kernel_size=(3, 3, 3), stride=(1, 1, 1))
    (5): ELU(alpha=1.0)
    (6): Conv3d(34, 35, kernel_size=(3, 3, 3), stride=(1, 1, 1))
    (7): ELU(alpha=1.0)
    (8): SquashDims()
)
>>> cnet = Conv3dNet(in_features=3, num_cells=[32, 33, 34, 35], kernel_sizes=[3, 4, 5, (2, 3, 4)])  # defines kernels, possibly rectangular
>>> print(cnet)
Conv3dNet(
    (0): Conv3d(3, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
    (1): ELU(alpha=1.0)
    (2): Conv3d(32, 33, kernel_size=(4, 4, 4), stride=(1, 1, 1))
    (3): ELU(alpha=1.0)
    (4): Conv3d(33, 34, kernel_size=(5, 5, 5), stride=(1, 1, 1))
    (5): ELU(alpha=1.0)
    (6): Conv3d(34, 35, kernel_size=(2, 3, 4), stride=(1, 1, 1))
    (7): ELU(alpha=1.0)
    (8): SquashDims()
)
forward(inputs: Tensor) → Tensor[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
