
MLP

class torchvision.ops.MLP(in_channels: int, hidden_channels: List[int], norm_layer: Optional[Callable[..., torch.nn.Module]] = None, activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, inplace: Optional[bool] = None, bias: bool = True, dropout: float = 0.0)[source]

This block implements the multi-layer perceptron (MLP) module.

Parameters:
  • in_channels (int) – Number of channels of the input

  • hidden_channels (List[int]) – List of the hidden channel dimensions

  • norm_layer (Callable[..., torch.nn.Module], optional) – Norm layer that will be stacked on top of the linear layer. If None this layer won’t be used. Default: None

  • activation_layer (Callable[..., torch.nn.Module], optional) – Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the linear layer. If None this layer won’t be used. Default: torch.nn.ReLU

  • inplace (bool, optional) – If set, passed to the activation layer and the dropout layer so the operation is performed in-place. Default: None, which uses the respective default of the activation_layer and of the Dropout layer.

  • bias (bool) – Whether to use bias in the linear layer. Default: True

  • dropout (float) – The probability for the dropout layer. Default: 0.0
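Example: a minimal usage sketch. The layer widths, the LayerNorm and GELU choices, and the 0.1 dropout rate below are illustrative, not defaults; the last entry of hidden_channels sets the output dimension.

    >>> import torch
    >>> from torchvision.ops import MLP
    >>> # 64 -> 128 -> 128 -> 10: a norm layer and activation follow each
    >>> # hidden linear layer, with dropout applied throughout
    >>> mlp = MLP(
    ...     in_channels=64,
    ...     hidden_channels=[128, 128, 10],
    ...     norm_layer=torch.nn.LayerNorm,
    ...     activation_layer=torch.nn.GELU,
    ...     dropout=0.1,
    ... )
    >>> x = torch.randn(32, 64)  # batch of 32 feature vectors
    >>> mlp(x).shape
    torch.Size([32, 10])

Because the block is built from torch.nn.Linear layers, it operates on the last dimension, so inputs of shape (..., in_channels) such as (batch, seq_len, in_channels) also work.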
