MLP

class torchvision.ops.MLP(in_channels: int, hidden_channels: List[int], norm_layer: Optional[Callable[..., torch.nn.Module]] = None, activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, inplace: Optional[bool] = True, bias: bool = True, dropout: float = 0.0)

This block implements the multi-layer perceptron (MLP) module.
Parameters:
    in_channels (int) – Number of channels of the input
    hidden_channels (List[int]) – List of the hidden channel dimensions
    norm_layer (Callable[..., torch.nn.Module], optional) – Norm layer that will be stacked on top of the linear layer. If None this layer won't be used. Default: None
    activation_layer (Callable[..., torch.nn.Module], optional) – Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the linear layer. If None this layer won't be used. Default: torch.nn.ReLU
    inplace (bool) – Parameter for the activation layer, which can optionally do the operation in-place. Default: True
    bias (bool) – Whether to use bias in the linear layer. Default: True
    dropout (float) – The probability for the dropout layer. Default: 0.0
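
Example:

A minimal usage sketch. The concrete sizes (in_channels=64, the hidden dimensions, dropout=0.1) and the LayerNorm choice are illustrative assumptions, not defaults of the API. Note that the final entry of hidden_channels acts as the output dimension, since the module ends with a Linear layer mapping to it.

>>> import torch
>>> from torchvision.ops import MLP
>>> # illustrative sizes; any positive dimensions work
>>> mlp = MLP(
...     in_channels=64,
...     hidden_channels=[128, 128, 10],  # last entry is the output dimension
...     norm_layer=torch.nn.LayerNorm,   # stacked after each hidden Linear layer
...     dropout=0.1,
... )
>>> x = torch.rand(32, 64)  # (batch, in_channels)
>>> mlp(x).shape
torch.Size([32, 10])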