braindecode.modules.MLP
class braindecode.modules.MLP(in_features: int, hidden_features=None, out_features=None, activation=torch.nn.GELU, drop=0.0, normalize=False)
Multilayer Perceptron (MLP) with GELU activation and optional dropout.
Also known as a fully connected feedforward network, an MLP is a sequence of non-linear parametric functions

\[h_{i + 1} = a_{i + 1}(h_i W_{i + 1}^T + b_{i + 1}),\]

over feature vectors \(h_i\), with input and output feature vectors \(x = h_0\) and \(y = h_L\), respectively. The non-linear functions \(a_i\) are called activation functions. The trainable parameters of an MLP are its weights and biases \(\phi = \{W_i, b_i \mid i = 1, \dots, L\}\).
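As a minimal usage sketch (not part of the original reference): the snippet below constructs the module from the signature above and runs a forward pass over a batch of feature vectors. It assumes the common MLP-block convention that `hidden_features` and `out_features` fall back to `in_features` when left as `None`, so the output width matches the input width.

```python
import torch
from braindecode.modules import MLP

# Minimal sketch: an MLP over 40-dimensional feature vectors.
# Assumption: hidden_features/out_features left as None fall back to
# in_features (a common convention in MLP blocks, not confirmed here).
mlp = MLP(in_features=40, drop=0.1)

x = torch.randn(16, 40)  # batch of 16 input feature vectors (h_0 above)
y = mlp(x)               # output feature vectors (h_L above)
print(y.shape)           # expected torch.Size([16, 40]) under that assumption
```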