braindecode.modules.MLP

class braindecode.modules.MLP(in_features: int, hidden_features=None, out_features=None, activation=<class 'torch.nn.modules.activation.GELU'>, drop=0.0, normalize=False)

Multilayer Perceptron (MLP) with GELU activation and optional dropout.

Also known as a fully connected feedforward network, an MLP is a sequence of non-linear parametric functions

\[h_{i+1} = a_{i+1}\left(h_i W_{i+1}^\top + b_{i+1}\right),\]

over feature vectors \(h_i\), with the input and output feature vectors \(x = h_0\) and \(y = h_L\), respectively. The non-linear functions \(a_i\) are called activation functions. The trainable parameters of an MLP are its weights and biases \(\phi = \{W_i, b_i \mid i = 1, \dots, L\}\).
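The layer recurrence above can be sketched directly in NumPy. This is an illustration of the math only, not the module's actual implementation: the layer sizes, the tanh-based GELU approximation, and the helper names `gelu` and `mlp_forward` are all chosen here for demonstration.

```python
import numpy as np

def gelu(x):
    # Common tanh approximation of the GELU activation
    # (the module defaults to torch.nn.GELU).
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def mlp_forward(x, weights, biases):
    # Apply h_{i+1} = a_{i+1}(h_i W_{i+1}^T + b_{i+1}) layer by layer,
    # starting from h_0 = x and returning y = h_L.
    h = x
    for W, b in zip(weights, biases):
        h = gelu(h @ W.T + b)
    return h

rng = np.random.default_rng(0)
# Illustrative shapes: in_features=4, one hidden layer of 8, out_features=2.
shapes = [(8, 4), (2, 8)]
weights = [rng.standard_normal(s) * 0.1 for s in shapes]
biases = [np.zeros(s[0]) for s in shapes]

x = rng.standard_normal((3, 4))      # batch of 3 input feature vectors
y = mlp_forward(x, weights, biases)
print(y.shape)                       # → (3, 2)
```

In the actual module, each hidden width can be set via `hidden_features`, and `drop` inserts dropout between layers; the sketch omits both for clarity.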