braindecode.models.EEGInceptionMI#
- class braindecode.models.EEGInceptionMI(n_chans=None, n_outputs=None, input_window_seconds=4.5, sfreq=250, n_convs=5, n_filters=48, kernel_unit_s=0.1, activation=ReLU(), chs_info=None, n_times=None, in_channels=None, n_classes=None, input_window_s=None, add_log_softmax=True)[source]#
EEG Inception for Motor Imagery, as proposed in [1].
The model is closely based on the original InceptionNet for computer vision. Its main goal is to extract features in parallel at different scales. The network has two blocks, each made of 3 inception modules with a skip connection.
The model is fully described in [1].
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
input_window_seconds (float, optional) – Size of the input, in seconds. Set to 4.5 s as in [1] for dataset BCI IV 2a.
sfreq (float, optional) – EEG sampling frequency in Hz. Defaults to 250 Hz as in [1] for dataset BCI IV 2a.
n_convs (int, optional) – Number of convolutions per inception module's wide branching. Defaults to 5 as in [1] for dataset BCI IV 2a.
n_filters (int, optional) – Number of convolutional filters for all layers of this type. Set to 48 as in [1] for dataset BCI IV 2a.
kernel_unit_s (float, optional) – Size in seconds of the basic 1D convolutional kernel used in inception modules. Each convolutional layer in such modules has kernels of increasing size, odd multiples of this value (e.g. 0.1, 0.3, 0.5, 0.7, 0.9 here for `n_convs`=5; see the sketch after this list). Defaults to 0.1 s.
activation (nn.Module) – Activation function. Defaults to ReLU activation.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
in_channels (int) – Alias for n_chans.
n_classes (int) – Alias for n_outputs.
input_window_s (float, optional) – Alias for input_window_seconds.
add_log_softmax (bool) – Whether to use a log-softmax non-linearity as the output function. The LogSoftmax final layer will be removed in the future; please adjust your loss function accordingly (e.g. CrossEntropyLoss for raw logits). Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
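The mapping from `kernel_unit_s` to actual kernel lengths can be made concrete with a small computation. The helper below is a hypothetical illustration, not part of braindecode's API; it assumes the kernel lengths are the odd multiples of the base unit converted to samples, as described for `kernel_unit_s` above:

```python
# Hypothetical helper (not part of braindecode) illustrating how the
# inception-branch kernel lengths in samples follow from kernel_unit_s,
# sfreq and n_convs: odd multiples of the base unit, as described above.
def kernel_lengths(kernel_unit_s=0.1, sfreq=250.0, n_convs=5):
    return [int(round((2 * i + 1) * kernel_unit_s * sfreq)) for i in range(n_convs)]

print(kernel_lengths())  # [25, 75, 125, 175, 225] samples = 0.1...0.9 s at 250 Hz
```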
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
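A minimal usage sketch, assuming the defaults shown in the signature and the BCI IV 2a configuration described above (here, 22 channels and 4 classes are illustrative choices). With add_log_softmax=True the outputs are log-probabilities, so they pair with NLLLoss rather than CrossEntropyLoss:

```python
import torch
from torch import nn
from braindecode.models import EEGInceptionMI

# 22 EEG channels and 4 classes, as in BCI IV 2a
model = EEGInceptionMI(n_chans=22, n_outputs=4,
                       input_window_seconds=4.5, sfreq=250)

# Dummy batch of 8 windows: (batch_size, n_chans, n_times),
# with n_times = 4.5 s * 250 Hz = 1125 samples
X = torch.randn(8, 22, 1125)
log_probs = model(X)  # shape: (8, 4)

# add_log_softmax=True -> log-probabilities, so use NLLLoss;
# with raw logits (once LogSoftmax is removed), use CrossEntropyLoss.
y = torch.randint(0, 4, (8,))
loss = nn.NLLLoss()(log_probs, y)
```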
Notes
This implementation is not guaranteed to be correct: it has not been checked by the original authors and was reimplemented solely from the description in the paper [1].
References
[1] Zhang, C., Kim, Y.-K., & Eskandarian, A. (2021). EEG-inception: an accurate and robust end-to-end neural network for EEG-based motor imagery classification. Journal of Neural Engineering, 18(4), 046014.
Methods
- forward(X: Tensor) → Tensor [source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters:
X (torch.Tensor) – Input batch of EEG windows, of shape (batch_size, n_chans, n_times).
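A short illustration of the note above, reusing `model` and `X` from the earlier sketch:

```python
# Preferred: calling the module instance runs any registered hooks
out = model(X)

# Discouraged: calling forward() directly silently skips those hooks
out = model.forward(X)
```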