braindecode.models.EEGInceptionMI#
- class braindecode.models.EEGInceptionMI(n_chans=None, n_outputs=None, input_window_seconds=None, sfreq=250, n_convs=5, n_filters=48, kernel_unit_s=0.1, activation=<class 'torch.nn.modules.activation.ReLU'>, chs_info=None, n_times=None)[source]#
EEG Inception for Motor Imagery, as proposed in Zhang et al. (2021) [1].
The model closely follows the original InceptionNet for computer vision. Its main goal is to extract features in parallel at different temporal scales. The network consists of two blocks, each made of 3 inception modules with a skip connection.
The model is fully described in [1].
Notes
This implementation is not guaranteed to be correct: it was reimplemented from the description in the paper [1] and has not been checked by the original authors.
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
input_window_seconds (float, optional) – Size of the input, in seconds. Set to 4.5 s as in [1] for dataset BCI IV 2a.
sfreq (float, optional) – EEG sampling frequency in Hz. Defaults to 250 Hz as in [1] for dataset BCI IV 2a.
n_convs (int, optional) – Number of convolutions per inception wide branching. Defaults to 5 as in [1] for dataset BCI IV 2a.
n_filters (int, optional) – Number of convolutional filters for all layers of this type. Set to 48 as in [1] for dataset BCI IV 2a.
kernel_unit_s (float, optional) – Size, in seconds, of the basic 1D convolutional kernel used in inception modules. Each convolutional layer in such modules has kernels of increasing size, odd multiples of this value (e.g. 0.1, 0.3, 0.5, 0.7, 0.9 s here for n_convs=5). Defaults to 0.1 s.
activation (nn.Module) – Activation function. Defaults to ReLU activation.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
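The kernel sizes implied by kernel_unit_s can be sketched as follows (illustrative only; the exact rounding used inside the model may differ):

```python
sfreq = 250.0
kernel_unit_s = 0.1
n_convs = 5

# Odd multiples of the base unit: 0.1, 0.3, 0.5, 0.7, 0.9 s for n_convs=5
kernel_lengths_s = [kernel_unit_s * (2 * i + 1) for i in range(n_convs)]

# Convert each duration to a length in samples at the given sampling rate
kernel_lengths = [int(round(s * sfreq)) for s in kernel_lengths_s]
# 25, 75, 125, 175, 225 samples
```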
References
[1] Zhang, C., Kim, Y.K. and Eskandarian, A., 2021. EEG-inception: an accurate and robust end-to-end neural network for EEG-based motor imagery classification. Journal of Neural Engineering, 18(4), p.046014.
Methods
- forward(X)[source]#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- Return type:
Tensor
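The note above can be demonstrated with a toy module (hypothetical, not part of braindecode): a forward hook fires when the instance is called, but is silently skipped when forward() is invoked directly.

```python
import torch
from torch import nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(4, 2)

    def forward(self, x):
        return self.lin(x)

m = Tiny()
calls = []
# Register a hook that records each completed forward pass
m.register_forward_hook(lambda mod, inp, out: calls.append(1))

x = torch.randn(1, 4)
_ = m(x)          # hook runs: the instance call dispatches through __call__
_ = m.forward(x)  # hook is NOT run: forward() bypasses the hook machinery
```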