braindecode.models.EEGTCNet

class braindecode.models.EEGTCNet(n_chans=None, n_outputs=None, n_times=None, chs_info=None, input_window_seconds=None, sfreq=None, activation: torch.nn.modules.module.Module = <class 'torch.nn.modules.activation.ELU'>, depth_multiplier: int = 2, filter_1: int = 8, kern_length: int = 64, drop_prob: float = 0.5, depth: int = 2, kernel_size: int = 4, filters: int = 12, max_norm_const: float = 0.25)

EEGTCNet model from Ingolfsson et al. (2020) [ingolfsson2020].

Figure: EEGTCNet architecture.

The architecture combines an EEGNet-style convolutional feature extractor with temporal convolutional network (TCN) blocks.

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • n_times (int) – Number of time samples of the input window.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • input_window_seconds (float) – Length of the input window in seconds.

  • sfreq (float) – Sampling frequency of the EEG recordings.

  • activation (nn.Module, optional) – Activation function to use. Default is nn.ELU().

  • depth_multiplier (int, optional) – Depth multiplier for the depthwise convolution. Default is 2.

  • filter_1 (int, optional) – Number of temporal filters in the first convolutional layer. Default is 8.

  • kern_length (int, optional) – Length of the temporal kernel in the first convolutional layer. Default is 64.

  • drop_prob (float, optional) – Dropout probability. Default is 0.5.

  • depth (int, optional) – Number of residual blocks in the TCN. Default is 2.

  • kernel_size (int, optional) – Size of the temporal convolutional kernel in the TCN. Default is 4.

  • filters (int, optional) – Number of filters in the TCN convolutional layers. Default is 12.

  • max_norm_const (float) – Maximum L2-norm constraint imposed on weights of the last fully-connected layer. Defaults to 0.25.

Raises:
  • ValueError – If some input signal-related parameters are not specified and cannot be inferred.

  • FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.

Notes

If some input signal-related parameters are not specified, the model attempts to infer them from the other parameters; see the sketch below.
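
A minimal instantiation sketch, assuming an illustrative 22-channel, 4-class, 1000-sample setup (these shape values are assumptions, not taken from this page); the remaining keyword arguments keep the defaults documented above.

    from braindecode.models import EEGTCNet

    # Illustrative shape values (assumptions): 22 EEG channels,
    # 4 output classes, 1000 time samples per input window.
    model = EEGTCNet(
        n_chans=22,
        n_outputs=4,
        n_times=1000,
        drop_prob=0.5,    # documented default
        kern_length=64,   # documented default
    )

Alternatively, passing chs_info, input_window_seconds and sfreq lets the model attempt to infer n_chans and n_times instead of receiving them explicitly.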

References

[ingolfsson2020]

Ingolfsson, T. M., Hersche, M., Wang, X., Kobayashi, N., Cavigelli, L., & Benini, L. (2020). EEG-TCNet: An accurate temporal convolutional network for embedded motor-imagery brain–machine interfaces. https://doi.org/10.48550/arXiv.2006.00622

Methods

forward(x: Tensor) → Tensor

Forward pass of the EEGTCNet model.

Parameters:
  • x (torch.Tensor) – Input tensor of shape (batch_size, n_chans, n_times).

Returns:
  Output tensor of shape (batch_size, n_outputs).

Return type:
  torch.Tensor
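
A hedged usage sketch of the forward pass, reusing the illustrative 22-channel, 4-class, 1000-sample configuration (assumed values, not from this page):

    import torch
    from braindecode.models import EEGTCNet

    model = EEGTCNet(n_chans=22, n_outputs=4, n_times=1000)
    model.eval()

    # Random batch of 16 windows shaped (batch_size, n_chans, n_times).
    x = torch.randn(16, 22, 1000)
    with torch.no_grad():
        out = model(x)

    print(out.shape)  # expected: torch.Size([16, 4])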