braindecode.models.TIDNet

class braindecode.models.TIDNet(n_chans=None, n_outputs=None, n_times=None, input_window_seconds=None, sfreq=None, chs_info=None, s_growth=24, t_filters=32, drop_prob=0.4, pooling=15, temp_layers=2, spat_layers=2, temp_span=0.05, bottleneck=3, summary=-1, activation: nn.Module = nn.LeakyReLU)

Thinker Invariance DenseNet model from Kostas & Rudzicz (2020) [TIDNet].

Figure: TIDNet architecture.

See [TIDNet] for details.

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • n_times (int) – Number of time samples of the input window.

  • input_window_seconds (float) – Length of the input window in seconds.

  • sfreq (float) – Sampling frequency of the EEG recordings.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • s_growth (int) – DenseNet-style growth factor (number of filters added per DenseFilter).

  • t_filters (int) – Number of temporal filters.

  • drop_prob (float) – Dropout probability.

  • pooling (int) – Temporal max-pooling size (both kernel width and stride).

  • temp_layers (int) – Number of temporal layers.

  • spat_layers (int) – Number of DenseFilters.

  • temp_span (float) – Fraction of n_times that defines the temporal filter length: temp_len = ceil(temp_span * n_times). For example, temp_span = 0.05 with n_times = 1500 yields a temporal filter of length 75.

  • bottleneck (int) – Bottleneck factor within each DenseFilter.

  • summary (int) – Output size of the AdaptiveAvgPool1d layer. If set to -1, the value is calculated automatically as n_times // pooling.

  • activation (nn.Module, default=nn.LeakyReLU) – Activation function class to apply; should be a PyTorch activation module class (not an instance) such as nn.ReLU or nn.ELU. See the Examples section below.

Raises:
  • ValueError – If some input signal-related parameters are not specified and cannot be inferred.

  • FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.

Notes

Code adapted from: SPOClab-ca/ThinkerInvariance

References

[TIDNet]

Kostas, D. & Rudzicz, F. Thinker invariance: enabling deep neural networks for BCI across more people. J. Neural Eng. 17, 056008 (2020). doi: 10.1088/1741-2552/abb7a7.
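
Examples

A minimal usage sketch; the channel count, class count, and window length below are illustrative assumptions rather than recommended settings, and the output shape assumes the usual (batch_size, n_outputs) classification head:

>>> import torch
>>> from torch import nn
>>> from braindecode.models import TIDNet
>>> # 22-channel EEG, 4 classes, 2-second windows at 250 Hz (illustrative values)
>>> model = TIDNet(n_chans=22, n_outputs=4, n_times=500, activation=nn.ELU)
>>> x = torch.randn(8, 22, 500)  # (batch_size, n_chans, n_times)
>>> model(x).shape
torch.Size([8, 4])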

Methods

forward(x)

Forward pass.

Parameters:

x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_chans, n_times).
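
A quick sketch of calling forward directly, reusing the model and input from the Examples section above (model(x) dispatches to forward; the output shape again assumes a (batch_size, n_outputs) classification head):

>>> out = model.forward(x)  # equivalent to model(x)
>>> out.shape
torch.Size([8, 4])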