braindecode.models.TIDNet#

class braindecode.models.TIDNet(n_chans=None, n_outputs=None, n_times=None, in_chans=None, n_classes=None, input_window_samples=None, s_growth=24, t_filters=32, drop_prob=0.4, pooling=15, temp_layers=2, spat_layers=2, temp_span=0.05, bottleneck=3, summary=-1, add_log_softmax=True)[source]#

Thinker Invariance DenseNet model from Kostas & Rudzicz (2020).

See [TIDNet] for details.

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • n_times (int) – Number of time samples of the input window.

  • in_chans – Alias for n_chans.

  • n_classes – Alias for n_outputs.

  • input_window_samples – Alias for n_times.

  • s_growth (int) – DenseNet-style growth factor (added filters per DenseFilter).

  • t_filters (int) – Number of temporal filters.

  • drop_prob (float) – Dropout probability.

  • pooling (int) – Max temporal pooling (width and stride).

  • temp_layers (int) – Number of temporal layers.

  • spat_layers (int) – Number of DenseFilters.

  • temp_span (float) – Fraction of n_times that defines the temporal filter length: temp_len = ceil(temp_span * n_times). For example, temp_span = 0.05 with n_times = 1500 yields a temporal filter of length 75 (see the usage sketch below).

  • bottleneck (int) – Bottleneck factor within each DenseFilter.

  • summary (int) – Output size of the AdaptiveAvgPool1d layer. If set to -1, the value is calculated automatically as n_times // pooling.

  • add_log_softmax (bool) – Whether to use a log-softmax non-linearity as the output function. The LogSoftmax final layer will be removed in the future; please adjust your loss function accordingly (e.g. CrossEntropyLoss). See the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.

Raises:
  • ValueError – If some input signal-related parameters are not specified and cannot be inferred.

  • FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
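
Examples

A minimal instantiation sketch; the channel count, class count, and window length below are illustrative assumptions, not values prescribed by this page. Setting add_log_softmax=False pairs the raw outputs with torch.nn.CrossEntropyLoss, as recommended above.

>>> from braindecode.models import TIDNet
>>> # 22 channels, 4 classes, 1125-sample windows are assumed purely for illustration
>>> model = TIDNet(n_chans=22, n_outputs=4, n_times=1125, add_log_softmax=False)
>>> # with the default temp_span=0.05, temp_len = ceil(0.05 * 1125) = 57 samples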

Notes

Code adapted from: SPOClab-ca/ThinkerInvariance

References

[TIDNet]

Kostas, D. & Rudzicz, F. Thinker invariance: enabling deep neural networks for BCI across more people. J. Neural Eng. 17, 056008 (2020). doi: 10.1088/1741-2552/abb7a7.

Methods

forward(x)[source]#

Forward pass.

Parameters:

x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_channels, n_times).
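
An illustrative forward call on the model instantiated in the example above; the batch size and tensor shape are assumptions matching the documented input layout (batch_size, n_channels, n_times).

>>> import torch
>>> x = torch.randn(8, 22, 1125)  # assumed batch of 8 EEG windows
>>> out = model(x)                # expected shape: (8, n_outputs)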