braindecode.models.Deep4Net#

class braindecode.models.Deep4Net(n_chans=None, n_outputs=None, n_times=None, final_conv_length='auto', n_filters_time=25, n_filters_spat=25, filter_time_length=10, pool_time_length=3, pool_time_stride=3, n_filters_2=50, filter_length_2=10, n_filters_3=100, filter_length_3=10, n_filters_4=200, filter_length_4=10, first_conv_nonlin=<function elu>, first_pool_mode='max', first_pool_nonlin=<function identity>, later_conv_nonlin=<function elu>, later_pool_mode='max', later_pool_nonlin=<function identity>, drop_prob=0.5, split_first_layer=True, batch_norm=True, batch_norm_alpha=0.1, stride_before_pool=False, chs_info=None, input_window_seconds=None, sfreq=None, in_chans=None, n_classes=None, input_window_samples=None, add_log_softmax=True)[source]#

Deep ConvNet model from Schirrmeister et al 2017.

Model described in [Schirrmeister2017].

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • n_times (int) – Number of time samples of the input window.

  • final_conv_length (int | str) – Length of the final convolution layer. If set to “auto”, n_times must not be None. Default: “auto”.

  • n_filters_time (int) – Number of temporal filters.

  • n_filters_spat (int) – Number of spatial filters.

  • filter_time_length (int) – Length of the temporal filter in layer 1.

  • pool_time_length (int) – Length of temporal pooling filter.

  • pool_time_stride (int) – Length of stride between temporal pooling filters.

  • n_filters_2 (int) – Number of temporal filters in layer 2.

  • filter_length_2 (int) – Length of the temporal filter in layer 2.

  • n_filters_3 (int) – Number of temporal filters in layer 3.

  • filter_length_3 (int) – Length of the temporal filter in layer 3.

  • n_filters_4 (int) – Number of temporal filters in layer 4.

  • filter_length_4 (int) – Length of the temporal filter in layer 4.

  • first_conv_nonlin (callable) – Non-linear activation function to be used after convolution in layer 1.

  • first_pool_mode (str) – Pooling mode in layer 1. “max” or “mean”.

  • first_pool_nonlin (callable) – Non-linear activation function to be used after pooling in layer 1.

  • later_conv_nonlin (callable) – Non-linear activation function to be used after convolution in later layers.

  • later_pool_mode (str) – Pooling mode in later layers. “max” or “mean”.

  • later_pool_nonlin (callable) – Non-linear activation function to be used after pooling in later layers.

  • drop_prob (float) – Dropout probability.

  • split_first_layer (bool) – Whether to split the first layer into separate temporal and spatial convolutions (True) or use a single temporal convolution (False). There is no non-linearity between the split layers.

  • batch_norm (bool) – Whether to use batch normalisation.

  • batch_norm_alpha (float) – Momentum for BatchNorm2d.

  • stride_before_pool (bool) – If True, apply the temporal stride in the convolution layer instead of in the pooling layer.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • input_window_seconds (float) – Length of the input window in seconds.

  • sfreq (float) – Sampling frequency of the EEG recordings.

  • in_chans – Alias for n_chans.

  • n_classes – Alias for n_outputs.

  • input_window_samples – Alias for n_times.

  • add_log_softmax (bool) – Whether to apply a log-softmax non-linearity as the output function. The LogSoftmax final layer will be removed in the future; please adjust your loss function accordingly (e.g. use CrossEntropyLoss). See the torch.nn loss functions documentation: https://pytorch.org/docs/stable/nn.html#loss-functions.
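The add_log_softmax flag determines which loss function the model output pairs with. A minimal sketch in plain PyTorch (with random logits standing in for the model output, an assumption for illustration) showing the two equivalent pairings:

```python
import torch

# Hypothetical logits standing in for a model's raw output, shape (batch, n_outputs)
logits = torch.randn(8, 4)
targets = torch.randint(0, 4, (8,))

# With add_log_softmax=False the model emits raw logits -> pair with CrossEntropyLoss
loss_ce = torch.nn.CrossEntropyLoss()(logits, targets)

# With add_log_softmax=True the model emits log-probabilities -> pair with NLLLoss
log_probs = torch.log_softmax(logits, dim=1)
loss_nll = torch.nn.NLLLoss()(log_probs, targets)

# Both pairings compute the same quantity, since
# CrossEntropyLoss == NLLLoss applied after log_softmax
assert torch.allclose(loss_ce, loss_nll)
```

Mixing the pairings (e.g. CrossEntropyLoss on log-softmax output) silently applies log-softmax twice, which is why the warning above asks you to match the loss to this flag.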

Raises:
  • ValueError – If some input signal-related parameters are not specified and cannot be inferred.

  • FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.

Notes

If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.

References

[Schirrmeister2017]

Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F. & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, Aug. 2017. Online: http://dx.doi.org/10.1002/hbm.23730

Examples using braindecode.models.Deep4Net#

Convolutional neural network regression model on fake data.

Benchmarking eager and lazy loading