braindecode.models.ShallowFBCSPNet

class braindecode.models.ShallowFBCSPNet(n_chans=None, n_outputs=None, n_times=None, n_filters_time=40, filter_time_length=25, n_filters_spat=40, pool_time_length=75, pool_time_stride=15, final_conv_length='auto', conv_nonlin=<function square>, pool_mode='mean', activation_pool_nonlin=<class 'braindecode.modules.activation.SafeLog'>, split_first_layer=True, batch_norm=True, batch_norm_alpha=0.1, drop_prob=0.5, chs_info=None, input_window_seconds=None, sfreq=None)

Shallow ConvNet model from Schirrmeister et al. (2017) [Schirrmeister2017].

Figure: ShallowNet architecture, as described in [Schirrmeister2017].

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • n_times (int) – Number of time samples of the input window.

  • n_filters_time (int) – Number of temporal filters.

  • filter_time_length (int) – Length of the temporal filter.

  • n_filters_spat (int) – Number of spatial filters.

  • pool_time_length (int) – Length of temporal pooling filter.

  • pool_time_stride (int) – Stride between consecutive temporal pooling windows.

  • final_conv_length (int | str) – Length of the final convolution layer. If set to “auto”, the length of the input signal (n_times) must be specified.

  • conv_nonlin (callable) – Non-linear function to be used after convolution layers.

  • pool_mode (str) – Method to use on pooling layers. “max” or “mean”.

  • activation_pool_nonlin (callable) – Non-linear function to be used after pooling layers.

  • split_first_layer (bool) – If True, split the first layer into separate temporal and spatial convolutions; if False, use a single temporal convolution. No non-linearity is applied between the split layers.

  • batch_norm (bool) – Whether to use batch normalisation.

  • batch_norm_alpha (float) – Momentum for BatchNorm2d.

  • drop_prob (float) – Dropout probability.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • input_window_seconds (float) – Length of the input window in seconds.

  • sfreq (float) – Sampling frequency of the EEG recordings.
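
A minimal usage sketch (the channel count, class count, and window length below are illustrative, not tied to any particular dataset): the model takes a batch of EEG windows shaped (batch, n_chans, n_times) and returns one score per output:

import torch

from braindecode.models import ShallowFBCSPNet

# Illustrative setup: 22-channel EEG, 4 classes, 1000-sample windows
model = ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000)

x = torch.randn(16, 22, 1000)  # (batch, n_chans, n_times)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([16, 4])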

Raises:

ValueError – If some input signal-related parameters are not specified and cannot be inferred.

Notes

If some input signal-related parameters are not specified, the model attempts to infer them from the other parameters (for example, n_times from input_window_seconds and sfreq).
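
For instance, a minimal sketch (the values are illustrative) in which n_times is omitted and inferred as input_window_seconds * sfreq = 4.0 * 250 = 1000 samples:

from braindecode.models import ShallowFBCSPNet

# n_times is not given; it is inferred as input_window_seconds * sfreq = 1000
model = ShallowFBCSPNet(
    n_chans=22,
    n_outputs=4,
    input_window_seconds=4.0,
    sfreq=250.0,
)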

Hugging Face Hub integration

When the optional huggingface_hub package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:

pip install braindecode[hug]

Pushing a model to the Hub:

from braindecode.models import EEGNetv4

# Train your model
model = EEGNetv4(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-eegnet-model", commit_message="Initial model upload"
)

Loading a model from the Hub:

from braindecode.models import EEGNetv4

# Load pretrained model
model = EEGNetv4.from_pretrained("username/my-eegnet-model")

The integration automatically handles EEG-specific parameters (n_chans, n_times, sfreq, chs_info, etc.) by saving them in a config file alongside the model weights. This ensures that loaded models are correctly configured for their original data specifications.

Important

Currently, only EEG-specific parameters (n_outputs, n_chans, n_times, input_window_seconds, sfreq, chs_info) are saved to the Hub. Model-specific parameters (e.g., dropout rates, activation functions, number of filters) are not preserved and will use their default values when loading from the Hub.

To use non-default model parameters, specify them explicitly when calling from_pretrained():

model = EEGNetv4.from_pretrained("user/model", drop_prob=0.3)

Full parameter serialization will be addressed in a future update.

References

[Schirrmeister2017]

Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F. & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, Aug. 2017. Online: http://dx.doi.org/10.1002/hbm.23730

Examples using braindecode.models.ShallowFBCSPNet

  • Simple training on MNE epochs

  • Cropped Decoding on BCIC IV 2a Dataset

  • Basic Brain Decoding on EEG Data

  • How to train, test and tune your model?

  • Hyperparameter tuning with scikit-learn

  • Convolutional neural network regression model on fake data

  • Training a Braindecode model in PyTorch

  • Benchmarking eager and lazy loading

  • Fingers flexion cropped decoding on BCIC IV 4 ECoG Dataset

  • Data Augmentation on BCIC IV 2a Dataset

  • Searching the best data augmentation on BCIC IV 2a Dataset

  • Fingers flexion decoding on BCIC IV 4 ECoG Dataset