braindecode.models.SincShallowNet#
- class braindecode.models.SincShallowNet(num_time_filters=32, time_filter_len=33, depth_multiplier=2, activation=<class 'torch.nn.modules.activation.ELU'>, drop_prob=0.5, first_freq=5.0, min_freq=1.0, freq_stride=1.0, padding='same', bandwidth=4.0, pool_size=55, pool_stride=12, n_chans=None, n_outputs=None, n_times=None, input_window_seconds=None, sfreq=None, chs_info=None)[source]#
Sinc-ShallowNet from Borra, D. et al. (2020) [borra2020].
(Figure: Convolution Interpretability.)
The Sinc-ShallowNet architecture consists of three fundamental blocks:
Block 1: Spectral and Spatial Feature Extraction
Temporal Sinc-Convolutional Layer: Uses parametrized sinc functions to learn band-pass filters, significantly reducing the number of trainable parameters by only learning the lower and upper cutoff frequencies for each filter.
Spatial Depthwise Convolutional Layer: Applies depthwise convolutions to learn spatial filters for each temporal feature map independently, further reducing parameters and enhancing interpretability.
Batch Normalization
Block 2: Temporal Aggregation
Activation Function: ELU
Average Pooling Layer: Aggregation by averaging along the temporal dimension
Dropout Layer
Flatten Layer
Block 3: Classification
Fully Connected Layer: Maps the feature vector to n_outputs.
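The three blocks above determine the size of the feature vector that reaches the fully connected layer. The bookkeeping can be sketched in plain Python (an illustrative sketch using the default hyperparameters documented below, not the library's own code; the function name is hypothetical):

```python
def sincshallownet_feature_size(n_times, num_time_filters=32,
                                depth_multiplier=2, padding="same",
                                time_filter_len=33, pool_size=55,
                                pool_stride=12):
    """Rough size of the flattened feature vector fed to Block 3."""
    # Block 1: temporal sinc conv ('same' keeps n_times, 'valid' shrinks it);
    # the depthwise spatial conv collapses the channel axis to 1 and
    # multiplies the number of feature maps by depth_multiplier.
    if padding == "same":
        t = n_times
    else:  # 'valid'
        t = n_times - time_filter_len + 1
    n_maps = num_time_filters * depth_multiplier

    # Block 2: average pooling along time, then flatten.
    t_pooled = (t - pool_size) // pool_stride + 1
    return n_maps * t_pooled

print(sincshallownet_feature_size(1000))  # 64 maps * 79 pooled steps -> 5056
```

With a 1000-sample window and the defaults, the classifier therefore sees a 5056-dimensional vector; switching to 'valid' padding shortens the temporal axis and shrinks it accordingly.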
Implementation Notes:
- The sinc-convolutional layer initializes cutoff frequencies uniformly within the desired frequency range and updates them during training while ensuring the lower cutoff remains below the upper cutoff.
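The note above can be sketched in pure Python (a simplified illustration of a windowed-sinc band-pass parametrization, not braindecode's actual implementation; the function and argument names are hypothetical):

```python
import math

def sinc_bandpass_kernel(low_hz, band_hz, kernel_len=33, sfreq=128.0,
                         min_freq=1.0):
    """Windowed-sinc band-pass kernel from two learnable scalars.

    Only `low_hz` (lower cutoff) and `band_hz` (bandwidth) would be
    trained; the constraints below keep
    min_freq <= low < high <= Nyquist at every training step.
    """
    low = min_freq + abs(low_hz)
    high = min(low + min_freq + abs(band_hz), sfreq / 2.0)

    kernel = []
    center = (kernel_len - 1) / 2.0
    for n in range(kernel_len):
        t = (n - center) / sfreq
        if t == 0.0:
            h = 2.0 * (high - low)  # limit of the sinc difference at t = 0
        else:
            # difference of two low-pass sinc responses = band-pass
            h = (math.sin(2 * math.pi * high * t)
                 - math.sin(2 * math.pi * low * t)) / (math.pi * t)
        # Hamming window to reduce spectral leakage of the truncated sinc
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (kernel_len - 1))
        kernel.append(h * w)
    return kernel

k = sinc_bandpass_kernel(low_hz=5.0, band_hz=4.0)
```

Because only two scalars per filter are learned instead of a full kernel, the temporal layer stays lightweight and each filter remains directly interpretable as a frequency band.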
- Parameters:
num_time_filters (int) – Number of temporal filters in the SincFilter layer.
time_filter_len (int) – Size of the temporal filters.
depth_multiplier (int) – Depth multiplier for spatial filtering.
activation (type[Module] | None) – Activation function to use. Default is nn.ELU().
drop_prob (float) – Dropout probability. Default is 0.5.
first_freq (float) – The starting frequency for the first Sinc filter. Default is 5.0.
min_freq (float) – Minimum frequency allowed for the low frequencies of the filters. Default is 1.0.
freq_stride (float) – Frequency stride for the Sinc filters. Controls the spacing between the filter frequencies. Default is 1.0.
padding (str) – Padding mode for convolution, either 'same' or 'valid'. Default is 'same'.
bandwidth (float) – Initial bandwidth for each Sinc filter. Default is 4.0.
pool_size (int) – Size of the pooling window for the average pooling layer. Default is 55.
pool_stride (int) – Stride of the pooling operation. Default is 12.
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int) – Number of time samples of the input window.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
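The signal-related parameters are partly redundant: for instance, n_times can be derived from input_window_seconds and sfreq. A hedged sketch of that kind of inference rule (illustrative only; the function name is hypothetical and this is not braindecode's actual code):

```python
def infer_n_times(n_times=None, input_window_seconds=None, sfreq=None):
    """Resolve n_times from whichever signal parameters were given."""
    if n_times is not None:
        return n_times
    if input_window_seconds is not None and sfreq is not None:
        # window length in seconds times samples per second
        return int(input_window_seconds * sfreq)
    raise ValueError(
        "n_times could not be inferred: pass n_times, or both "
        "input_window_seconds and sfreq."
    )

infer_n_times(input_window_seconds=4.0, sfreq=250.0)  # -> 1000
```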
Notes
This implementation is based on the implementation from [sincshallowcode].
References
[borra2020]Borra, D., Fantozzi, S., & Magosso, E. (2020). Interpretable and lightweight convolutional neural network for EEG decoding: Application to movement execution and imagination. Neural Networks, 129, 55-74.
[sincshallowcode]Sinc-ShallowNet re-implementation source code: https://github.com/marcellosicbaldi/SincNet-Tensorflow
Hugging Face Hub integration
When the optional huggingface_hub package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:

pip install braindecode[hub]
Pushing a model to the Hub:
from braindecode.models import SincShallowNet

# Train your model
model = SincShallowNet(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-sincshallownet-model",
    commit_message="Initial model upload",
)
Loading a model from the Hub:
from braindecode.models import SincShallowNet

# Load pretrained model
model = SincShallowNet.from_pretrained("username/my-sincshallownet-model")

# Load with a different number of outputs (head is rebuilt automatically)
model = SincShallowNet.from_pretrained("username/my-sincshallownet-model", n_outputs=4)
Extracting features and replacing the head:
import torch

x = torch.randn(1, model.n_chans, model.n_times)

# Extract encoder features (consistent dict across all models)
out = model(x, return_features=True)
features = out["features"]

# Replace the classification head
model.reset_head(n_outputs=10)
Saving and restoring full configuration:
import json

config = model.get_config()  # all __init__ params
with open("config.json", "w") as f:
    json.dump(config, f)

model2 = SincShallowNet.from_config(config)  # reconstruct (no weights)
All model parameters (both EEG-specific and model-specific such as dropout rates, activation functions, number of filters) are automatically saved to the Hub and restored when loading.
See Loading and Adapting Pretrained Foundation Models for a complete tutorial.
Methods