braindecode.models.SyncNet#

class braindecode.models.SyncNet(n_chans=None, n_times=None, n_outputs=None, chs_info=None, input_window_seconds=None, sfreq=None, num_filters=1, filter_width=40, pool_size=40, activation=<class 'torch.nn.modules.activation.ReLU'>, ampli_init_values=(-0.05, 0.05), omega_init_values=(0.0, 1.0), beta_init_values=(0.0, 0.05), phase_init_values=(0.0, 0.05))[source]#

Synchronization Network (SyncNet) from Li et al. (2017) [Li2017].

Model category: Interpretability

Figure: SyncNet architecture.

SyncNet uses parameterized 1-dimensional convolutional filters inspired by the Morlet wavelet to extract features from EEG signals. The filters are dynamically generated based on learnable parameters that control the oscillation and decay characteristics.

The filter for channel c and filter k is defined as:

\[f_c^{(k)}(\tau) = amplitude_c^{(k)} \cos(\omega^{(k)} \tau + \phi_c^{(k)}) \exp(-\beta^{(k)} \tau^2)\]

where:
  • \(amplitude_c^{(k)}\) is the amplitude parameter (channel-specific).

  • \(\omega^{(k)}\) is the frequency parameter (shared across channels).

  • \(\phi_c^{(k)}\) is the phase shift (channel-specific).

  • \(\beta^{(k)}\) is the decay parameter (shared across channels).

  • \(\tau\) is the time index.
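The filter definition above can be sketched directly in numpy. This is an illustrative construction of a single SyncNet filter from the equation, not the library's exact implementation; in particular, centering \(\tau\) around zero over the filter width is an assumption here, and the parameter values are arbitrary.

```python
import numpy as np

def syncnet_filter(amplitude, omega, phi, beta, filter_width=40):
    """Build one SyncNet filter:
    f(tau) = amplitude * cos(omega * tau + phi) * exp(-beta * tau^2).

    tau is assumed to be centered around zero over the filter width
    (an illustrative choice; the library may index tau differently).
    """
    tau = np.arange(filter_width) - filter_width // 2
    return amplitude * np.cos(omega * tau + phi) * np.exp(-beta * tau**2)

# Example with values inside the default initialization ranges
filt = syncnet_filter(amplitude=0.05, omega=0.5, phi=0.0, beta=0.01)
print(filt.shape)  # (40,)
```

The Gaussian envelope `exp(-beta * tau**2)` localizes the cosine in time, which is what gives the filter its Morlet-wavelet character: `omega` sets the oscillation frequency and `beta` how quickly the filter decays away from its center.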

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_times (int) – Number of time samples of the input window.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • input_window_seconds (float) – Length of the input window in seconds.

  • sfreq (float) – Sampling frequency of the EEG recordings.

  • num_filters (int, optional) – Number of filters in the convolutional layer. Default is 1.

  • filter_width (int, optional) – Width of the convolutional filters. Default is 40.

  • pool_size (int, optional) – Size of the pooling window. Default is 40.

  • activation (type[Module]) – Activation function to apply after pooling. Default is nn.ReLU.

  • ampli_init_values (tuple[float, float]) – The initialization range for the amplitude parameters, drawn from a uniform distribution. Default is (-0.05, 0.05).

  • omega_init_values (tuple[float, float]) – The initialization range for the omega parameters, drawn from a uniform distribution. Default is (0.0, 1.0).

  • beta_init_values (tuple[float, float]) – The initialization range for the beta parameters, drawn from a uniform distribution. Default is (0.0, 0.05).

  • phase_init_values (tuple[float, float]) – The initialization range for the phase parameters, drawn from a normal distribution. Default is (0.0, 0.05).

Raises:

ValueError – If some input signal-related parameters are not specified and cannot be inferred.

Notes

This implementation is not guaranteed to be correct; it has not been checked by the original authors. The modifications are based on code derived from [CodeICASSP2025].

References

[Li2017]

Li, Y., Dzirasa, K., Carin, L., & Carlson, D. E. (2017). Targeting EEG/LFP synchrony with neural nets. Advances in neural information processing systems, 30.

[CodeICASSP2025]

Code from Baselines for EEG-Music Emotion Recognition Grand Challenge at ICASSP 2025. https://github.com/SalvoCalcagno/eeg-music-challenge-icassp-2025-baselines

Hugging Face Hub integration

When the optional huggingface_hub package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:

pip install braindecode[hub]

Pushing a model to the Hub:

from braindecode.models import SyncNet

# Train your model
model = SyncNet(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-syncnet-model",
    commit_message="Initial model upload",
)

Loading a model from the Hub:

from braindecode.models import SyncNet

# Load pretrained model
model = SyncNet.from_pretrained("username/my-syncnet-model")

# Load with a different number of outputs (head is rebuilt automatically)
model = SyncNet.from_pretrained("username/my-syncnet-model", n_outputs=4)

Extracting features and replacing the head:

import torch

x = torch.randn(1, model.n_chans, model.n_times)
# Extract encoder features (consistent dict across all models)
out = model(x, return_features=True)
features = out["features"]

# Replace the classification head
model.reset_head(n_outputs=10)

Saving and restoring full configuration:

import json

config = model.get_config()            # all __init__ params
with open("config.json", "w") as f:
    json.dump(config, f)

model2 = SyncNet.from_config(config)    # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific such as dropout rates, activation functions, number of filters) are automatically saved to the Hub and restored when loading.

See Loading and Adapting Pretrained Foundation Models for a complete tutorial.

Methods

forward(x)[source]#

Forward pass of the SyncNet model.

Parameters:

x (torch.Tensor) – Input tensor of shape (batch_size, n_chans, n_times)

Returns:

out – Output tensor of shape (batch_size, n_outputs).

Return type:

torch.Tensor
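The end-to-end forward computation can be sketched in numpy from the pieces described above: channel-specific filters, a channel-summed convolution, max pooling, the activation, and a linear head. This is a rough sketch under stated assumptions ("same" padding, non-overlapping max pooling, a dense head over the pooled features); the library's exact padding, pooling, and head layout may differ, and all weights here are random placeholders rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_chans, n_times, n_outputs = 22, 1000, 4
num_filters, filter_width, pool_size = 1, 40, 40

# Sample filter parameters from the documented initialization ranges
amplitude = rng.uniform(-0.05, 0.05, size=(n_chans, num_filters))
phi = rng.normal(0.0, 0.05, size=(n_chans, num_filters))
omega = rng.uniform(0.0, 1.0, size=num_filters)
beta = rng.uniform(0.0, 0.05, size=num_filters)

# Build filters of shape (n_chans, num_filters, filter_width)
tau = np.arange(filter_width) - filter_width // 2
filters = (amplitude[..., None]
           * np.cos(omega[None, :, None] * tau + phi[..., None])
           * np.exp(-beta[None, :, None] * tau**2))

x = rng.standard_normal((n_chans, n_times))  # one input window

# Convolve each channel with its filter and sum over channels
# ("same" padding assumed here)
conv = np.zeros((num_filters, n_times))
for k in range(num_filters):
    for c in range(n_chans):
        conv[k] += np.convolve(x[c], filters[c, k], mode="same")

# Max-pool over non-overlapping windows, then apply ReLU
pooled = conv[:, : n_times // pool_size * pool_size]
pooled = pooled.reshape(num_filters, -1, pool_size).max(axis=2)
feats = np.maximum(pooled, 0).ravel()

# Linear classification head (random weights, for illustration only)
W = rng.standard_normal((n_outputs, feats.size))
logits = W @ feats
print(logits.shape)  # (4,)
```

With the defaults used here, pooling reduces 1000 time samples to 25 features per filter, so the head maps a 25-dimensional feature vector to the 4 output logits.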