braindecode.models.SyncNet#
- class braindecode.models.SyncNet(n_chans=None, n_times=None, n_outputs=None, chs_info=None, input_window_seconds=None, sfreq=None, num_filters=1, filter_width=40, pool_size=40, activation: ~torch.nn.modules.module.Module = <class 'torch.nn.modules.activation.ReLU'>, ampli_init_values: tuple[float, float] = (-0.05, 0.05), omega_init_values: tuple[float, float] = (0.0, 1.0), beta_init_values: tuple[float, float] = (0.0, 0.05), phase_init_values: tuple[float, float] = (0.0, 0.05))[source]#
Synchronization Network (SyncNet) from Li, Y. et al. (2017) [Li2017].
SyncNet uses parameterized 1-dimensional convolutional filters inspired by the Morlet wavelet to extract features from EEG signals. The filters are dynamically generated based on learnable parameters that control the oscillation and decay characteristics.
The filter for channel \(c\) and filter \(k\) is defined as:
\[f_c^{(k)}(\tau) = amplitude_c^{(k)} \cos(\omega^{(k)} \tau + \phi_c^{(k)}) \exp(-\beta^{(k)} \tau^2)\]
where:
- \(amplitude_c^{(k)}\) is the amplitude parameter (channel-specific).
- \(\omega^{(k)}\) is the frequency parameter (shared across channels).
- \(\phi_c^{(k)}\) is the phase shift (channel-specific).
- \(\beta^{(k)}\) is the decay parameter (shared across channels).
- \(\tau\) is the time index.
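As an illustration only (not part of the original documentation), the sketch below evaluates this equation for one channel and one filter with made-up parameter values; how \(\tau\) is indexed inside SyncNet is an assumption here, and this does not reproduce the model's internal filter construction.

```python
import torch

# Made-up example values for one channel c and one filter k.
amplitude = torch.tensor(0.03)   # amplitude_c^{(k)}, channel-specific
omega = torch.tensor(0.5)        # omega^{(k)}, shared across channels
phi = torch.tensor(0.01)         # phi_c^{(k)}, channel-specific
beta = torch.tensor(0.02)        # beta^{(k)}, shared across channels

# Time indices tau over the filter width (centring around zero is an assumption).
filter_width = 40
tau = torch.arange(filter_width, dtype=torch.float32) - filter_width // 2

# f_c^{(k)}(tau) = amplitude * cos(omega * tau + phi) * exp(-beta * tau**2)
f = amplitude * torch.cos(omega * tau + phi) * torch.exp(-beta * tau ** 2)
print(f.shape)  # torch.Size([40]): an oscillation damped by a Gaussian envelope
```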
- Parameters:
n_chans (int) – Number of EEG channels.
n_times (int) – Number of time samples of the input window.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
num_filters (int, optional) – Number of filters in the convolutional layer. Default is 1.
filter_width (int, optional) – Width of the convolutional filters. Default is 40.
pool_size (int, optional) – Size of the pooling window. Default is 40.
activation (nn.Module, optional) – Activation function to apply after pooling. Default is nn.ReLU.
ampli_init_values (tuple of float, optional) – Initialization range for the amplitude parameters, drawn from a uniform distribution. Default is (-0.05, 0.05).
omega_init_values (tuple of float, optional) – Initialization range for the omega parameters, drawn from a uniform distribution. Default is (0, 1).
beta_init_values (tuple of float, optional) – Initialization range for the beta parameters, drawn from a uniform distribution. Default is (0, 0.05).
phase_init_values (tuple of float, optional) – Initialization values for the phase parameters, drawn from a normal distribution. Default is (0, 0.05).
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
This implementation is not guaranteed to be correct! It has not been checked by the original authors. The modifications are based on code derived from [CodeICASSP2025].
References
[Li2017]Li, Y., Dzirasa, K., Carin, L., & Carlson, D. E. (2017). Targeting EEG/LFP synchrony with neural nets. Advances in neural information processing systems, 30.
[CodeICASSP2025]Code from Baselines for EEG-Music Emotion Recognition Grand Challenge at ICASSP 2025. SalvoCalcagno/eeg-music-challenge-icassp-2025-baselines
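As a usage illustration (not taken from the original documentation), the following sketch instantiates the model with the constructor arguments documented above; the channel count, window length, sampling frequency, and class count are arbitrary example values.

```python
from torch import nn
from braindecode.models import SyncNet

# Arbitrary example dimensions: 32 EEG channels, 2-second windows at 128 Hz, 4 classes.
model = SyncNet(
    n_chans=32,
    n_times=256,
    n_outputs=4,
    sfreq=128.0,
    num_filters=1,       # default
    filter_width=40,     # default
    pool_size=40,        # default
    activation=nn.ReLU,  # default
)
```

The model can then be called on a batch of shape (batch_size, n_chans, n_times), as described under forward() below.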
Methods
- forward(x)[source]#
Forward pass of the SyncNet model.
- Parameters:
x (torch.Tensor) – Input tensor of shape (batch_size, n_chans, n_times)
- Returns:
out – Output tensor of shape (batch_size, n_outputs).
- Return type:
torch.Tensor
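A minimal sketch of the expected input and output shapes, reusing the illustrative `model` built in the sketch above (batch size and shapes follow the forward() documentation):

```python
import torch

# Batch of 8 windows matching the model above: 32 channels, 256 time samples.
x = torch.randn(8, 32, 256)
out = model(x)
print(out.shape)  # expected: torch.Size([8, 4]), i.e. (batch_size, n_outputs)
```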