braindecode.models.SCCNet

class braindecode.models.SCCNet(n_chans=None, n_outputs=None, n_times=None, chs_info=None, input_window_seconds=None, sfreq=None, n_spatial_filters=22, n_spatial_filters_smooth=20, drop_prob=0.5, activation=<class 'braindecode.modules.activation.LogActivation'>, batch_norm_momentum=0.1)

SCCNet from Wei, C. S. et al. (2019) [sccnet].

Spatial component-wise convolutional network (SCCNet) for motor-imagery EEG classification.

Architectural Overview

SCCNet is a spatial-first convolutional network whose temporal kernel lengths are specified in seconds, so that the learned filters correspond to neurophysiologically meaningful time windows. The model comprises four stages:

  1. Spatial Component Analysis: performs convolutional spatial filtering across all EEG channels to extract spatial components, effectively reducing the channel dimension.

  2. Spatio-Temporal Filtering: applies convolution across the spatial components and the temporal domain to capture spatio-temporal patterns.

  3. Temporal Smoothing (Pooling): uses average pooling over time to smooth the features and reduce the temporal dimension, focusing on longer-term patterns.

  4. Classification: flattens the features and applies a fully connected layer.
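
The end-to-end effect of these four stages can be checked with a quick forward pass. This is a hedged sketch: the batch size, channel count, class count, and the 4 s / 125 Hz window below are illustrative choices, not values required by the model.

    import torch
    from braindecode.models import SCCNet

    # Illustrative dimensions: 22-channel motor-imagery EEG, 4 classes,
    # 4 s windows at 125 Hz (n_times = 500).
    model = SCCNet(n_chans=22, n_outputs=4, n_times=500, sfreq=125.0)

    x = torch.randn(8, 22, 500)   # (batch, n_chans, n_times)
    y = model(x)                  # class scores, shape (8, 4)
    print(y.shape)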

Macro Components

  • SCCNet.spatial_conv (spatial component analysis)

    • Operations. Conv2d with kernel (n_chans, N_t) and stride (1, 1) on an input reshaped to (B, 1, n_chans, T); the typical choice N_t=1 yields a pure across-channel projection (a montage-wide linear spatial filter). Zero padding preserves the time dimension and is followed by BatchNorm2d; the output holds N_u component signals shaped (B, 1, N_u, T) after a permute step.

    • Interpretability/robustness. Mimics CSP-like spatial filtering: each learned filter is a channel-weighted component, easing inspection and reducing channel noise.

  • SCCNet.spatial_filt_conv (spatio-temporal filtering)

    • Role. Learns frequency-selective energy features and inter-component interactions within a 0.1 s context (beta/alpha cycle scale).

  • SCCNet.temporal_smoothing (aggregation + readout)

    • Operations. AvgPool2d with kernel (1, 62) (≈ 0.5 s) for temporal smoothing and downsampling, followed by Flatten and a Linear layer mapping to n_outputs.
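
Putting these macro components together, the following is a simplified PyTorch sketch of the pipeline described above; the N_t=1 choice, the 12-sample padded second-stage kernel, and the pooling stride of 12 samples are assumptions for illustration, not the exact braindecode implementation.

    import torch
    from torch import nn

    class SCCNetSketch(nn.Module):
        """Simplified sketch of SCCNet's stages (not the braindecode code)."""

        def __init__(self, n_chans=22, n_outputs=4, n_times=500,
                     n_u=22, n_c=20, drop_prob=0.5):
            super().__init__()
            # 1. Spatial component analysis: montage-wide projection (N_t = 1).
            self.spatial_conv = nn.Conv2d(1, n_u, kernel_size=(n_chans, 1))
            self.bn1 = nn.BatchNorm2d(n_u)
            # 2. Spatio-temporal filtering over a 12-sample (~0.1 s) window;
            #    zero padding of 6 keeps the time length roughly constant.
            self.spatial_filt_conv = nn.Conv2d(1, n_c, kernel_size=(n_u, 12),
                                               padding=(0, 6))
            self.bn2 = nn.BatchNorm2d(n_c)
            self.drop = nn.Dropout(drop_prob)
            # 3. Temporal smoothing: ~0.5 s average pooling (stride assumed).
            self.pool = nn.AvgPool2d((1, 62), stride=(1, 12))
            # 4. Classification head; feature count follows from the shapes.
            t_after_conv = n_times + 1                # T + 2*6 - 12 + 1
            t_pooled = (t_after_conv - 62) // 12 + 1
            self.classifier = nn.Linear(n_c * t_pooled, n_outputs)

        def forward(self, x):
            x = x.unsqueeze(1)                        # (B, 1, n_chans, T)
            x = self.bn1(self.spatial_conv(x))        # (B, N_u, 1, T)
            x = x.permute(0, 2, 1, 3)                 # (B, 1, N_u, T)
            x = self.bn2(self.spatial_filt_conv(x))   # (B, N_c, 1, T + 1)
            x = self.drop(x)
            x = torch.log(self.pool(x ** 2) + 1e-6)   # square/log ~ band power
            return self.classifier(x.flatten(start_dim=1))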

Convolutional Details

  • Temporal (where time-domain patterns are learned).

    The second block’s kernel length is fixed to 12 samples (≈ 100 ms) and slides with stride 1; average pooling (1, 62) (≈ 500 ms) integrates power over longer spans. These choices bake in short-cycle detection followed by half-second trend smoothing.

  • Spatial (how electrodes are processed).

    The first block’s kernel spans all electrodes (n_chans, N_t). With N_t=1, it reduces to a montage-wide linear projection, mapping channels → N_u components. The second block mixes across components via kernel height N_u.

  • Spectral (how frequency information is captured).

    No explicit transform is used; learned temporal kernels serve as bandpass-like filters, and the square/log power nonlinearity plus 0.5 s averaging approximate band-power estimation (ERD/ERS-style features).
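
To make the band-power analogy concrete, here is a small hedged sketch (the 10 Hz target frequency, 12-tap kernel, and 62-sample window mirror the sizes above but are otherwise illustrative): a short temporal kernel acts as a bandpass-like filter, and squaring plus ≈ 0.5 s averaging yields a log band-power estimate.

    import torch
    import torch.nn.functional as F

    sfreq = 125.0
    t = torch.arange(0, 4.0, 1.0 / sfreq)             # 4 s at 125 Hz
    sig = torch.sin(2 * torch.pi * 10.0 * t)          # 10 Hz alpha-band signal

    kernel = torch.sin(2 * torch.pi * 10.0 * t[:12])  # 12-tap, 10 Hz-tuned filter
    filtered = F.conv1d(sig.view(1, 1, -1), kernel.view(1, 1, -1))
    log_power = torch.log(F.avg_pool1d(filtered ** 2, kernel_size=62) + 1e-6)
    print(log_power.shape)                            # ~0.5 s log-power features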

Attention / Sequential Modules

This model contains no attention and no recurrent units.

Additional Mechanisms

  • BatchNorm2d and zero-padding are applied to both convolutions; L2 weight decay was used in the original paper; dropout p=0.5 combats overfitting.

  • Contrast with other compact networks. EEGNet performs a temporal depthwise convolution followed by a depthwise spatial convolution (separable design), learning temporal filters first. SCCNet inverts this order: it applies a full spatial projection first (CSP-like), then a short spatio-temporal convolution with an explicit 0.1 s kernel, followed by a power-like nonlinearity and longer temporal averaging. EEGNet's ELU and separable design favor parameter efficiency; SCCNet's second-scale kernels and square/log nonlinearity emphasize interpretable band-power features.

Usage and Configuration

  • Training configuration: follow the original authors' setup in [sccnet]; a reference implementation is available in XBrainLab [sccnetcode]. A minimal, hedged training sketch follows this list.

  • Match the window length so that T is comfortably larger than the pooling length (e.g., > 1.5-2 s for MI).

  • Start with standard MI augmentations (channel dropout/shuffle, time reverse) and tune n_spatial_filters before deeper changes.
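
A minimal training-step sketch follows; it assumes the model returns raw class scores suitable for cross-entropy, and the learning-rate and weight-decay values are illustrative rather than the authors' exact settings (the paper's L2 weight decay maps onto the optimizer's weight_decay argument).

    import torch
    from braindecode.models import SCCNet

    model = SCCNet(n_chans=22, n_outputs=4, n_times=500, sfreq=125.0)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                                 weight_decay=1e-4)  # L2 regularization
    loss_fn = torch.nn.CrossEntropyLoss()            # assumes raw class scores

    x = torch.randn(16, 22, 500)                     # stand-in training batch
    y = torch.randint(0, 4, (16,))

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()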

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • n_times (int) – Number of time samples of the input window.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • input_window_seconds (float) – Length of the input window in seconds.

  • sfreq (float) – Sampling frequency of the EEG recordings.

  • n_spatial_filters (int, optional) – Number of spatial filters in the first convolutional layer (the variable N_u from the original paper). Default is 22.

  • n_spatial_filters_smooth (int, optional) – Number of filters in the second convolutional layer. Default is 20.

  • drop_prob (float, optional) – Dropout probability. Default is 0.5.

  • activation (nn.Module, optional) – Activation function applied after the second convolutional layer. Default is the logarithmic activation (LogActivation).

  • batch_norm_momentum (float, optional) – Momentum for the BatchNorm2d layers. Default is 0.1.

Raises:

ValueError – If some input signal-related parameters are not specified and cannot be inferred.

Notes

If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
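
For example, n_times can be derived from input_window_seconds and sfreq; the 4 s / 125 Hz values below are illustrative.

    from braindecode.models import SCCNet

    # n_times is inferred as input_window_seconds * sfreq = 4.0 * 125 = 500.
    model = SCCNet(n_chans=22, n_outputs=4,
                   input_window_seconds=4.0, sfreq=125.0)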

References

[sccnet]

Wei, C. S., Koike-Akino, T., & Wang, Y. (2019, March). Spatial component-wise convolutional network (SCCNet) for motor-imagery EEG classification. In 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER) (pp. 328-331). IEEE.

[sccnetcode]

Hsieh, C. Y., Chou, J. L., Chang, Y. H., & Wei, C. S. XBrainLab: An Open-Source Software for Explainable Artificial Intelligence-Based EEG Analysis. In NeurIPS 2023 AI for Science Workshop.

Methods

forward(x)

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

x (Tensor) – Input EEG window of shape (batch_size, n_chans, n_times).

Return type:

Tensor