braindecode.models.EEGNeX

class braindecode.models.EEGNeX(n_chans=None, n_outputs=None, n_times=None, chs_info=None, input_window_seconds=None, sfreq=None, activation=<class 'torch.nn.modules.activation.ELU'>, depth_multiplier=2, filter_1=8, filter_2=32, drop_prob=0.5, kernel_block_1_2=64, kernel_block_4=16, dilation_block_4=2, avg_pool_block4=4, kernel_block_5=16, dilation_block_5=4, avg_pool_block5=8, max_norm_conv=1.0, max_norm_linear=0.25)

EEGNeX model from Chen et al. (2024) [eegnex].

Model category: Convolution.

[Figure: EEGNeX architecture]

Architectural Overview

EEGNeX is a purely convolutional architecture that refines the EEGNet-style stem and deepens the temporal stack with dilated temporal convolutions. The end-to-end flow is:

  • (i) Block-1/2: two temporal convolutions (1 x L) with BN refine a learned FIR-like temporal filter bank (no pooling yet);

  • (ii) Block-3: depthwise spatial convolution across electrodes (n_chans x 1) with max-norm constraint, followed by ELU → AvgPool (time) → Dropout;

  • (iii) Block-4/5: two additional temporal convolutions with increasing dilation to expand the receptive field; the last block applies ELU → AvgPool → Dropout → Flatten;

  • (iv) Classifier: a max-norm–constrained linear layer.
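
For orientation, here is a minimal PyTorch sketch of this five-block flow. It is an illustrative reconstruction from the description above (layer ordering and shapes simplified), not the exact braindecode implementation; see the class reference below for the real defaults.

    import torch
    from torch import nn

    n_chans, n_times = 22, 1000      # illustrative input geometry
    f1, f2, depth = 8, 32, 2         # filter_1, filter_2, depth_multiplier

    blocks = nn.Sequential(
        # Blocks 1-2: temporal (1 x L) convolutions with BN, no pooling yet
        nn.Conv2d(1, f1, (1, 64), padding="same", bias=False),
        nn.BatchNorm2d(f1),
        nn.Conv2d(f1, f2, (1, 64), padding="same", bias=False),
        nn.BatchNorm2d(f2),
        # Block 3: depthwise spatial conv across the full montage,
        # then ELU -> AvgPool (time) -> Dropout
        nn.Conv2d(f2, f2 * depth, (n_chans, 1), groups=f2, bias=False),
        nn.BatchNorm2d(f2 * depth),
        nn.ELU(),
        nn.AvgPool2d((1, 4)),
        nn.Dropout(0.5),
        # Blocks 4-5: dilated temporal convs widen the receptive field;
        # the last block ends with ELU -> AvgPool -> Dropout -> Flatten
        nn.Conv2d(f2 * depth, f2, (1, 16), dilation=(1, 2), padding="same", bias=False),
        nn.BatchNorm2d(f2),
        nn.Conv2d(f2, f1, (1, 16), dilation=(1, 4), padding="same", bias=False),
        nn.BatchNorm2d(f1),
        nn.ELU(),
        nn.AvgPool2d((1, 8)),
        nn.Dropout(0.5),
        nn.Flatten(),
    )

    x = torch.randn(2, 1, n_chans, n_times)   # (batch, 1, n_chans, n_times)
    print(blocks(x).shape)                    # e.g. torch.Size([2, 248])

A classifier as in (iv) would be an nn.Linear over the flattened features, with its weight norm clamped after each optimizer step.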

The published work positions EEGNeX as a compact, conv-only alternative that consistently outperforms prior baselines across MOABB-style benchmarks, with the popular “EEGNeX-8,32” shorthand denoting 8 temporal filters and kernel length 32.

Macro Components

  • Depthwise spatial block (Block 3) – Role: learns per-filter spatial patterns over the full montage while temporal pooling stabilizes and compresses features; max-norm encourages well-behaved spatial weights, in line with EEGNet practice.

  • Dilated temporal stack (Blocks 4-5) – Role: expands the temporal receptive field efficiently to capture rhythms and long-range context after condensation.

Convolutional Details

  • Temporal (where time-domain patterns are learned). Blocks 1-2 learn the primary filter bank (oscillations/transients), while Blocks 4-5 use dilation to integrate over longer horizons without extra pooling. The final AvgPool in Block-5 sets the output temporal resolution and helps suppress noise.

  • Spatial (how electrodes are processed). A single depthwise spatial conv (Block-3) spans the entire electrode set (kernel (n_chans, 1)), producing per-temporal-filter topographies; no cross-filter mixing occurs at this stage, aiding interpretability.

  • Spectral (how frequency content is captured). Frequency selectivity emerges from the learned temporal kernels; dilation broadens effective bandwidth coverage by composing multiple scales.
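
Since each convolution adds (kernel - 1) x dilation samples of context, scaled by the product of upstream strides, the composite receptive field of the stack can be checked with a few lines of arithmetic, using the defaults from the class signature above:

    # Receptive field of stacked temporal convolutions/pools: each layer
    # adds (kernel - 1) * dilation * (product of earlier strides) samples.
    layers = [
        # (kernel, dilation, stride)
        (64, 1, 1),  # block 1 temporal conv (kernel_block_1_2)
        (64, 1, 1),  # block 2 temporal conv
        (4,  1, 4),  # first temporal avg-pool (1, 4)
        (16, 2, 1),  # block 4 dilated conv
        (16, 4, 1),  # block 5 dilated conv
        (8,  1, 8),  # final temporal avg-pool (1, 8)
    ]

    rf, jump = 1, 1
    for kernel, dilation, stride in layers:
        rf += (kernel - 1) * dilation * jump
        jump *= stride
    print(rf)  # 518 input samples of context per output feature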

Additional Mechanisms

  • EEGNeX-8,32 naming. “8,32” indicates 8 temporal filters and kernel length 32, reflecting the paper’s ablation path from EEGNet-8,2 toward thicker temporal kernels and a deeper conv stack.

  • Max-norm constraints. Spatial (Block-3) and final linear layers use max-norm regularization—standard in EEG CNNs—to reduce overfitting and encourage stable spatial patterns.
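
Max-norm is typically enforced by renormalizing weights in place whenever a unit's norm exceeds the cap, e.g. after each optimizer step. A minimal sketch using torch.renorm (illustrative; not braindecode's exact hook):

    import torch
    from torch import nn

    def apply_max_norm(module: nn.Module, max_norm: float) -> None:
        # Clamp the L2 norm of each output unit's weight vector to max_norm.
        with torch.no_grad():
            w = module.weight.data
            flat = w.flatten(1)  # one row per output filter/unit
            module.weight.data = torch.renorm(flat, p=2, dim=0,
                                              maxnorm=max_norm).view_as(w)

    classifier = nn.Linear(248, 4)
    apply_max_norm(classifier, max_norm=0.25)  # matches max_norm_linear default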

Usage and Configuration

  • Kernel schedule. Start with the canonical EEGNeX-8,32 (filter_1=8, kernel_block_1_2=32) and keep Block-3 depth multiplier modest (e.g., 2) to match the paper’s “pure conv” profile.

  • Pooling vs. dilation. Use pooling in Blocks 3 and 5 to control compute and variance; increase dilations (Blocks 4-5) to widen temporal context when windows are short.

  • Regularization. Combine dropout (Blocks 3 & 5) with max-norm on spatial and classifier layers; prefer ELU activations for stable training on small EEG datasets.

  • The braindecode implementation follows the paper’s conv-only design with five blocks and reproduces the depthwise spatial step and dilated temporal stack. See the class reference for exact kernel sizes, dilations, and pooling defaults. You can check the original implementation at [EEGNexCode].
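
For example, instantiating the canonical EEGNeX-8,32 configuration on an illustrative 22-channel, 4-class setup:

    import torch
    from braindecode.models import EEGNeX

    model = EEGNeX(
        n_chans=22,           # illustrative montage size
        n_outputs=4,          # e.g. four motor-imagery classes
        n_times=1000,         # window length in samples
        filter_1=8,           # the "8" in EEGNeX-8,32
        kernel_block_1_2=32,  # the "32" in EEGNeX-8,32
    )
    x = torch.randn(16, 22, 1000)  # (batch_size, n_chans, n_times)
    print(model(x).shape)          # torch.Size([16, 4])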

Added in version 1.1.

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • n_times (int) – Number of time samples of the input window.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • input_window_seconds (float) – Length of the input window in seconds.

  • sfreq (float) – Sampling frequency of the EEG recordings.

  • activation (nn.Module, optional) – Activation function to use. Default is nn.ELU.

  • depth_multiplier (int, optional) – Depth multiplier for the depthwise convolution. Default is 2.

  • filter_1 (int, optional) – Number of filters in the first convolutional layer. Default is 8.

  • filter_2 (int, optional) – Number of filters in the second convolutional layer. Default is 32.

  • drop_prob (float, optional) – Dropout rate. Default is 0.5.

  • kernel_block_1_2 (int, optional) – Kernel length along the time axis of the temporal convolutions in blocks 1 and 2. Default is 64.

  • kernel_block_4 (int, optional) – Kernel length along the time axis for block 4. Default is 16.

  • dilation_block_4 (int, optional) – Dilation rate along the time axis for block 4. Default is 2.

  • avg_pool_block4 (int, optional) – Temporal average-pooling size for block 4. Default is 4.

  • kernel_block_5 (int, optional) – Kernel length along the time axis for block 5. Default is 16.

  • dilation_block_5 (int, optional) – Dilation rate along the time axis for block 5. Default is 4.

  • avg_pool_block5 (int, optional) – Temporal average-pooling size for block 5. Default is 8.

  • max_norm_conv (float, optional) – Maximum norm constraint applied to the depthwise spatial convolution weights (block 3). Default is 1.0.

  • max_norm_linear (float, optional) – Maximum norm constraint applied to the final linear classification layer. Default is 0.25.

Raises:

ValueError – If some input signal-related parameters are not specified and cannot be inferred.

Notes

If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
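
For example, n_times may be omitted when input_window_seconds and sfreq together determine it:

    from braindecode.models import EEGNeX

    # n_times is inferred as input_window_seconds * sfreq = 1000 samples
    model = EEGNeX(n_chans=22, n_outputs=4,
                   input_window_seconds=4.0, sfreq=250)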

References

[eegnex]

Chen, X., Teng, X., Chen, H., Pan, Y., & Geyer, P. (2024). Toward reliable signals decoding for electroencephalogram: A benchmark study to EEGNeX. Biomedical Signal Processing and Control, 87, 105475.

[EEGNexCode]

Chen, X., Teng, X., Chen, H., Pan, Y., & Geyer, P. (2024). Toward reliable signals decoding for electroencephalogram: A benchmark study to EEGNeX [Source code]. GitHub repository: chenxiachan/EEGNeX (https://github.com/chenxiachan/EEGNeX).

Methods

forward(x)

Forward pass of the EEGNeX model.

Parameters:

x (torch.Tensor) – Input tensor of shape (batch_size, n_chans, n_times).

Returns:

Output tensor of shape (batch_size, n_outputs).

Return type:

torch.Tensor