braindecode.models.FBLightConvNet#
- class braindecode.models.FBLightConvNet(n_chans=None, n_outputs=None, chs_info=None, n_times=None, input_window_seconds=None, sfreq=None, n_bands=9, n_filters_spat: int = 32, n_dim: int = 3, stride_factor: int = 4, win_len: int = 250, heads: int = 8, weight_softmax: bool = True, bias: bool = False, activation: ~torch.nn.modules.module.Module = <class 'torch.nn.modules.activation.ELU'>, verbose: bool = False, filter_parameters: dict | None = None)[source]#
LightConvNet from Ma, X. et al. (2023) [lightconvnet].
A lightweight convolutional neural network incorporating temporal dependency learning and attention mechanisms. The architecture is designed to efficiently capture spatial and temporal features through specialized convolutional layers and multi-head attention.
The network architecture consists of four main modules:
- Spatial and Spectral Information Learning:
Applies filterbank and spatial convolutions. This module is followed by batch normalization and an activation function to enhance feature representation.
- Temporal Segmentation and Feature Extraction:
Divides the processed data into non-overlapping temporal windows. Within each window, a variance-based layer extracts discriminative features, which are then log-transformed to stabilize variance before being passed to the attention module (a standalone sketch of this step follows the list).
- Temporal Attention Module:
Utilizes a multi-head attention mechanism with depthwise separable convolutions to capture dependencies across different temporal segments. The attention weights are normalized using softmax and aggregated to form a comprehensive temporal representation (see the second sketch after this list).
- Final Layer:
Flattens the aggregated features and passes them through a linear layer to generate the final output predictions.
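The second module's windowed variance features can be illustrated with a short standalone sketch (not the library's internal code; the tensor shapes and the epsilon term are assumptions for illustration):

```python
import torch

# Sketch of the segmentation step: variance over non-overlapping
# temporal windows, then a log transform to stabilize the scale.
batch_size, n_features, n_times = 2, 32, 1000
win_len = 250  # samples per temporal window
x = torch.randn(batch_size, n_features, n_times)

# Split the time axis into non-overlapping windows of win_len samples.
n_windows = n_times // win_len
windows = x[..., : n_windows * win_len].reshape(
    batch_size, n_features, n_windows, win_len
)

features = torch.log(windows.var(dim=-1) + 1e-6)  # eps avoids log(0)
print(features.shape)  # torch.Size([2, 32, 4])
```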
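Similarly, the third module's softmax-normalized depthwise ("light") convolution over temporal segments can be sketched in isolation; the head count, kernel size, and channel layout below are assumptions for illustration, not the library's internal implementation:

```python
import torch
import torch.nn.functional as F

heads, kernel_size = 8, 3
batch_size, channels, n_segments = 2, 128, 4  # channels assumed divisible by heads
segments = torch.randn(batch_size, channels, n_segments)

# One kernel per head, softmax-normalized over the kernel dimension,
# then shared across the channels belonging to each head.
weight = F.softmax(torch.randn(heads, 1, kernel_size), dim=-1)
weight = weight.repeat(channels // heads, 1, 1)  # (channels, 1, kernel_size)

# Depthwise convolution across segments (groups == channels).
out = F.conv1d(segments, weight, padding=kernel_size // 2, groups=channels)
print(out.shape)  # torch.Size([2, 128, 4])
```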
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
n_bands (int or None or list of tuple of int, default=9) – Number of frequency bands or a list of frequency band tuples. If a list of tuples is provided, each tuple defines the lower and upper bounds of a frequency band.
n_filters_spat (int, default=32) – Number of spatial filters in the depthwise convolutional layer.
n_dim (int, default=3) – Number of dimensions for the temporal reduction layer.
stride_factor (int, default=4) – Stride factor used for reshaping the temporal dimension.
win_len (int, default=250) – Length in samples of each non-overlapping temporal window used in the segmentation module.
heads (int, default=8) – Number of attention heads in the multi-head attention mechanism.
weight_softmax (bool, default=True) – If True, applies softmax to the attention weights.
bias (bool, default=False) – If True, includes a bias term in the convolutional layers.
activation (nn.Module, default=nn.ELU) – Activation function class to apply after convolutional layers.
verbose (bool, default=False) – If True, enables verbose output during filter creation using mne.
filter_parameters (dict | None, default=None) – Additional parameters passed to the FilterBankLayer.
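For example, a construction sketch passing explicit frequency bands as (low, high) tuples, per the n_bands description above; the band edges and other argument values here are illustrative, not recommended settings:

```python
from braindecode.models import FBLightConvNet

model = FBLightConvNet(
    n_chans=22,          # example channel count
    n_outputs=4,         # e.g. four motor-imagery classes
    n_times=1000,        # samples per input window
    sfreq=250,           # sampling frequency in Hz
    n_bands=[(4, 8), (8, 12), (12, 16), (16, 20)],  # explicit (low, high) bands in Hz
)
```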
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
Notes
This implementation is not guaranteed to be correct and has not been checked by the original authors; it is a braindecode adaptation of the PyTorch source code [lightconvnetcode].
References
[lightconvnet]Ma, X., Chen, W., Pei, Z., Liu, J., Huang, B., & Chen, J. (2023). A temporal dependency learning CNN with attention mechanism for MI-EEG decoding. IEEE Transactions on Neural Systems and Rehabilitation Engineering.
[lightconvnetcode]Link to source code: Ma-Xinzhi/LightConvNet
Methods
- forward(x: Tensor) → Tensor [source]#
Forward pass of the FBLightConvNet model.
- Parameters:
x (torch.Tensor) – Input tensor with shape (batch_size, n_chans, n_times).
- Returns:
Output tensor with shape (batch_size, n_outputs).
- Return type:
torch.Tensor
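A minimal forward-pass sketch with random data; the constructor arguments are illustrative example values:

```python
import torch
from braindecode.models import FBLightConvNet

model = FBLightConvNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)

x = torch.randn(8, 22, 1000)  # (batch_size, n_chans, n_times)
with torch.no_grad():
    out = model(x)
print(out.shape)  # expected: torch.Size([8, 4]), i.e. (batch_size, n_outputs)
```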