braindecode.models.SleepStagerEldele2021
- class braindecode.models.SleepStagerEldele2021(sfreq, n_tce=2, d_model=80, d_ff=120, n_attn_heads=5, dropout=0.1, input_size_s=30, n_classes=5, after_reduced_cnn_size=30, return_feats=False)
Sleep staging architecture from Eldele et al. (2021).
Attention-based neural network for sleep staging as described in [Eldele2021]. The code for the paper and this model is also available at [1]. The model takes single-channel EEG as input. Its first module performs feature extraction with a multi-resolution convolutional neural network (MRCNN) followed by adaptive feature recalibration (AFR). The second module is the temporal context encoder (TCE), which uses a multi-head attention mechanism to capture temporal dependencies among the extracted features.
Warning - This model was designed for 30-second signals sampled at 100 Hz or 125 Hz (in which case the reference architecture from [1], validated on the SHHS dataset [2], is used). Using any other input is likely to make the model perform in unintended ways.
- Parameters
- sfreq : float
EEG sampling frequency.
- n_tce : int
Number of TCE clones.
- d_model : int
Input dimension for the TCE. Also the input dimension of the first FC layer of the feed-forward block and the output dimension of its second FC layer. Increase for a higher sampling rate or longer signal length. It should be divisible by n_attn_heads.
- d_ff : int
Output dimension of the first FC layer of the feed-forward block and input dimension of its second FC layer.
- n_attn_heads : int
Number of attention heads. It should be a factor of d_model.
- dropout : float
Dropout rate in the PositionWiseFeedforward layer and the TCE layers.
- input_size_s : float
Size of the input, in seconds.
- n_classes : int
Number of classes.
- after_reduced_cnn_size : int
Number of output channels produced by the convolution in the AFR module.
- return_feats : bool
If True, return the features, i.e. the output of the feature extractor (before the final linear layer). If False, pass the features through the final linear layer.
References
- Eldele2021
E. Eldele et al., “An Attention-Based Deep Learning Approach for Sleep Stage Classification With Single-Channel EEG,” in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 29, pp. 809-818, 2021, doi: 10.1109/TNSRE.2021.3076234.
- 1
Code repository accompanying [Eldele2021], providing the reference implementation of this model.
- 2
Sleep Heart Health Study (SHHS) dataset, on which the reference architecture was validated.
Methods
- forward(x)
Forward pass.
- Parameters
- x : torch.Tensor
Batch of EEG windows of shape (batch_size, n_channels, n_times).
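A hedged sketch, under the same 30 s at 100 Hz assumption as above, showing how return_feats=True yields the feature-extractor output instead of class scores:
>>> import torch
>>> from braindecode.models import SleepStagerEldele2021
>>> feat_model = SleepStagerEldele2021(sfreq=100, return_feats=True)
>>> x = torch.randn(4, 1, 3000)  # 30 s at 100 Hz, single-channel EEG
>>> feats = feat_model(x)  # output of the feature extractor, before the final linear layer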