- class braindecode.models.SleepStagerChambon2018(n_channels, sfreq, n_conv_chs=8, time_conv_size_s=0.5, max_pool_size_s=0.125, pad_size_s=0.25, input_size_s=30, n_classes=5, dropout=0.25, apply_batch_norm=False, return_feats=False)[source]#
Sleep staging architecture from Chambon et al. (2018).
Convolutional neural network for sleep staging, as described in [Chambon2018].
Parameters:
- n_channels (int) – Number of EEG channels.
- sfreq (float) – EEG sampling frequency, in Hz.
- n_conv_chs (int) – Number of convolutional channels. Set to 8 in [Chambon2018].
- time_conv_size_s (float) – Size of filters in temporal convolution layers, in seconds. Set to 0.5 in [Chambon2018] (64 samples at sfreq=128).
- max_pool_size_s (float) – Max pooling size, in seconds. Set to 0.125 in [Chambon2018] (16 samples at sfreq=128).
- pad_size_s (float) – Padding size, in seconds. Set to 0.25 in [Chambon2018] (half the temporal convolution kernel size).
- input_size_s (float) – Size of the input window, in seconds.
- n_classes (int) – Number of classes.
- dropout (float) – Dropout rate before the output dense layer.
- apply_batch_norm (bool) – If True, apply batch normalization after both temporal convolution layers.
- return_feats (bool) – If True, return the features, i.e. the output of the feature extractor (before the final linear layer). If False, pass the features through the final linear layer.
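The second-based parameters above are converted to sample counts using sfreq, which is how the defaults quoted in [Chambon2018] map to 64- and 16-sample operations at 128 Hz. A minimal sketch of that conversion (the helper name is illustrative, not part of the braindecode API):

```python
# Illustrative sketch: how duration parameters in seconds map to sample
# counts at a given sampling frequency. Not braindecode internals.
def seconds_to_samples(duration_s, sfreq):
    """Convert a duration in seconds to an integer number of samples."""
    return int(duration_s * sfreq)

sfreq = 128.0  # Hz, as in the [Chambon2018] defaults quoted above
print(seconds_to_samples(0.5, sfreq))    # time_conv_size_s -> 64 samples
print(seconds_to_samples(0.125, sfreq))  # max_pool_size_s  -> 16 samples
print(seconds_to_samples(30, sfreq))     # input_size_s     -> 3840 samples
```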
References:
[Chambon2018] Chambon, S., Galtier, M. N., Arnal, P. J., Wainrib, G., & Gramfort, A. (2018). A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26(4), 758-769.
- forward(x)#
x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_channels, n_times).
Examples using SleepStagerChambon2018:
- Sleep staging on the Sleep Physionet dataset using Chambon2018 network
- Self-supervised learning on EEG with relative positioning