braindecode.models.SleepStagerBlanco2020#
- class braindecode.models.SleepStagerBlanco2020(n_chans=None, sfreq=None, n_conv_chans=20, input_window_seconds=30, n_outputs=5, n_groups=2, max_pool_size=2, dropout=0.5, apply_batch_norm=False, return_feats=False, chs_info=None, n_times=None, n_channels=None, n_classes=None, input_size_s=None, add_log_softmax=True)[source]#
Sleep staging architecture from Blanco et al. (2020).
Convolutional neural network for sleep staging described in [Blanco2020]. It consists of a series of seven convolutional layers with kernel sizes decreasing from 7 to 3, so that more general features are extracted in the early layers while more specific and complex features are extracted in the final stages.
- Parameters:
n_chans (int) – Number of EEG channels.
sfreq (float) – Sampling frequency of the EEG recordings.
n_conv_chans (int) – Number of convolutional channels. Set to 20 in [Blanco2020].
input_window_seconds (float) – Length of the input window in seconds.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_groups (int) – Number of groups for the convolutions. Set to 2 in [Blanco2020] for a two-channel EEG signal. Controls the connections between inputs and outputs; n_chans and n_conv_chans must both be divisible by n_groups.
max_pool_size (int) – Kernel size of the max pooling layers.
dropout (float) – Dropout rate before the output dense layer.
apply_batch_norm (bool) – If True, apply batch normalization after both temporal convolutional layers.
return_feats (bool) – If True, return the features, i.e. the output of the feature extractor (before the final linear layer). If False, pass the features through the final linear layer.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
n_channels (int) – Alias for n_chans.
n_classes (int) – Alias for n_outputs.
input_size_s (float) – Alias for input_window_seconds.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
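A minimal instantiation sketch, assuming braindecode and PyTorch are installed. The two-channel, 100 Hz, 30-second, 5-stage configuration below is illustrative and mirrors the two-channel setting of [Blanco2020]:

```python
from braindecode.models import SleepStagerBlanco2020

# Illustrative configuration: 2 EEG channels sampled at 100 Hz,
# 30-second windows, 5 sleep stages.
# n_chans and n_conv_chans must both be divisible by n_groups.
model = SleepStagerBlanco2020(
    n_chans=2,
    sfreq=100,
    n_outputs=5,
    input_window_seconds=30,
    n_conv_chans=20,
    n_groups=2,
    dropout=0.5,
)
```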
References
[Blanco2020] Fernandez-Blanco, E., Rivero, D. & Pazos, A. Convolutional neural networks for sleep stage scoring on a two-channel EEG signal. Soft Comput 24, 4067–4079 (2020). https://doi.org/10.1007/s00500-019-04174-1
Methods
- forward(x)[source]#
Forward pass.
- Parameters:
x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_channels, n_times).
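A short forward-pass sketch with dummy data (values are illustrative, not from the original paper); the last dimension of the input must equal sfreq * input_window_seconds:

```python
import torch

from braindecode.models import SleepStagerBlanco2020

sfreq, input_window_seconds = 100, 30  # illustrative values
model = SleepStagerBlanco2020(
    n_chans=2, sfreq=sfreq, n_outputs=5,
    input_window_seconds=input_window_seconds,
)

# Dummy batch of 4 windows with shape (batch_size, n_channels, n_times)
x = torch.randn(4, 2, int(sfreq * input_window_seconds))
out = model(x)
print(out.shape)  # expected: torch.Size([4, 5]), one value per class
```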