braindecode.models.USleep#

class braindecode.models.USleep(n_chans=2, sfreq=128, depth=12, n_time_filters=5, complexity_factor=1.67, with_skip_connection=True, n_outputs=5, input_window_seconds=30, time_conv_size_s=0.0703125, ensure_odd_conv_size=False, chs_info=None, n_times=None, in_chans=None, n_classes=None, input_size_s=None, add_log_softmax=False)[source]#

Sleep staging architecture from Perslev et al. (2021).

U-Net (autoencoder with skip connections) feature-extractor for sleep staging described in [1].

For the encoder (‘down’):

– the temporal dimension shrinks (via max-pooling in the time domain)
– the spatial dimension expands (via more conv1d filters in the time domain)

For the decoder (‘up’):

– the temporal dimension expands (via upsampling in the time domain)
– the spatial dimension shrinks (via fewer conv1d filters in the time domain)

Both do so at exponential rates.
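As a minimal illustration (not the braindecode implementation, just a sketch in plain PyTorch), one encoder stage and its mirrored decoder stage look like this: max-pooling halves the temporal dimension while a conv1d grows the number of filters, and the decoder reverses both, so stacking `depth` such blocks scales each dimension by a factor of `2**depth`.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 5, 3840)  # (batch, filters, time): 30 s at 128 Hz

# One 'down' stage: more filters (spatial expands), time halved (temporal shrinks)
down = nn.Sequential(
    nn.Conv1d(5, 10, kernel_size=9, padding=4),
    nn.MaxPool1d(2),
)
# One 'up' stage: time doubled back, fewer filters again
up = nn.Sequential(
    nn.Upsample(scale_factor=2),
    nn.Conv1d(10, 5, kernel_size=9, padding=4),
)

z = down(x)
print(z.shape)  # torch.Size([1, 10, 1920])
y = up(z)
print(y.shape)  # torch.Size([1, 5, 3840])
```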

Parameters:
  • n_chans (int) – Number of EEG or EOG channels. Set to 2 in [1] (1 EEG, 1 EOG).

  • sfreq (float) – EEG sampling frequency. Set to 128 in [1].

  • depth (int) – Number of conv blocks in the encoder (i.e. number of 2x2 max pools). Note: each block halves the temporal dimension of the features.

  • n_time_filters (int) – Initial number of convolutional filters. Set to 5 in [1].

  • complexity_factor (float) – Multiplicative factor for number of channels at each layer of the U-Net. Set to 2 in [1].

  • with_skip_connection (bool) – If True, use skip connections in decoder blocks.

  • n_outputs (int) – Number of outputs/classes. Set to 5.

  • input_window_seconds (float) – Size of the input, in seconds. Set to 30 in [1].

  • time_conv_size_s (float) – Size of the temporal convolution kernel, in seconds. Set to 9 / 128 in [1].

  • ensure_odd_conv_size (bool) – If True and the size of the convolutional kernel is an even number, one will be added to it to ensure it is odd, so that the decoder blocks can work. This can be useful when using sampling rates other than 128 or 100 Hz.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • n_times (int) – Number of time samples of the input window.

  • in_chans (int) – Alias for n_chans.

  • n_classes (int) – Alias for n_outputs.

  • input_size_s (float) – Alias for input_window_seconds.

  • add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.

Raises:
  • ValueError – If some input signal-related parameters are not specified and cannot be inferred.

  • FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.

Notes

If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.

References

[1]

Perslev M, Darkner S, Kempfner L, Nikolic M, Jennum PJ, Igel C. U-Sleep: resilient high-frequency sleep staging. npj Digit. Med. 4, 72 (2021). Code: perslev/U-Time

Methods

forward(x)[source]#

If input x has shape (B, S, C, T), return y_pred of shape (B, n_classes, S). If input x has shape (B, C, T), return y_pred of shape (B, n_classes).

Parameters:

x – Input EEG tensor, of shape (B, S, C, T) (a batch of sequences of windows) or (B, C, T) (a batch of single windows).

Examples using braindecode.models.USleep#

Sleep staging on the Sleep Physionet dataset using U-Sleep network
