braindecode.models.USleep

class braindecode.models.USleep(n_chans=None, sfreq=None, depth=12, n_time_filters=5, complexity_factor=1.67, with_skip_connection=True, n_outputs=5, input_window_seconds=None, time_conv_size_s=0.0703125, ensure_odd_conv_size=False, activation: nn.Module = nn.ELU, chs_info=None, n_times=None)

Sleep staging architecture from Perslev et al. (2021) [1].

Figure: U-Sleep architecture.

U-Net (autoencoder with skip connections) feature-extractor for sleep staging described in [1].

For the encoder (‘down’):
  • the temporal dimension shrinks (via max pooling along the time axis)

  • the feature (channel) dimension expands (via more conv1d filters)

For the decoder (‘up’):
  • the temporal dimension expands (via upsampling along the time axis)

  • the feature (channel) dimension shrinks (via fewer conv1d filters)

Both do so at exponential rates.
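
As a rough illustration of these exponential rates, here is a hypothetical sketch based on the description in [1], where the filter count grows by roughly a factor of sqrt(2) per block while max pooling halves the time axis (the exact rounding used inside braindecode may differ):

    import math

    # Hypothetical illustration only: filter counts grow ~sqrt(2)-fold per
    # encoder block while the time axis is halved. The rule below follows the
    # paper-level description, not necessarily braindecode's exact formula.
    n_time_filters, depth, n_times = 5, 12, 3840  # 30 s at 128 Hz
    for i in range(depth + 1):
        n_filters = int(n_time_filters * math.sqrt(2) ** i)
        print(f"block {i:2d}: ~{n_filters:3d} filters, "
              f"time length ~{max(n_times // 2 ** i, 1)}")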

Parameters:
  • n_chans (int) – Number of EEG or EOG channels. Set to 2 in [1] (1 EEG, 1 EOG).

  • sfreq (float) – EEG sampling frequency. Set to 128 in [1].

  • depth (int) – Number of conv blocks in the encoder (i.e. number of size-2 max-pooling operations along time). Note: each block halves the temporal dimension of the features.

  • n_time_filters (int) – Initial number of convolutional filters. Set to 5 in [1].

  • complexity_factor (float) – Multiplicative factor for the number of channels at each layer of the U-Net. Set to 2 in [1].

  • with_skip_connection (bool) – If True, use skip connections in decoder blocks.

  • n_outputs (int) – Number of outputs/classes. Set to 5.

  • input_window_seconds (float) – Size of the input, in seconds. Set to 30 in [1].

  • time_conv_size_s (float) – Size of the temporal convolution kernel, in seconds. Set to 9 / 128 in [1].

  • ensure_odd_conv_size (bool) – If True and the temporal convolution kernel size is an even number, 1 is added to it to make it odd so that the decoder blocks work correctly. This can be useful when using sampling rates other than 128 Hz or 100 Hz.

  • activation (nn.Module, default=nn.ELU) – Activation function class to apply. Should be a PyTorch activation module class like nn.ReLU or nn.ELU. Default is nn.ELU.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • n_times (int) – Number of time samples of the input window.
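
For reference, here is a minimal construction sketch using the configuration reported in [1] (parameter values are taken from the descriptions above; this is illustrative, not the only valid combination):

    from torch import nn
    from braindecode.models import USleep

    # Configuration close to the one reported in [1]:
    # 2 channels (1 EEG + 1 EOG), 128 Hz, 30 s windows, 5 sleep stages.
    model = USleep(
        n_chans=2,
        sfreq=128,
        depth=12,
        n_time_filters=5,
        complexity_factor=1.67,
        with_skip_connection=True,
        n_outputs=5,
        input_window_seconds=30,
        activation=nn.ELU,
    )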

Raises:
  • ValueError – If some input signal-related parameters are not specified and cannot be inferred.

  • FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.

Notes

If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
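
For example (a hedged sketch, assuming the usual braindecode convention that n_times = sfreq * input_window_seconds, so any one of the three can be derived from the other two):

    from braindecode.models import USleep

    # n_times is omitted; it can be inferred as 128 * 30 = 3840 samples per window.
    model_a = USleep(n_chans=2, sfreq=128, input_window_seconds=30, n_outputs=5)

    # Equivalently, giving n_times and sfreq determines the 30 s window length.
    model_b = USleep(n_chans=2, sfreq=128, n_times=3840, n_outputs=5)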

References

[1] Perslev M, Darkner S, Kempfner L, Nikolic M, Jennum PJ, Igel C. U-Sleep: resilient high-frequency sleep staging. npj Digit. Med. 4, 72 (2021). perslev/U-Time

Methods

forward(x)

If input x has shape (B, S, C, T), return y_pred of shape (B, n_classes, S). If input x has shape (B, C, T), return y_pred of shape (B, n_classes).

Parameters:

x – Input EEG windows, either a batch of single windows of shape (B, C, T) or a batch of sequences of windows of shape (B, S, C, T).
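
A hedged usage sketch of the two documented input layouts (batch size, sequence length and window length below are arbitrary; T is assumed to equal the model's n_times, and very short inputs may not be compatible with the default depth):

    import torch

    from braindecode.models import USleep

    model = USleep(n_chans=2, sfreq=128, input_window_seconds=30, n_outputs=5)

    # Batch of single 30 s windows: (B, C, T) -> (B, n_classes)
    x_single = torch.randn(4, 2, 3840)
    y_single = model(x_single)  # documented shape: (4, 5)

    # Batch of sequences of 35 windows: (B, S, C, T) -> (B, n_classes, S)
    x_seq = torch.randn(4, 35, 2, 3840)
    y_seq = model(x_seq)  # documented shape: (4, 5, 35)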

Examples using braindecode.models.USleep

Sleep staging on the Sleep Physionet dataset using U-Sleep network