braindecode.models.ContraWR#

class braindecode.models.ContraWR(n_chans: int | None = None, n_outputs: int | None = None, sfreq: int | None = None, emb_size: int = 256, res_channels: list[int] = [32, 64, 128], steps=20, activation: ~torch.nn.modules.module.Module = <class 'torch.nn.modules.activation.ELU'>, drop_prob: float = 0.5, chs_info: list[dict[~typing.Any, ~typing.Any]] | None = None, n_times: int | None = None, input_window_seconds: float | None = None)[source]#

Contrast with the World Representation (ContraWR) from Yang et al. (2021) [Yang2021].

This model is a convolutional neural network that uses a spectral representation with a series of convolutional layers and residual blocks. The model is designed to learn a representation of the EEG signal that can be used for sleep staging.
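A minimal usage sketch. The sizes below (22 channels, 1000 samples at 100 Hz, 5 output classes, batch of 8) are illustrative assumptions, not values from the original paper:

>>> import torch
>>> from braindecode.models import ContraWR
>>> model = ContraWR(n_chans=22, n_outputs=5, sfreq=100, n_times=1000)
>>> x = torch.randn(8, 22, 1000)  # (batch_size, n_channels, n_times)
>>> model(x).shape
torch.Size([8, 5])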

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • sfreq (float) – Sampling frequency of the EEG recordings.

  • emb_size (int, optional) – Embedding size for the final layer, by default 256.

  • res_channels (list[int], optional) – Number of channels for each residual block, by default [32, 64, 128].

  • steps (int, optional) – Number of steps used to derive the hop_length parameter of the frequency decomposition (STFT), by default 20.

  • activation (nn.Module, default=nn.ELU) – Activation function class to apply. Should be a PyTorch activation module class like nn.ReLU or nn.ELU. Default is nn.ELU.

  • drop_prob (float, default=0.5) – The dropout rate for regularization. Values should be between 0 and 1.

  Added in version 0.9.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • n_times (int) – Number of time samples of the input window.

  • input_window_seconds (float) – Length of the input window in seconds.

Raises:
  • ValueError – If some input signal-related parameters are not specified and cannot be inferred.

  • FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.

Notes

This implementation is not guaranteed to be correct and has not been checked by the original authors; it is adapted from the original code [Code2023]. The modifications are minimal and the model is expected to work as intended.

References

[Yang2021]

Yang, C., Xiao, C., Westover, M. B., & Sun, J. (2023). Self-supervised electroencephalogram representation learning for automatic sleep staging: model development and evaluation study. JMIR AI, 2(1), e46769.

[Code2023]

Yang, C., Westover, M. B., & Sun, J. (2023). BIOT: Biosignal Transformer for Cross-data Learning in the Wild. GitHub repository ycq091044/BIOT (accessed 2024-02-13).

Methods

forward(X)[source]#

Forward pass.

Parameters:

X (Tensor) – Input tensor of shape (batch_size, n_channels, n_times).

Returns:

Output tensor of shape (batch_size, n_outputs).

Return type:

Tensor

torch_stft(x)[source]#

Compute the Short-Time Fourier Transform (STFT) of the input tensor.

The EEG signal is expected to have shape (batch_size, n_channels, n_times).

Parameters:

x (Tensor) – Input tensor of shape (batch_size, n_channels, n_times).

Returns:

Output tensor of shape (batch_size, n_channels, n_freqs, n_times).

Return type:

Tensor
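
For illustration only, a rough sketch of how a magnitude STFT with this output layout can be produced with torch.stft. The n_fft and hop_length values below are assumptions chosen for the example, not the ones torch_stft derives internally from sfreq and steps:

>>> import torch
>>> x = torch.randn(8, 22, 1000)            # (batch, channels, times)
>>> flat = x.reshape(8 * 22, 1000)          # fold channels into the batch dim
>>> spec = torch.stft(flat, n_fft=100, hop_length=5,
...                   window=torch.hann_window(100), return_complex=True)
>>> mag = spec.abs()                        # magnitude spectrogram
>>> mag.reshape(8, 22, 51, 201).shape       # (batch, channels, n_freqs, n_frames)
torch.Size([8, 22, 51, 201])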