braindecode.models.HybridNet#

class braindecode.models.HybridNet(n_chans=None, n_outputs=None, n_times=None, in_chans=None, n_classes=None, input_window_samples=None, add_log_softmax=True)[source]#

Hybrid ConvNet model from Schirrmeister et al. (2017).

See [Schirrmeister2017] for details.

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • n_times (int) – Number of time samples of the input window.

  • in_chans (int) – Alias for n_chans (deprecated).

  • n_classes (int) – Alias for n_outputs (deprecated).

  • input_window_samples (int) – Alias for n_times (deprecated).

  • add_log_softmax (bool) – Whether to use a log-softmax non-linearity as the output function. The LogSoftmax final layer will be removed in the future; please adjust your loss function accordingly (e.g. use CrossEntropyLoss). See the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.

Raises:
  • ValueError – If some input signal-related parameters are not specified and cannot be inferred.

  • FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.

Notes

If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
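A minimal usage sketch follows; the channel count, number of outputs, and window length are illustrative values chosen for this example, not defaults, and add_log_softmax=False is passed to avoid the FutureWarning and obtain raw outputs suitable for torch.nn.CrossEntropyLoss:

>>> import torch
>>> from braindecode.models import HybridNet
>>> # Illustrative configuration: 22 EEG channels, 4 classes, 1125-sample windows.
>>> model = HybridNet(
...     n_chans=22,
...     n_outputs=4,
...     n_times=1125,
...     add_log_softmax=False,  # pair raw outputs with torch.nn.CrossEntropyLoss
... )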

References

[Schirrmeister2017]

Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F. & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, Aug. 2017. Online: http://dx.doi.org/10.1002/hbm.23730

Methods

forward(x)[source]#

Forward pass.

Parameters:

x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_channels, n_times).
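
A sketch of a forward pass with a dummy batch, reusing the illustrative model instantiated in the example above (the batch size of 8 is arbitrary):

>>> x = torch.randn(8, 22, 1125)  # (batch_size, n_channels, n_times)
>>> out = model(x)                # equivalent to model.forward(x)
>>> out.shape[0]                  # one prediction row per window in the batch
8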