braindecode.models package#
Some predefined network architectures for EEG decoding.
Submodules#
braindecode.models.atcnet module#
- class braindecode.models.atcnet.ATCNet(n_chans=None, n_outputs=None, input_window_seconds=4.5, sfreq=250.0, conv_block_n_filters=16, conv_block_kernel_length_1=64, conv_block_kernel_length_2=16, conv_block_pool_size_1=8, conv_block_pool_size_2=7, conv_block_depth_mult=2, conv_block_dropout=0.3, n_windows=5, att_head_dim=8, att_num_heads=2, att_dropout=0.5, tcn_depth=2, tcn_kernel_size=4, tcn_n_filters=32, tcn_dropout=0.3, tcn_activation=ELU(alpha=1.0), concat=False, max_norm_const=0.25, chs_info=None, n_times=None, n_channels=None, n_classes=None, input_size_s=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Module
ATCNet model from [1].
PyTorch implementation based on the official TensorFlow code [2].
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
input_window_seconds (float, optional) – Time length of inputs, in seconds. Defaults to 4.5 s, as in the BCI-IV 2a dataset.
sfreq (int, optional) – Sampling frequency of the inputs, in Hz. Defaults to 250 Hz, as in the BCI-IV 2a dataset.
conv_block_n_filters (int) – Number of temporal filters in the first convolutional layer of the convolutional block, denoted F1 in figure 2 of the paper [1]. Defaults to 16 as in [1].
conv_block_kernel_length_1 (int) – Length of temporal filters in the first convolutional layer of the convolutional block, denoted Kc in table 1 of the paper [1]. Defaults to 64 as in [1].
conv_block_kernel_length_2 (int) – Length of temporal filters in the last convolutional layer of the convolutional block. Defaults to 16 as in [1].
conv_block_pool_size_1 (int) – Length of first average pooling kernel in the convolutional block. Defaults to 8 as in [1].
conv_block_pool_size_2 (int) – Length of the second average pooling kernel in the convolutional block, denoted P2 in table 1 of the paper [1]. Defaults to 7 as in [1].
conv_block_depth_mult (int) – Depth multiplier of depthwise convolution in the convolutional block, denoted D in table 1 of the paper [1]. Defaults to 2 as in [1].
conv_block_dropout (float) – Dropout probability used in the convolution block, denoted pc in table 1 of the paper [1]. Defaults to 0.3 as in [1].
n_windows (int) – Number of sliding windows, denoted n in [1]. Defaults to 5 as in [1].
att_head_dim (int) – Embedding dimension used in each self-attention head, denoted dh in table 1 of the paper [1]. Defaults to 8 as in [1].
att_num_heads (int) – Number of attention heads, denoted H in table 1 of the paper [1]. Defaults to 2 as in [1].
att_dropout (float) – Dropout probability used in the attention block, denoted pa in table 1 of the paper [1]. Defaults to 0.5 as in [1].
tcn_depth (int) – Depth of Temporal Convolutional Network block (i.e. number of TCN Residual blocks), denoted L in table 1 of the paper [1]. Defaults to 2 as in [1].
tcn_kernel_size (int) – Temporal kernel size used in TCN block, denoted Kt in table 1 of the paper [1]. Defaults to 4 as in [1].
tcn_n_filters (int) – Number of filters used in TCN convolutional layers (Ft). Defaults to 32 as in [1].
tcn_dropout (float) – Dropout probability used in the TCN block, denoted pt in table 1 of the paper [1]. Defaults to 0.3 as in [1].
tcn_activation (torch.nn.Module) – Nonlinear activation to use. Defaults to nn.ELU().
concat (bool) – When True, concatenates each sliding window embedding before feeding it to a fully-connected layer, as done in [1]. When False, maps each sliding window to n_outputs logits and averages them. Defaults to False, contrary to what is reported in [1], but matching what the official code does [2].
max_norm_const (float) – Maximum L2-norm constraint imposed on the weights of the last fully-connected layer. Defaults to 0.25.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
n_channels – Alias for n_chans.
n_classes – Alias for n_outputs.
input_size_s – Alias for input_window_seconds.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
References
[1] H. Altaheri, G. Muhammad and M. Alsulaiman, “Physics-informed attention temporal convolutional network for EEG-based motor imagery classification,” in IEEE Transactions on Industrial Informatics, 2022, doi: 10.1109/TII.2022.3197419.
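Examples
A minimal usage sketch (the 22-channel, 4-class setup mirrors BCI-IV 2a and is illustrative, not part of the original documentation):
>>> import torch
>>> from braindecode.models import ATCNet
>>> model = ATCNet(n_chans=22, n_outputs=4)
>>> X = torch.randn(8, 22, 1125)  # (batch_size, n_chans, n_times); 4.5 s * 250 Hz = 1125
>>> out = model(X)  # expected shape: (8, 4)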
- forward(X)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters:
X (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_chans, n_times).
braindecode.models.base module#
- class braindecode.models.base.EEGModuleMixin(n_outputs: int | None = None, n_chans: int | None = None, chs_info: List[Dict] | None = None, n_times: int | None = None, input_window_seconds: float | None = None, sfreq: float | None = None, add_log_softmax: bool | None = False)[source]#
Bases: object
Mixin class for all EEG models in braindecode.
- Parameters:
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_chans (int) – Number of EEG channels.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
- property add_log_softmax#
- property chs_info#
- get_output_shape() Tuple[int] [source]#
Returns the shape of the neural network output for a batch size of 1.
- Returns:
output_shape – shape of the network output for batch_size==1 (1, …)
- Return type:
Tuple[int]
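A small sketch of how this can be used (model choice and sizes are illustrative):
>>> from braindecode.models import ShallowFBCSPNet
>>> model = ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000, final_conv_length='auto')
>>> model.get_output_shape()  # runs a dummy batch of size 1 through the net, e.g. (1, 4)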
- get_torchinfo_statistics(col_names: Iterable[str] | None = ('input_size', 'output_size', 'num_params', 'kernel_size'), row_settings: Iterable[str] | None = ('var_names', 'depth')) ModelStatistics [source]#
Generate table describing the model using torchinfo.summary.
- Parameters:
col_names (tuple, optional) – Specify which columns to show in the output, see torchinfo for details, by default (“input_size”, “output_size”, “num_params”, “kernel_size”)
row_settings (tuple, optional) – Specify which features to show in a row, see torchinfo for details, by default (“var_names”, “depth”)
- Returns:
ModelStatistics generated by torchinfo.summary.
- Return type:
torchinfo.ModelStatistics
- property input_window_seconds#
- mapping = None#
- property n_chans#
- property n_outputs#
- property n_times#
- property sfreq#
- to_dense_prediction_model(axis: Tuple[int] = (2, 3)) None [source]#
Transform a sequential model with strides to a model that outputs dense predictions by removing the strides and instead inserting dilations. Modifies model in-place.
- Parameters:
axis (int or (int, int)) – Axis to transform (in terms of intermediate output axes); can either be 2, 3, or (2, 3).
Notes
Does not yet work correctly for average pooling. Prior to version 0.1.7, there had been a bug that could move strides backwards one layer.
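A sketch of the cropped-decoding workflow this method enables (sizes illustrative): build a model with a fixed final_conv_length, convert it, then feed a longer window to obtain one prediction per receptive-field shift.
>>> import torch
>>> from braindecode.models import ShallowFBCSPNet
>>> model = ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000, final_conv_length=30)
>>> model.to_dense_prediction_model()
>>> out = model(torch.randn(1, 22, 1500))  # dense outputs, e.g. (1, n_outputs, n_preds_per_input)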
braindecode.models.deep4 module#
- class braindecode.models.deep4.Deep4Net(n_chans=None, n_outputs=None, n_times=None, final_conv_length='auto', n_filters_time=25, n_filters_spat=25, filter_time_length=10, pool_time_length=3, pool_time_stride=3, n_filters_2=50, filter_length_2=10, n_filters_3=100, filter_length_3=10, n_filters_4=200, filter_length_4=10, first_conv_nonlin=<function elu>, first_pool_mode='max', first_pool_nonlin=<function identity>, later_conv_nonlin=<function elu>, later_pool_mode='max', later_pool_nonlin=<function identity>, drop_prob=0.5, split_first_layer=True, batch_norm=True, batch_norm_alpha=0.1, stride_before_pool=False, chs_info=None, input_window_seconds=None, sfreq=None, in_chans=None, n_classes=None, input_window_samples=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Sequential
Deep ConvNet model from Schirrmeister et al. 2017.
Model described in [Schirrmeister2017].
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int) – Number of time samples of the input window.
final_conv_length (int | str) – Length of the final convolution layer. If set to “auto”, n_times must not be None. Default: “auto”.
n_filters_time (int) – Number of temporal filters.
n_filters_spat (int) – Number of spatial filters.
filter_time_length (int) – Length of the temporal filter in layer 1.
pool_time_length (int) – Length of temporal pooling filter.
pool_time_stride (int) – Length of stride between temporal pooling filters.
n_filters_2 (int) – Number of temporal filters in layer 2.
filter_length_2 (int) – Length of the temporal filter in layer 2.
n_filters_3 (int) – Number of temporal filters in layer 3.
filter_length_3 (int) – Length of the temporal filter in layer 3.
n_filters_4 (int) – Number of temporal filters in layer 4.
filter_length_4 (int) – Length of the temporal filter in layer 4.
first_conv_nonlin (callable) – Non-linear activation function to be used after convolution in layer 1.
first_pool_mode (str) – Pooling mode in layer 1. “max” or “mean”.
first_pool_nonlin (callable) – Non-linear activation function to be used after pooling in layer 1.
later_conv_nonlin (callable) – Non-linear activation function to be used after convolution in later layers.
later_pool_mode (str) – Pooling mode in later layers. “max” or “mean”.
later_pool_nonlin (callable) – Non-linear activation function to be used after pooling in later layers.
drop_prob (float) – Dropout probability.
split_first_layer (bool) – Split first layer into temporal and spatial layers (True) or just use temporal (False). There would be no non-linearity between the split layers.
batch_norm (bool) – Whether to use batch normalisation.
batch_norm_alpha (float) – Momentum for BatchNorm2d.
stride_before_pool (bool) – Stride before pooling.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
in_chans – Alias for n_chans.
n_classes – Alias for n_outputs.
input_window_samples – Alias for n_times.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
References
[Schirrmeister2017] Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F. & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, Aug. 2017. Online: http://dx.doi.org/10.1002/hbm.23730
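Examples
A minimal usage sketch (channel, class and window sizes are illustrative):
>>> import torch
>>> from braindecode.models import Deep4Net
>>> model = Deep4Net(n_chans=22, n_outputs=4, n_times=1000, final_conv_length='auto')
>>> out = model(torch.randn(16, 22, 1000))  # expected shape: (16, 4)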
braindecode.models.deepsleepnet module#
- class braindecode.models.deepsleepnet.DeepSleepNet(n_outputs=5, return_feats=False, n_chans=None, chs_info=None, n_times=None, input_window_seconds=None, sfreq=None, n_classes=None)[source]#
Bases: EEGModuleMixin, Module
Sleep staging architecture from Supratak et al. 2017.
Convolutional neural network and bidirectional Long Short-Term Memory (LSTM) network for single-channel sleep staging, described in [Supratak2017].
- Parameters:
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
return_feats (bool) – If True, return the features, i.e. the output of the feature extractor (before the final linear layer). If False, pass the features through the final linear layer.
n_chans (int) – Number of EEG channels.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
n_classes – Alias for n_outputs.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
References
[Supratak2017] Supratak, A., Dong, H., Wu, C., & Guo, Y. (2017). DeepSleepNet: A model for automatic sleep stage scoring based on raw single-channel EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(11), 1998-2008.
- forward(x)[source]#
Forward pass.
- Parameters:
x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_channels, n_times).
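Examples
A minimal usage sketch (a 30 s single-channel window at 100 Hz is assumed here for illustration):
>>> import torch
>>> from braindecode.models import DeepSleepNet
>>> model = DeepSleepNet(n_outputs=5, n_chans=1, sfreq=100, input_window_seconds=30)
>>> x = torch.randn(4, 1, 3000)  # (batch_size, n_channels, n_times)
>>> out = model(x)  # expected shape: (4, 5)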
braindecode.models.eegconformer module#
- class braindecode.models.eegconformer.EEGConformer(n_outputs=None, n_chans=None, n_filters_time=40, filter_time_length=25, pool_time_length=75, pool_time_stride=15, drop_prob=0.5, att_depth=6, att_heads=10, att_drop_prob=0.5, final_fc_length=2440, return_features=False, n_times=None, chs_info=None, input_window_seconds=None, sfreq=None, n_classes=None, n_channels=None, input_window_samples=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Module
EEG Conformer.
Convolutional Transformer for EEG decoding.
The paper and original code with more details about the methodological choices are available at [Song2022] and [ConformerCode].
This neural network architecture receives a traditional braindecode input. The input should be a three-dimensional tensor representing the EEG signals, of shape (batch_size, n_channels, n_timesteps).
- The EEG Conformer architecture is composed of three modules:
PatchEmbedding
TransformerEncoder
ClassificationHead
- Parameters:
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_chans (int) – Number of EEG channels.
n_filters_time (int) – Number of temporal filters, defines also embedding size.
filter_time_length (int) – Length of the temporal filter.
pool_time_length (int) – Length of temporal pooling filter.
pool_time_stride (int) – Length of stride between temporal pooling filters.
drop_prob (float) – Dropout rate of the convolutional layer.
att_depth (int) – Number of self-attention layers.
att_heads (int) – Number of attention heads.
att_drop_prob (float) – Dropout rate of the self-attention layer.
final_fc_length (int | str) – The dimension of the fully connected layer.
return_features (bool) – If True, the forward method returns the features before the last classification layer. Defaults to False.
n_times (int) – Number of time samples of the input window.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
n_classes – Alias for n_outputs.
n_channels – Alias for n_chans.
input_window_samples – Alias for n_times.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
The authors recommend using data augmentation before using Conformer, e.g. segmentation and recombination. Please refer to the original paper and code for more details.
The model was initially tuned on 4 seconds of 250 Hz data. Please adjust the scale of the temporal convolutional layer, and the pooling layer for better performance.
New in version 0.8.
We aggregate the parameters based on the parts of the models, or on where the parameters were first used, e.g. n_filters_time.
References
[Song2022] Song, Y., Zheng, Q., Liu, B. and Gao, X., 2022. EEG conformer: Convolutional transformer for EEG decoding and visualization. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31, pp.710-719. https://ieeexplore.ieee.org/document/9991178
[ConformerCode] Song, Y., Zheng, Q., Liu, B. and Gao, X., 2022. EEG conformer: Convolutional transformer for EEG decoding and visualization. eeyhsong/EEG-Conformer.
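Examples
A minimal usage sketch (sizes illustrative; final_fc_length='auto' is assumed here to size the classification head from n_times, since the default of 2440 is tied to one specific input size):
>>> import torch
>>> from braindecode.models import EEGConformer
>>> model = EEGConformer(n_outputs=4, n_chans=22, n_times=1000, final_fc_length='auto')
>>> out = model(torch.randn(8, 22, 1000))  # expected shape: (8, 4)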
- forward(x: Tensor) Tensor [source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters:
x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_chans, n_times).
braindecode.models.eeginception module#
- class braindecode.models.eeginception.EEGInception(n_chans=None, n_outputs=None, n_times=1000, sfreq=128, drop_prob=0.5, scales_samples_s=(0.5, 0.25, 0.125), n_filters=8, activation=ELU(alpha=1.0), batch_norm_alpha=0.01, depth_multiplier=2, pooling_sizes=(4, 2, 2, 2), chs_info=None, input_window_seconds=None, in_channels=None, n_classes=None, input_window_samples=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Sequential
EEG Inception for ERP-based classification.
DEPRECATED: THIS CLASS IS DEPRECATED AND WILL BE REMOVED IN RELEASE 0.9 OF BRAINDECODE. PLEASE USE braindecode.models.EEGInceptionERP INSTEAD.
The code for the paper and this model is also available at [Santamaria2020] and an adaptation for PyTorch [2].
The model is strongly based on the original InceptionNet for images. The main goal is to extract features in parallel at different scales. The authors extracted three scales proportional to the window sample size. The network has three parts: (1) a large inception block, (2) a smaller inception block, followed by (3) a bottleneck for classification.
One advantage of the EEG-Inception block is that it allows the network to learn simultaneously low- and high-frequency components associated with the signal. The winners of the BEETL Competition (NeurIPS 2021) used parts of the model.
The model is fully described in [Santamaria2020].
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int) – Number of time samples of the input window.
sfreq (float) – Sampling frequency of the EEG recordings.
drop_prob (float) – Dropout rate inside all the network.
scales_samples_s (list(float)) – Temporal scale (in seconds) of the convolutions on each Inception module. This parameter determines the kernel sizes of the filters.
n_filters (int) – Initial number of convolutional filters. Set to 8 in [Santamaria2020].
activation (nn.Module) – Activation function, default: ELU activation.
batch_norm_alpha (float) – Momentum for BatchNorm2d.
depth_multiplier (int) – Depth multiplier for the depthwise convolution.
pooling_sizes (list(int)) – Pooling sizes for the inception block.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
in_channels (int) – Alias for n_chans.
n_classes (int) – Alias for n_outputs.
input_window_samples (int) – Alias for n_times.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
This implementation is not guaranteed to be correct: it has not been checked by the original authors and was only reimplemented from the paper based on [2].
References
[Santamaria2020] Santamaria-Vazquez, E., Martinez-Cagigal, V., Vaquerizo-Villar, F., & Hornero, R. (2020). EEG-inception: A novel deep convolutional neural network for assistive ERP-based brain-computer interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering, v. 28. Online: http://dx.doi.org/10.1109/TNSRE.2020.3048106
braindecode.models.eeginception_erp module#
- class braindecode.models.eeginception_erp.EEGInceptionERP(n_chans=None, n_outputs=None, n_times=1000, sfreq=128, drop_prob=0.5, scales_samples_s=(0.5, 0.25, 0.125), n_filters=8, activation=ELU(alpha=1.0), batch_norm_alpha=0.01, depth_multiplier=2, pooling_sizes=(4, 2, 2, 2), chs_info=None, input_window_seconds=None, in_channels=None, n_classes=None, input_window_samples=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Sequential
EEG Inception for ERP-based classification.
The code for the paper and this model is also available at [Santamaria2020] and an adaptation for PyTorch [2].
The model is strongly based on the original InceptionNet for images. The main goal is to extract features in parallel at different scales. The authors extracted three scales proportional to the window sample size. The network has three parts: (1) a large inception block, (2) a smaller inception block, followed by (3) a bottleneck for classification.
One advantage of the EEG-Inception block is that it allows the network to learn simultaneously low- and high-frequency components associated with the signal. The winners of the BEETL Competition (NeurIPS 2021) used parts of the model.
The model is fully described in [Santamaria2020].
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int, optional) – Size of the input, in number of samples. Set to 128 (1s) as in [Santamaria2020].
sfreq (float, optional) – EEG sampling frequency. Defaults to 128 as in [Santamaria2020].
drop_prob (float, optional) – Dropout rate inside all the network. Defaults to 0.5 as in [Santamaria2020].
scales_samples_s (list(float), optional) – Windows for inception block. Temporal scale (s) of the convolutions on each Inception module. This parameter determines the kernel sizes of the filters. Defaults to 0.5, 0.25, 0.125 seconds, as in [Santamaria2020].
n_filters (int, optional) – Initial number of convolutional filters. Defaults to 8 as in [Santamaria2020].
activation (nn.Module, optional) – Activation function. Defaults to ELU activation as in [Santamaria2020].
batch_norm_alpha (float, optional) – Momentum for BatchNorm2d. Defaults to 0.01.
depth_multiplier (int, optional) – Depth multiplier for the depthwise convolution. Defaults to 2 as in [Santamaria2020].
pooling_sizes (list(int), optional) – Pooling sizes for the inception blocks. Defaults to 4, 2, 2 and 2, as in [Santamaria2020].
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
in_channels (int) – Alias for n_chans.
n_classes (int) – Alias for n_outputs.
input_window_samples (int) – Alias for n_times.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
This implementation is not guaranteed to be correct: it has not been checked by the original authors and was only reimplemented from the paper based on [2].
References
[Santamaria2020] Santamaria-Vazquez, E., Martinez-Cagigal, V., Vaquerizo-Villar, F., & Hornero, R. (2020). EEG-inception: A novel deep convolutional neural network for assistive ERP-based brain-computer interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering, v. 28. Online: http://dx.doi.org/10.1109/TNSRE.2020.3048106
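Examples
A minimal usage sketch (8 channels and binary ERP detection are illustrative; the defaults n_times=1000 and sfreq=128 come from the signature):
>>> import torch
>>> from braindecode.models import EEGInceptionERP
>>> model = EEGInceptionERP(n_chans=8, n_outputs=2)
>>> out = model(torch.randn(4, 8, 1000))  # expected shape: (4, 2)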
braindecode.models.eeginception_mi module#
- class braindecode.models.eeginception_mi.EEGInceptionMI(n_chans=None, n_outputs=None, input_window_seconds=4.5, sfreq=250, n_convs=5, n_filters=48, kernel_unit_s=0.1, activation=ReLU(), chs_info=None, n_times=None, in_channels=None, n_classes=None, input_window_s=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Module
EEG Inception for Motor Imagery, as proposed in [1].
The model is strongly based on the original InceptionNet for computer vision. The main goal is to extract features in parallel with different scales. The network has two blocks made of 3 inception modules with a skip connection.
The model is fully described in [1].
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
input_window_seconds (float, optional) – Size of the input, in seconds. Set to 4.5 s as in [1] for dataset BCI IV 2a.
sfreq (float, optional) – EEG sampling frequency in Hz. Defaults to 250 Hz as in [1] for dataset BCI IV 2a.
n_convs (int, optional) – Number of convolution per inception wide branching. Defaults to 5 as in [1] for dataset BCI IV 2a.
n_filters (int, optional) – Number of convolutional filters for all layers of this type. Set to 48 as in [1] for dataset BCI IV 2a.
kernel_unit_s (float, optional) – Size in seconds of the basic 1D convolutional kernel used in inception modules. Each convolutional layer in such modules has kernels of increasing size, odd multiples of this value (e.g. 0.1, 0.3, 0.5, 0.7, 0.9 s for n_convs=5). Defaults to 0.1 s.
activation (nn.Module) – Activation function. Defaults to ReLU activation.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
in_channels (int) – Alias for n_chans.
n_classes (int) – Alias for n_outputs.
input_window_s (float, optional) – Alias for input_window_seconds.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
This implementation is not guaranteed to be correct: it has not been checked by the original authors and was only reimplemented based on the paper [1].
References
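Examples
A minimal usage sketch (22 channels and 4 classes mirror BCI IV 2a; the defaults imply n_times = 4.5 s * 250 Hz = 1125):
>>> import torch
>>> from braindecode.models import EEGInceptionMI
>>> model = EEGInceptionMI(n_chans=22, n_outputs=4)
>>> out = model(torch.randn(4, 22, 1125))  # expected shape: (4, 4)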
- forward(X: Tensor) Tensor [source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters:
X (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_chans, n_times).
braindecode.models.eegitnet module#
- class braindecode.models.eegitnet.EEGITNet(n_outputs=None, n_chans=None, n_times=None, drop_prob=0.4, chs_info=None, input_window_seconds=None, sfreq=None, n_classes=None, in_channels=None, input_window_samples=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Sequential
EEG-ITNet: An Explainable Inception Temporal Convolutional Network for motor imagery classification from Salami et al. 2022.
See [Salami2022] for details.
Code adapted from abbassalami/eeg-itnet
- Parameters:
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_chans (int) – Number of EEG channels.
n_times (int) – Number of time samples of the input window.
drop_prob (float) – Dropout probability.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
n_classes (int) – Alias for n_outputs.
in_channels (int) – Alias for n_chans.
input_window_samples (int) – Alias for n_times.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
This implementation is not guaranteed to be correct: it has not been checked by the original authors and was only reimplemented from the paper based on the author implementation.
References
[Salami2022] A. Salami, J. Andreu-Perez and H. Gillmeister, “EEG-ITNet: An Explainable Inception Temporal Convolutional Network for motor imagery classification,” in IEEE Access, doi: 10.1109/ACCESS.2022.3161489.
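Examples
A minimal usage sketch (sizes illustrative):
>>> import torch
>>> from braindecode.models import EEGITNet
>>> model = EEGITNet(n_outputs=4, n_chans=22, n_times=1000)
>>> out = model(torch.randn(8, 22, 1000))  # expected shape: (8, 4)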
braindecode.models.eegnet module#
- class braindecode.models.eegnet.Conv2dWithConstraint(*args, max_norm=1, **kwargs)[source]#
Bases: Conv2d
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class braindecode.models.eegnet.EEGNetv1(n_chans=None, n_outputs=None, n_times=None, final_conv_length='auto', pool_mode='max', second_kernel_size=(2, 32), third_kernel_size=(8, 4), drop_prob=0.25, chs_info=None, input_window_seconds=None, sfreq=None, in_chans=None, n_classes=None, input_window_samples=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Sequential
EEGNet model from Lawhern et al. 2016.
See details in [EEGNet].
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int) – Number of time samples of the input window.
final_conv_length (int | "auto") – Length of the final convolution layer. If "auto", n_times must not be None.
pool_mode (str) – Pooling method, "max" or "mean".
second_kernel_size (tuple) – Kernel size of the second convolution.
third_kernel_size (tuple) – Kernel size of the third convolution.
drop_prob (float) – Dropout probability.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
in_chans – Alias for n_chans.
n_classes – Alias for n_outputs.
input_window_samples – Alias for n_times.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
This implementation is not guaranteed to be correct: it has not been checked by the original authors and was only reimplemented from the paper description.
References
[EEGNet] Lawhern, V. J., Solon, A. J., Waytowich, N. R., Gordon, S. M., Hung, C. P., & Lance, B. J. (2016). EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces. arXiv preprint arXiv:1611.08024.
- class braindecode.models.eegnet.EEGNetv4(n_chans=None, n_outputs=None, n_times=None, final_conv_length='auto', pool_mode='mean', F1=8, D=2, F2=16, kernel_length=64, third_kernel_size=(8, 4), drop_prob=0.25, chs_info=None, input_window_seconds=None, sfreq=None, in_chans=None, n_classes=None, input_window_samples=None)[source]#
Bases: EEGModuleMixin, Sequential
EEGNet v4 model from Lawhern et al. 2018.
See details in [EEGNet4].
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int) – Number of time samples of the input window.
final_conv_length (int | "auto") – If int, final length of convolutional filters.
pool_mode (str) – Pooling method, "max" or "mean".
F1 (int) – Number of temporal filters.
D (int) – Depth multiplier (number of spatial filters per temporal filter).
F2 (int) – Number of pointwise filters.
kernel_length (int) – Length of the temporal convolution kernel.
third_kernel_size – The description is missing.
drop_prob (float) – Dropout probability.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
in_chans – Alias for n_chans.
n_classes – Alias for n_outputs.
input_window_samples – Alias for n_times.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
This implementation is not guaranteed to be correct: it has not been checked by the original authors and was only reimplemented from the paper description.
References
[EEGNet4] Lawhern, V. J., Solon, A. J., Waytowich, N. R., Gordon, S. M., Hung, C. P., & Lance, B. J. (2018). EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces. arXiv preprint arXiv:1611.08024.
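Examples
A minimal usage sketch (sizes illustrative; F1, D and F2 keep their defaults of 8, 2 and 16):
>>> import torch
>>> from braindecode.models import EEGNetv4
>>> model = EEGNetv4(n_chans=22, n_outputs=4, n_times=1000)
>>> out = model(torch.randn(8, 22, 1000))  # expected shape: (8, 4)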
braindecode.models.eegresnet module#
- class braindecode.models.eegresnet.EEGResNet(n_chans=None, n_outputs=None, n_times=None, final_pool_length=None, n_first_filters=None, n_layers_per_block=2, first_filter_length=3, nonlinearity=<function elu>, split_first_layer=True, batch_norm_alpha=0.1, batch_norm_epsilon=0.0001, conv_weight_init_fn=<function EEGResNet.<lambda>>, chs_info=None, input_window_seconds=None, sfreq=None, in_chans=None, n_classes=None, input_window_samples=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Sequential
Residual Network for EEG.
XXX missing reference
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int) – Number of time samples of the input window.
final_pool_length – Length of the final pooling layer.
n_first_filters (int) – Number of filters in the first convolutional layer.
n_layers_per_block (int) – Number of layers per residual block.
first_filter_length (int) – Length of the first temporal filter.
nonlinearity (callable) – Non-linear activation function.
split_first_layer (bool) – Split first layer into temporal and spatial layers (True) or just use temporal (False).
batch_norm_alpha (float) – Momentum for batch normalization.
batch_norm_epsilon (float) – Epsilon for batch normalization.
conv_weight_init_fn (callable) – Function used to initialize the convolution weights.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
in_chans – Alias for n_chans.
n_classes – Alias for n_outputs.
input_window_samples – Alias for n_times.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
braindecode.models.functions module#
- braindecode.models.functions.safe_log(x, eps=1e-06)[source]#
Prevents log(0) by using log(max(x, eps)).
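A quick illustration of the clipping behaviour:
>>> import torch
>>> from braindecode.models.functions import safe_log
>>> safe_log(torch.tensor([0.0, 1.0]))  # log(max(x, 1e-6)): no -inf for the zero entry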
- braindecode.models.functions.squeeze_final_output(x)[source]#
Removes the empty dimension at the end and potentially removes the empty time dimension. It does not just use squeeze as we never want to remove the first (batch) dimension.
- Returns:
x – squeezed tensor
- Return type:
torch.Tensor
braindecode.models.hybrid module#
- class braindecode.models.hybrid.HybridNet(n_chans=None, n_outputs=None, n_times=None, in_chans=None, n_classes=None, input_window_samples=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Module
Hybrid ConvNet model from Schirrmeister et al. 2017.
See [Schirrmeister2017] for details.
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int) – Number of time samples of the input window.
in_chans – Alias for n_chans.
n_classes – Alias for n_outputs.
input_window_samples – Alias for n_times.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
References
[Schirrmeister2017] Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F. & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, Aug. 2017. Online: http://dx.doi.org/10.1002/hbm.23730
- forward(x)[source]#
Forward pass.
- Parameters:
x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_channels, n_times).
braindecode.models.modules module#
- class braindecode.models.modules.AvgPool2dWithConv(kernel_size, stride, dilation=1, padding=0)[source]#
Bases: Module
Compute average pooling using a convolution, so that the dilation parameter can be used.
- Parameters:
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class braindecode.models.modules.CausalConv1d(in_channels, out_channels, kernel_size, dilation=1, **kwargs)[source]#
Bases: Conv1d
Causal 1-dimensional convolution.
Code modified from [1] and [2].
- Parameters:
in_channels (int) – Input channels.
out_channels (int) – Output channels (number of filters).
kernel_size (int) – Kernel size.
dilation (int, optional) – Dilation (number of elements to skip within kernel multiplication). Default to 1.
**kwargs – Other keyword arguments to pass to torch.nn.Conv1d, except for padding, which is handled internally.
References
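Examples
A minimal usage sketch; with causal (left) padding the output is expected to keep the input length:
>>> import torch
>>> from braindecode.models.modules import CausalConv1d
>>> conv = CausalConv1d(in_channels=1, out_channels=4, kernel_size=3, dilation=2)
>>> out = conv(torch.randn(2, 1, 100))  # expected shape: (2, 4, 100)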
- forward(X)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class braindecode.models.modules.CombinedConv(in_chans, n_filters_time=40, n_filters_spat=40, filter_time_length=25, bias_time=True, bias_spat=True)[source]#
Bases: Module
Merged convolutional layer for the temporal and spatial convolutions in Deep4/ShallowFBCSP.
Numerically equivalent to the separate sequential approach, but this should be faster.
- Parameters:
in_chans (int) – Number of EEG input channels.
n_filters_time (int) – Number of temporal filters.
filter_time_length (int) – Length of the temporal filter.
n_filters_spat (int) – Number of spatial filters.
bias_time (bool) – Whether to use bias in the temporal convolution.
bias_spat (bool) – Whether to use bias in the spatial convolution.
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class braindecode.models.modules.Ensure4d(*args, **kwargs)[source]#
Bases: Module
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class braindecode.models.modules.Expression(expression_fn)[source]#
Bases: Module
Compute given expression on forward pass.
- Parameters:
expression_fn (callable) – Should accept variable number of objects of type torch.autograd.Variable to compute its output.
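Examples
A minimal sketch wrapping a squaring function:
>>> import torch
>>> from braindecode.models.modules import Expression
>>> square = Expression(lambda x: x * x)
>>> square(torch.tensor([1.0, -2.0]))  # tensor([1., 4.])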
- forward(*x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class braindecode.models.modules.IntermediateOutputWrapper(to_select, model)[source]#
Bases: Module
Wraps a network model such that the outputs of intermediate layers can be returned. forward() returns a list of intermediate activations in the network during the forward pass.
- Parameters:
to_select (list) – list of module names for which activation should be returned
model (model object) – network model
Examples
>>> model = Deep4Net()
>>> select_modules = ['conv_spat', 'conv_2', 'conv_3', 'conv_4']  # Specify intermediate outputs
>>> model_pert = IntermediateOutputWrapper(select_modules, model)  # Wrap model
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class braindecode.models.modules.MaxNormLinear(in_features, out_features, bias=True, max_norm_val=2, eps=1e-05, **kwargs)[source]#
Bases: Linear
Linear layer with a MaxNorm constraint on its weights.
Equivalent of Keras tf.keras.Dense(…, kernel_constraint=max_norm()) [1], [2]. Implemented as advised in [3].
- Parameters:
References
[3] https://discuss.pytorch.org/t/how-to-correctly-implement-in-place-max-norm-constraint/96769
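Examples
A minimal usage sketch (feature sizes illustrative); the layer keeps the L2 norm of its weights at or below max_norm_val:
>>> import torch
>>> from braindecode.models.modules import MaxNormLinear
>>> layer = MaxNormLinear(in_features=40, out_features=4, max_norm_val=0.5)
>>> out = layer(torch.randn(8, 40))  # expected shape: (8, 4)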
- forward(X)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class braindecode.models.modules.TimeDistributed(module)[source]#
Bases: Module
Apply module on multiple windows.
Apply the provided module on a sequence of windows and return their concatenation. Useful with sequence-to-prediction models (e.g. sleep stager which must map a sequence of consecutive windows to the label of the middle window in the sequence).
- Parameters:
module (nn.Module) – Module to be applied to the input windows. Must accept an input of shape (batch_size, n_channels, n_times).
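Examples
A minimal sketch with a toy per-window feature extractor mapping each (2 channels x 100 samples) window to 10 features (all sizes illustrative):
>>> import torch
>>> from torch import nn
>>> from braindecode.models.modules import TimeDistributed
>>> feat = nn.Sequential(nn.Flatten(start_dim=1), nn.Linear(2 * 100, 10))
>>> td = TimeDistributed(feat)
>>> x = torch.randn(4, 5, 2, 100)  # (batch_size, seq_len, n_channels, n_times)
>>> out = td(x)  # expected shape: (4, 5, 10)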
- forward(x)[source]#
- Parameters:
x (torch.Tensor) – Sequence of windows, of shape (batch_size, seq_len, n_channels, n_times).
- Returns:
Shape (batch_size, seq_len, output_size).
- Return type:
torch.Tensor
braindecode.models.shallow_fbcsp module#
- class braindecode.models.shallow_fbcsp.ShallowFBCSPNet(n_chans=None, n_outputs=None, n_times=None, n_filters_time=40, filter_time_length=25, n_filters_spat=40, pool_time_length=75, pool_time_stride=15, final_conv_length=30, conv_nonlin=<function square>, pool_mode='mean', pool_nonlin=<function safe_log>, split_first_layer=True, batch_norm=True, batch_norm_alpha=0.1, drop_prob=0.5, chs_info=None, input_window_seconds=None, sfreq=None, in_chans=None, n_classes=None, input_window_samples=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Sequential
Shallow ConvNet model from Schirrmeister et al. 2017.
Model described in [Schirrmeister2017].
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int) – Number of time samples of the input window.
n_filters_time (int) – Number of temporal filters.
filter_time_length (int) – Length of the temporal filter.
n_filters_spat (int) – Number of spatial filters.
pool_time_length (int) – Length of temporal pooling filter.
pool_time_stride (int) – Length of stride between temporal pooling filters.
final_conv_length (int | str) – Length of the final convolution layer. If set to “auto”, length of the input signal must be specified.
conv_nonlin (callable) – Non-linear function to be used after convolution layers.
pool_mode (str) – Method to use on pooling layers. “max” or “mean”.
pool_nonlin (callable) – Non-linear function to be used after pooling layers.
split_first_layer (bool) – Split first layer into temporal and spatial layers (True) or just use temporal (False). There would be no non-linearity between the split layers.
batch_norm (bool) – Whether to use batch normalisation.
batch_norm_alpha (float) – Momentum for BatchNorm2d.
drop_prob (float) – Dropout probability.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
in_chans (int) – Alias for n_chans.
n_classes (int) – Alias for n_outputs.
input_window_samples (int | None) – Alias for n_times.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
References
[Schirrmeister2017] Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., Hutter, F. & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, Aug. 2017. Online: http://dx.doi.org/10.1002/hbm.23730
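Examples
A minimal usage sketch (sizes illustrative):
>>> import torch
>>> from braindecode.models import ShallowFBCSPNet
>>> model = ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000, final_conv_length='auto')
>>> out = model(torch.randn(8, 22, 1000))  # expected shape: (8, 4)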
braindecode.models.sleep_stager_blanco_2020 module#
- class braindecode.models.sleep_stager_blanco_2020.SleepStagerBlanco2020(n_chans=None, sfreq=None, n_conv_chans=20, input_window_seconds=30, n_outputs=5, n_groups=2, max_pool_size=2, dropout=0.5, apply_batch_norm=False, return_feats=False, chs_info=None, n_times=None, n_channels=None, n_classes=None, input_size_s=None, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Module
Sleep staging architecture from Blanco et al. 2020.
Convolutional neural network for sleep staging described in [Blanco2020]. It applies a series of seven convolutional layers with kernel sizes running down from 7 to 3, aiming to extract more general features at the beginning, while more specific and complex features are extracted in the final stages.
- Parameters:
n_chans (int) – Number of EEG channels.
sfreq (float) – Sampling frequency of the EEG recordings.
n_conv_chans (int) – Number of convolutional channels. Set to 20 in [Blanco2020].
input_window_seconds (float) – Length of the input window in seconds.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_groups (int) – Number of groups for the convolution. Set to 2 in [Blanco2020] for two-channel EEG. Controls the connections between inputs and outputs; n_chans and n_conv_chans must be divisible by n_groups.
max_pool_size (int) – Size of the max pooling windows.
dropout (float) – Dropout rate before the output dense layer.
apply_batch_norm (bool) – If True, apply batch normalization after both temporal convolutional layers.
return_feats (bool) – If True, return the features, i.e. the output of the feature extractor (before the final linear layer). If False, pass the features through the final linear layer.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
n_channels (int) – Alias for n_chans.
n_classes (int) – Alias for n_outputs.
input_size_s (float) – Alias for input_window_seconds.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
References
[Blanco2020] Fernandez-Blanco, E., Rivero, D. & Pazos, A. Convolutional neural networks for sleep stage scoring on a two-channel EEG signal. Soft Comput 24, 4067–4079 (2020). https://doi.org/10.1007/s00500-019-04174-1
- forward(x)[source]#
Forward pass.
- Parameters:
x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_channels, n_times).
braindecode.models.sleep_stager_chambon_2018 module#
- class braindecode.models.sleep_stager_chambon_2018.SleepStagerChambon2018(n_chans=None, sfreq=None, n_conv_chs=8, time_conv_size_s=0.5, max_pool_size_s=0.125, pad_size_s=0.25, input_window_seconds=30, n_outputs=5, dropout=0.25, apply_batch_norm=False, return_feats=False, chs_info=None, n_times=None, n_channels=None, input_size_s=None, n_classes=None)[source]#
Bases: EEGModuleMixin, Module
Sleep staging architecture from Chambon et al. 2018.
Convolutional neural network for sleep staging described in [Chambon2018].
- Parameters:
n_chans (int) – Number of EEG channels.
sfreq (float) – Sampling frequency of the EEG recordings.
n_conv_chs (int) – Number of convolutional channels. Set to 8 in [Chambon2018].
time_conv_size_s (float) – Size of filters in temporal convolution layers, in seconds. Set to 0.5 in [Chambon2018] (64 samples at sfreq=128).
max_pool_size_s (float) – Max pooling size, in seconds. Set to 0.125 in [Chambon2018] (16 samples at sfreq=128).
pad_size_s (float) – Padding size, in seconds. Set to 0.25 in [Chambon2018] (half the temporal convolution kernel size).
input_window_seconds (float) – Length of the input window in seconds.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
dropout (float) – Dropout rate before the output dense layer.
apply_batch_norm (bool) – If True, apply batch normalization after both temporal convolutional layers.
return_feats (bool) – If True, return the features, i.e. the output of the feature extractor (before the final linear layer). If False, pass the features through the final linear layer.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
n_channels (int) – Alias for n_chans.
input_size_s – Alias for input_window_seconds.
n_classes – Alias for n_outputs.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
References
[Chambon2018] Chambon, S., Galtier, M. N., Arnal, P. J., Wainrib, G., & Gramfort, A. (2018). A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26(4), 758-769.
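Examples
A minimal usage sketch (two channels at 128 Hz with the default 30 s window, close to the paper's setup):
>>> import torch
>>> from braindecode.models import SleepStagerChambon2018
>>> model = SleepStagerChambon2018(n_chans=2, sfreq=128, n_outputs=5)
>>> out = model(torch.randn(4, 2, 30 * 128))  # expected shape: (4, 5)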
- forward(x)[source]#
Forward pass.
- Parameters:
x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_channels, n_times).
braindecode.models.sleep_stager_eldele_2021 module#
- class braindecode.models.sleep_stager_eldele_2021.SleepStagerEldele2021(sfreq=None, n_tce=2, d_model=80, d_ff=120, n_attn_heads=5, dropout=0.1, input_window_seconds=30, n_outputs=5, after_reduced_cnn_size=30, return_feats=False, chs_info=None, n_chans=None, n_times=None, n_classes=None, input_size_s=None)[source]#
Bases: EEGModuleMixin, Module
Sleep Staging Architecture from Eldele et al. (2021).
Attention-based neural network for sleep staging, as described in [Eldele2021]. The code for the paper and this model is also available at [1]. Takes single-channel EEG as input. The feature extraction module is based on a multi-resolution convolutional neural network (MRCNN) and adaptive feature recalibration (AFR). The second module is the temporal context encoder (TCE), which leverages a multi-head attention mechanism to capture the temporal dependencies among the extracted features.
Warning: This model was designed for signals of 30 seconds at 100 Hz or 125 Hz (in which case the reference architecture from [1], validated on the SHHS dataset [2], will be used); using any other input is likely to make the model perform in unintended ways.
- Parameters:
sfreq (float) – Sampling frequency of the EEG recordings.
n_tce (int) – Number of TCE clones.
d_model (int) – Input dimension for the TCE. Also the input dimension of the first FC layer in the feed-forward block and the output dimension of its second FC layer. Increase for higher sampling rate/signal length. It should be divisible by n_attn_heads.
d_ff (int) – Output dimension of the first FC layer in the feed-forward block and the input dimension of its second FC layer.
n_attn_heads (int) – Number of attention heads. It should be a divisor of d_model.
dropout (float) – Dropout rate in the PositionWiseFeedforward layer and the TCE layers.
input_window_seconds (float) – Length of the input window in seconds.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
after_reduced_cnn_size (int) – Number of output channels produced by the convolution in the AFR module.
return_feats (bool) – If True, return the features, i.e. the output of the feature extractor (before the final linear layer). If False, pass the features through the final linear layer.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_chans (int) – Number of EEG channels.
n_times (int) – Number of time samples of the input window.
n_classes (int) – Alias for n_outputs.
input_size_s (float) – Alias for input_window_seconds.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
References
[Eldele2021] E. Eldele et al., "An Attention-Based Deep Learning Approach for Sleep Stage Classification With Single-Channel EEG," in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 29, pp. 809-818, 2021, doi: 10.1109/TNSRE.2021.3076234.
- forward(x)[source]#
Forward pass.
- Parameters:
x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_channels, n_times).
- return_feats#
if return_feats: raise ValueError("return_feat == True is not accepted anymore")
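A minimal usage sketch for the single-channel regime the model was designed for (30 s windows at 100 Hz, where the defaults above apply); the batch size is illustrative:

import torch
from braindecode.models.sleep_stager_eldele_2021 import SleepStagerEldele2021

sfreq = 100.0  # one of the two sampling rates the reference architecture supports
model = SleepStagerEldele2021(
    sfreq=sfreq, n_chans=1, n_outputs=5, input_window_seconds=30
)

# Single-channel EEG windows of shape (batch_size, 1, n_times)
x = torch.randn(4, 1, int(30 * sfreq))
logits = model(x)  # expected shape: (4, 5)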
braindecode.models.tcn module#
- class braindecode.models.tcn.Chomp1d(chomp_size)[source]#
Bases: Module
- extra_repr()[source]#
Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
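Chomp1d is not described above; in the locuslab/TCN reference implementation this model is adapted from (cited below), it simply trims the trailing chomp_size time steps that causal padding appends. A sketch under that assumption:

import torch
from braindecode.models.tcn import Chomp1d

# Causal convolutions pad the sequence; chomping the trailing padding
# keeps outputs from depending on future samples (assumed behavior,
# following the locuslab/TCN reference implementation).
x = torch.arange(10.0).reshape(1, 1, 10)  # (batch, channels, time)
y = Chomp1d(chomp_size=3)(x)              # assumed to keep x[:, :, :-3]
print(y.shape)                            # torch.Size([1, 1, 7])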
- class braindecode.models.tcn.TCN(n_chans=None, n_outputs=None, n_blocks=None, n_filters=None, kernel_size=None, drop_prob=None, chs_info=None, n_times=None, input_window_seconds=None, sfreq=None, n_in_chans=None, add_log_softmax=False)[source]#
Bases: EEGModuleMixin, Module
Temporal Convolutional Network (TCN) from Bai et al. (2018).
See [Bai2018] for details.
Code adapted from locuslab/TCN.
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_blocks (int) – Number of temporal blocks in the network.
n_filters (int) – Number of output filters of each convolution.
kernel_size (int) – Kernel size of the convolutions.
drop_prob (float) – Dropout probability.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
n_in_chans (int) – Alias for n_chans.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
References
[Bai2018] Bai, S., Kolter, J. Z., & Koltun, V. (2018). An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271.
- forward(x)[source]#
Forward pass.
- Parameters:
x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_channels, n_times).
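A minimal usage sketch; every hyperparameter below is illustrative, since the constructor defines no defaults for them:

import torch
from braindecode.models.tcn import TCN

model = TCN(
    n_chans=22, n_outputs=4, n_blocks=3,
    n_filters=32, kernel_size=5, drop_prob=0.3,
)

# Batch of EEG windows of shape (batch_size, n_channels, n_times)
x = torch.randn(4, 22, 1000)
out = model(x)  # output length along time depends on the receptive field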
- class braindecode.models.tcn.TemporalBlock(n_inputs, n_outputs, kernel_size, stride, dilation, padding, drop_prob)[source]#
Bases: Module
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
braindecode.models.tidnet module#
- class braindecode.models.tidnet.TIDNet(n_chans=None, n_outputs=None, n_times=None, in_chans=None, n_classes=None, input_window_samples=None, s_growth=24, t_filters=32, drop_prob=0.4, pooling=15, temp_layers=2, spat_layers=2, temp_span=0.05, bottleneck=3, summary=-1, add_log_softmax=True)[source]#
Bases: EEGModuleMixin, Module
Thinker Invariance DenseNet model from Kostas et al. (2020).
See [TIDNet] for details.
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int) – Number of time samples of the input window.
in_chans – Alias for n_chans.
n_classes – Alias for n_outputs.
input_window_samples – Alias for n_times.
s_growth (int) – DenseNet-style growth factor (added filters per DenseFilter).
t_filters (int) – Number of temporal filters.
drop_prob (float) – Dropout probability.
pooling (int) – Max temporal pooling (width and stride).
temp_layers (int) – Number of temporal layers.
spat_layers (int) – Number of DenseFilters.
temp_span (float) – Percentage of n_times that defines the temporal filter length: temp_len = ceil(temp_span * n_times). E.g., a value of 0.05 for temp_span with 1500 n_times will yield a temporal filter of length 75.
bottleneck (int) – Bottleneck factor within DenseFilter.
summary (int) – Output size of the AdaptiveAvgPool1D layer. If set to -1, the value will be calculated automatically (n_times // pooling).
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
Code adapted from: SPOClab-ca/ThinkerInvariance
References
[TIDNet] Kostas, D. & Rudzicz, F. Thinker invariance: enabling deep neural networks for BCI across more people. J. Neural Eng. 17, 056008 (2020). doi: 10.1088/1741-2552/abb7a7.
- forward(x)[source]#
Forward pass.
- Parameters:
x (torch.Tensor) – Batch of EEG windows of shape (batch_size, n_channels, n_times).
- property num_features#
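A minimal usage sketch with illustrative shapes; with n_times=1500 the default temp_span=0.05 yields a temporal filter of length 75, matching the example above:

import torch
from braindecode.models.tidnet import TIDNet

model = TIDNet(n_chans=22, n_outputs=4, n_times=1500)

# Batch of EEG windows of shape (batch_size, n_channels, n_times)
x = torch.randn(4, 22, 1500)
out = model(x)  # expected shape: (4, 4)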
braindecode.models.usleep module#
- class braindecode.models.usleep.USleep(n_chans=2, sfreq=128, depth=12, n_time_filters=5, complexity_factor=1.67, with_skip_connection=True, n_outputs=5, input_window_seconds=30, time_conv_size_s=0.0703125, ensure_odd_conv_size=False, chs_info=None, n_times=None, in_chans=None, n_classes=None, input_size_s=None, add_log_softmax=False)[source]#
Bases: EEGModuleMixin, Module
Sleep staging architecture from Perslev et al. (2021).
U-Net (autoencoder with skip connections) feature-extractor for sleep staging described in [1].
- For the encoder (‘down’):
  - the temporal dimension shrinks (via max pooling in the time domain)
  - the spatial dimension expands (via more conv1d filters in the time domain)
- For the decoder (‘up’):
  - the temporal dimension expands (via upsampling in the time domain)
  - the spatial dimension shrinks (via fewer conv1d filters in the time domain)
Both do so at exponential rates.
- Parameters:
n_chans (int) – Number of EEG or EOG channels. Set to 2 in [1] (1 EEG, 1 EOG).
depth (int) – Number of conv blocks in the encoding layer (number of 2x2 max pools). Note: each block halves the spatial dimensions of the features.
n_time_filters (int) – Initial number of convolutional filters. Set to 5 in [1].
complexity_factor (float) – Multiplicative factor for number of channels at each layer of the U-Net. Set to 2 in [1].
with_skip_connection (bool) – If True, use skip connections in decoder blocks.
n_outputs (int) – Number of outputs/classes. Set to 5.
input_window_seconds (float) – Size of the input, in seconds. Set to 30 in [1].
time_conv_size_s (float) – Size of the temporal convolution kernel, in seconds. Set to 9 / 128 in [1].
ensure_odd_conv_size (bool) – If True and the size of the convolutional kernel is an even number, one will be added to it to ensure it is odd, so that the decoder blocks can work. This can be useful when using sampling rates other than 128 or 100 Hz.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
in_chans (int) – Alias for n_chans.
n_classes (int) – Alias for n_outputs.
input_size_s (float) – Alias for input_window_seconds.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
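A minimal usage sketch relying on the defaults above (2 channels, 30 s windows at 128 Hz, 5 classes); the batch size is illustrative:

import torch
from braindecode.models.usleep import USleep

model = USleep()  # all defaults, as documented above

# One EEG and one EOG channel, as in [1]: (batch_size, 2, n_times)
x = torch.randn(4, 2, int(30 * 128))
out = model(x)  # class scores for the input windows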
References
[1] Perslev, M., Darkner, S., Kempfner, L., Nikolic, M., Jennum, P. J., & Igel, C. (2021). U-Sleep: resilient high-frequency sleep staging. npj Digital Medicine, 4, 72.
braindecode.models.util module#
- braindecode.models.util.aggregate_probas(logits, n_windows_stride=1)[source]#
Aggregate predicted probabilities with self-ensembling.
Aggregate window-wise predicted probabilities obtained on overlapping sequences of windows using multiplicative voting as described in [Phan2018].
- Parameters:
logits (np.ndarray) – Array of shape (n_sequences, n_classes, n_windows) containing the logits (i.e. the raw unnormalized scores for each class) for each window of each sequence.
n_windows_stride (int) – Number of windows between two consecutive sequences. Default is 1 (maximally overlapping sequences).
- Returns:
Array of shape ((n_sequences - 1) * n_windows_stride + n_windows, n_classes) containing the aggregated predicted probabilities for each window contained in the input sequences.
- Return type:
np.ndarray
References
[Phan2018] Phan, H., Andreotti, F., Cooray, N., Chén, O. Y., & De Vos, M. (2018). Joint classification and prediction CNN framework for automatic sleep stage classification. IEEE Transactions on Biomedical Engineering, 66(5), 1285-1296.
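A sketch of the shapes involved, using random logits; the numbers are illustrative:

import numpy as np
from braindecode.models.util import aggregate_probas

n_sequences, n_classes, n_windows = 10, 5, 35
logits = np.random.randn(n_sequences, n_classes, n_windows)

# Maximally overlapping sequences (stride of one window)
probas = aggregate_probas(logits, n_windows_stride=1)
print(probas.shape)  # expected: ((10 - 1) * 1 + 35, 5) == (44, 5)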
- braindecode.models.util.get_output_shape(model, in_chans, input_window_samples)[source]#
Returns the shape of the neural network output for a batch size of 1.
- Returns:
output_shape – Shape of the network output for batch_size == 1, i.e. (1, …).
- Return type:
tuple
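For instance, combined with the TCN sketch above (hyperparameters illustrative):

from braindecode.models.tcn import TCN
from braindecode.models.util import get_output_shape

model = TCN(n_chans=22, n_outputs=4, n_blocks=3,
            n_filters=32, kernel_size=5, drop_prob=0.3)

# Shape of the output for a single dummy window of 1000 samples
out_shape = get_output_shape(model, in_chans=22, input_window_samples=1000)
print(out_shape)  # leading dimension is the batch size of 1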
- braindecode.models.util.to_dense_prediction_model(model, axis=(2, 3))[source]#
Transform a sequential model with strides into a model that outputs dense predictions by removing the strides and inserting dilations instead. Modifies the model in-place.
- Parameters:
model (torch.nn.Module) – Model whose modules will be modified.
axis (int or (int, int)) – Axis to transform (in terms of intermediate output axes); can either be 2, 3, or (2, 3).
Notes
Does not yet work correctly for average pooling. Prior to version 0.1.7, there was a bug that could move strides backwards by one layer.
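A sketch of the intended cropped-decoding workflow, assuming ShallowFBCSPNet (a strided model documented elsewhere in this package) as the input model; all hyperparameter values are illustrative:

from braindecode.models import ShallowFBCSPNet
from braindecode.models.util import get_output_shape, to_dense_prediction_model

model = ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000,
                        final_conv_length=30)

# Replace strides with dilations so the model emits one prediction per
# receptive-field position instead of one per window; modifies in-place.
to_dense_prediction_model(model)

# The dense model now produces multiple predictions along the time axis
print(get_output_shape(model, in_chans=22, input_window_samples=1000))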