braindecode.models.EEGModuleMixin#
- class braindecode.models.EEGModuleMixin(n_outputs: int | None = None, n_chans: int | None = None, chs_info: List[Dict] | None = None, n_times: int | None = None, input_window_seconds: float | None = None, sfreq: float | None = None, add_log_softmax: bool | None = False)[source]#
Mixin class for all EEG models in braindecode.
- Parameters:
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_chans (int) – Number of EEG channels.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
add_log_softmax (bool) – Whether to use a log-softmax non-linearity as the output function. The LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
FutureWarning – If add_log_softmax is True, since the LogSoftmax final layer will be removed in the future.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
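For example, any two of n_times, input_window_seconds and sfreq determine the third. The sketch below illustrates this with ShallowFBCSPNet, a braindecode model that inherits this mixin; the channel count, class count and signal parameters are illustrative, and it assumes a braindecode version whose models accept these mixin keyword arguments.

from braindecode.models import ShallowFBCSPNet

# n_times is not passed explicitly; the mixin infers it as
# input_window_seconds * sfreq = 4.0 * 250.0 = 1000 samples.
model = ShallowFBCSPNet(
    n_chans=22,                  # illustrative 22-channel montage
    n_outputs=4,                 # illustrative 4-class problem
    input_window_seconds=4.0,
    sfreq=250.0,
)
print(model.n_times)  # expected: 1000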
Methods
- get_output_shape() Tuple[int, ...] [source]#
Returns the shape of the neural network output for a batch size of 1.
- Returns:
output_shape – shape of the network output for batch_size==1 (1, …)
- Return type:
Tuple[int, …]
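A minimal usage sketch (the model choice and values are illustrative; the exact output shape depends on the model and its hyperparameters):

from braindecode.models import ShallowFBCSPNet

model = ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000)
# Runs a forward pass internally on a single dummy window.
print(model.get_output_shape())  # typically (1, n_outputs) for trial-wise decoding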
- get_torchinfo_statistics(col_names: Iterable[str] | None = ('input_size', 'output_size', 'num_params', 'kernel_size'), row_settings: Iterable[str] | None = ('var_names', 'depth')) ModelStatistics [source]#
Generate table describing the model using torchinfo.summary.
- Parameters:
col_names (tuple, optional) – Specify which columns to show in the output, see torchinfo for details, by default (“input_size”, “output_size”, “num_params”, “kernel_size”)
row_settings (tuple, optional) – Specify which features to show in a row, see torchinfo for details, by default (“var_names”, “depth”)
- Returns:
ModelStatistics generated by torchinfo.summary.
- Return type:
torchinfo.ModelStatistics
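A short sketch (requires torchinfo to be installed; the model and its parameters are illustrative):

from braindecode.models import ShallowFBCSPNet

model = ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000)
stats = model.get_torchinfo_statistics()  # default col_names and row_settings
print(stats)  # ModelStatistics renders as a per-layer summary table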
- to_dense_prediction_model(axis: Tuple[int, ...] | int = (2, 3)) None [source]#
Transform a sequential model with strides to a model that outputs dense predictions by removing the strides and instead inserting dilations. Modifies model in-place.
- Parameters:
axis (int or (int, int)) – Axis to transform (in terms of intermediate output axes); can either be 2, 3, or (2, 3).
Notes
Does not yet work correctly for average pooling. Prior to version 0.1.7, there had been a bug that could move strides backwards one layer.
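A sketch of the cropped-decoding workflow this method supports, assuming ShallowFBCSPNet with a fixed (non-'auto') final_conv_length so that the output keeps a temporal axis; all values are illustrative:

from braindecode.models import ShallowFBCSPNet

model = ShallowFBCSPNet(
    n_chans=22, n_outputs=4, n_times=1000, final_conv_length=30,
)
model.to_dense_prediction_model()

# After the transform, the output has a time axis of dense predictions,
# e.g. (1, n_outputs, n_preds_per_input); verify for your version.
print(model.get_output_shape())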
Examples using braindecode.models.EEGModuleMixin#
Cropped Decoding on BCIC IV 2a Dataset
Basic Brain Decoding on EEG Data
How to train, test and tune your model?
Hyperparameter tuning with scikit-learn
Convolutional neural network regression model on fake data.
Training a Braindecode model in PyTorch
Fingers flexion cropped decoding on BCIC IV 4 ECoG Dataset
Data Augmentation on BCIC IV 2a Dataset
Searching the best data augmentation on BCIC IV 2a Dataset
Self-supervised learning on EEG with relative positioning
Fingers flexion decoding on BCIC IV 4 ECoG Dataset
Sleep staging on the Sleep Physionet dataset using Chambon2018 network
Sleep staging on the Sleep Physionet dataset using Eldele2021
Sleep staging on the Sleep Physionet dataset using U-Sleep network