braindecode.models.BDTCN#

class braindecode.models.BDTCN(n_chans=None, n_outputs=None, chs_info=None, n_times=None, sfreq=None, input_window_seconds=None, n_blocks=3, n_filters=30, kernel_size=5, drop_prob=0.5, activation=<class 'torch.nn.modules.activation.ReLU'>)[source]#

Braindecode TCN from Gemein et al. (2020) [gemein2020].


Figure: Braindecode TCN architecture.

See [gemein2020] for details.

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • n_times (int) – Number of time samples of the input window.

  • sfreq (float) – Sampling frequency of the EEG recordings.

  • input_window_seconds (float) – Length of the input window in seconds.

  • n_blocks (int) – Number of temporal blocks in the network.

  • n_filters (int) – Number of output filters of each convolution.

  • kernel_size (int) – Kernel size of the convolutions.

  • drop_prob (float) – Dropout probability.

  • activation (type[Module]) – Activation function class to apply. Should be a PyTorch activation module class like nn.ReLU or nn.ELU. Default is nn.ReLU.
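To see how these hyperparameters interact, here is a simplified, hypothetical sketch of a dilated temporal block in the style of a standard TCN (plain PyTorch; the actual braindecode implementation differs in details such as weight normalization and padding):

```python
import torch
from torch import nn


class TemporalBlockSketch(nn.Module):
    """Hypothetical, simplified TCN block: two dilated causal convs + residual."""

    def __init__(self, in_ch, n_filters, kernel_size, dilation, drop_prob, activation):
        super().__init__()
        pad = (kernel_size - 1) * dilation  # left-pad only, so the convs stay causal
        self.pad = nn.ConstantPad1d((pad, 0), 0.0)
        self.conv1 = nn.Conv1d(in_ch, n_filters, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(n_filters, n_filters, kernel_size, dilation=dilation)
        self.act = activation()
        self.drop = nn.Dropout(drop_prob)
        # 1x1 conv matches channel counts for the residual connection
        self.downsample = (
            nn.Conv1d(in_ch, n_filters, 1) if in_ch != n_filters else nn.Identity()
        )

    def forward(self, x):
        out = self.drop(self.act(self.conv1(self.pad(x))))
        out = self.drop(self.act(self.conv2(self.pad(out))))
        return self.act(out + self.downsample(x))


# Stack n_blocks blocks with exponentially increasing dilation,
# mirroring the n_blocks / n_filters / kernel_size / drop_prob parameters above.
n_chans, n_blocks, n_filters, kernel_size = 22, 3, 30, 5
blocks = nn.Sequential(
    *[
        TemporalBlockSketch(
            n_chans if i == 0 else n_filters,
            n_filters,
            kernel_size,
            dilation=2**i,
            drop_prob=0.5,
            activation=nn.ReLU,
        )
        for i in range(n_blocks)
    ]
)
x = torch.randn(1, n_chans, 1000)
print(blocks(x).shape)  # torch.Size([1, 30, 1000]) -- time length is preserved
```

The causal left-padding keeps the time dimension unchanged through every block, so the per-timestep receptive field grows with depth without shrinking the output.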

Raises:

ValueError – If some input signal-related parameters are not specified and cannot be inferred.

Notes

If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
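For instance, the number of time samples can typically be recovered from the window length and sampling frequency. The helper below is a simplified illustration of this inference logic, not braindecode's exact code:

```python
# Hypothetical sketch of the signal-parameter inference described above.
def infer_n_times(n_times=None, input_window_seconds=None, sfreq=None):
    """Return n_times, inferring it from the other parameters if needed."""
    if n_times is not None:
        return n_times
    if input_window_seconds is not None and sfreq is not None:
        # window length (s) * sampling rate (Hz) = number of samples
        return int(input_window_seconds * sfreq)
    raise ValueError(
        "n_times could not be inferred; pass n_times, or both "
        "input_window_seconds and sfreq."
    )


print(infer_n_times(input_window_seconds=4.0, sfreq=250.0))  # 1000
```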

References

[gemein2020] (1,2)

Gemein, L. A., Schirrmeister, R. T., Chrabąszcz, P., Wilson, D., Boedecker, J., Schulze-Bonhage, A., … & Ball, T. (2020). Machine-learning-based diagnostics of EEG pathology. NeuroImage, 220, 117021.

Hugging Face Hub integration

When the optional huggingface_hub package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:

pip install braindecode[hub]

Pushing a model to the Hub:

from braindecode.models import BDTCN

# Train your model
model = BDTCN(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-bdtcn-model",
    commit_message="Initial model upload",
)

Loading a model from the Hub:

from braindecode.models import BDTCN

# Load pretrained model
model = BDTCN.from_pretrained("username/my-bdtcn-model")

# Load with a different number of outputs (head is rebuilt automatically)
model = BDTCN.from_pretrained("username/my-bdtcn-model", n_outputs=4)

Extracting features and replacing the head:

import torch

x = torch.randn(1, model.n_chans, model.n_times)
# Extract encoder features (consistent dict across all models)
out = model(x, return_features=True)
features = out["features"]

# Replace the classification head
model.reset_head(n_outputs=10)

Saving and restoring full configuration:

import json

config = model.get_config()            # all __init__ params
with open("config.json", "w") as f:
    json.dump(config, f)

model2 = BDTCN.from_config(config)    # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific such as dropout rates, activation functions, number of filters) are automatically saved to the Hub and restored when loading.

See Loading and Adapting Pretrained Foundation Models for a complete tutorial.

Methods

forward(x)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
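A small, model-independent illustration of why calling the module is preferred over calling forward() directly (plain PyTorch, using a hypothetical hook):

```python
import torch
from torch import nn

calls = []
layer = nn.Linear(4, 2)
# Register a forward hook that records every hooked forward pass
layer.register_forward_hook(lambda mod, inp, out: calls.append("hook ran"))

x = torch.randn(1, 4)
layer(x)  # __call__ runs the registered hooks
layer.forward(x)  # bypasses hooks silently
print(calls)  # ['hook ran'] -- only the __call__ invocation triggered the hook
```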

Parameters:

x (torch.Tensor) – Input EEG window of shape (batch_size, n_chans, n_times).