braindecode.models.EEGTCNet#
- class braindecode.models.EEGTCNet(n_chans=None, n_outputs=None, n_times=None, chs_info=None, input_window_seconds=None, sfreq=None, activation=torch.nn.ELU, depth_multiplier=2, filter_1=8, kern_length=64, drop_prob=0.5, depth=2, kernel_size=4, filters=12, max_norm_const=0.25)[source]#
 EEGTCNet model from Ingolfsson et al. (2020) [ingolfsson2020].
The architecture combines EEGNet and TCN blocks.
- Parameters:
 n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int) – Number of time samples of the input window.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
activation (nn.Module, optional) – Activation function to use. Default is nn.ELU().
depth_multiplier (int, optional) – Depth multiplier for the depthwise convolution. Default is 2.
filter_1 (int, optional) – Number of temporal filters in the first convolutional layer. Default is 8.
kern_length (int, optional) – Length of the temporal kernel in the first convolutional layer. Default is 64.
drop_prob (float, optional) – Dropout probability. Default is 0.5.
depth (int, optional) – Number of residual blocks in the TCN. Default is 2.
kernel_size (int, optional) – Size of the temporal convolutional kernel in the TCN. Default is 4.
filters (int, optional) – Number of filters in the TCN convolutional layers. Default is 12.
max_norm_const (float) – Maximum L2-norm constraint imposed on weights of the last fully-connected layer. Defaults to 0.25.
- Raises:
 ValueError – If some input signal-related parameters are not specified and cannot be inferred.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
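A minimal instantiation sketch (the channel count, class count, and window length below are illustrative assumptions, not values from the paper; the remaining arguments show the documented defaults):

import torch.nn as nn

from braindecode.models import EEGTCNet

# Only n_chans, n_outputs and n_times are required here
model = EEGTCNet(
    n_chans=22,         # assumed: 22-channel montage
    n_outputs=4,        # assumed: 4-class motor imagery
    n_times=1000,       # assumed: 1000 samples per window
    activation=nn.ELU,  # default activation class
    drop_prob=0.5,      # default dropout probability
    depth=2,            # default number of TCN residual blocks
)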
Hugging Face Hub integration
When the optional huggingface_hub package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:
pip install braindecode[hug]
Pushing a model to the Hub:
from braindecode.models import EEGNetv4

# Train your model
model = EEGNetv4(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-eegnet-model",
    commit_message="Initial model upload",
)
Loading a model from the Hub:
from braindecode.models import EEGNetv4

# Load pretrained model
model = EEGNetv4.from_pretrained("username/my-eegnet-model")
The integration automatically handles EEG-specific parameters (n_chans, n_times, sfreq, chs_info, etc.) by saving them in a config file alongside the model weights. This ensures that loaded models are correctly configured for their original data specifications.
Important
Currently, only EEG-specific parameters (n_outputs, n_chans, n_times, input_window_seconds, sfreq, chs_info) are saved to the Hub. Model-specific parameters (e.g., dropout rates, activation functions, number of filters) are not preserved and will use their default values when loading from the Hub.
To use non-default model parameters, specify them explicitly when calling from_pretrained():
model = EEGNet.from_pretrained("user/model", dropout=0.3, activation='relu')
Full parameter serialization will be addressed in a future update.
References
[ingolfsson2020] Ingolfsson, T. M., Hersche, M., Wang, X., Kobayashi, N., Cavigelli, L., & Benini, L. (2020). EEG-TCNet: An accurate temporal convolutional network for embedded motor-imagery brain–machine interfaces. https://doi.org/10.48550/arXiv.2006.00622
Methods
- forward(x)[source]#
 Forward pass of the EEGTCNet model.
- Parameters:
 x (torch.Tensor) – Input tensor of shape (batch_size, n_chans, n_times).
- Returns:
 Output tensor of shape (batch_size, n_outputs).
- Return type:
 torch.Tensor
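For illustration, a hedged forward-pass sketch; the batch size and input dimensions are assumptions chosen only to demonstrate the expected shapes:

import torch

from braindecode.models import EEGTCNet

model = EEGTCNet(n_chans=8, n_outputs=2, n_times=500)  # assumed sizes
model.eval()
with torch.no_grad():
    logits = model(torch.randn(4, 8, 500))  # (batch_size, n_chans, n_times)
print(logits.shape)  # torch.Size([4, 2])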