braindecode.models.TIDNet#
- class braindecode.models.TIDNet(n_chans=None, n_outputs=None, n_times=None, input_window_seconds=None, sfreq=None, chs_info=None, s_growth=24, t_filters=32, drop_prob=0.4, pooling=15, temp_layers=2, spat_layers=2, temp_span=0.05, bottleneck=3, summary=-1, activation=<class 'torch.nn.modules.activation.LeakyReLU'>)[source]#
Thinker Invariance DenseNet model from Kostas et al. (2020) [TIDNet].
See [TIDNet] for details.
- Parameters:
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_times (int) – Number of time samples of the input window.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
s_growth (int) – DenseNet-style growth factor (number of filters added per DenseFilter).
t_filters (int) – Number of temporal filters.
drop_prob (float) – Dropout probability.
pooling (int) – Max temporal pooling (width and stride).
temp_layers (int) – Number of temporal layers.
spat_layers (int) – Number of DenseFilters.
temp_span (float) – Percentage of n_times that defines the temporal filter length: temp_len = ceil(temp_span * n_times). For example, a temp_span of 0.05 with n_times = 1500 yields a temporal filter of length 75.
bottleneck (int) – Bottleneck factor within DenseFilter.
summary (int) – Output size of the AdaptiveAvgPool1D layer. If set to -1, the value is computed automatically as n_times // pooling.
activation (type[Module]) – Activation function class to apply. Should be a PyTorch activation module class like nn.ReLU or nn.ELU. Default is nn.LeakyReLU.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
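As a small illustration of how the shape-related defaults described above are derived, here is a sketch in plain Python. The helper function is hypothetical (not part of braindecode); the arithmetic simply mirrors the temp_span and summary parameter descriptions:

```python
import math

def tidnet_shape_defaults(n_times, temp_span=0.05, pooling=15, summary=-1):
    """Illustrative helper (not braindecode API): derive the temporal
    filter length and the AdaptiveAvgPool1D output size as documented."""
    # temp_len = ceil(temp_span * n_times), per the temp_span description
    temp_len = math.ceil(temp_span * n_times)
    # summary == -1 means "computed automatically as n_times // pooling"
    if summary == -1:
        summary = n_times // pooling
    return temp_len, summary

# The documented example: temp_span=0.05 with n_times=1500 -> filter length 75
print(tidnet_shape_defaults(1500))  # (75, 100)
```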
Notes
Code adapted from: https://github.com/SPOClab-ca/ThinkerInvariance/
References
[TIDNet] Kostas, D., & Rudzicz, F. (2020). Thinker invariance: enabling deep neural networks for BCI across more people. Journal of Neural Engineering.
Hugging Face Hub integration
When the optional huggingface_hub package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:

```
pip install braindecode[hub]
```
Pushing a model to the Hub:
```python
from braindecode.models import TIDNet

# Train your model
model = TIDNet(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-tidnet-model",
    commit_message="Initial model upload",
)
```
Loading a model from the Hub:
```python
from braindecode.models import TIDNet

# Load pretrained model
model = TIDNet.from_pretrained("username/my-tidnet-model")

# Load with a different number of outputs (head is rebuilt automatically)
model = TIDNet.from_pretrained("username/my-tidnet-model", n_outputs=4)
```
Extracting features and replacing the head:
```python
import torch

x = torch.randn(1, model.n_chans, model.n_times)

# Extract encoder features (consistent dict across all models)
out = model(x, return_features=True)
features = out["features"]

# Replace the classification head
model.reset_head(n_outputs=10)
```
Saving and restoring full configuration:
```python
import json

config = model.get_config()  # all __init__ params
with open("config.json", "w") as f:
    json.dump(config, f)

model2 = TIDNet.from_config(config)  # reconstruct (no weights)
```
All model parameters, both EEG-specific (n_chans, n_times, sfreq, etc.) and model-specific (dropout rates, activation functions, number of filters), are automatically saved to the Hub and restored when loading.
See Loading and Adapting Pretrained Foundation Models for a complete tutorial.
Methods