braindecode.models.TSception#

class braindecode.models.TSception(n_chans=None, n_outputs=None, input_window_seconds=None, chs_info=None, n_times=None, sfreq=None, number_filter_temp=9, number_filter_spat=6, hidden_size=128, drop_prob=0.5, activation=<class 'torch.nn.modules.activation.LeakyReLU'>, pool_size=8, inception_windows=(0.5, 0.25, 0.125))[source]#

TSception model from Ding et al. (2020) [ding2020].


TSception: A deep learning framework for emotion detection using EEG.

Figure: TSception architecture.

The model consists of temporal and spatial convolutional layers (Tception and Sception) designed to learn temporal and spatial features from EEG data.
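Each Tception branch convolves over time with a kernel spanning a different fraction of a second of signal. A minimal sketch of how the kernel lengths could be derived from `inception_windows` and `sfreq`, following the `int(window * sfreq)` convention used in the reference implementation (treat the rounding detail as an assumption, not a guaranteed braindecode internal):

```python
def temporal_kernel_lengths(inception_windows, sfreq):
    """Return one temporal kernel length (in samples) per Tception branch.

    Each window is a duration in seconds; multiplying by the sampling
    frequency converts it to a number of samples, truncated to an int
    as in the reference code (an assumption here).
    """
    return [int(window * sfreq) for window in inception_windows]

# With the default windows and a 128 Hz recording:
lengths = temporal_kernel_lengths((0.5, 0.25, 0.125), sfreq=128)
print(lengths)  # [64, 32, 16]
```

Shorter windows give smaller kernels, so the three branches capture temporal patterns at three different scales in parallel.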

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • input_window_seconds (float) – Length of the input window in seconds.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • n_times (int) – Number of time samples of the input window.

  • sfreq (float) – Sampling frequency of the EEG recordings.

  • number_filter_temp (int) – Number of temporal convolutional filters.

  • number_filter_spat (int) – Number of spatial convolutional filters.

  • hidden_size (int) – Number of units in the hidden fully connected layer.

  • drop_prob (float) – Dropout rate applied after the hidden layer.

  • activation (type[Module]) – Activation function class to apply. Should be a PyTorch activation module like nn.ReLU or nn.LeakyReLU. Default is nn.LeakyReLU.

  • pool_size (int) – Pooling size for the average pooling layers. Default is 8.

  • inception_windows (tuple[float, float, float]) – Tuple of window sizes (in seconds) for the inception modules. Default is (0.5, 0.25, 0.125).

Raises:

ValueError – If some input signal-related parameters are not specified and cannot be inferred.
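The signal-related parameters are redundant: given any two of `n_times`, `input_window_seconds`, and `sfreq`, the third follows from `n_times = input_window_seconds * sfreq`. A hedged sketch of that inference (the helper name is hypothetical, not braindecode API):

```python
def infer_signal_params(n_times=None, input_window_seconds=None, sfreq=None):
    """Fill in whichever of the three related signal parameters is missing.

    Hypothetical helper illustrating the relationship
    n_times = input_window_seconds * sfreq; it raises ValueError when the
    triple is underdetermined, mirroring the behaviour documented above.
    """
    if n_times is None and input_window_seconds is not None and sfreq is not None:
        n_times = int(input_window_seconds * sfreq)
    elif input_window_seconds is None and n_times is not None and sfreq is not None:
        input_window_seconds = n_times / sfreq
    elif sfreq is None and n_times is not None and input_window_seconds is not None:
        sfreq = n_times / input_window_seconds
    if None in (n_times, input_window_seconds, sfreq):
        raise ValueError("Signal parameters are underdetermined.")
    return n_times, input_window_seconds, sfreq

# A 4-second window at 250 Hz implies 1000 time samples:
print(infer_signal_params(input_window_seconds=4.0, sfreq=250))  # (1000, 4.0, 250)
```

This is why the constructor accepts several overlapping arguments: you may pass whichever combination is most convenient, as long as the triple is fully determined.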

Notes

This implementation is not guaranteed to be correct and has not been checked by the original authors; it is based on the original code from [code2020]. The modifications are minimal and the model is expected to work as intended.

References

[ding2020]

Ding, Y., Robinson, N., Zeng, Q., Chen, D., Wai, A. A. P., Lee, T. S., & Guan, C. (2020, July). Tsception: a deep learning framework for emotion detection using EEG. In 2020 international joint conference on neural networks (IJCNN) (pp. 1-7). IEEE.

[code2020]

Ding, Y., Robinson, N., Zeng, Q., Chen, D., Wai, A. A. P., Lee, T. S., & Guan, C. (2020, July). Tsception: a deep learning framework for emotion detection using EEG. https://github.com/deepBrains/TSception/blob/master/Models.py

Hugging Face Hub integration

When the optional huggingface_hub package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:

pip install braindecode[hub]

Pushing a model to the Hub:

from braindecode.models import TSception

# Train your model
model = TSception(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-tsception-model",
    commit_message="Initial model upload",
)

Loading a model from the Hub:

from braindecode.models import TSception

# Load pretrained model
model = TSception.from_pretrained("username/my-tsception-model")

# Load with a different number of outputs (head is rebuilt automatically)
model = TSception.from_pretrained("username/my-tsception-model", n_outputs=2)

Extracting features and replacing the head:

import torch

x = torch.randn(1, model.n_chans, model.n_times)
# Extract encoder features (consistent dict across all models)
out = model(x, return_features=True)
features = out["features"]

# Replace the classification head
model.reset_head(n_outputs=10)

Saving and restoring full configuration:

import json

config = model.get_config()            # all __init__ params
with open("config.json", "w") as f:
    json.dump(config, f)

model2 = TSception.from_config(config)    # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific such as dropout rates, activation functions, number of filters) are automatically saved to the Hub and restored when loading.

See Loading and Adapting Pretrained Foundation Models for a complete tutorial.

Methods

forward(x)[source]#

Forward pass of the TSception model.

Parameters:

x (Tensor) – Input tensor of shape (batch_size, n_channels, n_times).

Returns:

Output tensor of shape (batch_size, n_classes).

Return type:

Tensor