braindecode.models.IFNet#

class braindecode.models.IFNet(n_chans=None, n_outputs=None, n_times=None, chs_info=None, input_window_seconds=None, sfreq=None, bands=[(4.0, 16.0), (16, 40)], n_filters_spat=64, kernel_sizes=(63, 31), stride_factor=8, drop_prob=0.5, linear_max_norm=0.5, activation=<class 'torch.nn.modules.activation.GELU'>, verbose=False, filter_parameters=None)[source]#

IFNetV2 from Wang, J., et al. (2023) [ifnet].

Figure: IFNetV2 architecture with convolution filterbank. Overview of the Interactive Frequency Convolutional Neural Network architecture.

IFNetV2 is designed to effectively capture spectro-spatial-temporal features for motor imagery decoding from EEG data. The model consists of three stages: Spectro-Spatial Feature Representation, Cross-Frequency Interactions, and Classification.

  • Spectro-Spatial Feature Representation: The raw EEG signals are filtered into two characteristic frequency bands: low (4-16 Hz) and high (16-40 Hz), covering the most relevant motor imagery bands. Spectro-spatial features are then extracted through 1D point-wise spatial convolution followed by temporal convolution.

  • Cross-Frequency Interactions: The extracted spectro-spatial features from each frequency band are combined through an element-wise summation operation, which enhances feature representation while preserving distinct characteristics.

  • Classification: The aggregated spectro-spatial features are further reduced through temporal average pooling and passed through a fully connected layer followed by a softmax operation to generate output probabilities for each class.
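The three stages above can be sketched with plain PyTorch modules. This is a simplified illustration, not braindecode's actual implementation: the layer sizes are placeholders and the initial band-pass filtering step is omitted (the two inputs stand in for already band-filtered signals).

```python
import torch
from torch import nn

n_chans, n_times, n_filters, n_outputs = 22, 1000, 64, 4

def band_branch():
    # Stage 1: point-wise spatial convolution, then a temporal convolution.
    return nn.Sequential(
        nn.Conv1d(n_chans, n_filters, kernel_size=1),           # spatial (1x1)
        nn.Conv1d(n_filters, n_filters, kernel_size=63,
                  padding=31, groups=n_filters),                # temporal (depthwise)
        nn.GELU(),
    )

low_branch, high_branch = band_branch(), band_branch()

x_low = torch.randn(8, n_chans, n_times)   # low-band input (4-16 Hz)
x_high = torch.randn(8, n_chans, n_times)  # high-band input (16-40 Hz)

# Stage 2: cross-frequency interaction via element-wise summation.
features = low_branch(x_low) + high_branch(x_high)

# Stage 3: temporal average pooling followed by a linear classifier.
pooled = features.mean(dim=-1)                    # (batch, n_filters)
logits = nn.Linear(n_filters, n_outputs)(pooled)  # (batch, n_outputs)
print(logits.shape)  # torch.Size([8, 4])
```

The element-wise summation in stage 2 requires both branches to produce feature maps of the same shape, which is why the same number of spatial filters is used for each frequency band.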

Parameters:
  • n_chans (int) – Number of EEG channels.

  • n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.

  • n_times (int) – Number of time samples of the input window.

  • chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.

  • input_window_seconds (float) – Length of the input window in seconds.

  • sfreq (float) – Sampling frequency of the EEG recordings.

  • bands (list[tuple[float, float]] | int | None) – Frequency bands for the filter bank, given as (low, high) pairs in Hz.

  • n_filters_spat (int) – Number of spatial filters produced by the point-wise spatial convolution, shared across frequency bands.

  • kernel_sizes (tuple[int, int]) – Kernel sizes for the temporal convolutions.

  • stride_factor (int) – Stride factor used in the temporal average pooling stage.

  • drop_prob (float) – Dropout probability.

  • linear_max_norm (float) – Maximum norm constraint applied to the weights of the final linear layer.

  • activation (type[Module]) – Activation function after the InterFrequency Layer.

  • verbose (bool) – Controls the verbosity of the filter bank layer.

  • filter_parameters (Optional[dict]) – Additional parameters for the filter bank layer.

Raises:

ValueError – If some input signal-related parameters are not specified and can not be inferred.

Notes

This implementation is not guaranteed to be correct and has not been checked by the original authors; it was reimplemented from the paper description and the Torch source code [ifnetv2code]. Version 2 is present only in the repository; its main difference is one pooling layer, described in Table VII of the paper: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10070810

References

[ifnet]

Wang, J., Yao, L., & Wang, Y. (2023). IFNet: An interactive frequency convolutional neural network for enhancing motor imagery decoding from EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31, 1900-1911.

[ifnetv2code]

Wang, J., Yao, L., & Wang, Y. (2023). IFNet: An interactive frequency convolutional neural network for enhancing motor imagery decoding from EEG. https://github.com/Jiaheng-Wang/IFNet

Hugging Face Hub integration

When the optional huggingface_hub package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:

pip install braindecode[hub]

Pushing a model to the Hub:

from braindecode.models import IFNet

# Train your model
model = IFNet(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-ifnet-model",
    commit_message="Initial model upload",
)

Loading a model from the Hub:

from braindecode.models import IFNet

# Load pretrained model
model = IFNet.from_pretrained("username/my-ifnet-model")

# Load with a different number of outputs (head is rebuilt automatically)
model = IFNet.from_pretrained("username/my-ifnet-model", n_outputs=2)

Extracting features and replacing the head:

import torch

x = torch.randn(1, model.n_chans, model.n_times)
# Extract encoder features (consistent dict across all models)
out = model(x, return_features=True)
features = out["features"]

# Replace the classification head
model.reset_head(n_outputs=10)

Saving and restoring full configuration:

import json

config = model.get_config()            # all __init__ params
with open("config.json", "w") as f:
    json.dump(config, f)

model2 = IFNet.from_config(config)    # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific such as dropout rates, activation functions, number of filters) are automatically saved to the Hub and restored when loading.

See Loading and Adapting Pretrained Foundation Models for a complete tutorial.

Methods

forward(x)[source]#

Forward pass of IFNet.

Parameters:

x (Tensor) – Input tensor with shape (batch_size, n_chans, n_times).

Returns:

Output tensor with shape (batch_size, n_outputs).

Return type:

Tensor