braindecode.models.SignalJEPA_PreLocal#
- class braindecode.models.SignalJEPA_PreLocal(n_outputs=None, n_chans=None, chs_info=None, n_times=None, input_window_seconds=None, sfreq=None, *, n_spat_filters=4, feature_encoder__conv_layers_spec=((8, 32, 8), (16, 2, 2), (32, 2, 2), (64, 2, 2), (64, 2, 2)), drop_prob=0.0, feature_encoder__mode='default', feature_encoder__conv_bias=False, activation=<class 'torch.nn.modules.activation.GELU'>, pos_encoder__spat_dim=30, pos_encoder__time_dim=34, pos_encoder__sfreq_features=1.0, pos_encoder__spat_kwargs=None, transformer__d_model=64, transformer__num_encoder_layers=8, transformer__num_decoder_layers=4, transformer__nhead=8, _init_feature_encoder=True)[source]#
Pre-local downstream architecture introduced in signal-JEPA (Guetschel et al., 2024) [1].
This architecture is one of the variants of SignalJEPA that can be used for classification purposes.
Added in version 0.9.
- Parameters:
 n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_chans (int) – Number of EEG channels.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
n_times (int) – Number of time samples of the input window.
input_window_seconds (float) – Length of the input window in seconds.
sfreq (float) – Sampling frequency of the EEG recordings.
n_spat_filters (int) – Number of spatial filters.
feature_encoder__conv_layers_spec (list of tuple) – Tuples have shape (dim, k, stride), where:
- dim: number of output channels of the layer (unrelated to EEG channels);
- k: temporal length of the layer’s kernel;
- stride: temporal stride of the layer’s kernel.
drop_prob (float) – Dropout probability.
feature_encoder__mode (str) – Normalisation mode. Either default or layer_norm.
feature_encoder__conv_bias (bool) – Whether the feature encoder’s convolutional layers use a bias term.
activation (nn.Module) – Activation layer for the feature encoder.
pos_encoder__spat_dim (int) – Number of dimensions to use to encode the spatial position of the patch, i.e. the EEG channel.
pos_encoder__time_dim (int) – Number of dimensions to use to encode the temporal position of the patch.
pos_encoder__sfreq_features (float) – The “downsampled” sampling frequency returned by the feature encoder.
pos_encoder__spat_kwargs (dict) – Additional keyword arguments to pass to the nn.Embedding layer used to embed the channel names.
transformer__d_model (int) – The number of expected features in the encoder/decoder inputs.
transformer__num_encoder_layers (int) – The number of encoder layers in the transformer.
transformer__num_decoder_layers (int) – The number of decoder layers in the transformer.
transformer__nhead (int) – The number of heads in the multi-head attention models.
_init_feature_encoder (bool) – Do not change the default value (used for internal purposes).
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
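As a minimal sketch of instantiation and inference (the shapes and hyperparameters below are illustrative, not prescriptive): with the default conv_layers_spec, the temporal strides multiply to 8 · 2 · 2 · 2 · 2 = 128, so with a 128 Hz input the feature encoder produces features at roughly 1 Hz, consistent with the pos_encoder__sfreq_features default of 1.0.

```python
import torch
from braindecode.models import SignalJEPA_PreLocal

# Illustrative values: 22 EEG channels, 4 classes, 2-second windows at 128 Hz.
# Depending on the braindecode version, chs_info (MNE-style channel metadata)
# may also be needed to build the spatial positional encoder.
model = SignalJEPA_PreLocal(
    n_outputs=4,
    n_chans=22,
    n_times=256,
    sfreq=128.0,  # input_window_seconds can then be inferred (see note above)
)

X = torch.randn(8, 22, 256)  # (batch_size, n_chans, n_times)
logits = model(X)            # expected shape: (8, 4)
```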
Hugging Face Hub integration
When the optional huggingface_hub package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:

```
pip install braindecode[hug]
```
Pushing a model to the Hub:
```python
from braindecode.models import EEGNetv4

# Train your model
model = EEGNetv4(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-eegnet-model",
    commit_message="Initial model upload",
)
```
Loading a model from the Hub:
```python
from braindecode.models import EEGNetv4

# Load pretrained model
model = EEGNetv4.from_pretrained("username/my-eegnet-model")
```
The integration automatically handles EEG-specific parameters (n_chans, n_times, sfreq, chs_info, etc.) by saving them in a config file alongside the model weights. This ensures that loaded models are correctly configured for their original data specifications.
Important
Currently, only EEG-specific parameters (n_outputs, n_chans, n_times, input_window_seconds, sfreq, chs_info) are saved to the Hub. Model-specific parameters (e.g., dropout rates, activation functions, number of filters) are not preserved and will use their default values when loading from the Hub.
To use non-default model parameters, specify them explicitly when calling from_pretrained():

```python
from torch import nn
from braindecode.models import EEGNetv4

model = EEGNetv4.from_pretrained("user/model", drop_prob=0.3, activation=nn.ReLU)
```
Full parameter serialization will be addressed in a future update.
References
[1] Guetschel, P., Moreau, T., & Tangermann, M. (2024). S-JEPA: towards seamless cross-dataset transfer through dynamic spatial attention. In 9th Graz Brain-Computer Interface Conference. https://doi.org/10.3217/978-3-99161-014-4-003
Methods
- forward(X)[source]#
 Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters:
X – Input batch of EEG windows, of shape (batch_size, n_chans, n_times).
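As a quick illustration of the note above (X here is assumed to be a batched EEG tensor, e.g. of shape (batch_size, n_chans, n_times)):

```python
out = model(X)  # preferred: runs registered hooks, then calls forward(X)
# out = model.forward(X)  # discouraged: silently skips the hooks
```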
- classmethod from_pretrained(model=None, n_outputs=None, n_spat_filters=4, **kwargs)[source]#
Instantiate a new model from a pre-trained SignalJEPA model or from the Hub.
- Parameters:
model (SignalJEPA, str, Path, or None) – Either a pre-trained SignalJEPA model, a string/Path to a local directory (for Hub-style loading), or None (for Hub loading via kwargs).
n_outputs (int or None) – Number of classes for the new model. Required when loading from a SignalJEPA model; optional when loading from the Hub (it will be read from the config).
n_spat_filters (int) – Number of spatial filters.
**kwargs – Additional keyword arguments passed to the parent class for Hub loading.
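A hedged sketch of both loading paths, assuming a SignalJEPA backbone has already been pretrained (the constructor arguments and repo id shown are illustrative, not prescriptive):

```python
from braindecode.models import SignalJEPA, SignalJEPA_PreLocal

# Path 1: re-use a pre-trained SignalJEPA model. Here `backbone` stands in
# for a model pretrained elsewhere with the JEPA objective.
backbone = SignalJEPA(n_chans=22, n_times=256, sfreq=128.0)
clf = SignalJEPA_PreLocal.from_pretrained(backbone, n_outputs=4)

# Path 2: load from the Hugging Face Hub (hypothetical repo id);
# n_outputs is then read from the saved config.
clf = SignalJEPA_PreLocal.from_pretrained("username/my-sjepa-model")
```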