braindecode.models.LUNA#
- class braindecode.models.LUNA(n_outputs=None, n_chans=None, n_times=None, sfreq=None, chs_info=None, input_window_seconds=None, patch_size=40, num_queries=4, embed_dim=64, depth=8, num_heads=2, mlp_ratio=4.0, norm_layer=<class 'torch.nn.modules.normalization.LayerNorm'>, drop_path=0.0, drop_prob_chan=0.0, attn_drop=0.0, activation=<class 'torch.nn.modules.activation.GELU'>)[source]#
LUNA from Döner et al. [LUNA].
LUNA is a topology-invariant EEG model that processes signals from varying numbers of channels using a channel-unification mechanism with learned queries.
The architecture consists of:
1. Patch Feature Extraction (temporal CNN + FFT-based features)
2. Channel-Unification Module (cross-attention with learned queries)
3. Patch-wise Temporal Encoder (RoPE-based transformer)
4. Decoder Heads (classification or reconstruction)
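A minimal usage sketch (illustrative, not part of the original documentation): braindecode models expect input of shape (batch, n_chans, n_times), and for classification the output is assumed to have shape (batch, n_outputs). The values below correspond to the Base configuration defaults.
import torch
from braindecode.models import LUNA

# Base configuration: patch_size=40, num_queries=4, embed_dim=64, depth=8
model = LUNA(n_outputs=4, n_chans=22, n_times=1000, sfreq=250)

x = torch.randn(8, 22, 1000)  # (batch, n_chans, n_times)
y = model(x)                  # expected shape: (8, 4)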
- Parameters:
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_chans (int) – Number of EEG channels.
n_times (int) – Number of time samples of the input window.
sfreq (float) – Sampling frequency of the EEG recordings.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
patch_size (int) – Number of time samples per patch. Default: 40.
num_queries (int) – Number of learned queries for channel unification. Paper uses: 4 (Base), 6 (Large), 8 (Huge). Default: 4.
embed_dim (int) – Embedding dimension for patch features. Paper uses: 64 (Base), 96 (Large), 128 (Huge). Default: 64.
depth (int) – Number of transformer encoder blocks. Paper uses: 8 (Base), 10 (Large), 24 (Huge). Default: 8.
num_heads (int) – Number of attention heads in channel unification. Default: 2.
mlp_ratio (float) – Ratio of MLP hidden dimension to embedding dimension. Default: 4.0.
norm_layer (nn.Module) – Normalization layer class. Default: nn.LayerNorm.
drop_path (float) – Stochastic depth rate. Default: 0.0.
drop_prob_chan (float) – Dropout probability applied to channels. Default: 0.0.
attn_drop (float) – Dropout rate applied to the attention weights. Default: 0.0.
activation (nn.Module) – Activation function class. Default: nn.GELU.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
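For example, a hedged sketch of that inference, assuming the usual braindecode convention that n_times is derived as input_window_seconds * sfreq:
from braindecode.models import LUNA

# n_times is not given explicitly; it can be inferred from
# input_window_seconds and sfreq (4 s * 250 Hz = 1000 samples).
model = LUNA(
    n_outputs=2,
    n_chans=32,
    input_window_seconds=4.0,
    sfreq=250,
)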
Hugging Face Hub integration
When the optional huggingface_hub package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:
pip install braindecode[hug]
Pushing a model to the Hub:
from braindecode.models import EEGNetv4

# Train your model
model = EEGNetv4(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-eegnet-model",
    commit_message="Initial model upload",
)
Loading a model from the Hub:
from braindecode.models import EEGNetv4

# Load pretrained model
model = EEGNetv4.from_pretrained("username/my-eegnet-model")
The integration automatically handles EEG-specific parameters (n_chans, n_times, sfreq, chs_info, etc.) by saving them in a config file alongside the model weights. This ensures that loaded models are correctly configured for their original data specifications.
Important
Currently, only EEG-specific parameters (n_outputs, n_chans, n_times, input_window_seconds, sfreq, chs_info) are saved to the Hub. Model-specific parameters (e.g., dropout rates, activation functions, number of filters) are not preserved and will use their default values when loading from the Hub.
To use non-default model parameters, specify them explicitly when calling from_pretrained():
model = EEGNetv4.from_pretrained("username/my-eegnet-model", drop_prob=0.3, activation='relu')
Full parameter serialization will be addressed in a future update.
References
[LUNA] Döner, B., Ingolfsson, T. M., Benini, L., & Li, Y. (2025). LUNA: Efficient and Topology-Agnostic Foundation Model for EEG Signal Analysis. The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS). Retrieved from https://openreview.net/forum?id=uazfjnFL0G
Methods
- build_channel_location_template(num_channels)[source]#
Build channel location template for the model.
Attempts to extract channel locations from chs_info. Falls back to a default linear spacing along the x-axis if real locations are unavailable.
- Parameters:
num_channels (int) – Number of channels to generate locations for.
- Returns:
Tensor of shape (num_channels, 3) with channel locations in 3D space.
- Return type:
torch.Tensor
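As an illustration of the documented fallback (a hypothetical sketch, not the actual implementation), a default linear spacing along the x-axis could be constructed as follows:
import torch

def linear_location_template(num_channels: int) -> torch.Tensor:
    # Hypothetical fallback: evenly spaced x-coordinates, zero y/z.
    locations = torch.zeros(num_channels, 3)
    locations[:, 0] = torch.linspace(-1.0, 1.0, num_channels)
    return locations  # shape: (num_channels, 3)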