braindecode.models.LUNA#
- class braindecode.models.LUNA(n_outputs=None, n_chans=None, n_times=None, sfreq=None, chs_info=None, input_window_seconds=None, patch_size=40, num_queries=4, embed_dim=64, depth=8, num_heads=2, mlp_ratio=4.0, norm_layer=<class 'torch.nn.modules.normalization.LayerNorm'>, drop_path=0.0, drop_prob_chan=0.0, attn_drop=0.0, activation=<class 'torch.nn.modules.activation.GELU'>)[source]#
LUNA from Döner et al. [LUNA].
LUNA is a topology-invariant EEG model that processes signals from varying numbers of channels using a channel-unification mechanism with learned queries.
The architecture consists of:

1. Patch Feature Extraction (temporal CNN + FFT-based features)
2. Channel-Unification Module (cross-attention with learned queries)
3. Patch-wise Temporal Encoder (RoPE-based transformer)
4. Decoder Heads (classification or reconstruction)
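The channel-unification step is what decouples the transformer's sequence length from the montage. The bookkeeping below is a rough sketch of that mechanism, not library code; the Base hyperparameters and the 22-channel, 1000-sample input window are illustrative assumptions:

```python
# Hypothetical shape bookkeeping for LUNA's pipeline (Base config assumed).
n_chans, n_times = 22, 1000
patch_size, num_queries = 40, 4

# 1. Patch feature extraction splits each channel into temporal patches.
num_patches = n_times // patch_size  # 1000 // 40 = 25 patches per channel

# 2. Channel unification: cross-attention with learned queries compresses the
#    channel axis from n_chans down to a fixed number of queries, so the
#    sequence seen by the temporal encoder no longer depends on n_chans.
tokens_in = n_chans * num_patches       # 22 * 25 = 550 channel-patch tokens
tokens_out = num_queries * num_patches  # 4 * 25 = 100 query-patch tokens
```

Because `tokens_out` depends only on `num_queries` and `num_patches`, the same encoder can process recordings with any number of channels.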
Important
Pre-trained Weights Available
This model has pre-trained weights available on the Hugging Face Hub at PulpBio/LUNA.
Available model variants:
LUNA_base.safetensors - Base model (embed_dim=64, num_queries=4, depth=8)
LUNA_large.safetensors - Large model (embed_dim=96, num_queries=6, depth=10)
LUNA_huge.safetensors - Huge model (embed_dim=128, num_queries=8, depth=24)
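The three variants differ only in `embed_dim`, `num_queries`, and `depth`. As a convenience sketch (the lookup table and helper below are illustrative, not part of the braindecode API), the values above can be collected into a dict so the matching constructor kwargs are passed to `from_pretrained`:

```python
# Variant hyperparameters as listed in the docs; this helper is a sketch only.
LUNA_VARIANTS = {
    "base": dict(embed_dim=64, num_queries=4, depth=8),
    "large": dict(embed_dim=96, num_queries=6, depth=10),
    "huge": dict(embed_dim=128, num_queries=8, depth=24),
}

def variant_kwargs(name):
    """Return a copy of the constructor kwargs for a named LUNA variant."""
    return dict(LUNA_VARIANTS[name])

# Usage sketch:
# model = LUNA.from_pretrained(
#     "PulpBio/LUNA", filename="LUNA_large.safetensors",
#     n_outputs=2, n_chans=22, n_times=1000, **variant_kwargs("large"),
# )
```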
Example loading for fine-tuning:
    from braindecode.models import LUNA

    # Load pre-trained base model from Hugging Face Hub
    model = LUNA.from_pretrained(
        "PulpBio/LUNA",
        filename="LUNA_base.safetensors",
        n_outputs=2,
        n_chans=22,
        n_times=1000,
        embed_dim=64,
        num_queries=4,
        depth=8,
    )
To push your own trained model to the Hub:
    # After training your model
    model.push_to_hub(
        repo_id="username/my-luna-model",
        commit_message="Upload trained LUNA model",
    )
Requires installing braindecode[hub] for Hub integration.

- Parameters:
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
n_chans (int) – Number of EEG channels.
n_times (int) – Number of time samples of the input window.
sfreq (float) – Sampling frequency of the EEG recordings.
chs_info (list of dict) – Information about each individual EEG channel. This should be filled with info["chs"]. Refer to mne.Info for more details.
input_window_seconds (float) – Length of the input window in seconds.
patch_size (int) – Number of time samples per patch. Default: 40.
num_queries (int) – Number of learned queries for channel unification. Paper uses: 4 (Base), 6 (Large), 8 (Huge). Default: 4.
embed_dim (int) – Embedding dimension for patch features. Paper uses: 64 (Base), 96 (Large), 128 (Huge). Default: 64.
depth (int) – Number of transformer encoder blocks. Paper uses: 8 (Base), 10 (Large), 24 (Huge). Default: 8.
num_heads (int) – Number of attention heads in channel unification. Default: 2.
mlp_ratio (float) – Ratio of MLP hidden dimension to embedding dimension. Default: 4.0.
norm_layer (nn.Module) – Normalization layer class. Default: nn.LayerNorm.
drop_path (float) – Stochastic depth rate. Default: 0.0.
drop_prob_chan (float) – Channel dropout probability. Default: 0.0.
attn_drop (float) – Dropout rate applied to the attention weights. Default: 0.0.
activation (nn.Module) – Activation function class. Default: nn.GELU.
- Raises:
ValueError – If some input signal-related parameters are not specified and cannot be inferred.
Notes
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
References
[LUNA] Döner, B., Ingolfsson, T. M., Benini, L., & Li, Y. (2025). LUNA: Efficient and Topology-Agnostic Foundation Model for EEG Signal Analysis. The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS). Retrieved from https://openreview.net/forum?id=uazfjnFL0G
Hugging Face Hub integration
When the optional huggingface_hub package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:

    pip install braindecode[hub]
Pushing a model to the Hub:
    from braindecode.models import LUNA

    # Train your model
    model = LUNA(n_chans=22, n_outputs=4, n_times=1000)
    # ... training code ...

    # Push to the Hub
    model.push_to_hub(
        repo_id="username/my-luna-model",
        commit_message="Initial model upload",
    )
Loading a model from the Hub:
    from braindecode.models import LUNA

    # Load pretrained model
    model = LUNA.from_pretrained("username/my-luna-model")

    # Load with a different number of outputs (head is rebuilt automatically)
    model = LUNA.from_pretrained("username/my-luna-model", n_outputs=4)
Extracting features and replacing the head:
    import torch

    x = torch.randn(1, model.n_chans, model.n_times)

    # Extract encoder features (consistent dict across all models)
    out = model(x, return_features=True)
    features = out["features"]

    # Replace the classification head
    model.reset_head(n_outputs=10)
Saving and restoring full configuration:
    import json

    config = model.get_config()  # all __init__ params
    with open("config.json", "w") as f:
        json.dump(config, f)

    model2 = LUNA.from_config(config)  # reconstruct (no weights)
All model parameters (both EEG-specific and model-specific such as dropout rates, activation functions, number of filters) are automatically saved to the Hub and restored when loading.
See Loading and Adapting Pretrained Foundation Models for a complete tutorial.
Methods
- build_channel_location_template(num_channels)[source]#
Build channel location template for the model.
Attempts to extract channel locations from chs_info. Falls back to a default linear spacing along the x-axis if real locations are unavailable.
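The linear-spacing fallback can be pictured as follows. This is a plain-Python sketch under the assumption that channels are spread evenly along the x-axis in [0, 1] with y and z zeroed; the actual spacing and coordinate ranges used internally may differ:

```python
def linear_fallback_locations(num_channels):
    """Sketch of the documented fallback: channels evenly spaced along the
    x-axis, with y and z fixed at zero. Returns a list of (x, y, z) triples,
    standing in for the (num_channels, 3) tensor the method returns."""
    if num_channels == 1:
        return [(0.0, 0.0, 0.0)]
    step = 1.0 / (num_channels - 1)
    return [(i * step, 0.0, 0.0) for i in range(num_channels)]

# Four channels spaced from x=0.0 to x=1.0, all on the x-axis.
locs = linear_fallback_locations(4)
```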
- Parameters:
num_channels (int) – Number of channels to generate locations for.
- Returns:
Tensor of shape (num_channels, 3) with channel locations in 3D space.
- Return type:
torch.Tensor