What’s new#

Current 1.5.0 (stable)#

Enhancements#

  • Add braindecode.datasets.BaseConcatDataset.set_target() to swap any per-window metadata column or per-record description field (e.g. a BIDS entity or an extra participants.tsv column) into the dataset’s target y in one call, replacing the manual for ds in concat.datasets: ds.metadata.loc[:, 'target'] = ...; ds.y = ... loop. The method dispatches on the subdataset type: it writes metadata['target'] / ds.y for windowed records, and points target_name at the chosen description field for raw records. By Bruno Aristimunha.
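
    A minimal sketch of the new call on a windowed dataset, assuming the description carries an "age" field (the column name and single-argument call pattern are illustrative):

      from braindecode.datasets import MOABBDataset
      from braindecode.preprocessing import create_windows_from_events

      dataset = MOABBDataset(dataset_name="BNCI2014_001", subject_ids=[1])
      windows = create_windows_from_events(
          dataset, trial_start_offset_samples=0, trial_stop_offset_samples=0
      )

      # Before: manual loop over sub-datasets
      # for ds in windows.datasets:
      #     ds.metadata.loc[:, "target"] = ds.description["age"]
      #     ds.y = ds.metadata["target"].to_list()

      # Now: one call ("age" is a hypothetical description field)
      windows.set_target("age")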

  • Redesign the documentation landing page (docs/index.rst) in a pyhealth.dev-style layout: animated brain → EEG → net hero, fact strip highlighting MOABB / EEGDash interoperability, interactive model-zoo browser sourced from braindecode/models/summary.csv, Hugging Face Hub integration row, and a tutorial-thumbnail carousel. Adds sphinxext-opengraph to docs extras and a SoftwareApplication JSON-LD block. (#1007 by Bruno Aristimunha)

  • Tutorials now train for a few epochs and then load pretrained weights from Hugging Face Hub, so that full training curves and metrics can still be shown. All 9 tutorial checkpoints are published to huggingface.co/braindecode/. The offline training script used to produce the checkpoints is available as a gist: https://gist.github.com/bruAristimunha/27d74c8410fe9d0db258a03f42efa7c6. (#985 by Bruno Aristimunha)

  • Use F.scaled_dot_product_attention in braindecode.modules.MultiHeadAttention, enabling optimized attention kernels (flash-attention on CUDA, memory-efficient backends on other devices). By Léo Burgund and Bruno Aristimunha. (#902)

  • Add an experimental channel interpolation feature: a new braindecode.modules.ChannelInterpolationLayer plus the braindecode.models.InterpolatedModel() class factory, which project arbitrary user channel sets to a model’s canonical set via an MNE-backed (frozen by default) interpolation matrix. Ship pre-built variants braindecode.models.InterpolatedLaBraM, braindecode.models.InterpolatedSignalJEPA, and braindecode.models.InterpolatedBIOT for the corresponding pre-trained models. (#993 by Pierre Guetschel)
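
    A hedged sketch of constructing one of the pre-built variants on an arbitrary channel set; chs_info follows MNE’s info["chs"] structure, and the exact keyword arguments are assumptions:

      import mne
      from braindecode.models import InterpolatedLaBraM

      # Describe the user's (non-canonical) montage via MNE
      info = mne.create_info(["Fp1", "Fp2", "Cz", "Pz"], sfreq=200.0, ch_types="eeg")
      info.set_montage("standard_1005")

      model = InterpolatedLaBraM(
          chs_info=info["chs"],  # arbitrary user channel set
          n_times=800,
          n_outputs=4,
      )
      # The MNE-backed interpolation matrix is frozen by default.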

  • Add a tutorial walking through the experimental Interpolated* family (Loading Pretrained Foundation Models on Arbitrary Channel Sets): failure mode of the vanilla backbones on non-canonical channel sets, one-line fix via braindecode.models.InterpolatedLaBraM / braindecode.models.InterpolatedBIOT / braindecode.models.InterpolatedSignalJEPA, side-by-side visualisation of the name_match vs always interpolation matrices, and the trainable=True flag. (#994 by Pierre Guetschel)

  • Mark deterministic index buffers (braindecode.models.BIOT encoder’s index and braindecode.models.REVE’s position embedding bank) as non-persistent. They are rebuilt from __init__ arguments on every instantiation, so keeping them in state_dict only bloated checkpoints and caused spurious mismatches when n_chans (or the position-bank config) differed between save and load. (#993 by Pierre Guetschel)
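
    A minimal illustration of the pattern (not braindecode’s actual code): a deterministic index rebuilt in __init__ should not live in state_dict.

      import torch
      from torch import nn

      class Encoder(nn.Module):
          def __init__(self, n_chans: int):
              super().__init__()
              index = torch.arange(n_chans, dtype=torch.long)
              # persistent=False keeps the buffer out of state_dict, so
              # checkpoints stay small and load across differing n_chans.
              self.register_buffer("index", index, persistent=False)

      assert "index" not in Encoder(n_chans=64).state_dict()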

  • Add braindecode.visualization interpretability utilities, all built on plain PyTorch autograd with no extra dependencies: saliency(), input_x_gradient(), integrated_gradients(), layer_grad_cam(), project_to_topomap() (thin wrapper around mne.viz.plot_topomap()), and compute_metrics() for quantitative attribution comparison. A new tutorial, Interpretability of EEG Decoders, walks through the full pipeline.
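
    Since these utilities are built on plain autograd, the core of saliency() amounts to a few lines of PyTorch; this is a conceptual sketch, and the actual braindecode signature may differ:

      import torch

      def saliency_map(model, X, target_class):
          """abs(d score[target_class] / d input), one map per sample."""
          model.eval()
          X = X.detach().clone().requires_grad_(True)
          scores = model(X)                    # (batch, n_classes)
          scores[:, target_class].sum().backward()
          return X.grad.detach().abs()         # (batch, n_chans, n_times)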

  • Add braindecode.models.MetaNeuromotorHand, a port of the handwriting decoder from Meta / CTRL-labs’ generic neuromotor interface (Kaifosh, Reardon et al., Nature 2025). The model takes raw 16-channel surface EMG from the wristband at 2 kHz and produces per-token scores for CTC decoding of handwritten text. The pipeline is a fixed multivariate power frequency (MPF) featurizer (channel-wise STFT, cross-spectral density, frequency-band averaging, and SPD matrix logarithm) followed by a circular rotation-invariant MLP and a 15-block causal conformer encoder. Meta’s pretrained checkpoint loads directly via load_state_dict (after stripping the network. prefix); the port is bit-exact to the upstream reference implementation on real sEMG. Distributed under CC BY-NC 4.0 to match the upstream repository; see the class docstring for the license warning and the pretrained-checkpoint loading recipe. By Bruno Aristimunha.

  • Add braindecode.models.EMG2QwertyNet, a port of the TDS-Conv-CTC touch-typing decoder from facebookresearch/emg2qwerty (Sivakumar et al., NeurIPS 2024). The model takes raw 32-channel surface EMG (two 16-electrode wristbands at 2 kHz) and emits per-frame scores over a 99-class typing vocabulary (98 keys + CTC blank); pass log_softmax=True to get log-probabilities directly consumable by CTCLoss. The pipeline is a parameter-free log-spectrogram front-end, per-electrode-per-band BatchNorm, a circular rotation-invariant MLP (one per band), and a stack of Time-Depth-Separable convolutional blocks (Hannun et al., 2019) without temporal padding. The encoder nn.Sequential mirrors upstream’s TDSConvCTCModule.model indices for parameter-bearing children, so upstream emg2qwerty checkpoints load directly via load_state_dict (after stripping the PyTorch-Lightning network. prefix); the class-level mapping only remaps the head from model.4.{weight,bias} to final_layer.{weight,bias}. Distributed under CC BY-NC 4.0 to match the upstream repository; see the class docstring for the license warning and the pretrained-checkpoint loading recipe. By Bruno Aristimunha.
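
    A hedged loading recipe for both EMG ports above, per the entries’ instruction to strip the Lightning "network." prefix (the checkpoint path and constructor arguments are illustrative):

      import torch
      from braindecode.models import EMG2QwertyNet

      ckpt = torch.load("emg2qwerty_checkpoint.ckpt", map_location="cpu")
      state = {k.removeprefix("network."): v for k, v in ckpt["state_dict"].items()}

      model = EMG2QwertyNet()  # constructor arguments omitted for brevity
      model.load_state_dict(state)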

API and behavior changes#

  • braindecode.modules.MultiHeadAttention now follows PyTorch’s SDPA mask convention: boolean masks use True to ignore a position (previously True meant keep). The scaling factor is now 1/sqrt(head_dim) instead of 1/sqrt(emb_size). (#902)
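
    A migration sketch for the new boolean-mask semantics described above; how the mask is passed to the module is not shown here, but masks written against the old convention simply need inverting:

      import torch

      old_mask = torch.tensor([[False, False, True, True]])  # old: True = keep
      new_mask = ~old_mask                                    # new: True = ignore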

  • braindecode.models.BENDR: remove the n_chans_pretrained / chan_proj_max_norm parameters and the channel_projection layer; hard-code the 20 pre-training channels as _BENDR_TARGET_CHS_TUPLES. The official braindecode/braindecode-bendr checkpoint has been re-uploaded flat so from_pretrained now loads its 99 weights (previously 0 of 99 matched silently). Also ships braindecode.models.InterpolatedBENDR, the InterpolatedModel() wrapper that accepts arbitrary user chs_info and projects to the canonical 20 BENDR channels (the SCALE target has no physical position, so its interpolation row is a spatial spline of the user’s EEG — not the dn3 amplitude statistic). (#992 by Pierre Guetschel)

  • braindecode.models.Labram now requires chs_info to match LABRAM_CHANNEL_ORDER exactly (128 channels, canonical order). The on_unknown_chs parameter and the forward-time ch_names argument are removed. Users with arbitrary channel sets should migrate to braindecode.models.InterpolatedLaBraM. (#993 by Pierre Guetschel)

Bug fixes#

  • Fix swapped parameter initialization in braindecode.models.SyncNet, where phi_ini (phase shift) used beta_init_values and beta (decay) used phase_init_values; replace the incorrect .view() reshape with .permute() for the proper conv2d filter weight layout; and fix duplicate default values in the docstring (by Sarthak Tayal)

  • Remove a redundant super().__init__() call in braindecode.models.AttentionBaseNet that ran the parent nn.Module.__init__ twice (by Sarthak Tayal)

  • Fix incomplete author email in braindecode.models.TSception header (by Sarthak Tayal)

  • Fix a time-of-check-time-of-use race in braindecode.datasets.base._zarr_to_memmap() that caused concurrent workers to repeatedly rename-replace the published .npy cache, producing wasted I/O on local filesystems and .nfsXXXX silly-rename files plus SIGBUS crashes on NFSv3. The published file is now created exactly once via os.link and is never replaced, making the cache safe under arbitrary concurrent access on local POSIX, NFSv3, Lustre and SMB (#986 by Pierre Guetschel)
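
    A simplified sketch of the publish-once pattern (not the actual braindecode code): os.link atomically creates the destination and raises FileExistsError if another worker already published the file, so the cache is never replaced.

      import os

      def publish_once(tmp_path: str, final_path: str) -> None:
          try:
              os.link(tmp_path, final_path)  # atomic create, never replaces
          except FileExistsError:
              pass                           # another worker won the race
          finally:
              os.unlink(tmp_path)            # drop the private temporary copy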

  • Register braindecode.models.BIOT encoder index as a non-trainable buffer instead of a parameter (torch.long), so it is treated as module state rather than trainable weights (#988 by Pierre Guetschel)

  • Fix TypeError: type 'Any' is not subscriptable when importing braindecode.models.config without numpydantic installed on Python 3.12+ (#871 by Sarthak Tayal)

  • Fix braindecode.preprocessing.create_fixed_length_windows() crashing when only window_size_samples was provided without window_stride_samples; the stride now defaults to the window size, as documented (#990 by Sarthak Tayal)

  • Add a channel_embedding parameter to braindecode.models.SignalJEPA and braindecode.models.SignalJEPA_Contextual to load pre-trained channel embedding weights when fine-tuning on a subset of the pre-training channels. Two new Hugging Face checkpoints are published: braindecode/signal-jepa and braindecode/signal-jepa_without-chans (#991 by Pierre Guetschel)

  • Bump openneuro-py to >=2026.4.0 so the docs build picks up upstream PR #308 (DatasetFile.keyid). The previous <2026.4 pin (#1000) avoided a libc double-free seen with newer releases but broke examples/datasets_io/plot_bids_dataset_example.py against the live OpenNeuro 5.0.0 GraphQL schema (Cannot query field "key" on type "DatasetFile") (#1002 by Bruno Aristimunha)

  • Retry transient TLS failures from physionet.org when fetching braindecode.datasets.SleepPhysionet (by Bruno Aristimunha)

Version 1.4.0#

Enhancements#

API changes#

  • Add braindecode.models.base.EEGModuleMixin.get_config() and braindecode.models.base.EEGModuleMixin.from_config() to all models, enabling full JSON round-trip serialization and reconstruction of any model including all __init__ parameters (by Bruno Aristimunha)
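
    A hedged sketch of the JSON round trip (method names per this entry; the exact call pattern is an assumption):

      import json
      from braindecode.models import ShallowFBCSPNet

      model = ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000)
      blob = json.dumps(model.get_config())        # all __init__ parameters
      restored = ShallowFBCSPNet.from_config(json.loads(blob))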

  • push_to_hub() now saves all model parameters to config.json (previously only 6 EEG-specific parameters were saved; model-specific parameters like F1, D, drop_prob were lost on reload) (by Bruno Aristimunha)

  • Add braindecode.modules.Square activation module and update braindecode.models.ShallowFBCSPNet to use type[nn.Module] for conv_nonlin (backward-compatible with callable) (by Bruno Aristimunha)

  • Replace LazyLinear with Linear in braindecode.models.CBraMod when input dimensions are known, improving Hub round-trip compatibility (by Bruno Aristimunha)

Requirements#

  • Relax the PyTorch requirement to >=2.0 to support Intel-based Macs (by GalAshkenazi1)

Bugs#

  • Improve the error message when from_pretrained() or push_to_hub() are called without the optional huggingface_hub dependency installed. Users now get a clear ImportError with installation instructions (pip install 'braindecode[hub]') instead of an AttributeError. (#1024 by @copilot)

  • Fix the documentation header “Cite Braindecode” announcement link: it used a bare cite.html URL, which browsers resolve relative to the current page path and led to 404s (for example from install/install.html). The link is now built with Sphinx’s pathto() for each page so it always targets the cite page correctly.

  • Fix braindecode.models.EEGITNet state dict mapping that pointed bias to the weight key and referenced a nonexistent submodule path, and fix the third inception branch, which used the wrong variable for its kernel length (by Sarthak Tayal)

  • Fix braindecode.models.EEGInceptionMI state dict mapping typo where the old key was tc.bias instead of fc.bias (by Sarthak Tayal)

  • Fix multi-target channel windowing in braindecode.preprocessing.windowers.create_windows_from_target_channels() to use the union of valid target positions across all misc channels instead of only the first channel (by Sarthak Tayal)

  • Fix braindecode.preprocessing.preprocess.filterbank() to preserve info fields (description, line_freq, device_info, etc.) when creating filtered copies, avoiding merge conflicts in MNE when adding channels (#928 by Bruno Aristimunha)

  • [Outdated] Restrict to pandas<3.0 due to an incompatibility with wfdb (#919 by Pierre Guetschel)

  • Fix multiple bugs in braindecode.models.Labram positional encoding. The braindecode implementation is now aligned with the original one (#931 by Pierre Guetschel)

  • Fix Zenodo citation: update to global concept DOI and add BibTeX/APA citation formats in docs/cite.rst, README.rst, CITATION.cff, and docs/conf.py (#937 by Bruno Aristimunha)

  • Fix channel reduction in braindecode.modules.SqueezeAndExcitation to avoid runtime shape mismatches when the reduced channel count differs from the reduction rate (#889 by Sarthak Tayal)

  • Push large datasets to the Hugging Face Hub using huggingface_hub.upload_large_folder() to avoid upload limitations on large folders, and allow resuming downloads (#945 and #953 by Pierre Guetschel)

  • Fix braindecode.models.LUNA channel location embeddings being repeated along the batch dimension instead of the patch dimension in prepare_tokens, and add a mapping for a pretrained weight name typo to self.mapping (#887 by Sarthak Tayal)

  • Fix temporal generalization tutorial producing degraded results (peak AUC dropped from ~0.9 to ~0.75): MEG data in SI units (T/m) has variances ~1e-23, so BatchNorm1d’s eps=1e-5 dominated the normalization denominator. Now uses epochs.get_data(units="fT/cm") to bring data to a reasonable scale, and removes the misleading “importance of normalization” section whose conclusions were an artifact of the data scale issue (by Bruno Aristimunha)
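
    The arithmetic behind the fix: with variances around 1e-23, BatchNorm’s denominator sqrt(var + eps) is set entirely by eps, so the normalized outputs collapse toward zero.

      import math

      var_si = 1e-23                  # variance of MEG gradiometer data in T/m
      eps = 1e-5                      # BatchNorm1d default
      print(math.sqrt(var_si + eps))  # ~3.16e-3, determined by eps alone

      # 1 T/m = 1e13 fT/cm, so units="fT/cm" scales the data by 1e13 and the
      # variance by 1e26: var ~1e3 >> eps, and normalization works as intended.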

  • Fix the braindecode.augmentation.BandstopFilter notch center frequency range to use bandwidth/2 instead of 2*bandwidth, matching the docstring (#548 by Sarthak Tayal)

  • Fix braindecode.models.DeepSleepNet hardcoded linear layer size that caused a shape mismatch when using input shapes other than the default 1 channel, 3000 timepoints. The FC and BiLSTM input dimensions are now computed dynamically from the CNN output (#755 by Sarthak Tayal)

  • Fix model docstring inheritance: track_model_init_kwargs wrapped __init__ with @wraps before the NumpyDocstringInheritanceInitMeta metaclass ran, causing inspect.unwrap() to bypass the wrapper and read __doc__=None. This replaced every model’s description with the parent mixin’s and marked all model-specific parameters as “The description is missing” when DOCSTRING_INHERITANCE_ENABLE=1 was set during documentation builds (#971 by Bruno Aristimunha)

  • Expose max_nbytes directly on braindecode.preprocessing.preprocess.preprocess() and turn the cryptic “mmap can't resize a readonly” failure (raised when joblib memory-maps a preloaded array that a preprocessor later tries to resize) into an actionable error explaining how to fix it: pass max_nbytes=None to disable memory mapping, or supply a save_dir so data is reloaded with preload=False (#325 by Sarthak Tayal)
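
    Per this entry, a minimal usage sketch (the dataset and preprocessor choices are illustrative):

      from braindecode.datasets import MOABBDataset
      from braindecode.preprocessing import Preprocessor, preprocess

      concat_ds = MOABBDataset(dataset_name="BNCI2014_001", subject_ids=[1])
      preprocessors = [Preprocessor("resample", sfreq=100)]

      # max_nbytes=None disables joblib memory mapping; alternatively pass
      # save_dir=... so data is reloaded with preload=False.
      preprocess(concat_ds, preprocessors, n_jobs=2, max_nbytes=None)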

Code health#

  • Reorder model categories in documentation to follow the progression: Convolution, Filterbank, Interpretability, Recurrent, Attention/Transformer, SPD, Graph Neural Network, Channel, and Foundation Model (#962 by Bruno Aristimunha)

  • Fix documentation build warnings and errors: correct numpydoc section underlines in braindecode.models.EEGSym and braindecode.models.SSTDPN, strip upstream .. rubric:: directives from MNE and MOABB docstrings that caused Sphinx errors, fix RST title levels in whats_new.rst, correct bibtex key for EEGPT, and ensure conf.py prioritises the local package on sys.path (by Bruno Aristimunha)

  • Remove the deprecated torch.irfft fallback in braindecode.visualization.gradients.compute_amplitude_gradients_for_X(); torch.fft.irfft is now used directly, as it is available in all PyTorch versions braindecode supports (by Sarthak Tayal)

Version 1.3.2#

Enhancements#

API changes#

  • BIDS and Hub modules moved to braindecode.datasets.bids subpackage: braindecode.datasets.bids.hub, braindecode.datasets.bids.hub_format, braindecode.datasets.bids.datasets, braindecode.datasets.bids.hub_validation (#871 by Bruno Aristimunha)

  • Deprecate the old MOABB dataset names (#826 by Bruno Aristimunha)

  • Expose the braindecode.datautil.infer_signal_properties() utility function (#856 by Pierre Guetschel)

  • Drop support for Python 3.10 and add support for Python 3.13 and Python 3.14 (#840 by Bruno Aristimunha)

  • Model config helpers now soft-import pydantic/numpydantic; if the optional dependencies are missing, the module skips config generation and warns users to install them via pip install 'braindecode[pydantic]'.

Bugs#

Version 1.2#

Enhancements#

API changes#

  • Use the models’ original names from the source publications and deprecate the aliases that braindecode had created without need (#775 by Bruno Aristimunha)

  • Deprecate the versioned name braindecode.models.EEGNetv4 in favour of braindecode.models.EEGNet.

  • Deprecate braindecode.models.SleepStagerEldele2021 in favour of braindecode.models.AttnSleep.

  • Deprecate braindecode.models.TSceptionV1 in favour of braindecode.models.TSception.

Version 1.1.1#

Enhancements#

  • Massive refactor of the model webpage

Bugs#

Version 1.0#

Enhancements#

Bugs#

API changes#

Version 0.8 (11-2022)#

Enhancements#

Bugs#

API changes#

Version 0.7 (10-2022)#

Enhancements#

Bugs#

API changes#

  • Renaming the method get_params to get_augmentation_params in augmentation classes. This makes the Transform module compatible with the scikit-learn cloning mechanism (#388 by Bruno Aristimunha and Alex Gramfort)

  • Delaying the deprecation of the preprocessing scale function braindecode.preprocessing.scale() and updating the tutorials where the function was used. (#413 by Bruno Aristimunha)

  • Removing deprecated functions and classes braindecode.preprocessing.zscore(), braindecode.datautil.MNEPreproc and braindecode.datautil.NumpyPreproc (#415 by Bruno Aristimunha)

  • Setting iterator_train__drop_last=True by default for braindecode.EEGClassifier and braindecode.EEGRegressor (#411 by Robin Tibor Schirrmeister)

Version 0.6 (2021-12-06)#

Enhancements#

Bugs#

API changes#

Version 0.5.1 (2021-07-14)#

Enhancements#

Bugs#

API changes#

  • Preprocessor classes braindecode.datautil.MNEPreproc and braindecode.datautil.NumpyPreproc are deprecated in favor of braindecode.datautil.Preprocessor (#197 by Hubert Banville)

  • Parameter stop_offset_samples of braindecode.datautil.create_fixed_length_windows() must now be set to None instead of 0 to indicate the end of the recording (#152 by Hubert Banville)

Authors#