What’s new#

Current 1.5.0 (GitHub)#

Enhancements#

API and behavior changes#

  • braindecode.modules.MultiHeadAttention now follows PyTorch’s SDPA mask convention: boolean masks use True to ignore a position (previously True meant keep). The scaling factor is now 1/sqrt(head_dim) instead of 1/sqrt(emb_size). (#902)

  • braindecode.models.BENDR: removed the n_chans_pretrained / chan_proj_max_norm parameters and the channel_projection layer; the 20 pre-training channels are now hard-coded as _BENDR_TARGET_CHS_TUPLES. The official braindecode/braindecode-bendr checkpoint has been re-uploaded with a flat layout, so from_pretrained now loads all 99 weights (previously 0 of 99 matched, silently). Also adds braindecode.models.InterpolatedBENDR, an InterpolatedModel() wrapper that accepts arbitrary user chs_info and projects it onto the canonical 20 BENDR channels (the SCALE target has no physical position, so its interpolation row is a spatial spline of the user’s EEG rather than the dn3 amplitude statistic). (#992 by Pierre Guetschel)

  • braindecode.models.Labram now requires chs_info to match LABRAM_CHANNEL_ORDER exactly (128 channels, canonical order). The on_unknown_chs parameter and the forward-time ch_names argument are removed. Users with arbitrary channel sets should migrate to braindecode.models.InterpolatedLaBraM. (#993 by Pierre Guetschel)
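
The new MultiHeadAttention mask convention can be illustrated with a minimal numpy sketch (illustrative only, not braindecode’s code): boolean mask entries that are True are ignored, and scores are scaled by 1/sqrt(head_dim).

```python
import numpy as np

def sdpa(q, k, v, mask=None):
    """Minimal scaled dot-product attention sketch (not braindecode's code).

    New convention: boolean mask entries that are True are *ignored*
    (set to -inf before the softmax), and scores are scaled by
    1/sqrt(head_dim) rather than 1/sqrt(emb_size).
    """
    head_dim = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(head_dim)
    if mask is not None:
        scores = np.where(mask, -np.inf, scores)  # True == ignore
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```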

Requirements#

  • None yet

Bug fixes#

  • Fix swapped parameter initialization in braindecode.models.SyncNet, where phi_ini (phase shift) used beta_init_values and beta (decay) used phase_init_values; replace the incorrect .view() reshape with .permute() so the conv2d filter weights get the proper layout; and remove duplicated default values from the docstring (by Sarthak Tayal)

  • Fix a redundant super().__init__() call in braindecode.models.AttentionBaseNet that ran the parent nn.Module.__init__ twice (by Sarthak Tayal)

  • Fix incomplete author email in braindecode.models.TSception header (by Sarthak Tayal)

  • Fix a time-of-check-time-of-use race in braindecode.datasets.base._zarr_to_memmap() that caused concurrent workers to repeatedly rename-replace the published .npy cache, producing wasted I/O on local filesystems and .nfsXXXX silly-rename files plus SIGBUS crashes on NFSv3. The published file is now created exactly once via os.link and is never replaced, making the cache safe under arbitrary concurrent access on local POSIX, NFSv3, Lustre and SMB (#986 by Pierre Guetschel)

  • Register braindecode.models.BIOT encoder index as a non-trainable buffer instead of a parameter (torch.long), so it is treated as module state rather than trainable weights (#988 by Pierre Guetschel)

  • Fix TypeError: type 'Any' is not subscriptable when importing braindecode.models.config without numpydantic installed on Python 3.12+ (#871 by Sarthak Tayal)

  • Add channel_embedding parameter to braindecode.models.SignalJEPA and braindecode.models.SignalJEPA_Contextual to load pre-trained channel embedding weights when fine-tuning on a subset of the pre-training channels. Two new HuggingFace checkpoints are published: braindecode/signal-jepa and braindecode/signal-jepa_without-chans (#991 by Pierre Guetschel)
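
The create-once publishing scheme behind the _zarr_to_memmap() fix above can be sketched with the standard library alone (an illustrative sketch; the function name and details are hypothetical, not the actual braindecode code): write to a private temporary file, then os.link() it into place, so the published file is created atomically exactly once and never replaced.

```python
import os
import tempfile

def publish_once(path, data: bytes) -> None:
    """Create `path` exactly once, never replacing it (illustrative sketch).

    os.link() is atomic and raises FileExistsError if `path` already exists,
    so concurrent workers race harmlessly: the first link wins and every
    later worker simply discards its temporary file.
    """
    if os.path.exists(path):  # fast path: someone already published it
        return
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        os.write(fd, data)
        os.close(fd)
        try:
            os.link(tmp, path)  # atomic create-once; never overwrites
        except FileExistsError:
            pass  # another worker won the race; keep their copy
    finally:
        os.unlink(tmp)  # the temporary name is always cleaned up
```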

Code health#

  • None yet

Current 1.4.0 (stable)#

Enhancements#

API changes#

  • Add braindecode.models.base.EEGModuleMixin.get_config() and braindecode.models.base.EEGModuleMixin.from_config() to all models, enabling full JSON round-trip serialization and reconstruction of any model including all __init__ parameters (by Bruno Aristimunha)

  • push_to_hub() now saves all model parameters to config.json (previously only 6 EEG-specific parameters were saved; model-specific parameters like F1, D, drop_prob were lost on reload) (by Bruno Aristimunha)

  • Add braindecode.modules.Square activation module and update braindecode.models.ShallowFBCSPNet to use type[nn.Module] for conv_nonlin (backward-compatible with callable) (by Bruno Aristimunha)

  • Replace LazyLinear with Linear in braindecode.models.CBraMod when input dimensions are known, improving Hub round-trip compatibility (by Bruno Aristimunha)
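
The get_config() / from_config() round-trip above can be sketched generically (a hypothetical mixin, not braindecode’s implementation): capture the __init__ arguments at construction time and replay them from JSON.

```python
import inspect
import json

class ConfigMixin:
    """Hypothetical sketch of a get_config()/from_config() round-trip.

    Not braindecode's implementation: it records the arguments passed to
    each subclass's __init__ and replays them from a JSON string.
    """

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        orig_init = cls.__init__

        def init(self, *args, **kw):
            # bind + apply_defaults captures every __init__ parameter
            bound = inspect.signature(orig_init).bind(self, *args, **kw)
            bound.apply_defaults()
            self._init_kwargs = {k: v for k, v in bound.arguments.items()
                                 if k != "self"}
            orig_init(self, *args, **kw)

        cls.__init__ = init

    def get_config(self) -> str:
        return json.dumps(self._init_kwargs)

    @classmethod
    def from_config(cls, config: str):
        return cls(**json.loads(config))

class ToyModel(ConfigMixin):
    def __init__(self, n_chans, n_times=1000, drop_prob=0.5):
        self.n_chans, self.n_times, self.drop_prob = n_chans, n_times, drop_prob
```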

Requirements#

  • Relaxed PyTorch requirement to >=2.0 to support Intel-based Macs (by GalAshkenazi1)

Bugs#

  • Fix the documentation header “Cite Braindecode” announcement link: it used a bare cite.html URL, which browsers resolve relative to the current page path and led to 404s (for example from install/install.html). The link is now built with Sphinx’s pathto() for each page so it always targets the cite page correctly.

  • Fix the braindecode.models.EEGITNet state dict mapping, which pointed bias to the weight key and referenced a nonexistent submodule path, and fix the third inception branch, which used the wrong variable for its kernel length (by Sarthak Tayal)

  • Fix braindecode.models.EEGInceptionMI state dict mapping typo where the old key was tc.bias instead of fc.bias (by Sarthak Tayal)

  • Fix multi-target channel windowing in braindecode.preprocessing.windowers.create_windows_from_target_channels() to use the union of valid target positions across all misc channels instead of only the first channel (by Sarthak Tayal)

  • Fix braindecode.preprocessing.preprocess.filterbank() to preserve info fields (description, line_freq, device_info, etc.) when creating filtered copies, avoiding merge conflicts in MNE when adding channels (#928 by Bruno Aristimunha)

  • [Outdated] Restrict to pandas<3.0 due to an incompatibility with wfdb (#919 by Pierre Guetschel)

  • Fix multiple bugs in Labram positional encoding; the braindecode implementation is now aligned with the original one (#931 by Pierre Guetschel)

  • Fix Zenodo citation: update to global concept DOI and add BibTeX/APA citation formats in docs/cite.rst, README.rst, CITATION.cff, and docs/conf.py (#937 by Bruno Aristimunha)

  • Fix channel reduction in braindecode.modules.SqueezeAndExcitation to avoid runtime shape mismatches when the reduced channel count differs from the reduction rate (#889 by Sarthak Tayal)

  • Push large datasets to the HuggingFace Hub using huggingface_hub.upload_large_folder() to work around upload limitations, and allow resuming downloads (#945 and #953 by Pierre Guetschel)

  • Fix braindecode.models.LUNA channel location embeddings repeated along batch dimension instead of patch dimension in prepare_tokens, and include pretrained weight typo mapping in self.mapping (#887 by Sarthak Tayal)

  • Fix temporal generalization tutorial producing degraded results (peak AUC dropped from ~0.9 to ~0.75): MEG data in SI units (T/m) has variances ~1e-23, so BatchNorm1d’s eps=1e-5 dominated the normalization denominator. Now uses epochs.get_data(units="fT/cm") to bring data to a reasonable scale, and removes the misleading “importance of normalization” section whose conclusions were an artifact of the data scale issue (by Bruno Aristimunha)

  • Fix braindecode.augmentation.BandstopFilter notch center frequency range using bandwidth/2 instead of 2*bandwidth to match docstring (#548 by Sarthak Tayal)

  • Fix braindecode.models.DeepSleepNet hardcoded linear layer size that caused a shape mismatch when using input shapes other than the default 1 channel, 3000 timepoints. The FC and BiLSTM input dimensions are now computed dynamically from the CNN output (#755 by Sarthak Tayal)

  • Fix model docstring inheritance: track_model_init_kwargs wrapped __init__ with @wraps before the NumpyDocstringInheritanceInitMeta metaclass ran, causing inspect.unwrap() to bypass the wrapper and read __doc__=None. This replaced every model’s description with the parent mixin’s and marked all model-specific parameters as “The description is missing” when DOCSTRING_INHERITANCE_ENABLE=1 was set during documentation builds (#971 by Bruno Aristimunha)
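
The BatchNorm epsilon issue behind the temporal generalization fix above comes down to simple arithmetic; a numpy illustration (not the tutorial code), using the conversion 1 T/m = 1e13 fT/cm:

```python
import numpy as np

eps = 1e-5  # BatchNorm1d's default epsilon

rng = np.random.default_rng(42)
x_si = rng.normal(scale=3e-12, size=100_000)  # MEG data in SI units, var ~ 1e-23

# var + eps is dominated by eps, so the "normalized" signal stays near zero
y_si = (x_si - x_si.mean()) / np.sqrt(x_si.var() + eps)

# converting to fT/cm brings the variance to O(100), making eps negligible
# and restoring proper unit-variance normalization
x_ft = x_si * 1e13
y_ft = (x_ft - x_ft.mean()) / np.sqrt(x_ft.var() + eps)

print(f"SI-units output std: {y_si.std():.2e}")  # ~1e-9, far from unit variance
print(f"fT/cm output std:    {y_ft.std():.2f}")  # ~1.00
```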

Code health#

  • Reorder model categories in documentation to follow the progression: Convolution, Filterbank, Interpretability, Recurrent, Attention/Transformer, SPD, Graph Neural Network, Channel, and Foundation Model (#962 by Bruno Aristimunha)

  • Fix documentation build warnings and errors: correct numpydoc section underlines in braindecode.models.EEGSym and braindecode.models.SSTDPN, strip upstream .. rubric:: directives from MNE and MOABB docstrings that caused Sphinx errors, fix RST title levels in whats_new.rst, correct bibtex key for EEGPT, and ensure conf.py prioritises the local package on sys.path (by Bruno Aristimunha)

  • Remove deprecated torch.irfft fallback in braindecode.visualization.gradients.compute_amplitude_gradients_for_X(), now uses torch.fft.irfft directly since braindecode requires torch>=2.2 (by Sarthak Tayal)

Version 1.3.2#

Enhancements#

API changes#

  • BIDS and Hub modules moved to braindecode.datasets.bids subpackage: braindecode.datasets.bids.hub, braindecode.datasets.bids.hub_format, braindecode.datasets.bids.datasets, braindecode.datasets.bids.hub_validation (#871 by Bruno Aristimunha)

  • Deprecate the old MOABB dataset naming (#826 by Bruno Aristimunha)

  • Exposing the braindecode.datautil.infer_signal_properties() utility function (#856 by Pierre Guetschel)

  • Drop support for Python 3.10 and increase support to Python 3.13 and python 3.14 (#840 by Bruno Aristimunha)

  • Model config helpers now soft-import pydantic / numpydantic; when these optional dependencies are missing, the module skips config generation and warns the user to run pip install braindecode[pydantic].
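
That soft-import behaviour can be sketched as follows (a generic pattern; the warning text is illustrative, not braindecode’s exact wording):

```python
import importlib
import warnings

def soft_import(name: str, extra: str):
    """Return the module if importable, else warn and return None (sketch)."""
    try:
        return importlib.import_module(name)
    except ImportError:
        warnings.warn(
            f"Optional dependency '{name}' is not installed; skipping config "
            f"generation. Install it with pip install braindecode[{extra}].",
            stacklevel=2,
        )
        return None

# config generation only runs when the optional stack is present
pydantic = soft_import("pydantic", "pydantic")
numpydantic = soft_import("numpydantic", "pydantic")
config_enabled = pydantic is not None and numpydantic is not None
```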

Bugs#

Version 1.2#

Enhancements#

API changes#

  • Use the models’ original names and deprecate the aliases that were created without need (#775 by Bruno Aristimunha)

  • Deprecated the version name in braindecode.models.EEGNetv4 in favour of braindecode.models.EEGNet.

  • Deprecated the version name in braindecode.models.SleepStagerEldele2021 in favour of braindecode.models.AttnSleep.

  • Deprecated the version name in braindecode.models.TSceptionV1 in favour of braindecode.models.TSception.

Version 1.1.1#

Enhancements#

  • Massive refactor of the model webpage

Bugs#

Version 1.0#

Enhancements#

Bugs#

API changes#

Version 0.8 (11-2022)#

Enhancements#

Bugs#

API changes#

Version 0.7 (10-2022)#

Enhancements#

Bugs#

API changes#

  • Renaming the method get_params to get_augmentation_params in augmentation classes. This makes the Transform module compatible with scikit-learn cloning mechanism (#388 by Bruno Aristimunha and Alex Gramfort)

  • Delaying the deprecation of the preprocessing scale function braindecode.preprocessing.scale() and updating the tutorials where the function was used (#413 by Bruno Aristimunha)

  • Removing deprecated functions and classes braindecode.preprocessing.zscore(), braindecode.datautil.MNEPreproc and braindecode.datautil.NumpyPreproc (#415 by Bruno Aristimunha)

  • Setting iterator_train__drop_last=True by default for braindecode.EEGClassifier and braindecode.EEGRegressor (#411 by Robin Tibor Schirrmeister)

Version 0.6 (2021-12-06)#

Enhancements#

Bugs#

API changes#

Version 0.5.1 (2021-07-14)#

Enhancements#

Bugs#

API changes#

  • Preprocessor classes braindecode.datautil.MNEPreproc and braindecode.datautil.NumpyPreproc are deprecated in favor of braindecode.datautil.Preprocessor (#197 by Hubert Banville)

  • Parameter stop_offset_samples of braindecode.datautil.create_fixed_length_windows() must now be set to None instead of 0 to indicate the end of the recording (#152 by Hubert Banville)

Authors#