The brain decode problem#

All the models in this library tackle the following problem: given time-series signals \(X \in \mathbb{R}^{C \times T}\) and labels \(y \in \mathcal{Y}\), braindecode implements neural networks \(f\) that decode brain activity. Each network applies a sequence of transformation layers (e.g., Conv2d, Linear, ELU) that filter the data and extract features relevant to the quantity being modeled; in other words:

\[f_{\theta} : X \to y,\]

where \(C\) (n_chans) is the number of channels/electrodes and \(T\) (n_times) is the temporal window length/epoch size over the interval of interest.
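To make the shapes concrete, here is a minimal sketch of such an \(f_{\theta}\) in plain PyTorch, using the layer types mentioned above. The architecture and all hyperparameters (kernel sizes, channel counts) are illustrative placeholders, not any specific braindecode model:

```python
import torch
from torch import nn

# Hypothetical decoding setup: C channels, T time points, 4 output classes.
n_chans, n_times, n_outputs = 22, 1000, 4

# A tiny f_theta: X in R^{C x T} -> class logits over Y.
f = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=(1, 25)),        # temporal filtering
    nn.Conv2d(8, 16, kernel_size=(n_chans, 1)),  # spatial filtering across electrodes
    nn.ELU(),
    nn.AdaptiveAvgPool2d((1, 16)),               # pool over the time axis
    nn.Flatten(),
    nn.Linear(16 * 16, n_outputs),               # map features to labels y
)

X = torch.randn(2, 1, n_chans, n_times)  # (batch, 1, C, T)
logits = f(X)                            # shape: (2, n_outputs)
```

The input carries a singleton "virtual image" dimension so that 2D convolutions can factorize into a temporal filter followed by a spatial filter over all electrodes, a common pattern in EEG decoding networks.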

The definition of \(y\) is broad; it may be anchored in a cognitive stimulus (e.g., BCI, ERP, SSVEP, cVEP), a mental state (e.g., sleep stage), brain age, visual/audio/text/action inputs, or any target that can be quantified and modeled as a decoding task; see references 1, 2, 3, 4, 5, 6, 7, 8.

We aim to translate recorded brain activity into its originating stimulus, behavior, or mental state [King and Dehaene, 2014, King et al., 2020]; in short, \(f(X) \to y\).

The neural network \(f\) thus learns a representation of the stimulus encoded in the subject's brain over the time series, a process also known as reverse inference.

In supervised decoding, we usually learn the network parameters \(\theta\) by minimizing the regularized average loss over the training set \(\mathcal{D}_{\text{tr}}=\{(x_i,y_i)\}_{i=1}^{N_{\text{tr}}}\):

\[\begin{split}\begin{aligned} \theta^{*} &= \arg\min_{\theta}\, \hat{\mathcal{R}}(\theta) \\ &= \arg\min_{\theta}\, \frac{1}{N_{\text{tr}}}\sum_{i=1}^{N_{\text{tr}}} \ell\!\left(f_{\theta}(x_i),\, y_i\right) \;+\; \lambda\,\Omega(\theta)\,, \end{aligned}\end{split}\]

where \(\ell\) is the task loss (e.g., cross-entropy, CrossEntropyLoss), \(\Omega\) is an optional regularizer, and \(\lambda \ge 0\) its weight (e.g., the weight_decay parameter in Adam implements an L2 regularizer).

Equivalently, the goal is to minimize the expected risk \(\mathcal{R}(\theta)=\mathbb{E}_{(x,y)\sim P_{\text{tr}}} [\ell(f_{\theta}(x),y)]\), for which the empirical average above is a finite-sample approximation.
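The minimization above can be sketched as a standard PyTorch training loop; the data and model here are hypothetical stand-ins (random features, a single Linear layer), with \(\ell\) as CrossEntropyLoss and \(\lambda\,\Omega(\theta)\) supplied through Adam's weight_decay:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical training set D_tr = {(x_i, y_i)}: random features and labels.
N_tr, n_feat, n_outputs = 32, 16, 2
x = torch.randn(N_tr, n_feat)
y = torch.randint(0, n_outputs, (N_tr,))

f_theta = nn.Linear(n_feat, n_outputs)   # placeholder for a real decoding network
loss_fn = nn.CrossEntropyLoss()          # task loss ell
# weight_decay adds the lambda * ||theta||^2 penalty from the objective above
opt = torch.optim.Adam(f_theta.parameters(), lr=1e-2, weight_decay=1e-4)

losses = []
for step in range(100):                  # gradient descent on the empirical risk
    opt.zero_grad()
    loss = loss_fn(f_theta(x), y)        # average loss over D_tr
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Full-batch gradient descent is shown for brevity; in practice the sum over \(\mathcal{D}_{\text{tr}}\) is approximated with mini-batches.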

Building on this formulation, the model sub-pages that follow describe each architecture in more detail.


References

[1]

Bruno Aristimunha, Alexandre Janoni Bayerlein, M. Jorge Cardoso, Walter Hugo Lopez Pinaya, and Raphael Yokoingawa De Camargo. Sleep-Energy: An Energy Optimization Method to Sleep Stage Scoring. IEEE Access, 11:34595–34602, 2023.

[2]

Bruno Aristimunha, Igor Carrara, Pierre Guetschel, Sara Sedlar, Pedro Rodrigues, Jan Sosulski, Divyesh Narayanan, Erik Bjareholt, Barthelemy Quentin, Robin Tibor Schirrmeister, Emmanuel Kalunga, Ludovic Darmet, Cattan Gregoire, Ali Abdul Hussain, Ramiro Gatti, Vladislav Goncharenko, Jordy Thielen, Thomas Moreau, Yannick Roy, Vinay Jayaram, Alexandre Barachant, and Sylvain Chevallier. Mother of All BCI Benchmarks v1.0, 2023. DOI: 10.5281/zenodo.10034223.

[3]

Sylvain Chevallier, Igor Carrara, Bruno Aristimunha, Pierre Guetschel, Sara Sedlar, Bruna Lopes, Sebastien Velut, Salim Khazem, and Thomas Moreau. The largest EEG-based BCI reproducibility study for open science: the MOABB benchmark. arXiv preprint arXiv:2404.15319, 2024.

[4]

Jarod Lévy, Mingfang Zhang, Svetlana Pinet, Jérémy Rapin, Hubert Banville, Stéphane d'Ascoli, and Jean-Rémi King. Brain-to-text decoding: a non-invasive approach via typing. arXiv preprint arXiv:2502.17480, 2025.

[5]

Yohann Benchetrit, Hubert Banville, and Jean-Remi King. Brain decoding: toward real-time reconstruction of visual perception. In The Twelfth International Conference on Learning Representations. 2024. URL: https://openreview.net/forum?id=3y1K6buO8c.

[6]

Stéphane d'Ascoli, Corentin Bel, Jérémy Rapin, Hubert Banville, Yohann Benchetrit, Christophe Pallier, and Jean-Rémi King. Decoding individual words from non-invasive brain recordings across 723 participants. arXiv preprint arXiv:2412.17829, 2024.

[7]

Denis A Engemann, Apolline Mellot, Richard Höchenberger, Hubert Banville, David Sabbagh, Lukas Gemein, Tonio Ball, and Alexandre Gramfort. A reusable benchmark of brain-age prediction from M/EEG resting-state signals. Neuroimage, 262:119521, 2022.

[8]

Jonathan Xu, Bruno Aristimunha, Max Emanuel Feucht, Emma Qian, Charles Liu, Tazik Shahjahan, Martyna Spyra, Steven Zifan Zhang, Nicholas Short, Jioh Kim, Paula Perdomo, Ricky Renfeng Mao, Yashvir Sabharwal, Michael Ahedor, Moaz Shoura, and Adrian Nestor. Alljoined – a dataset for EEG-to-image decoding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Workshop on Data Curation and Augmentation in Enhancing Medical Imaging Applications, 1–9. 2024.

[9]

Jean-Rémi King and Stanislas Dehaene. Characterizing the dynamics of mental representations: the temporal generalization method. Trends in cognitive sciences, 18(4):203–210, 2014.

[10]

Jean-Rémi King, Laura Gwilliams, Chris Holdgraf, Jona Sassenhagen, Alexandre Barachant, Denis Engemann, Eric Larson, and Alexandre Gramfort. Encoding and Decoding Framework to Uncover the Algorithms of Cognition. In The Cognitive Neurosciences. The MIT Press, 05 2020.

[11]

Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018.

[12]

Yonghao Song, Qingqing Zheng, Bingchuan Liu, and Xiaorong Gao. EEG Conformer: convolutional transformer for EEG decoding and visualization. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31:710–719, 2022.

[13]

Wei Zhao, Xiaolu Jiang, Baocan Zhang, Shixiao Xiao, and Sujun Weng. CTNet: a convolutional transformer network for EEG-based motor imagery classification. Scientific reports, 14(1):20237, 2024.

[14]

Hamdi Altaheri, Ghulam Muhammad, and Mansour Alsulaiman. Physics-informed attention temporal convolutional network for EEG-based motor imagery classification. IEEE Transactions on Industrial Informatics, 19(2):2249–2258, 2022.

[15]

Ravikiran Mane, Effie Chew, Karen Chua, Kai Keng Ang, Neethu Robinson, A Prasad Vinod, Seong-Whan Lee, and Cuntai Guan. FBCNet: a multi-view convolutional neural network for brain-computer interface. arXiv preprint arXiv:2104.01233, 2021.

[16]

Ke Liu, Mingzhao Yang, Zhuliang Yu, Guoyin Wang, and Wei Wu. FBMSNet: a filter-bank multi-scale convolutional neural network for EEG-based motor imagery decoding. IEEE Transactions on Biomedical Engineering, 70(2):436–445, 2022.

[17]

Davide Borra, Silvia Fantozzi, and Elisa Magosso. Interpretable and lightweight convolutional neural network for EEG decoding: application to movement execution and imagination. Neural Networks, 129:55–74, 2020.

[18]

Siegfried Ludwig, Stylianos Bakas, Dimitrios A Adamos, Nikolaos Laskaris, Yannis Panagakis, and Stefanos Zafeiriou. EEGminer: discovering interpretable features of brain activity with learnable filters. Journal of Neural Engineering, 21(3):036010, 2024.

[19]

Zhiwu Huang and Luc Van Gool. A Riemannian network for SPD matrix learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31. 2017.

[20]

Chaoqi Yang, M Westover, and Jimeng Sun. BIOT: biosignal transformer for cross-data learning in the wild. Advances in Neural Information Processing Systems, 36:78240–78260, 2023.

[21]

Dominik Klepl, Min Wu, and Fei He. Graph neural network-based EEG classification: a survey. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 32:493–503, 2024.

[22]

P. Guetschel, T. Moreau, and M. Tangermann. S-JEPA: towards seamless cross-dataset transfer through dynamic spatial attention. In Proceedings of the 9th Graz Brain-Computer Interface Conference. 2024. URL: https://doi.org/10.3217/978-3-99161-014-4-003, doi:10.3217/978-3-99161-014-4-003.

[23]

Zhige Chen, Rui Yang, Mengjie Huang, Fumin Li, Guoping Lu, and Zidong Wang. EEGProgress: a fast and lightweight progressive convolution architecture for EEG classification. Computers in Biology and Medicine, 169:107901, 2024.