braindecode.models.BIOT#
- class braindecode.models.BIOT(embed_dim=256, num_heads=8, num_layers=4, sfreq=200, hop_length=100, return_feature=False, n_outputs=None, n_chans=None, chs_info=None, n_times=None, input_window_seconds=None, activation=<class 'torch.nn.modules.activation.ELU'>, drop_prob=0.5, max_seq_len=1024, att_drop_prob=0.2, att_layer_drop_prob=0.2)[source]#
BIOT from Yang et al. (2023) [Yang2023]
Foundation Model
BIOT: Cross-data Biosignal Learning in the Wild.
BIOT is a foundation model for biosignal classification. It is a wrapper around the BIOTEncoder and ClassificationHead modules.
It is designed for N-dimensional biosignal data such as EEG, ECG, etc. The method was proposed by Yang et al. [Yang2023] and the code is available at [Code2023].
The model was pre-trained with a contrastive loss on large EEG datasets: the TUH Abnormal EEG Corpus (about 400K samples) and the Sleep Heart Health Study (about 5M samples). Here, we only provide the model architecture and not the contrastive pre-training procedure; pre-trained weights can be loaded from the Hugging Face Hub (see below).
The architecture is based on the LinearAttentionTransformer and PatchFrequencyEmbedding modules. The BIOTEncoder is a transformer that takes the input data and outputs a fixed-size representation of it. More details are given in the BIOTEncoder class.
The ClassificationHead is an ELU activation followed by a simple linear layer that maps the output of the BIOTEncoder to the classification outputs.
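As a rough usage sketch (the 18-channel, 2-second window and 4-class setup below are illustrative assumptions, not values prescribed by the model), the architecture can be instantiated and applied to a batch of raw windows shaped (batch, n_chans, n_times):

import torch

from braindecode.models import BIOT

n_chans, n_times, n_outputs = 18, 400, 4  # e.g. 2-second windows at 200 Hz

model = BIOT(
    n_outputs=n_outputs,
    n_chans=n_chans,
    n_times=n_times,
    sfreq=200,
    hop_length=100,
)

x = torch.randn(8, n_chans, n_times)  # (batch, channels, time)
out = model(x)                        # classification output, shape (8, n_outputs)

# With return_feature=True, the forward pass additionally returns the
# fixed-size embedding produced by the BIOTEncoder.
model_feat = BIOT(
    n_outputs=n_outputs,
    n_chans=n_chans,
    n_times=n_times,
    return_feature=True,
)
outputs = model_feat(x)  # classification output together with the encoder embedding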
Important
Pre-trained Weights Available
This model has pre-trained weights available on the Hugging Face Hub. You can load them using:
from braindecode.models import BIOT

# Load the original pre-trained model from Hugging Face Hub
# For 16-channel models:
model = BIOT.from_pretrained("braindecode/biot-pretrained-prest-16chs")

# For 18-channel models:
model = BIOT.from_pretrained("braindecode/biot-pretrained-shhs-prest-18chs")
model = BIOT.from_pretrained("braindecode/biot-pretrained-six-datasets-18chs")
To push your own trained model to the Hub:
# After training your model
model.push_to_hub(
    repo_id="username/my-biot-model",
    commit_message="Upload trained BIOT model",
)
Requires installing braindecode[hug] for Hub integration.
Added in version 0.9.
- Parameters:
embed_dim (int, optional) – The size of the embedding layer, by default 256
num_heads (int, optional) – The number of attention heads, by default 8
num_layers (int, optional) – The number of transformer layers, by default 4
activation (nn.Module, default=nn.ELU) – Activation function class to apply. Should be a PyTorch activation module class like nn.ReLU or nn.ELU. Default is nn.ELU.
return_feature (bool, optional) – Controls the model output. If False (default), the forward pass returns a single output tensor; if True, the encoder embedding is returned alongside it.
hop_length (int, optional) – The hop length for the torch.stft transformation in the encoder. The default is 100.
sfreq (int, optional) – The sfreq parameter for the encoder. The default is 200. (A rough sketch of how sfreq and hop_length shape the encoder's token sequence follows this list.)
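The sketch below illustrates how sfreq and hop_length determine the number of spectrogram frames, and hence the per-channel token sequence length seen by the transformer. It assumes the encoder's STFT window size is tied to sfreq; the exact framing (windowing, centering, padding) inside the encoder may differ.

import torch

# Back-of-the-envelope check with the defaults sfreq=200 and hop_length=100:
# a 10-second single-channel signal at 200 Hz yields roughly
# n_times / hop_length spectrogram frames, each becoming one patch/token.
sfreq, hop_length, n_times = 200, 100, 2000

x = torch.randn(1, n_times)  # one channel of one window
spec = torch.stft(
    x,
    n_fft=sfreq,                       # assumption: window size tied to sfreq
    hop_length=hop_length,
    window=torch.hann_window(sfreq),
    return_complex=True,
)
print(spec.shape)  # torch.Size([1, 101, 21]) -> 101 frequency bins, 21 frames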
References
[Yang2023] (1,2) Yang, C., Westover, M.B. and Sun, J., 2023, November. BIOT: Biosignal Transformer for Cross-data Learning in the Wild. In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS).
[Code2023] Yang, C., Westover, M.B. and Sun, J., 2023. BIOT: Biosignal Transformer for Cross-data Learning in the Wild. GitHub repository, ycq091044/BIOT (accessed 2024-02-13).
Methods