braindecode.functional package#

Submodules#

braindecode.functional.functions module#

braindecode.functional.functions.drop_path(x, drop_prob: float = 0.0, training: bool = False, scale_by_keep: bool = True)[source]#

Drop paths (Stochastic Depth) per sample.

Notes: This implementation is taken from the timm library.

All credit goes to Ross Wightman.

Parameters:
  • x (torch.Tensor) – input tensor

  • drop_prob (float, optional) – probability of dropping the path for each sample (the keep probability is 1 - drop_prob), by default 0.0

  • training (bool, optional) – whether the model is in training mode, by default False

  • scale_by_keep (bool, optional) – whether to scale output by (1/keep_prob) during training, by default True

Returns:

output tensor.

Return type:

torch.Tensor

Notes from Ross Wightman (when applied in the main path of residual blocks): This is the same as the DropConnect impl I created for EfficientNet, etc. networks; however, the original name is misleading as ‘Drop Connect’ is a different form of dropout in a separate paper… See the discussion at https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956. … I’ve opted for changing the layer and argument names to ‘drop path’ rather than mix DropConnect as a layer name and use ‘survival rate’ as the argument.
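
Examples

A minimal usage sketch (import path taken from the module name above; the kept/dropped split is random, so only the output shape is shown):

>>> import torch
>>> from braindecode.functional.functions import drop_path
>>> x = torch.ones(8, 16)  # one activation vector per sample in the batch
>>> out = drop_path(x, drop_prob=0.25, training=True)
>>> out.shape  # roughly 25% of samples are zeroed, the rest scaled by 1/0.75
torch.Size([8, 16])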

braindecode.functional.functions.hilbert_freq(x, forward_fourier=True)[source]#

Compute the Hilbert transform using PyTorch, separating the real and imaginary parts.

The analytic signal \(x_a(t)\) of a real-valued signal \(x(t)\) is defined as:

\[x_a(t) = x(t) + i y(t) = \mathcal{F}^{-1} \{ U(f) \mathcal{F}\{x(t)\} \}\]

where:

  • \(\mathcal{F}\) is the Fourier transform,

  • \(U(f)\) is the unit step function,

  • \(y(t)\) is the Hilbert transform of \(x(t)\).

Parameters:
  • x (torch.Tensor) –

    Input tensor. The expected shape depends on the forward_fourier parameter:

    • If forward_fourier is True:

      (…, seq_len)

    • If forward_fourier is False:

      (…, seq_len / 2 + 1, 2)

  • forward_fourier (bool, optional) – Determines the format of the input tensor. If True, the input is a real-valued signal in the time domain and the forward Fourier transform is computed internally. If False, the input is already in the Fourier domain, given as separate real and imaginary parts. Default is True.

Returns:

Output tensor with shape (…, seq_len, 2), where the last dimension represents the real and imaginary parts of the Hilbert transform.

Return type:

torch.Tensor

Examples

>>> import torch
>>> input = torch.randn(10, 100)  # Example input tensor
>>> output = hilbert_freq(input)
>>> print(output.shape)
torch.Size([10, 100, 2])

Notes

The implementation matches SciPy's scipy.signal.hilbert, but uses torch.
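
A minimal sketch of that correspondence, assuming the last output dimension stacks the real and imaginary parts of the analytic signal exactly as scipy.signal.hilbert returns it:

>>> import numpy as np
>>> import torch
>>> from scipy.signal import hilbert
>>> from braindecode.functional.functions import hilbert_freq
>>> x = torch.randn(4, 256, dtype=torch.float64)
>>> analytic_torch = torch.view_as_complex(hilbert_freq(x).contiguous())
>>> analytic_scipy = hilbert(x.numpy(), axis=-1)
>>> bool(np.allclose(analytic_torch.numpy(), analytic_scipy, atol=1e-6))
True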

braindecode.functional.functions.identity(x)[source]#
braindecode.functional.functions.plv_time(x, forward_fourier=True, epsilon: float = 1e-06)[source]#

Compute the Phase Locking Value (PLV) metric in the time domain.

The Phase Locking Value (PLV) is a measure of the synchronization between different channels by evaluating the consistency of phase differences over time. It ranges from 0 (no synchronization) to 1 (perfect synchronization) [1].

Parameters:
  • x (torch.Tensor) –

    Input tensor containing the signal data.

    • If forward_fourier is True, the shape should be (…, channels, time).

    • If forward_fourier is False, the shape should be (…, channels, freqs, 2), where the last dimension represents the real and imaginary parts.

  • forward_fourier (bool, optional) –

    Specifies the format of the input tensor x.

    • If True, x is assumed to be in the time domain.

    • If False, x is assumed to be in the Fourier domain with separate real and imaginary components.

    Default is True.

  • epsilon (float, default 1e-6) – Small numerical value used to ensure the positivity constraint on the complex part.

Returns:

plv – The Phase Locking Value matrix with shape (…, channels, channels). Each element [i, j] represents the PLV between channel i and channel j.

Return type:

torch.Tensor

References

[1] Lachaux, J. P., Rodriguez, E., Martinerie, J., & Varela, F. J. (1999). Measuring phase synchrony in brain signals. Human Brain Mapping, 8(4), 194-208.
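
Examples

A minimal shape sketch (import path taken from the module name above; inputs are random, so only the output shape is shown — diagonal entries are expected to be close to 1, since each channel is perfectly phase-locked with itself):

>>> import torch
>>> from braindecode.functional.functions import plv_time
>>> x = torch.randn(2, 5, 512)  # (batch, channels, time)
>>> plv = plv_time(x)
>>> plv.shape
torch.Size([2, 5, 5])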

braindecode.functional.functions.safe_log(x, eps: float = 1e-06) Tensor[source]#

Prevents \(\log(0)\) by using \(\log(\max(x, \text{eps}))\).

braindecode.functional.functions.square(x)[source]#
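
These two helpers are typically chained into a log-power nonlinearity (square, average over time, then a numerically safe log). A minimal sketch, with the import path taken from the module name above:

>>> import torch
>>> from braindecode.functional.functions import safe_log, square
>>> x = torch.randn(2, 40, 1000)  # (batch, filters, time)
>>> log_power = safe_log(square(x).mean(dim=-1))  # log(max(mean(x**2), eps))
>>> log_power.shape
torch.Size([2, 40])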

braindecode.functional.initialization module#

braindecode.functional.initialization.glorot_weight_zero_bias(model)[source]#

Initialize the parameters of all modules by applying Glorot uniform (Xavier) initialization to weights and setting biases to zero. Weights of batch norm layers are set to 1.

Parameters:

model (Module)
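
Examples

A minimal usage sketch (import path taken from the module name above; the call is assumed to modify the model in place and return nothing):

>>> import torch.nn as nn
>>> from braindecode.functional.initialization import glorot_weight_zero_bias
>>> model = nn.Sequential(
...     nn.Conv1d(22, 40, kernel_size=25),
...     nn.BatchNorm1d(40),
...     nn.Linear(40, 4),
... )
>>> glorot_weight_zero_bias(model)  # conv/linear weights: Xavier uniform, biases: 0, batch norm weights: 1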

braindecode.functional.initialization.rescale_parameter(param, layer_id)[source]#

Rescaling of the l-th transformer layer.

Rescales the parameter tensor in place by \(\frac{1}{\sqrt{2 \cdot \text{layer\_id}}}\), i.e. the inverse square root of twice the layer id [Beit2022].

In Labram, this is used to rescale the output matrices (i.e., the last linear projection within each sub-layer) of the self-attention module.

Parameters:
  • param (torch.Tensor) – tensor to be rescaled

  • layer_id (int) – layer id in the neural network

References

[Beit2022] Hangbo Bao, Li Dong, Songhao Piao, Furu Wei (2022). BEiT: BERT Pre-Training of Image Transformers.
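
Examples

A minimal sketch (import path taken from the module name above; the rescaling is assumed to happen in place with no return value):

>>> import torch
>>> from braindecode.functional.initialization import rescale_parameter
>>> proj_weight = torch.ones(4, 4)  # e.g. an attention output projection matrix
>>> rescale_parameter(proj_weight, layer_id=2)  # divides by sqrt(2 * 2) = 2
>>> proj_weight[0, 0]
tensor(0.5000)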