braindecode.modules.FCA#

class braindecode.modules.FCA(in_channels, seq_len=62, reduction_rate=4, freq_idx=0)[source]#

Frequency Channel Attention Networks from [Qin2021].

Parameters:
  • in_channels (int) – Number of input feature channels.

  • seq_len (int, default=62) – Sequence length along the temporal dimension.

  • reduction_rate (int, default=4) – Reduction ratio of the fully-connected layers.

  • freq_idx (int, default=0) – Index of the DCT frequency component used to compute the channel attention weights (0 corresponds to the DC component, i.e. global average pooling).

Examples

>>> import torch
>>> from braindecode.modules import FCA
>>> module = FCA(in_channels=16, seq_len=64, reduction_rate=4, freq_idx=0)
>>> inputs = torch.randn(2, 16, 1, 64)
>>> outputs = module(inputs)
>>> outputs.shape
torch.Size([2, 16, 1, 64])

References

[Qin2021]

Qin, Z., Zhang, P., Wu, F., Li, X., 2021. FcaNet: Frequency Channel Attention Networks. ICCV 2021.

Methods

forward(x)[source]#

Apply the Frequency Channel Attention Networks block to the input.

Parameters:

x (torch.Tensor) – Input tensor of shape (batch, channels, 1, seq_len).

Return type:

torch.Tensor

static get_dct_filter(seq_len, mapper_y, in_channels)[source]#

Utility function that builds the DCT filter.

Parameters:
  • seq_len (int) – Size of the sequence.

  • mapper_y (list) – List of DCT frequency indices.

  • in_channels (int) – Number of input channels.

Return type:

torch.Tensor
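
To illustrate what such a DCT filter looks like, here is a hedged, self-contained sketch (not braindecode's actual implementation) following the FcaNet construction: each selected frequency index u contributes the 1D DCT-II basis cos(pi * u * (t + 0.5) / L), and the channel dimension is split evenly across the selected frequencies. The function name `dct_filter_sketch` is hypothetical.

```python
import math
import torch

def dct_filter_sketch(seq_len, mapper_y, in_channels):
    # Hypothetical re-implementation for illustration only.
    # Each frequency index u in mapper_y gets an equal slice of the
    # channel dimension; every row in that slice holds the 1D DCT-II
    # basis cos(pi * u * (t + 0.5) / seq_len), as in FcaNet.
    num_freq = len(mapper_y)
    c_part = in_channels // num_freq
    dct = torch.zeros(in_channels, seq_len)
    for i, u in enumerate(mapper_y):
        for t in range(seq_len):
            dct[i * c_part:(i + 1) * c_part, t] = math.cos(
                math.pi * u * (t + 0.5) / seq_len
            )
    return dct

filt = dct_filter_sketch(seq_len=64, mapper_y=[0], in_channels=16)
print(filt.shape)  # torch.Size([16, 64])
```

Note that for frequency index 0 (the default `freq_idx`), the basis is constant 1, so weighting the input by this filter and summing over time reduces to global average pooling up to a scale factor.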