braindecode.modules.FCA#

class braindecode.modules.FCA(in_channels, seq_len: int = 62, reduction_rate: int = 4, freq_idx: int = 0)[source]#

Frequency Channel Attention Networks from [Qin2021].

Parameters:
  • in_channels (int) – Number of input feature channels.

  • seq_len (int, default=62) – Sequence length along the temporal dimension.

  • reduction_rate (int, default=4) – Reduction ratio of the fully-connected layers.

  • freq_idx (int, default=0) – Index of the frequency component used to build the DCT filter.
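A minimal usage sketch (not taken from the braindecode documentation): the (batch_size, in_channels, seq_len) input layout and the example sizes below are assumptions for illustration only.

    import torch
    from braindecode.modules import FCA

    # hypothetical feature map: 8 trials, 16 channels, 62 time samples
    # (this layout is an assumption, check the library for the exact expectation)
    x = torch.randn(8, 16, 62)

    fca = FCA(in_channels=16, seq_len=62, reduction_rate=4, freq_idx=0)
    out = fca(x)  # channel-attention-weighted features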

References

[Qin2021]

Qin, Z., Zhang, P., Wu, F., Li, X., 2021. FcaNet: Frequency Channel Attention Networks. ICCV 2021.

Methods

forward(x)[source]#

Apply the Frequency Channel Attention block to the input.

Parameters:

x (torch.Tensor) – Input tensor.

Return type:

torch.Tensor
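Conceptually, the forward pass follows the FcaNet recipe from [Qin2021]: features are pooled along time with a fixed DCT filter instead of global average pooling, passed through a small bottleneck MLP, and the resulting sigmoid weights rescale each channel. The sketch below illustrates that recipe only; the names, shapes, and placeholder filter are assumptions, not the braindecode implementation.

    import torch
    import torch.nn as nn

    def fca_forward_sketch(x, dct_filter, fc1, fc2):
        # DCT-weighted pooling replaces global average pooling
        pooled = (x * dct_filter).sum(dim=-1)                  # (batch, channels)
        # squeeze-and-excitation style bottleneck producing channel weights
        weights = torch.sigmoid(fc2(torch.relu(fc1(pooled))))  # (batch, channels)
        return x * weights.unsqueeze(-1)                       # rescale each channel

    channels, seq_len, reduction = 16, 62, 4
    fc1 = nn.Linear(channels, channels // reduction)
    fc2 = nn.Linear(channels // reduction, channels)
    x = torch.randn(8, channels, seq_len)
    dct_filter = torch.rand(channels, seq_len)  # placeholder; see get_dct_filter
    out = fca_forward_sketch(x, dct_filter, fc1, fc2)  # same shape as x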

static get_dct_filter(seq_len: int, mapper_y: list, in_channels: int)[source]#

Utility function to build the DCT filter.

Parameters:
  • seq_len (int) – Length of the sequence.

  • mapper_y (list) – List of frequencies.

  • in_channels (int) – Number of input channels.

Return type:

torch.Tensor
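For intuition, the k-th DCT-II basis value at time step t over a window of length seq_len is cos(pi * k * (t + 0.5) / seq_len). The sketch below shows how such a filter is commonly assembled in the FcaNet formulation; the even channel grouping and the absence of normalisation are assumptions, not necessarily identical to this method.

    import math
    import torch

    def dct_filter_sketch(seq_len, mapper_y, in_channels):
        # one DCT-II basis vector per channel group; mapper_y holds frequency indices
        dct = torch.zeros(in_channels, seq_len)
        group = in_channels // len(mapper_y)  # channels split evenly across frequencies
        for g, freq in enumerate(mapper_y):
            for t in range(seq_len):
                dct[g * group:(g + 1) * group, t] = math.cos(
                    math.pi * freq * (t + 0.5) / seq_len
                )
        return dct

    # e.g. a filter keeping only the lowest frequency (freq_idx=0) for 16 channels
    filt = dct_filter_sketch(seq_len=62, mapper_y=[0], in_channels=16)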