braindecode.modules.CBAM#

class braindecode.modules.CBAM(in_channels, reduction_rate, kernel_size)[source]#

Convolutional Block Attention Module from [Woo2018].

Parameters:
  • in_channels (int) – number of input feature channels

  • reduction_rate (int) – reduction ratio of the fully-connected layers

  • kernel_size (int) – kernel size of the convolutional layer

Examples

>>> import torch
>>> from braindecode.modules import CBAM
>>> module = CBAM(in_channels=16, reduction_rate=4, kernel_size=3)
>>> inputs = torch.randn(2, 16, 1, 64)
>>> outputs = module(inputs)
>>> outputs.shape
torch.Size([2, 16, 1, 64])

References

[Woo2018]

Woo, S., Park, J., Lee, J., Kweon, I., 2018. CBAM: Convolutional Block Attention Module. ECCV 2018.

Methods

forward(x)[source]#

Apply the Convolutional Block Attention Module to the input tensor.

Parameters:

x (torch.Tensor) – input tensor of shape (batch_size, in_channels, height, width)

Return type:

torch.Tensor
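
To illustrate what the module computes internally, here is a minimal sketch of the CBAM mechanism as described in [Woo2018]: channel attention from average- and max-pooled descriptors passed through a shared MLP, followed by spatial attention from a convolution over channel-wise pooled maps. This is an assumption-laden re-implementation for exposition only, not braindecode's actual source code; class and attribute names are hypothetical.

```python
import torch
import torch.nn as nn


class CBAMSketch(nn.Module):
    # Illustrative sketch of CBAM (Woo et al., 2018); not braindecode's code.
    def __init__(self, in_channels, reduction_rate, kernel_size):
        super().__init__()
        # Channel attention: shared MLP applied to avg- and max-pooled
        # channel descriptors, with bottleneck of size C / reduction_rate.
        self.mlp = nn.Sequential(
            nn.Linear(in_channels, in_channels // reduction_rate),
            nn.ReLU(inplace=True),
            nn.Linear(in_channels // reduction_rate, in_channels),
        )
        # Spatial attention: conv over stacked channel-wise avg/max maps.
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: sigmoid(MLP(avg_pool) + MLP(max_pool)).
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: sigmoid(conv([avg over C; max over C])).
        s = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)],
            dim=1,
        )
        return x * torch.sigmoid(self.conv(s))
```

As in the example above, the output has the same shape as the input, so the module can be dropped into a convolutional network between feature blocks without altering tensor sizes.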