- class braindecode.models.ATCNet(n_chans=None, n_outputs=None, input_window_seconds=4.5, sfreq=250.0, conv_block_n_filters=16, conv_block_kernel_length_1=64, conv_block_kernel_length_2=16, conv_block_pool_size_1=8, conv_block_pool_size_2=7, conv_block_depth_mult=2, conv_block_dropout=0.3, n_windows=5, att_head_dim=8, att_num_heads=2, att_dropout=0.5, tcn_depth=2, tcn_kernel_size=4, tcn_n_filters=32, tcn_dropout=0.3, tcn_activation=ELU(alpha=1.0), concat=False, max_norm_const=0.25, chs_info=None, n_times=None, n_channels=None, n_classes=None, input_size_s=None, add_log_softmax=True)
ATCNet model from [1].
PyTorch implementation based on the official TensorFlow code.
n_chans (int) – Number of EEG channels.
n_outputs (int) – Number of outputs of the model. This is the number of classes in the case of classification.
input_window_seconds (float, optional) – Time length of inputs, in seconds. Defaults to 4.5 s, as in the BCI-IV 2a dataset.
sfreq (float, optional) – Sampling frequency of the inputs, in Hz. Defaults to 250 Hz, as in the BCI-IV 2a dataset.
tcn_activation (torch.nn.Module) – Nonlinear activation to use. Defaults to nn.ELU().
concat (bool) – When True, concatenates each sliding window embedding before feeding it to a fully-connected layer, as done in [1]. When False, maps each sliding window to n_outputs logits and averages them. Defaults to False, contrary to what is reported in [1], but matching what the official code does.
max_norm_const (float) – Maximum L2-norm constraint imposed on weights of the last fully-connected layer. Defaults to 0.25.
n_times (int) – Number of time samples of the input window.
n_channels – Alias for n_chans.
n_classes – Alias for n_outputs.
input_size_s – Alias for input_window_seconds.
add_log_softmax (bool) – Whether to use log-softmax non-linearity as the output function. LogSoftmax final layer will be removed in the future. Please adjust your loss function accordingly (e.g. CrossEntropyLoss)! Check the documentation of the torch.nn loss functions: https://pytorch.org/docs/stable/nn.html#loss-functions.
If some input signal-related parameters are not specified, there will be an attempt to infer them from the other parameters.
[1] H. Altaheri, G. Muhammad and M. Alsulaiman, “Physics-informed attention temporal convolutional network for EEG-based motor imagery classification,” in IEEE Transactions on Industrial Informatics, 2022, doi: 10.1109/TII.2022.3197419.
Defines the computation performed at every call.
Should be overridden by all subclasses.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
X (torch.Tensor) – Input EEG window of shape (batch_size, n_chans, n_times).
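The note above about hooks can be demonstrated on any nn.Module, not just ATCNet: a registered forward hook fires when the module instance is called, but is silently skipped when forward() is invoked directly. This sketch uses a plain nn.Linear for brevity.

```python
import torch
import torch.nn as nn

# A forward hook records every invocation that goes through __call__.
layer = nn.Linear(4, 2)
calls = []
layer.register_forward_hook(lambda mod, inp, out: calls.append(out.shape))

x = torch.randn(3, 4)
layer(x)          # runs the hook
layer.forward(x)  # bypasses the hook silently

print(len(calls))  # 1 — only the instance call triggered the hook
```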