braindecode.models.TIDNet#
- class braindecode.models.TIDNet(n_chans=None, n_outputs=None, n_times=None, input_window_seconds=None, sfreq=None, chs_info=None, s_growth=24, t_filters=32, drop_prob=0.4, pooling=15, temp_layers=2, spat_layers=2, temp_span=0.05, bottleneck=3, summary=-1, activation=<class 'torch.nn.modules.activation.LeakyReLU'>)[source]#
Thinker Invariance DenseNet model from Kostas et al. (2020) [TIDNet].
See [TIDNet] for details.
- Parameters:
  - s_growth (int) – DenseNet-style growth factor (filters added per DenseFilter).
  - t_filters (int) – Number of temporal filters.
  - drop_prob (float) – Dropout probability.
  - pooling (int) – Max temporal pooling (width and stride).
  - temp_layers (int) – Number of temporal layers.
  - spat_layers (int) – Number of DenseFilters.
  - temp_span (float) – Percentage of n_times that defines the temporal filter length: temp_len = ceil(temp_span * n_times). For example, a temp_span of 0.05 with n_times = 1500 yields a temporal filter of length 75.
  - bottleneck (int) – Bottleneck factor within DenseFilter.
  - summary (int) – Output size of the AdaptiveAvgPool1d layer. If set to -1, the value is calculated automatically (n_times // pooling).
  - in_chans – Alias for n_chans.
  - n_classes – Alias for n_outputs.
  - input_window_samples – Alias for n_times.
  - activation (type[Module]) – Activation function class to apply. Should be a PyTorch activation module class like nn.ReLU or nn.ELU. Default is nn.LeakyReLU.
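The two derived sizes described above (the temporal filter length from temp_span, and the automatic summary size when summary=-1) can be sketched as plain formulas. This is an illustrative helper, not part of the braindecode API; the function name is made up here, and only the two formulas are taken from the parameter descriptions:

```python
import math

def tidnet_derived_sizes(n_times, temp_span=0.05, pooling=15, summary=-1):
    """Compute TIDNet's derived sizes from the documented formulas.

    temp_len = ceil(temp_span * n_times), per the temp_span parameter;
    summary == -1 falls back to n_times // pooling, per the summary parameter.
    """
    temp_len = math.ceil(temp_span * n_times)
    if summary == -1:
        summary = n_times // pooling
    return temp_len, summary

# The documented example: temp_span=0.05 with n_times=1500 gives temp_len=75;
# with the default pooling of 15, the automatic summary size is 1500 // 15 = 100.
print(tidnet_derived_sizes(1500))  # (75, 100)
```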
Notes
Code adapted from: SPOClab-ca/ThinkerInvariance
References
Methods