braindecode.augmentation.Mixup#

class braindecode.augmentation.Mixup(alpha, beta_per_sample=False, random_state=None)[source]#

Implements mixup augmentation for EEG data. See [1] for details. Implementation based on [2].

Parameters
  • alpha (float) – Mixup hyperparameter.

  • beta_per_sample (bool (default=False)) – By default, one mixing coefficient per batch is drawn from a beta distribution. If True, one mixing coefficient per sample is drawn.

  • random_state (int | numpy.random.Generator, optional) – Seed to be used to instantiate numpy random number generator instance. Defaults to None.
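The effect of alpha and beta_per_sample can be sketched in plain NumPy. The helper below (sample_mixup_coefficients is a hypothetical name, not part of the braindecode API) mirrors the sampling described above: coefficients are drawn from Beta(alpha, alpha), either once per batch or once per sample.

```python
import numpy as np

def sample_mixup_coefficients(alpha, batch_size, beta_per_sample=False, rng=None):
    """Draw mixup coefficients lam ~ Beta(alpha, alpha).

    One shared coefficient for the whole batch by default,
    or one coefficient per sample if beta_per_sample=True.
    (Illustrative sketch, not the braindecode implementation.)
    """
    rng = np.random.default_rng(rng)
    if beta_per_sample:
        return rng.beta(alpha, alpha, size=batch_size)
    # Single draw, broadcast to every sample in the batch.
    return np.full(batch_size, rng.beta(alpha, alpha))

lam = sample_mixup_coefficients(alpha=0.5, batch_size=8, rng=42)
assert np.all(lam == lam[0])  # shared coefficient across the batch
```

Small alpha values (e.g. 0.2) concentrate the Beta distribution near 0 and 1, so most mixed examples stay close to one of the two originals; alpha = 1 makes the interpolation uniform.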

References

[1] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz (2018). mixup: Beyond Empirical Risk Minimization. International Conference on Learning Representations (ICLR). https://arxiv.org/abs/1710.09412

[2] facebookresearch/mixup-cifar10

Methods

get_augmentation_params(*batch)[source]#

Return transform parameters.

Parameters
  • X (torch.Tensor) – The data.

  • y (torch.Tensor) – The labels.

Returns

params – Contains the mixing coefficients drawn from a beta distribution, which set the linear interpolation between examples (lam), and the shuffled indices of the examples that are mixed into the original examples (idx_perm).

Return type

dict
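The structure of the returned dictionary can be illustrated with a small sketch (the values here are constructed by hand for illustration; in the transform they are drawn as described above):

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size = 4

# lam: one interpolation coefficient per example, drawn from
# Beta(alpha, alpha) by the transform (alpha=0.5 assumed here).
lam = rng.beta(0.5, 0.5, size=batch_size)

# idx_perm: for each original example, the index of the example
# that gets mixed into it.
idx_perm = rng.permutation(batch_size)

params = {"lam": lam, "idx_perm": idx_perm}
assert sorted(params["idx_perm"].tolist()) == list(range(batch_size))
```

Because idx_perm is a permutation, every example appears exactly once as a mixing partner.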

static operation(X, y, lam, idx_perm)#

Mixes pairs of EEG examples.

See [1] for details. Implementation based on [2].

Parameters
  • X (torch.Tensor) – EEG data of shape (batch_size, n_channels, n_times).

  • y (torch.Tensor) – Targets of length batch_size.

  • lam (torch.Tensor) – Values between 0 and 1 setting the linear interpolation between examples.

  • idx_perm (torch.Tensor) – Permuted indices of the examples that are mixed into the original examples.

Returns

X, y, where X is the augmented data and y is a tuple of length 3 containing the labels of the two mixed examples and the mixing coefficient.

Return type

tuple
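The interpolation itself is a one-liner. Below is a NumPy sketch of the operation described above (mixup_operation is a hypothetical stand-in for the static method, written without torch so it runs anywhere):

```python
import numpy as np

def mixup_operation(X, y, lam, idx_perm):
    """Sketch of the mixup operation: linearly interpolate each example
    with a permuted partner, and return both label sets plus lam."""
    # Broadcast lam over the channel and time dimensions.
    lam_ = lam[:, None, None]
    X_mix = lam_ * X + (1.0 - lam_) * X[idx_perm]
    return X_mix, (y, y[idx_perm], lam)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2, 8))    # (batch_size, n_channels, n_times)
y = np.array([0, 1, 0, 1])
lam = np.ones(4)                      # lam = 1 keeps the original examples
X_mix, (y_a, y_b, lam_out) = mixup_operation(X, y, lam, np.array([1, 0, 3, 2]))
assert np.allclose(X_mix, X)          # with lam = 1, nothing is mixed in
```

Returning both label sets with lam lets the training loop compute the mixed loss as lam * loss(pred, y_a) + (1 - lam) * loss(pred, y_b), as in [1].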
