braindecode.preprocessing package#

class braindecode.preprocessing.Crop(tmin=0.0, tmax=None, include_tmax=True, *, verbose=None)[source]#

Bases: Preprocessor

Crop raw data file.

Limit the data from the raw file to go between specific times. Note that the new tmin is assumed to be t=0 for all subsequently called functions (e.g., time_as_index(), or Epochs). New first_samp and last_samp are set accordingly.

This function operates in-place on the instance. Use mne.io.Raw.copy() if operation on a copy is desired.

Parameters:
tmin : float

Start time of the raw data to use in seconds (must be >= 0).

tmax : float | None

End time of the raw data to use in seconds (cannot exceed data duration). If None (default), the current end of the data is used.

include_tmax : bool

If True (default), include tmax. If False, exclude tmax (similar to how Python indexing typically works).

Added in version 0.19.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Returns:
raw : instance of Raw

The cropped raw object, modified in-place.

See more details in mne.io.base.crop
fn = 'crop'#
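A minimal usage sketch, assuming an already-constructed BaseConcatDataset named concat_ds (not defined here): like any other Preprocessor, Crop is applied through preprocess().

    from braindecode.preprocessing import Crop, preprocess

    # Keep only the first 60 seconds of each recording in `concat_ds`
    # (`concat_ds` is an assumed, already-loaded BaseConcatDataset).
    preprocess(concat_ds, [Crop(tmin=0.0, tmax=60.0, include_tmax=False)])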
class braindecode.preprocessing.DropChannels(ch_names, on_missing='raise')[source]#

Bases: Preprocessor

Drop channel(s).

Parameters:
ch_names : iterable or str

Iterable (e.g. list) of channel name(s) or channel name to remove.

on_missing : ‘raise’ | ‘warn’ | ‘ignore’

Can be 'raise' (default) to raise an error, 'warn' to emit a warning, or 'ignore' to ignore when entries in ch_names are not present in the raw instance.

Added in version 0.23.0.

Returns:
inst : instance of Raw, Epochs, or Evoked

The modified instance.

See also

reorder_channels
pick_channels
pick_types

See more details in mne.channels.channels.drop_channels

fn = 'drop_channels'#
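A sketch, assuming an existing BaseConcatDataset named concat_ds; the channel names are hypothetical. Combining DropChannels with on_missing='ignore' leaves recordings without those channels untouched.

    from braindecode.preprocessing import DropChannels, preprocess

    # Drop two (hypothetical) EOG channels; ignore recordings lacking them.
    preprocess(concat_ds, [DropChannels(ch_names=["EOG1", "EOG2"],
                                        on_missing="ignore")])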
class braindecode.preprocessing.Filter(l_freq, h_freq, picks=None, filter_length='auto', l_trans_bandwidth='auto', h_trans_bandwidth='auto', n_jobs=None, method='fir', iir_params=None, phase='zero', fir_window='hamming', fir_design='firwin', skip_by_annotation=('edge', 'bad_acq_skip'), pad='reflect_limited', verbose=None)[source]#

Bases: Preprocessor

Filter a subset of channels/vertices.

Parameters:
l_freq : float | None

For FIR filters, the lower pass-band edge; for IIR filters, the lower cutoff frequency. If None the data are only low-passed.

h_freq : float | None

For FIR filters, the upper pass-band edge; for IIR filters, the upper cutoff frequency. If None the data are only high-passed.

picks : str | array-like | slice | None

Channels to include. Slices and lists of integers will be interpreted as channel indices. In lists, channel type strings (e.g., ['meg', 'eeg']) will pick channels of those types, and channel name strings (e.g., ['MEG0111', 'MEG2623']) will pick the given channels. Can also be the string values 'all' to pick all channels, or 'data' to pick data channels. None (default) will pick all data channels. Note that channels in info['bads'] will be included if their names or indices are explicitly provided.

filter_length : str | int

Length of the FIR filter to use (if applicable):

  • ‘auto’ (default): The filter length is chosen based on the size of the transition regions (6.6 times the reciprocal of the shortest transition band for fir_window=’hamming’ and fir_design=”firwin2”, and half that for “firwin”).

  • str: A human-readable time in units of “s” or “ms” (e.g., “10s” or “5500ms”) will be converted to that number of samples if phase="zero", or the shortest power-of-two length at least that duration for phase="zero-double".

  • int: Specified length in samples. For fir_design=”firwin”, this should not be used.

l_trans_bandwidth : float | str

Width of the transition band at the low cut-off frequency in Hz (high pass or cutoff 1 in bandpass). Can be “auto” (default) to use a multiple of l_freq:

min(max(l_freq * 0.25, 2), l_freq)

Only used for method='fir'.

h_trans_bandwidth : float | str

Width of the transition band at the high cut-off frequency in Hz (low pass or cutoff 2 in bandpass). Can be “auto” (default in 0.14) to use a multiple of h_freq:

min(max(h_freq * 0.25, 2.), info['sfreq'] / 2. - h_freq)

Only used for method='fir'.

n_jobs : int | str

Number of jobs to run in parallel. Can be 'cuda' if cupy is installed properly and method='fir'.

method : str

'fir' will use overlap-add FIR filtering, 'iir' will use IIR forward-backward filtering (via filtfilt()).

iir_params : dict | None

Dictionary of parameters to use for IIR filtering. If iir_params=None and method="iir", 4th order Butterworth will be used. For more information, see mne.filter.construct_iir_filter().

phase : str

Phase of the filter. When method='fir', symmetric linear-phase FIR filters are constructed with the following behaviors:

"zero" (default)

The delay of this filter is compensated for, making it non-causal.

"minimum"

A minimum-phase filter will be constructed by decomposing the zero-phase filter into a minimum-phase and all-pass systems, and then retaining only the minimum-phase system (of the same length as the original zero-phase filter) via scipy.signal.minimum_phase().

"zero-double"

This is a legacy option for compatibility with MNE <= 0.13. The filter is applied twice, once forward, and once backward (also making it non-causal).

"minimum-half"

This is a legacy option for compatibility with MNE <= 1.6. A minimum-phase filter will be reconstructed from the zero-phase filter with half the length of the original filter.

When method='iir', phase='zero' (default) or equivalently 'zero-double' constructs and applies the IIR filter twice, once forward and once backward (making it non-causal) using filtfilt(); phase='forward' will apply the filter once in the forward (causal) direction using lfilter().

Added in version 0.13.

Changed in version 1.7: The behavior for phase="minimum" was fixed to use a filter of the requested length and improved suppression.

fir_window : str

The window to use in FIR design, can be “hamming” (default), “hann” (default in 0.13), or “blackman”.

Added in version 0.15.

fir_design : str

Can be “firwin” (default) to use scipy.signal.firwin(), or “firwin2” to use scipy.signal.firwin2(). “firwin” uses a time-domain design technique that generally gives improved attenuation using fewer samples than “firwin2”.

Added in version 0.15.

skip_by_annotation : str | list of str

If a string (or list of str), any annotation segment that begins with the given string will not be included in filtering, and segments on either side of the given excluded annotated segment will be filtered separately (i.e., as independent signals). The default, ('edge', 'bad_acq_skip'), will separately filter any segments that were concatenated by mne.concatenate_raws() or mne.io.Raw.append(), or separated during acquisition. To disable, provide an empty list. Only used if inst is raw.

Added in version 0.16.

pad : str

The type of padding to use. Supports all numpy.pad() mode options. Can also be "reflect_limited", which pads with a reflected version of each vector mirrored on the first and last values of the vector, followed by zeros. Only used for method='fir'.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Returns:
inst : instance of Epochs, Evoked, SourceEstimate, or Raw

The filtered data.

See more details in mne.io.base.filter

fn = 'filter'#
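For example, a band-pass over a typical motor-imagery range can be applied with the default FIR settings; this is a sketch, with concat_ds an assumed BaseConcatDataset and the band edges purely illustrative.

    from braindecode.preprocessing import Filter, preprocess

    # Band-pass all data channels between 4 Hz and 38 Hz (zero-phase FIR).
    preprocess(concat_ds, [Filter(l_freq=4.0, h_freq=38.0)])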
class braindecode.preprocessing.Pick(picks, exclude=(), *, verbose=None)[source]#

Bases: Preprocessor

Pick a subset of channels.

Parameters:
picks : str | array-like | slice | None

Channels to include. Slices and lists of integers will be interpreted as channel indices. In lists, channel type strings (e.g., ['meg', 'eeg']) will pick channels of those types, and channel name strings (e.g., ['MEG0111', 'MEG2623']) will pick the given channels. Can also be the string values 'all' to pick all channels, or 'data' to pick data channels. None (default) will pick all channels. Note that channels in info['bads'] will be included if their names or indices are explicitly provided.

exclude : list | str

Set of channels to exclude, only used when picking based on types (e.g., exclude=”bads” when picks=”meg”).

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Added in version 0.24.0.

Returns:
inst : instance of Raw, Epochs, or Evoked

The modified instance.

See more details in mne.channels.channels.pick
fn = 'pick'#
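A short sketch, assuming an existing BaseConcatDataset named concat_ds: keep only EEG channels while excluding those marked in info['bads'].

    from braindecode.preprocessing import Pick, preprocess

    # Keep EEG channels only; exclude channels listed in info['bads'].
    preprocess(concat_ds, [Pick(picks="eeg", exclude="bads")])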
class braindecode.preprocessing.Preprocessor(fn: Callable | str, *, apply_on_array: bool = True, **kwargs)[source]#

Bases: object

Preprocessor for an MNE Raw or Epochs object.

Applies the provided preprocessing function to the data of a Raw or Epochs object. If the function is provided as a string, the method with that name will be used (e.g., ‘pick_channels’, ‘filter’, etc.). If it is provided as a callable and apply_on_array is True, the apply_function method of Raw and Epochs object will be used to apply the function on the internal arrays of Raw and Epochs. If apply_on_array is False, the callable must directly modify the Raw or Epochs object (e.g., by calling its method(s) or modifying its attributes).

Parameters:
  • fn (str or callable) – If str, the Raw/Epochs object must have a method with that name. If callable, directly apply the callable to the object.

  • apply_on_array (bool) – Ignored if fn is not a callable. If True, the apply_function of Raw and Epochs object will be used to run fn on the underlying arrays directly. If False, fn must directly modify the Raw or Epochs object.

  • kwargs – Keyword arguments to be forwarded to the MNE function.

apply(raw_or_epochs: BaseRaw | BaseEpochs)[source]#
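The two modes described above look roughly as follows; this is a sketch, with concat_ds an assumed BaseConcatDataset and the scaling factor purely illustrative.

    import numpy as np

    from braindecode.preprocessing import Preprocessor, preprocess

    preprocessors = [
        # String fn: calls raw.resample(sfreq=100) on each recording.
        Preprocessor("resample", sfreq=100),
        # Callable fn with apply_on_array=True: applied to the underlying
        # NumPy arrays via apply_function (here, volts to microvolts).
        Preprocessor(lambda data: np.multiply(data, 1e6), apply_on_array=True),
    ]
    preprocess(concat_ds, preprocessors)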
class braindecode.preprocessing.Resample(up=1.0, down=1.0, *, axis=-1, window='auto', n_jobs=None, pad='auto', npad=100, method='fft', verbose=None)[source]#

Bases: Preprocessor

Resample an array.

Operates along the last dimension of the array.

Parameters:
x : ndarray

Signal to resample.

up : float

Factor to upsample by.

down : float

Factor to downsample by.

axis : int

Axis along which to resample (default is the last axis).

window : str | tuple

When method="fft", this is the frequency-domain window to use in resampling, and should be the same length as the signal; see scipy.signal.resample() for details. When method="polyphase", this is the time-domain linear-phase window to use after upsampling the signal; see scipy.signal.resample_poly() for details. The default "auto" will use "boxcar" for method="fft" and ("kaiser", 5.0) for method="polyphase".

n_jobs : int | str

Number of jobs to run in parallel. Can be 'cuda' if cupy is installed properly. n_jobs='cuda' is only supported when method="fft".

pad : str

The type of padding to use. When method="fft", supports all numpy.pad() mode options. Can also be "reflect_limited", which pads with a reflected version of each vector mirrored on the first and last values of the vector, followed by zeros. When method="polyphase", supports all modes of scipy.signal.upfirdn(). The default (“auto”) means 'reflect_limited' for method='fft' and 'reflect' for method='polyphase'.

Added in version 0.15.

npad : int | str

Amount to pad the start and end of the data. Can also be "auto" to use a padding that will result in a power-of-two size (can be much faster).

Only used when method="fft".

method : str

Resampling method to use. Can be "fft" (default) or "polyphase" to use FFT-based or polyphase FIR resampling, respectively. These wrap scipy.signal.resample() and scipy.signal.resample_poly(), respectively.

Added in version 1.7.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Returns:
y : array

The x array resampled.

See more details in mne.filter.resample

fn = 'resample'#
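A sketch following the up/down factors documented above, with concat_ds an assumed BaseConcatDataset. Depending on the MNE version, resampling a Raw may instead be driven by a target sfreq, in which case the generic Preprocessor('resample', sfreq=...) form shown earlier is an alternative.

    from braindecode.preprocessing import Resample, preprocess

    # Downsample every recording by a factor of 2 (up/down as documented above).
    preprocess(concat_ds, [Resample(up=1.0, down=2.0)])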
class braindecode.preprocessing.SetEEGReference(ref_channels='average', projection=False, ch_type='auto', forward=None, *, joint=False, verbose=None)[source]#

Bases: Preprocessor

Specify which reference to use for EEG data.

Use this function to explicitly specify the desired reference for EEG. This can be either an existing electrode or a new virtual channel. This function will re-reference the data according to the desired reference.

Parameters:
ref_channels : list of str | str | dict

Can be:

  • The name(s) of the channel(s) used to construct the reference for every channel of ch_type.

  • 'average' to apply an average reference (default)

  • 'REST' to use the Reference Electrode Standardization Technique infinity reference (Yao, 2001).

  • A dictionary mapping names of data channels to (lists of) names of reference channels. For example, {‘A1’: ‘A3’} would replace the data in channel ‘A1’ with the difference between ‘A1’ and ‘A3’. To take the average of multiple channels as reference, supply a list of channel names as the dictionary value, e.g. {‘A1’: [‘A2’, ‘A3’]} would replace channel A1 with A1 - mean(A2, A3).

  • An empty list, in which case MNE will not attempt any re-referencing of the data

projection : bool

If ref_channels='average' this argument specifies if the average reference should be computed as a projection (True) or not (False; default). If projection=True, the average reference is added as a projection and is not applied to the data (it can be applied afterwards with the apply_proj method). If projection=False, the average reference is directly applied to the data. If ref_channels is not 'average', projection must be set to False (the default in this case).

ch_type : list of str | str

The name of the channel type to apply the reference to. Valid channel types are 'auto', 'eeg', 'ecog', 'seeg', 'dbs'. If 'auto', the first channel type of eeg, ecog, seeg or dbs that is found (in that order) will be selected.

Added in version 0.19.

Changed in version 1.2: list-of-str is now supported with projection=True.

forward : instance of Forward | None

Forward solution to use. Only used with ref_channels='REST'.

Added in version 0.21.

joint : bool

How to handle list-of-str ch_type. If False (default), one projector is created per channel type. If True, one projector is created across all channel types. This is only used when projection=True.

Added in version 1.2.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Returns:
inst : instance of Raw | Epochs | Evoked

Data with EEG channels re-referenced. If ref_channels='average' and projection=True a projection will be added instead of directly re-referencing the data.

See also

mne.set_bipolar_reference

Convenience function for creating bipolar references.

Notes

Some common referencing schemes and the corresponding value for the ref_channels parameter:

  • Average reference:

    Setting ref_channels='average' creates a new virtual reference electrode by averaging the current EEG signal. Bad EEG channels are automatically excluded if they are properly set in info['bads'].

  • A single electrode:

    Set ref_channels to a list containing the name of the channel that will act as the new reference, for example ref_channels=['Cz'].

  • The mean of multiple electrodes:

    A new virtual reference electrode is created by computing the average of the current EEG signal recorded from two or more selected channels. Set ref_channels to a list of channel names, indicating which channels to use. For example, to apply an average mastoid reference, when using the 10-20 naming scheme, set ref_channels=['M1', 'M2'].

  • REST

    The given EEG electrodes are referenced to a point at infinity using the lead fields in forward, which helps standardize the signals.

  • Different references for different channels

    Set ref_channels to a dictionary mapping source channel names (str) to the reference channel names (str or list of str). Unlike the other approaches where the same reference is applied globally, you can set different references for different channels with this method. For example, to re-reference channel ‘A1’ to ‘A2’ and ‘B1’ to the average of ‘B2’ and ‘B3’, set ref_channels={'A1': 'A2', 'B1': ['B2', 'B3']}. Warnings are issued when a mapping involves bad channels or channels of different types.

  1. If a reference is requested that is not the average reference, this function removes any pre-existing average reference projections.

  2. During source localization, the EEG signal should have an average reference.

  3. In order to apply a reference, the data must be preloaded. This is not necessary if ref_channels='average' and projection=True.

  4. For an average or REST reference, bad EEG channels are automatically excluded if they are properly set in info['bads'].

Added in version 0.9.0.

See more details in mne.channels.channels.set_eeg_reference

fn = 'set_eeg_reference'#
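A sketch, assuming an existing BaseConcatDataset named concat_ds: re-reference all EEG channels to the common average, applied directly to the data rather than added as a projection.

    from braindecode.preprocessing import SetEEGReference, preprocess

    # Common average reference, applied in place (projection=False).
    preprocess(concat_ds, [SetEEGReference(ref_channels="average",
                                           projection=False)])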
braindecode.preprocessing.create_fixed_length_windows(concat_ds: BaseConcatDataset, start_offset_samples: int = 0, stop_offset_samples: int | None = None, window_size_samples: int | None = None, window_stride_samples: int | None = None, drop_last_window: bool | None = None, mapping: dict[str, int] | None = None, preload: bool = False, picks: str | Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | slice | None = None, reject: dict[str, float] | None = None, flat: dict[str, float] | None = None, targets_from: str = 'metadata', last_target_only: bool = True, lazy_metadata: bool = False, on_missing: str = 'error', n_jobs: int = 1, verbose: bool | str | int | None = 'error')[source]#

Windower that creates sliding windows.

Parameters:
  • concat_ds (ConcatDataset) – A concat of base datasets each holding raw and description.

  • start_offset_samples (int) – Start offset from beginning of recording in samples.

  • stop_offset_samples (int | None) – Stop offset from beginning of recording in samples. If None, set to be the end of the recording.

  • window_size_samples (int | None) – Window size in samples. If None, set to be the maximum possible window size, ie length of the recording, once offsets are accounted for.

  • window_stride_samples (int | None) – Stride between windows in samples. If None, set to be equal to window_size_samples, so windows will not overlap.

  • drop_last_window (bool | None) – Whether or not to have a last overlapping window, when windows do not equally divide the continuous signal. Must be set to a bool if window_size_samples and window_stride_samples are not None.

  • mapping (dict(str: int)) – Mapping from event description to target value.

  • preload (bool) – If True, preload the data of the Epochs objects.

  • picks (str | list | slice | None) – Channels to include. If None, all available channels are used. See mne.Epochs.

  • reject (dict | None) – Epoch rejection parameters based on peak-to-peak amplitude. If None, no rejection is done based on peak-to-peak amplitude. See mne.Epochs.

  • flat (dict | None) – Epoch rejection parameters based on flatness of signals. If None, no rejection based on flatness is done. See mne.Epochs.

  • lazy_metadata (bool) – If True, metadata is not computed immediately, but only when accessed, using the experimental _LazyDataFrame.

  • on_missing (str) – What to do if one or several event ids are not found in the recording. Valid keys are ‘error’ | ‘warning’ | ‘ignore’. See mne.Epochs.

  • n_jobs (int) – Number of jobs to use to parallelize the windowing.

  • verbose (bool | str | int | None) – Control verbosity of the logging output when calling mne.Epochs.

Returns:

windows_datasets – Concatenated datasets of WindowsDataset containing the extracted windows.

Return type:

BaseConcatDataset
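A sketch of typical usage, with concat_ds an assumed, already-preprocessed BaseConcatDataset and the sample counts purely illustrative.

    from braindecode.preprocessing import create_fixed_length_windows

    # Non-overlapping 500-sample windows covering each full recording.
    windows_ds = create_fixed_length_windows(
        concat_ds,
        start_offset_samples=0,
        stop_offset_samples=None,
        window_size_samples=500,
        window_stride_samples=500,
        drop_last_window=False,
        preload=True,
    )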

braindecode.preprocessing.create_windows_from_events(concat_ds: BaseConcatDataset, trial_start_offset_samples: int = 0, trial_stop_offset_samples: int = 0, window_size_samples: int | None = None, window_stride_samples: int | None = None, drop_last_window: bool = False, mapping: dict[str, int] | None = None, preload: bool = False, drop_bad_windows: bool | None = None, picks: str | Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | slice | None = None, reject: dict[str, float] | None = None, flat: dict[str, float] | None = None, on_missing: str = 'error', accepted_bads_ratio: float = 0.0, use_mne_epochs: bool | None = None, n_jobs: int = 1, verbose: bool | str | int | None = 'error')[source]#

Create windows based on events in mne.Raw.

This function extracts windows of size window_size_samples in the interval [trial onset + trial_start_offset_samples, trial onset + trial duration + trial_stop_offset_samples] around each trial, with a separation of window_stride_samples between consecutive windows. If the last window around an event does not end at trial_stop_offset_samples and drop_last_window is set to False, an additional overlapping window that ends at trial_stop_offset_samples is created.

Windows are extracted from the interval defined by the following:

                                        trial onset +
                trial onset                duration
|--------------------|------------------------|-----------------------|
trial onset -                                             trial onset +
trial_start_offset_samples                                   duration +
                                            trial_stop_offset_samples
Parameters:
  • concat_ds (BaseConcatDataset) – A concat of base datasets each holding raw and description.

  • trial_start_offset_samples (int) – Start offset from original trial onsets, in samples. Defaults to zero.

  • trial_stop_offset_samples (int) – Stop offset from original trial stop, in samples. Defaults to zero.

  • window_size_samples (int | None) – Window size. If None, the window size is inferred from the original trial size of the first trial and trial_start_offset_samples and trial_stop_offset_samples.

  • window_stride_samples (int | None) – Stride between windows, in samples. If None, the window stride is inferred from the original trial size of the first trial and trial_start_offset_samples and trial_stop_offset_samples.

  • drop_last_window (bool) – If False, an additional overlapping window that ends at trial_stop_offset_samples will be extracted around each event when the last window does not end exactly at trial_stop_offset_samples.

  • mapping (dict(str: int)) – Mapping from event description to numerical target value.

  • preload (bool) – If True, preload the data of the Epochs objects. This is useful to reduce disk reading overhead when returning windows in a training scenario, however very large data might not fit into memory.

  • drop_bad_windows (bool) – If True, call .drop_bad() on the resulting mne.Epochs object. This step allows identifying e.g., windows that fall outside of the continuous recording. It is suggested to run this step here as otherwise the BaseConcatDataset has to be updated as well.

  • picks (str | list | slice | None) – Channels to include. If None, all available channels are used. See mne.Epochs.

  • reject (dict | None) – Epoch rejection parameters based on peak-to-peak amplitude. If None, no rejection is done based on peak-to-peak amplitude. See mne.Epochs.

  • flat (dict | None) – Epoch rejection parameters based on flatness of signals. If None, no rejection based on flatness is done. See mne.Epochs.

  • on_missing (str) – What to do if one or several event ids are not found in the recording. Valid keys are ‘error’ | ‘warning’ | ‘ignore’. See mne.Epochs.

  • accepted_bads_ratio (float, optional) – Acceptable proportion of trials with inconsistent length in a raw. If the proportion of trials whose length is exceeded by the window size is smaller than this value, only the corresponding trials are dropped and the computation continues. Otherwise, an error is raised. Defaults to 0.0 (raise an error).

  • use_mne_epochs (bool) – If False, return EEGWindowsDataset objects. If True, return mne.Epochs objects encapsulated in WindowsDataset objects, which is substantially slower than EEGWindowsDataset.

  • n_jobs (int) – Number of jobs to use to parallelize the windowing.

  • verbose (bool | str | int | None) – Control verbosity of the logging output when calling mne.Epochs.

Returns:

windows_datasets – Concatenated datasets of WindowsDataset containing the extracted windows.

Return type:

BaseConcatDataset
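A sketch of typical trial-wise windowing, with concat_ds an assumed BaseConcatDataset; the 125-sample offset is illustrative (e.g., 0.5 s at 250 Hz).

    from braindecode.preprocessing import create_windows_from_events

    # One window per trial, starting 125 samples before each trial onset;
    # window size and stride are inferred from the first trial.
    windows_ds = create_windows_from_events(
        concat_ds,
        trial_start_offset_samples=-125,
        trial_stop_offset_samples=0,
        preload=True,
    )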

braindecode.preprocessing.create_windows_from_target_channels(concat_ds, window_size_samples=None, preload=False, picks=None, reject=None, flat=None, n_jobs=1, last_target_only=True, verbose='error')[source]#
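A hedged sketch based only on the signature above, with concat_ds an assumed BaseConcatDataset whose recordings contain target channels and the window size purely illustrative.

    from braindecode.preprocessing import create_windows_from_target_channels

    # Windows whose targets are read from dedicated target channels.
    windows_ds = create_windows_from_target_channels(
        concat_ds, window_size_samples=400, preload=True)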
braindecode.preprocessing.exponential_moving_demean(data: ndarray[Any, dtype[_ScalarType_co]], factor_new: float = 0.001, init_block_size: int | None = None)[source]#

Perform exponential moving demeaning.

Compute the exponential moving mean \(m_t\) at time t as \(m_t=\mathrm{factornew} \cdot mean(x_t) + (1 - \mathrm{factornew}) \cdot m_{t-1}\).

Demean the data point \(x_t\) at time t as: \(x'_t=(x_t - m_t)\).

Parameters:
  • data (np.ndarray (n_channels, n_times))

  • factor_new (float)

  • init_block_size (int) – Demean data before this index with regular demeaning.

Returns:

demeaned – Demeaned data.

Return type:

np.ndarray (n_channels, n_times)

braindecode.preprocessing.exponential_moving_standardize(data: ndarray[Any, dtype[_ScalarType_co]], factor_new: float = 0.001, init_block_size: int | None = None, eps: float = 0.0001)[source]#

Perform exponential moving standardization.

Compute the exponential moving mean \(m_t\) at time t as \(m_t=\mathrm{factornew} \cdot mean(x_t) + (1 - \mathrm{factornew}) \cdot m_{t-1}\).

Then, compute exponential moving variance \(v_t\) at time t as \(v_t=\mathrm{factornew} \cdot (m_t - x_t)^2 + (1 - \mathrm{factornew}) \cdot v_{t-1}\).

Finally, standardize the data point \(x_t\) at time t as: \(x'_t=(x_t - m_t) / \max(\sqrt{v_t}, eps)\).

Parameters:
  • data (np.ndarray (n_channels, n_times))

  • factor_new (float)

  • init_block_size (int) – Standardize data before this index with regular standardization.

  • eps (float) – Stabilizer for division by zero variance.

Returns:

standardized – Standardized data.

Return type:

np.ndarray (n_channels, n_times)
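A sketch showing both direct use on an array and use as a pipeline step; the array shape and parameter values are illustrative.

    import numpy as np

    from braindecode.preprocessing import (Preprocessor,
                                           exponential_moving_standardize)

    # Direct use on a (n_channels, n_times) array.
    data = np.random.randn(22, 1000)
    standardized = exponential_moving_standardize(
        data, factor_new=1e-3, init_block_size=250)

    # As a pipeline step: with apply_on_array=True (the default), the function
    # is run on the underlying arrays of each Raw/Epochs object.
    pre = Preprocessor(exponential_moving_standardize,
                       factor_new=1e-3, init_block_size=250)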

braindecode.preprocessing.filterbank(raw: BaseRaw, frequency_bands: list[tuple[float, float]], drop_original_signals: bool = True, order_by_frequency_band: bool = False, **mne_filter_kwargs)[source]#

Apply multiple bandpass filters to the signals in raw. The raw object is modified in-place, and its number of channels is updated to len(frequency_bands) * len(raw.ch_names) (minus len(raw.ch_names) if drop_original_signals is True). A usage sketch follows the parameter list.

Parameters:
  • raw (mne.io.Raw) – The raw signals to be filtered.

  • frequency_bands (list(tuple)) – The frequency bands to be filtered for (e.g. [(4, 8), (8, 13)]).

  • drop_original_signals (bool) – Whether to drop the original unfiltered signals

  • order_by_frequency_band (bool) – If True, the returned channels are ordered by frequency band: with channels Cz, O1 and filterbank ranges [(4, 8), (8, 13)], the resulting order is [Cz_4-8, O1_4-8, Cz_8-13, O1_8-13]. If False, the order is [Cz_4-8, Cz_8-13, O1_4-8, O1_8-13].

  • mne_filter_kwargs (dict) – Keyword arguments for filtering supported by mne.io.Raw.filter(). Please refer to mne for a detailed explanation.
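A sketch of wrapping filterbank in a Preprocessor; apply_on_array=False is used because the function operates on the Raw object itself. concat_ds is an assumed BaseConcatDataset and the bands are illustrative.

    from braindecode.preprocessing import Preprocessor, filterbank, preprocess

    # Create theta- and alpha-band copies of every channel, in place.
    preprocess(concat_ds, [
        Preprocessor(filterbank,
                     frequency_bands=[(4.0, 8.0), (8.0, 13.0)],
                     drop_original_signals=True,
                     apply_on_array=False),
    ])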

braindecode.preprocessing.preprocess(concat_ds: BaseConcatDataset, preprocessors: list[Preprocessor], save_dir: str | None = None, overwrite: bool = False, n_jobs: int | None = None, offset: int = 0, copy_data: bool | None = None)[source]#

Apply preprocessors to a concat dataset.

Parameters:
  • concat_ds (BaseConcatDataset) – A concat of BaseDataset or WindowsDataset datasets to be preprocessed.

  • preprocessors (list(Preprocessor)) – List of Preprocessor objects to apply to the dataset.

  • save_dir (str | None) – If a string, the preprocessed data will be saved under the specified directory and the datasets in concat_ds will be reloaded with preload=False.

  • overwrite (bool) – When save_dir is provided, controls whether to delete the old subdirectories that will be written to under save_dir. If False and the corresponding subdirectories already exist, a FileExistsError will be raised.

  • n_jobs (int | None) – Number of jobs for parallel execution. See joblib.Parallel for a more detailed explanation.

  • offset (int) – If provided, the integer is added to the id of the dataset in the concat. This is useful in the setting of very large datasets, where one dataset has to be processed and saved at a time to account for its original position.

  • copy_data (bool | None) – Whether the data passed to the different jobs should be copied or passed by reference.

Returns:

Preprocessed dataset.

Return type:

BaseConcatDataset
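Putting the pieces together, a typical pipeline might look as follows; this is a sketch, with concat_ds an assumed BaseConcatDataset and all parameter values illustrative.

    from braindecode.preprocessing import (Filter, Pick, Preprocessor,
                                           exponential_moving_standardize,
                                           preprocess)

    preprocessors = [
        Pick(picks="eeg"),                # keep EEG channels only
        Filter(l_freq=4.0, h_freq=38.0),  # band-pass filter
        Preprocessor(exponential_moving_standardize,
                     factor_new=1e-3, init_block_size=1000),
    ]
    # n_jobs parallelizes over recordings; save_dir would additionally
    # serialize the preprocessed recordings to disk.
    preprocess(concat_ds, preprocessors, n_jobs=1)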

Submodules#

braindecode.preprocessing.mne_preprocess module#

Preprocessor objects based on mne methods.

class braindecode.preprocessing.mne_preprocess.Crop(tmin=0.0, tmax=None, include_tmax=True, *, verbose=None)[source]#

Bases: Preprocessor

Crop raw data file.

Limit the data from the raw file to go between specific times. Note that the new tmin is assumed to be t=0 for all subsequently called functions (e.g., time_as_index(), or Epochs). New first_samp and last_samp are set accordingly.

This function operates in-place on the instance. Use mne.io.Raw.copy() if operation on a copy is desired.

Parameters:
tmin : float

Start time of the raw data to use in seconds (must be >= 0).

tmax : float | None

End time of the raw data to use in seconds (cannot exceed data duration). If None (default), the current end of the data is used.

include_tmax : bool

If True (default), include tmax. If False, exclude tmax (similar to how Python indexing typically works).

Added in version 0.19.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Returns:
raw : instance of Raw

The cropped raw object, modified in-place.

See more details in mne.io.base.crop
fn = 'crop'#
class braindecode.preprocessing.mne_preprocess.DropChannels(ch_names, on_missing='raise')[source]#

Bases: Preprocessor

Drop channel(s).

Parameters:
ch_names : iterable or str

Iterable (e.g. list) of channel name(s) or channel name to remove.

on_missing : ‘raise’ | ‘warn’ | ‘ignore’

Can be 'raise' (default) to raise an error, 'warn' to emit a warning, or 'ignore' to ignore when entries in ch_names are not present in the raw instance.

Added in version 0.23.0.

Returns:
inst : instance of Raw, Epochs, or Evoked

The modified instance.

See also

reorder_channels
pick_channels
pick_types

See more details in mne.channels.channels.drop_channels

fn = 'drop_channels'#
class braindecode.preprocessing.mne_preprocess.Filter(l_freq, h_freq, picks=None, filter_length='auto', l_trans_bandwidth='auto', h_trans_bandwidth='auto', n_jobs=None, method='fir', iir_params=None, phase='zero', fir_window='hamming', fir_design='firwin', skip_by_annotation=('edge', 'bad_acq_skip'), pad='reflect_limited', verbose=None)[source]#

Bases: Preprocessor

Filter a subset of channels/vertices.

Parameters:
l_freq : float | None

For FIR filters, the lower pass-band edge; for IIR filters, the lower cutoff frequency. If None the data are only low-passed.

h_freq : float | None

For FIR filters, the upper pass-band edge; for IIR filters, the upper cutoff frequency. If None the data are only high-passed.

picks : str | array-like | slice | None

Channels to include. Slices and lists of integers will be interpreted as channel indices. In lists, channel type strings (e.g., ['meg', 'eeg']) will pick channels of those types, and channel name strings (e.g., ['MEG0111', 'MEG2623']) will pick the given channels. Can also be the string values 'all' to pick all channels, or 'data' to pick data channels. None (default) will pick all data channels. Note that channels in info['bads'] will be included if their names or indices are explicitly provided.

filter_length : str | int

Length of the FIR filter to use (if applicable):

  • ‘auto’ (default): The filter length is chosen based on the size of the transition regions (6.6 times the reciprocal of the shortest transition band for fir_window=’hamming’ and fir_design=”firwin2”, and half that for “firwin”).

  • str: A human-readable time in units of “s” or “ms” (e.g., “10s” or “5500ms”) will be converted to that number of samples if phase="zero", or the shortest power-of-two length at least that duration for phase="zero-double".

  • int: Specified length in samples. For fir_design=”firwin”, this should not be used.

l_trans_bandwidth : float | str

Width of the transition band at the low cut-off frequency in Hz (high pass or cutoff 1 in bandpass). Can be “auto” (default) to use a multiple of l_freq:

min(max(l_freq * 0.25, 2), l_freq)

Only used for method='fir'.

h_trans_bandwidth : float | str

Width of the transition band at the high cut-off frequency in Hz (low pass or cutoff 2 in bandpass). Can be “auto” (default in 0.14) to use a multiple of h_freq:

min(max(h_freq * 0.25, 2.), info['sfreq'] / 2. - h_freq)

Only used for method='fir'.

n_jobs : int | str

Number of jobs to run in parallel. Can be 'cuda' if cupy is installed properly and method='fir'.

method : str

'fir' will use overlap-add FIR filtering, 'iir' will use IIR forward-backward filtering (via filtfilt()).

iir_params : dict | None

Dictionary of parameters to use for IIR filtering. If iir_params=None and method="iir", 4th order Butterworth will be used. For more information, see mne.filter.construct_iir_filter().

phase : str

Phase of the filter. When method='fir', symmetric linear-phase FIR filters are constructed with the following behaviors:

"zero" (default)

The delay of this filter is compensated for, making it non-causal.

"minimum"

A minimum-phase filter will be constructed by decomposing the zero-phase filter into a minimum-phase and all-pass systems, and then retaining only the minimum-phase system (of the same length as the original zero-phase filter) via scipy.signal.minimum_phase().

"zero-double"

This is a legacy option for compatibility with MNE <= 0.13. The filter is applied twice, once forward, and once backward (also making it non-causal).

"minimum-half"

This is a legacy option for compatibility with MNE <= 1.6. A minimum-phase filter will be reconstructed from the zero-phase filter with half the length of the original filter.

When method='iir', phase='zero' (default) or equivalently 'zero-double' constructs and applies the IIR filter twice, once forward and once backward (making it non-causal) using filtfilt(); phase='forward' will apply the filter once in the forward (causal) direction using lfilter().

Added in version 0.13.

Changed in version 1.7: The behavior for phase="minimum" was fixed to use a filter of the requested length and improved suppression.

fir_window : str

The window to use in FIR design, can be “hamming” (default), “hann” (default in 0.13), or “blackman”.

Added in version 0.15.

fir_design : str

Can be “firwin” (default) to use scipy.signal.firwin(), or “firwin2” to use scipy.signal.firwin2(). “firwin” uses a time-domain design technique that generally gives improved attenuation using fewer samples than “firwin2”.

Added in version 0.15.

skip_by_annotation : str | list of str

If a string (or list of str), any annotation segment that begins with the given string will not be included in filtering, and segments on either side of the given excluded annotated segment will be filtered separately (i.e., as independent signals). The default, ('edge', 'bad_acq_skip'), will separately filter any segments that were concatenated by mne.concatenate_raws() or mne.io.Raw.append(), or separated during acquisition. To disable, provide an empty list. Only used if inst is raw.

Added in version 0.16.

pad : str

The type of padding to use. Supports all numpy.pad() mode options. Can also be "reflect_limited", which pads with a reflected version of each vector mirrored on the first and last values of the vector, followed by zeros. Only used for method='fir'.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Returns:
inst : instance of Epochs, Evoked, SourceEstimate, or Raw

The filtered data.

See more details in mne.io.base.filter

fn = 'filter'#
class braindecode.preprocessing.mne_preprocess.Pick(picks, exclude=(), *, verbose=None)[source]#

Bases: Preprocessor

Pick a subset of channels.

Parameters:
picks : str | array-like | slice | None

Channels to include. Slices and lists of integers will be interpreted as channel indices. In lists, channel type strings (e.g., ['meg', 'eeg']) will pick channels of those types, and channel name strings (e.g., ['MEG0111', 'MEG2623']) will pick the given channels. Can also be the string values 'all' to pick all channels, or 'data' to pick data channels. None (default) will pick all channels. Note that channels in info['bads'] will be included if their names or indices are explicitly provided.

exclude : list | str

Set of channels to exclude, only used when picking based on types (e.g., exclude=”bads” when picks=”meg”).

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Added in version 0.24.0.

Returns:
inst : instance of Raw, Epochs, or Evoked

The modified instance.

See more details in mne.channels.channels.pick
fn = 'pick'#
class braindecode.preprocessing.mne_preprocess.Preprocessor(fn: Callable | str, *, apply_on_array: bool = True, **kwargs)[source]#

Bases: object

Preprocessor for an MNE Raw or Epochs object.

Applies the provided preprocessing function to the data of a Raw or Epochs object. If the function is provided as a string, the method with that name will be used (e.g., ‘pick_channels’, ‘filter’, etc.). If it is provided as a callable and apply_on_array is True, the apply_function method of Raw and Epochs object will be used to apply the function on the internal arrays of Raw and Epochs. If apply_on_array is False, the callable must directly modify the Raw or Epochs object (e.g., by calling its method(s) or modifying its attributes).

Parameters:
  • fn (str or callable) – If str, the Raw/Epochs object must have a method with that name. If callable, directly apply the callable to the object.

  • apply_on_array (bool) – Ignored if fn is not a callable. If True, the apply_function of Raw and Epochs object will be used to run fn on the underlying arrays directly. If False, fn must directly modify the Raw or Epochs object.

  • kwargs – Keyword arguments to be forwarded to the MNE function.

apply(raw_or_epochs: BaseRaw | BaseEpochs)[source]#
class braindecode.preprocessing.mne_preprocess.Resample(up=1.0, down=1.0, *, axis=-1, window='auto', n_jobs=None, pad='auto', npad=100, method='fft', verbose=None)[source]#

Bases: Preprocessor

Resample an array.

Operates along the last dimension of the array.

Parameters:
x : ndarray

Signal to resample.

up : float

Factor to upsample by.

down : float

Factor to downsample by.

axis : int

Axis along which to resample (default is the last axis).

window : str | tuple

When method="fft", this is the frequency-domain window to use in resampling, and should be the same length as the signal; see scipy.signal.resample() for details. When method="polyphase", this is the time-domain linear-phase window to use after upsampling the signal; see scipy.signal.resample_poly() for details. The default "auto" will use "boxcar" for method="fft" and ("kaiser", 5.0) for method="polyphase".

n_jobs : int | str

Number of jobs to run in parallel. Can be 'cuda' if cupy is installed properly. n_jobs='cuda' is only supported when method="fft".

pad : str

The type of padding to use. When method="fft", supports all numpy.pad() mode options. Can also be "reflect_limited", which pads with a reflected version of each vector mirrored on the first and last values of the vector, followed by zeros. When method="polyphase", supports all modes of scipy.signal.upfirdn(). The default (“auto”) means 'reflect_limited' for method='fft' and 'reflect' for method='polyphase'.

Added in version 0.15.

npad : int | str

Amount to pad the start and end of the data. Can also be "auto" to use a padding that will result in a power-of-two size (can be much faster).

Only used when method="fft".

method : str

Resampling method to use. Can be "fft" (default) or "polyphase" to use FFT-based or polyphase FIR resampling, respectively. These wrap scipy.signal.resample() and scipy.signal.resample_poly(), respectively.

Added in version 1.7.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Returns:
y : array

The x array resampled.

See more details in mne.filter.resample

fn = 'resample'#
class braindecode.preprocessing.mne_preprocess.SetEEGReference(ref_channels='average', projection=False, ch_type='auto', forward=None, *, joint=False, verbose=None)[source]#

Bases: Preprocessor

Specify which reference to use for EEG data.

Use this function to explicitly specify the desired reference for EEG. This can be either an existing electrode or a new virtual channel. This function will re-reference the data according to the desired reference.

Parameters:
ref_channels : list of str | str | dict

Can be:

  • The name(s) of the channel(s) used to construct the reference for every channel of ch_type.

  • 'average' to apply an average reference (default)

  • 'REST' to use the Reference Electrode Standardization Technique infinity reference (Yao, 2001).

  • A dictionary mapping names of data channels to (lists of) names of reference channels. For example, {‘A1’: ‘A3’} would replace the data in channel ‘A1’ with the difference between ‘A1’ and ‘A3’. To take the average of multiple channels as reference, supply a list of channel names as the dictionary value, e.g. {‘A1’: [‘A2’, ‘A3’]} would replace channel A1 with A1 - mean(A2, A3).

  • An empty list, in which case MNE will not attempt any re-referencing of the data

projection : bool

If ref_channels='average' this argument specifies if the average reference should be computed as a projection (True) or not (False; default). If projection=True, the average reference is added as a projection and is not applied to the data (it can be applied afterwards with the apply_proj method). If projection=False, the average reference is directly applied to the data. If ref_channels is not 'average', projection must be set to False (the default in this case).

ch_type : list of str | str

The name of the channel type to apply the reference to. Valid channel types are 'auto', 'eeg', 'ecog', 'seeg', 'dbs'. If 'auto', the first channel type of eeg, ecog, seeg or dbs that is found (in that order) will be selected.

Added in version 0.19.

Changed in version 1.2: list-of-str is now supported with projection=True.

forward : instance of Forward | None

Forward solution to use. Only used with ref_channels='REST'.

Added in version 0.21.

joint : bool

How to handle list-of-str ch_type. If False (default), one projector is created per channel type. If True, one projector is created across all channel types. This is only used when projection=True.

Added in version 1.2.

verbose : bool | str | int | None

Control verbosity of the logging output. If None, use the default verbosity level. See the logging documentation and mne.verbose() for details. Should only be passed as a keyword argument.

Returns:
inst : instance of Raw | Epochs | Evoked

Data with EEG channels re-referenced. If ref_channels='average' and projection=True a projection will be added instead of directly re-referencing the data.

See also

mne.set_bipolar_reference

Convenience function for creating bipolar references.

Notes

Some common referencing schemes and the corresponding value for the ref_channels parameter:

  • Average reference:

    Setting ref_channels='average' creates a new virtual reference electrode by averaging the current EEG signal. Bad EEG channels are automatically excluded if they are properly set in info['bads'].

  • A single electrode:

    Set ref_channels to a list containing the name of the channel that will act as the new reference, for example ref_channels=['Cz'].

  • The mean of multiple electrodes:

    A new virtual reference electrode is created by computing the average of the current EEG signal recorded from two or more selected channels. Set ref_channels to a list of channel names, indicating which channels to use. For example, to apply an average mastoid reference, when using the 10-20 naming scheme, set ref_channels=['M1', 'M2'].

  • REST

    The given EEG electrodes are referenced to a point at infinity using the lead fields in forward, which helps standardize the signals.

  • Different references for different channels

    Set ref_channels to a dictionary mapping source channel names (str) to the reference channel names (str or list of str). Unlike the other approaches where the same reference is applied globally, you can set different references for different channels with this method. For example, to re-reference channel ‘A1’ to ‘A2’ and ‘B1’ to the average of ‘B2’ and ‘B3’, set ref_channels={'A1': 'A2', 'B1': ['B2', 'B3']}. Warnings are issued when a mapping involves bad channels or channels of different types.

  1. If a reference is requested that is not the average reference, this function removes any pre-existing average reference projections.

  2. During source localization, the EEG signal should have an average reference.

  3. In order to apply a reference, the data must be preloaded. This is not necessary if ref_channels='average' and projection=True.

  4. For an average or REST reference, bad EEG channels are automatically excluded if they are properly set in info['bads'].

Added in version 0.9.0.

See more details in mne.channels.channels.set_eeg_reference

fn = 'set_eeg_reference'#

braindecode.preprocessing.preprocess module#

Preprocessors that work on Raw or Epochs objects.

class braindecode.preprocessing.preprocess.Preprocessor(fn: Callable | str, *, apply_on_array: bool = True, **kwargs)[source]#

Bases: object

Preprocessor for an MNE Raw or Epochs object.

Applies the provided preprocessing function to the data of a Raw or Epochs object. If the function is provided as a string, the method with that name will be used (e.g., ‘pick_channels’, ‘filter’, etc.). If it is provided as a callable and apply_on_array is True, the apply_function method of Raw and Epochs object will be used to apply the function on the internal arrays of Raw and Epochs. If apply_on_array is False, the callable must directly modify the Raw or Epochs object (e.g., by calling its method(s) or modifying its attributes).

Parameters:
  • fn (str or callable) – If str, the Raw/Epochs object must have a method with that name. If callable, directly apply the callable to the object.

  • apply_on_array (bool) – Ignored if fn is not a callable. If True, the apply_function of Raw and Epochs object will be used to run fn on the underlying arrays directly. If False, fn must directly modify the Raw or Epochs object.

  • kwargs – Keyword arguments to be forwarded to the MNE function.

apply(raw_or_epochs: BaseRaw | BaseEpochs)[source]#
braindecode.preprocessing.preprocess.exponential_moving_demean(data: ndarray[Any, dtype[_ScalarType_co]], factor_new: float = 0.001, init_block_size: int | None = None)[source]#

Perform exponential moving demeaning.

Compute the exponential moving mean \(m_t\) at time t as \(m_t=\mathrm{factornew} \cdot mean(x_t) + (1 - \mathrm{factornew}) \cdot m_{t-1}\).

Demean the data point \(x_t\) at time t as: \(x'_t=(x_t - m_t)\).

Parameters:
  • data (np.ndarray (n_channels, n_times))

  • factor_new (float)

  • init_block_size (int) – Demean data before this index with regular demeaning.

Returns:

demeaned – Demeaned data.

Return type:

np.ndarray (n_channels, n_times)

braindecode.preprocessing.preprocess.exponential_moving_standardize(data: ndarray[Any, dtype[_ScalarType_co]], factor_new: float = 0.001, init_block_size: int | None = None, eps: float = 0.0001)[source]#

Perform exponential moving standardization.

Compute the exponential moving mean \(m_t\) at time t as \(m_t=\mathrm{factornew} \cdot mean(x_t) + (1 - \mathrm{factornew}) \cdot m_{t-1}\).

Then, compute exponential moving variance \(v_t\) at time t as \(v_t=\mathrm{factornew} \cdot (m_t - x_t)^2 + (1 - \mathrm{factornew}) \cdot v_{t-1}\).

Finally, standardize the data point \(x_t\) at time t as: \(x'_t=(x_t - m_t) / \max(\sqrt{v_t}, eps)\).

Parameters:
  • data (np.ndarray (n_channels, n_times))

  • factor_new (float)

  • init_block_size (int) – Standardize data before this index with regular standardization.

  • eps (float) – Stabilizer for division by zero variance.

Returns:

standardized – Standardized data.

Return type:

np.ndarray (n_channels, n_times)

braindecode.preprocessing.preprocess.filterbank(raw: BaseRaw, frequency_bands: list[tuple[float, float]], drop_original_signals: bool = True, order_by_frequency_band: bool = False, **mne_filter_kwargs)[source]#

Apply multiple bandpass filters to the signals in raw. The raw object is modified in-place, and its number of channels is updated to len(frequency_bands) * len(raw.ch_names) (minus len(raw.ch_names) if drop_original_signals is True).

Parameters:
  • raw (mne.io.Raw) – The raw signals to be filtered.

  • frequency_bands (list(tuple)) – The frequency bands to be filtered for (e.g. [(4, 8), (8, 13)]).

  • drop_original_signals (bool) – Whether to drop the original unfiltered signals

  • order_by_frequency_band (bool) – If True, the returned channels are ordered by frequency band: with channels Cz, O1 and filterbank ranges [(4, 8), (8, 13)], the resulting order is [Cz_4-8, O1_4-8, Cz_8-13, O1_8-13]. If False, the order is [Cz_4-8, Cz_8-13, O1_4-8, O1_8-13].

  • mne_filter_kwargs (dict) – Keyword arguments for filtering supported by mne.io.Raw.filter(). Please refer to mne for a detailed explanation.

braindecode.preprocessing.preprocess.preprocess(concat_ds: BaseConcatDataset, preprocessors: list[Preprocessor], save_dir: str | None = None, overwrite: bool = False, n_jobs: int | None = None, offset: int = 0, copy_data: bool | None = None)[source]#

Apply preprocessors to a concat dataset.

Parameters:
  • concat_ds (BaseConcatDataset) – A concat of BaseDataset or WindowsDataset datasets to be preprocessed.

  • preprocessors (list(Preprocessor)) – List of Preprocessor objects to apply to the dataset.

  • save_dir (str | None) – If a string, the preprocessed data will be saved under the specified directory and the datasets in concat_ds will be reloaded with preload=False.

  • overwrite (bool) – When save_dir is provided, controls whether to delete the old subdirectories that will be written to under save_dir. If False and the corresponding subdirectories already exist, a FileExistsError will be raised.

  • n_jobs (int | None) – Number of jobs for parallel execution. See joblib.Parallel for a more detailed explanation.

  • offset (int) – If provided, the integer is added to the id of the dataset in the concat. This is useful in the setting of very large datasets, where one dataset has to be processed and saved at a time to account for its original position.

  • copy_data (bool | None) – Whether the data passed to the different jobs should be copied or passed by reference.

Returns:

Preprocessed dataset.

Return type:

BaseConcatDataset

braindecode.preprocessing.windowers module#

Get epochs from mne.Raw

braindecode.preprocessing.windowers.create_fixed_length_windows(concat_ds: BaseConcatDataset, start_offset_samples: int = 0, stop_offset_samples: int | None = None, window_size_samples: int | None = None, window_stride_samples: int | None = None, drop_last_window: bool | None = None, mapping: dict[str, int] | None = None, preload: bool = False, picks: str | Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | slice | None = None, reject: dict[str, float] | None = None, flat: dict[str, float] | None = None, targets_from: str = 'metadata', last_target_only: bool = True, lazy_metadata: bool = False, on_missing: str = 'error', n_jobs: int = 1, verbose: bool | str | int | None = 'error')[source]#

Windower that creates sliding windows.

Parameters:
  • concat_ds (ConcatDataset) – A concat of base datasets each holding raw and description.

  • start_offset_samples (int) – Start offset from beginning of recording in samples.

  • stop_offset_samples (int | None) – Stop offset from beginning of recording in samples. If None, set to be the end of the recording.

  • window_size_samples (int | None) – Window size in samples. If None, set to be the maximum possible window size, ie length of the recording, once offsets are accounted for.

  • window_stride_samples (int | None) – Stride between windows in samples. If None, set to be equal to window_size_samples, so windows will not overlap.

  • drop_last_window (bool | None) – Whether or not to have a last overlapping window, when windows do not equally divide the continuous signal. Must be set to a bool if window_size_samples and window_stride_samples are not None.

  • mapping (dict(str: int)) – Mapping from event description to target value.

  • preload (bool) – If True, preload the data of the Epochs objects.

  • picks (str | list | slice | None) – Channels to include. If None, all available channels are used. See mne.Epochs.

  • reject (dict | None) – Epoch rejection parameters based on peak-to-peak amplitude. If None, no rejection is done based on peak-to-peak amplitude. See mne.Epochs.

  • flat (dict | None) – Epoch rejection parameters based on flatness of signals. If None, no rejection based on flatness is done. See mne.Epochs.

  • lazy_metadata (bool) – If True, metadata is not computed immediately, but only when accessed, using the experimental _LazyDataFrame.

  • on_missing (str) – What to do if one or several event ids are not found in the recording. Valid keys are ‘error’ | ‘warning’ | ‘ignore’. See mne.Epochs.

  • n_jobs (int) – Number of jobs to use to parallelize the windowing.

  • verbose (bool | str | int | None) – Control verbosity of the logging output when calling mne.Epochs.

Returns:

windows_datasets – Concatenated datasets of WindowsDataset containing the extracted windows.

Return type:

BaseConcatDataset

braindecode.preprocessing.windowers.create_windows_from_events(concat_ds: BaseConcatDataset, trial_start_offset_samples: int = 0, trial_stop_offset_samples: int = 0, window_size_samples: int | None = None, window_stride_samples: int | None = None, drop_last_window: bool = False, mapping: dict[str, int] | None = None, preload: bool = False, drop_bad_windows: bool | None = None, picks: str | Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | slice | None = None, reject: dict[str, float] | None = None, flat: dict[str, float] | None = None, on_missing: str = 'error', accepted_bads_ratio: float = 0.0, use_mne_epochs: bool | None = None, n_jobs: int = 1, verbose: bool | str | int | None = 'error')[source]#

Create windows based on events in mne.Raw.

This function extracts windows of size window_size_samples in the interval [trial onset + trial_start_offset_samples, trial onset + trial duration + trial_stop_offset_samples] around each trial, with a separation of window_stride_samples between consecutive windows. If the last window around an event does not end at trial_stop_offset_samples and drop_last_window is set to False, an additional overlapping window that ends at trial_stop_offset_samples is created.

Windows are extracted from the interval defined by the following:

                                        trial onset +
                trial onset                duration
|--------------------|------------------------|-----------------------|
trial onset -                                             trial onset +
trial_start_offset_samples                                   duration +
                                            trial_stop_offset_samples
Parameters:
  • concat_ds (BaseConcatDataset) – A concat of base datasets each holding raw and description.

  • trial_start_offset_samples (int) – Start offset from original trial onsets, in samples. Defaults to zero.

  • trial_stop_offset_samples (int) – Stop offset from original trial stop, in samples. Defaults to zero.

  • window_size_samples (int | None) – Window size. If None, the window size is inferred from the original trial size of the first trial and trial_start_offset_samples and trial_stop_offset_samples.

  • window_stride_samples (int | None) – Stride between windows, in samples. If None, the window stride is inferred from the original trial size of the first trial and trial_start_offset_samples and trial_stop_offset_samples.

  • drop_last_window (bool) – If False, an additional overlapping window that ends at trial_stop_offset_samples will be extracted around each event when the last window does not end exactly at trial_stop_offset_samples.

  • mapping (dict(str: int)) – Mapping from event description to numerical target value.

  • preload (bool) – If True, preload the data of the Epochs objects. This is useful to reduce disk reading overhead when returning windows in a training scenario, however very large data might not fit into memory.

  • drop_bad_windows (bool) – If True, call .drop_bad() on the resulting mne.Epochs object. This step allows identifying e.g., windows that fall outside of the continuous recording. It is suggested to run this step here as otherwise the BaseConcatDataset has to be updated as well.

  • picks (str | list | slice | None) – Channels to include. If None, all available channels are used. See mne.Epochs.

  • reject (dict | None) – Epoch rejection parameters based on peak-to-peak amplitude. If None, no rejection is done based on peak-to-peak amplitude. See mne.Epochs.

  • flat (dict | None) – Epoch rejection parameters based on flatness of signals. If None, no rejection based on flatness is done. See mne.Epochs.

  • on_missing (str) – What to do if one or several event ids are not found in the recording. Valid keys are ‘error’ | ‘warning’ | ‘ignore’. See mne.Epochs.

  • accepted_bads_ratio (float, optional) – Acceptable proportion of trials with inconsistent length in a raw. If the proportion of trials whose length is exceeded by the window size is smaller than this value, only the corresponding trials are dropped and the computation continues. Otherwise, an error is raised. Defaults to 0.0 (raise an error).

  • use_mne_epochs (bool) – If False, return EEGWindowsDataset objects. If True, return mne.Epochs objects encapsulated in WindowsDataset objects, which is substantially slower than EEGWindowsDataset.

  • n_jobs (int) – Number of jobs to use to parallelize the windowing.

  • verbose (bool | str | int | None) – Control verbosity of the logging output when calling mne.Epochs.

Returns:

windows_datasets – Concatenated datasets of WindowsDataset containing the extracted windows.

Return type:

BaseConcatDataset

braindecode.preprocessing.windowers.create_windows_from_target_channels(concat_ds, window_size_samples=None, preload=False, picks=None, reject=None, flat=None, n_jobs=1, last_target_only=True, verbose='error')[source]#