braindecode.datasets.MOABBDataset

class braindecode.datasets.MOABBDataset(dataset_name, subject_ids)

A class for datasets fetched through MOABB (Mother of All BCI Benchmarks), wrapped as a braindecode dataset.

Parameters
dataset_name: str

Name of the dataset included in MOABB to be fetched.

subject_ids: list(int) | int

(List of) int of subject(s) to be fetched.
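A minimal usage sketch, assuming braindecode and moabb are installed and that "BNCI2014001" is a dataset name available in your moabb version (newer moabb releases may spell it "BNCI2014_001"). Fetching triggers a download on first use.

```python
from braindecode.datasets import MOABBDataset

# Fetch subjects 1 and 2 of the BNCI 2014-001 motor imagery dataset.
dataset = MOABBDataset(dataset_name="BNCI2014001", subject_ids=[1, 2])

# The description DataFrame has one row per fetched recording.
print(dataset.description)
```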

Attributes
cumulative_sizes
transform

Methods

get_metadata()

Concatenate the metadata and description of the wrapped Epochs.

save(path[, overwrite])

Save dataset to files.

split([by, property, split_ids])

Split the dataset based on information listed in its description DataFrame or based on indices.

cumsum

get_metadata()

Concatenate the metadata and description of the wrapped Epochs.

Returns
pd.DataFrame:

DataFrame containing as many rows as there are windows in the BaseConcatDataset, with the metadata and description information for each window.
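A sketch of retrieving per-window metadata, assuming braindecode is installed; since each row corresponds to a window, the dataset is windowed first with create_windows_from_events (its import path may differ between braindecode versions).

```python
from braindecode.datasets import MOABBDataset
from braindecode.preprocessing import create_windows_from_events

dataset = MOABBDataset(dataset_name="BNCI2014001", subject_ids=[1])

# Cut the continuous recordings into trial-based windows.
windows_dataset = create_windows_from_events(
    dataset,
    trial_start_offset_samples=0,
    trial_stop_offset_samples=0,
)

# One row per window, combining the Epochs metadata and the description.
df = windows_dataset.get_metadata()
print(df.head())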

save(path, overwrite=False)

Save dataset to files.

Parameters
path: str

Directory to which the .fif / -epo.fif and .json files are saved.

overwrite: bool

Whether to delete old files (.json, .fif, -epo.fif) in the specified directory prior to saving.
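A sketch of saving a fetched dataset to disk, assuming braindecode is installed; the temporary directory stands in for any destination path.

```python
import tempfile

from braindecode.datasets import MOABBDataset

dataset = MOABBDataset(dataset_name="BNCI2014001", subject_ids=[1])

with tempfile.TemporaryDirectory() as tmpdir:
    # overwrite=True deletes any old .json / .fif / -epo.fif files
    # in tmpdir before writing the new ones.
    dataset.save(path=tmpdir, overwrite=True)
```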

split(by=None, property=None, split_ids=None)

Split the dataset based on information listed in its description DataFrame or based on indices.

Parameters
by: str | list(int) | list(list(int))

If by is a string, splitting is performed based on the description DataFrame column with this name. If by is a (list of) list of int, the position in the outer list corresponds to the split id and the integers to the indices of the datapoints in that split.

property: str

Some property which is listed in the info DataFrame.

split_ids: list(int) | list(list(int))

List of indices to be combined in a subset.

Returns
splits: dict{str: BaseConcatDataset}

A dictionary with the name of the split (a string) as key and the dataset as value.
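A sketch of both splitting modes, assuming braindecode is installed; the split keys shown (subject ids, positional split ids as strings) follow the description above.

```python
from braindecode.datasets import MOABBDataset

dataset = MOABBDataset(dataset_name="BNCI2014001", subject_ids=[1, 2])

# Mode 1: split on a description column; one entry per unique value
# of "subject", each value a BaseConcatDataset.
subject_splits = dataset.split("subject")

# Mode 2: split by explicit indices; the position in the outer list
# is the split id, the inner ints select datapoints for that split.
index_splits = dataset.split(by=[[0], [1]])
print(list(index_splits.keys()))
```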