# braindecode.training.CroppedTrialEpochScoring

```
class braindecode.training.CroppedTrialEpochScoring(scoring, lower_is_better=True, on_train=False, name=None, target_extractor=<function to_numpy>, use_caching=True)
```

Callback to compute trial-wise scores from a model that makes predictions on (super)crops.
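To make the idea concrete, here is a schematic numpy illustration (not braindecode's implementation; all array names are hypothetical): a cropped-decoding model emits one prediction per crop, and a trial-level prediction is obtained by aggregating the crop predictions of each trial, e.g. by averaging, before scoring.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_crops, n_classes = 4, 10, 2
# Hypothetical per-crop class probabilities, shape (n_trials, n_crops, n_classes)
crop_preds = rng.random((n_trials, n_crops, n_classes))

# Aggregate crops into one prediction per trial, then pick the argmax class
trial_preds = crop_preds.mean(axis=1).argmax(axis=1)

# Trial-wise accuracy against hypothetical ground-truth labels
y_true = np.array([0, 1, 0, 1])
trial_accuracy = (trial_preds == y_true).mean()
```

The callback performs this crop-to-trial aggregation internally so that the chosen `scoring` metric is evaluated per trial rather than per crop.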

Methods

| Method | Description |
| --- | --- |
| `get_test_data(dataset_train, dataset_valid)` | Return data needed to perform scoring. |
| `initialize()` | (Re-)Set the initial state of the callback. |
| `on_batch_begin(net[, X, y, training])` | Called at the beginning of each batch. |
| `on_batch_end(net, y, y_pred, training, **kwargs)` | Called at the end of each batch. |
| `on_epoch_begin(net, dataset_train, …)` | Called at the beginning of each epoch. |
| `on_epoch_end(net, dataset_train, …)` | Called at the end of each epoch. |
| `on_grad_computed(net, named_parameters[, X, …])` | Called once per batch after gradients have been computed but before an update step is performed. |
| `on_train_begin(net, X, y, **kwargs)` | Called at the beginning of training. |
| `on_train_end(*args, **kwargs)` | Called at the end of training. |
| `get_params` | |
| `set_params` | |
`get_test_data(dataset_train, dataset_valid)`

Return data needed to perform scoring.

This is a convenience method that handles the selection of train/valid data, different types of input data, use of the cache, etc. for you.

Parameters
dataset_train

Incoming training data or dataset.

dataset_valid

Incoming validation data or dataset.

Returns
X_test

Input data used for making the prediction.

y_test

Target ground truth. If caching was enabled, return cached y_test.

y_pred (list)

The predicted targets. If caching was disabled, the list is empty. If caching was enabled, the list contains the batches of predictions; it may therefore be necessary to concatenate the output before working with it: `y_pred = np.concatenate(y_pred)`
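The concatenation step mentioned above can be sketched with plain numpy (the batch contents here are hypothetical placeholders):

```python
import numpy as np

# Hypothetical cached prediction batches, as returned when use_caching=True:
# a list with one array per batch.
y_pred_batches = [np.array([0, 1, 1]), np.array([1, 0]), np.array([0])]

# Concatenate into a single flat array before passing it to a scoring function.
y_pred = np.concatenate(y_pred_batches)
```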

`initialize()`

(Re-)Set the initial state of the callback. Use this e.g. if the callback tracks some state that should be reset when the model is re-initialized.

This method should return self.
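A minimal sketch of the pattern described above (a hypothetical class, not braindecode's actual implementation): `initialize()` resets any tracked state and returns `self` so that calls can be chained.

```python
class ScoreTrackingCallback:
    """Hypothetical callback illustrating the initialize() contract."""

    def initialize(self):
        # Reset state that accumulates during training, so the callback
        # behaves correctly when the model is re-initialized.
        self.y_preds_ = []
        self.best_score_ = None
        # Returning self allows chaining, e.g. cb.initialize().on_train_begin(...)
        return self

cb = ScoreTrackingCallback().initialize()
```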

`on_batch_begin(net, X=None, y=None, training=None, **kwargs)`

Called at the beginning of each batch.

`on_batch_end(net, y, y_pred, training, **kwargs)`

Called at the end of each batch.

`on_epoch_begin(net, dataset_train, dataset_valid, **kwargs)`

Called at the beginning of each epoch.

`on_epoch_end(net, dataset_train, dataset_valid, **kwargs)`

Called at the end of each epoch.

`on_grad_computed(net, named_parameters, X=None, y=None, training=None, **kwargs)`

Called once per batch after gradients have been computed but before an update step is performed.

`on_train_begin(net, X, y, **kwargs)`

Called at the beginning of training.

`on_train_end(*args, **kwargs)`

Called at the end of training.