# braindecode.training.PostEpochTrainScoring¶

class braindecode.training.PostEpochTrainScoring(scoring, lower_is_better=True, name=None, target_extractor=<function to_numpy>)

Epoch scoring class that recomputes predictions on the training set after each epoch, in validation (eval) mode.

Note: For unknown reasons, this affects the global random generator, so all results may change slightly if you add this scoring callback.

Parameters
scoring : None, str, or callable (default=None)

If None, use the score method of the model. If str, it should be a valid sklearn scorer (e.g. “f1”, “accuracy”). If a callable, it should have the signature (model, X, y), and it should return a scalar. This works analogously to the scoring parameter in sklearn’s GridSearchCV et al.

lower_is_better : bool (default=True)

Whether lower scores should be considered better (True) or worse (False).

name : str or None (default=None)

If not an explicit string, tries to infer the name from the scoring argument.

target_extractor : callable (default=to_numpy)

This is called on y before it is passed to scoring.
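A minimal sketch of the callable form of `scoring`. The `misclassification_rate` scorer and `DummyModel` class below are illustrative, not part of braindecode; only the `(model, X, y) -> scalar` contract comes from the documentation above.

```python
def misclassification_rate(model, X, y):
    """Custom scorer with the documented (model, X, y) signature.

    Returns a scalar; pair it with lower_is_better=True, since a
    lower error rate is better.
    """
    y_pred = model.predict(X)
    wrong = sum(1 for yp, yt in zip(y_pred, y) if yp != yt)
    return wrong / len(y)


class DummyModel:
    """Stand-in for a trained net, just for demonstration."""

    def predict(self, X):
        # Always predicts class 0
        return [0] * len(X)
```

With a string such as "accuracy", the callback can infer `name` automatically; with a callable like the one above you would typically pass an explicit `name`. Higher-is-better sklearn-style scorers would instead use `lower_is_better=False`.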

Methods

| Method | Description |
| --- | --- |
| `get_test_data(dataset_train, dataset_valid)` | Return data needed to perform scoring. |
| `initialize()` | (Re-)Set the initial state of the callback. |
| `on_batch_begin(net[, X, y, training])` | Called at the beginning of each batch. |
| `on_batch_end(net, y, y_pred, training, **kwargs)` | Called at the end of each batch. |
| `on_epoch_begin(net, dataset_train, ...)` | Called at the beginning of each epoch. |
| `on_epoch_end(net, dataset_train, ...)` | Called at the end of each epoch. |
| `on_grad_computed(net, named_parameters[, X, ...])` | Called once per batch after gradients have been computed but before an update step was performed. |
| `on_train_begin(net, X, y, **kwargs)` | Called at the beginning of training. |
| `on_train_end(*args, **kwargs)` | Called at the end of training. |
| `get_params`, `set_params` | |
__hash__(/)

Return hash(self).
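The hooks listed above are invoked by the training loop in a fixed order. A stdlib-only sketch of that order, with a stub callback and a toy driver loop standing in for a real net (both are illustrative, not braindecode code):

```python
class RecordingCallback:
    """Stub callback that records which hook fired, in order."""

    def __init__(self):
        self.events = []

    def on_train_begin(self, net, **kwargs):
        self.events.append("train_begin")

    def on_epoch_begin(self, net, **kwargs):
        self.events.append("epoch_begin")

    def on_batch_begin(self, net, **kwargs):
        self.events.append("batch_begin")

    def on_batch_end(self, net, **kwargs):
        self.events.append("batch_end")

    def on_epoch_end(self, net, **kwargs):
        self.events.append("epoch_end")

    def on_train_end(self, net, **kwargs):
        self.events.append("train_end")


def run_training(cb, n_epochs=1, n_batches=2, net=None):
    """Toy driver mimicking the order in which the hooks fire."""
    cb.on_train_begin(net)
    for _ in range(n_epochs):
        cb.on_epoch_begin(net)
        for _ in range(n_batches):
            cb.on_batch_begin(net)
            cb.on_batch_end(net)
        cb.on_epoch_end(net)
    cb.on_train_end(net)
```

PostEpochTrainScoring does its actual scoring work in `on_epoch_end`, after all batches of the epoch have been seen.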

get_test_data(dataset_train, dataset_valid)

Return data needed to perform scoring.

This is a convenience method that handles picking of train/valid, different types of input data, use of cache, etc. for you.

Parameters
dataset_train

Incoming training data or dataset.

dataset_valid

Incoming validation data or dataset.

Returns
X_test

Input data used for making the prediction.

y_test

Target ground truth. If caching was enabled, return cached y_test.

y_pred : list

The predicted targets. If caching was disabled, the list is empty. If caching was enabled, the list contains the batches of the predictions. It may thus be necessary to concatenate the output before working with it: y_pred = np.concatenate(y_pred)
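When caching is enabled, `y_pred` holds one array per batch, so it usually needs to be concatenated before use. A small sketch with NumPy; the batch sizes and class count here are made up for illustration:

```python
import numpy as np

# Hypothetical cached predictions: three batches of per-class scores
y_pred_batches = [
    np.ones((32, 2)),  # batch 1
    np.ones((32, 2)),  # batch 2
    np.ones((16, 2)),  # final, smaller batch
]

# Concatenate along the batch axis before scoring or inspection
y_pred = np.concatenate(y_pred_batches)
```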

initialize()

(Re-)Set the initial state of the callback. Use this e.g. if the callback tracks some state that should be reset when the model is re-initialized.

This method should return self.

on_batch_begin(net, X=None, y=None, training=None, **kwargs)

Called at the beginning of each batch.

on_batch_end(net, y, y_pred, training, **kwargs)

Called at the end of each batch.

on_epoch_begin(net, dataset_train, dataset_valid, **kwargs)

Called at the beginning of each epoch.

on_epoch_end(net, dataset_train, dataset_valid, **kwargs)

Called at the end of each epoch.

on_grad_computed(net, named_parameters, X=None, y=None, training=None, **kwargs)

Called once per batch after gradients have been computed but before an update step was performed.

on_train_begin(net, X, y, **kwargs)

Called at the beginning of training.

on_train_end(*args, **kwargs)

Called at the end of training.