braindecode.visualization.compute_metrics#
- braindecode.visualization.compute_metrics(explanations, reference, chs_info=None, abs_reference=True, abs_explanation=False, prctile_val=95)[source]#
Compute attribution-quality metrics between explanations and reference.
If `chs_info` is provided, attributions are first averaged over time and projected onto a 2-D scalp topography via MNE before computing metrics (topographic mode). Otherwise, metrics are computed directly on the raw attribution maps (channel-wise mode).
- Parameters:
  - explanations (numpy.ndarray) – Attribution maps of shape `(n_samples, n_chans, n_times)`.
  - reference (numpy.ndarray) – Ground-truth or baseline attribution maps, same shape as `explanations`.
  - chs_info (list of dict, optional) – Channel info list (braindecode `chs_info` format). If provided, enables topographic projection.
  - abs_reference (bool, default=True) – If True, take the absolute value of `reference` (ground-truth mode). If False, use `reference` as-is (comparison mode, e.g. randomized weights).
  - abs_explanation (bool, default=False) – If True, take the absolute value of `explanations`. If False, clip negative values to zero.
  - prctile_val (float, default=95) – Top-percentile threshold (e.g. 95 keeps the top 5%) used for the `*_topperc` masks.
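To make the `prctile_val` parameter concrete, here is a minimal NumPy sketch of how a top-percentile mask of the kind used for the `*_topperc` metrics might be built; the variable names and the thresholding via `np.percentile` are assumptions for illustration, not the library's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
attribution = rng.random((4, 22))  # (n_chans, n_times) for one sample

# prctile_val=95 keeps only the top 5% of attribution values.
prctile_val = 95
threshold = np.percentile(attribution, prctile_val)
mask = attribution >= threshold  # boolean mask of the top 5% of entries

print(mask.sum(), "of", mask.size, "entries kept")
```

Metrics restricted to such a mask compare only the most strongly attributed locations of the explanation and the reference.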
- Returns:
  - metrics (numpy.ndarray) – Array of shape `(n_samples, 12)` with metric values per sample. See `METRIC_NAMES` for the metric at each index. Skipped samples have an all-zero row.
  - n_skipped (int) – Number of samples skipped due to all-zero or constant attributions or reference.
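The return contract above (one metric row per sample, all-zero rows for skipped samples, and a skip counter) can be sketched in plain NumPy. The stand-in function below is hypothetical: it computes a single illustrative Pearson-correlation metric rather than the 12 metrics in `METRIC_NAMES`, and the constant-sample check is an assumption about how skipping works.

```python
import numpy as np

def compute_metrics_sketch(explanations, reference, n_metrics=12):
    """Illustrative stand-in for the documented (metrics, n_skipped) contract."""
    n_samples = explanations.shape[0]
    metrics = np.zeros((n_samples, n_metrics))
    n_skipped = 0
    for i in range(n_samples):
        expl, ref = explanations[i].ravel(), reference[i].ravel()
        # Skip all-zero or constant attributions/reference; row stays all-zero.
        if expl.std() == 0 or ref.std() == 0:
            n_skipped += 1
            continue
        # One illustrative metric; the real function fills 12 named metrics.
        metrics[i, 0] = np.corrcoef(expl, ref)[0, 1]
    return metrics, n_skipped

rng = np.random.default_rng(0)
explanations = rng.random((3, 4, 10))  # (n_samples, n_chans, n_times)
reference = rng.random((3, 4, 10))
explanations[2] = 0.0                  # force one skipped sample

metrics, n_skipped = compute_metrics_sketch(explanations, reference)
print(metrics.shape, n_skipped)  # (3, 12) 1
```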