ClassificationEvaluation#
- class ClassificationEvaluation[source]#
Bases:
ABC
Base class for representing prediction evaluations.
Evaluations can be keyed, for instance, if evaluations happen per class.
- avg_item#
Averaged evaluation over all classes.
- conf_mat#
Confusion matrix.
Methods
__init__()
compute(ground_truth_labels, prediction_labels): Compute metrics for a single scene.
compute_avg(): Compute average metrics over all classes.
merge(other[, scene_id]): Merge Evaluation for another Scene into this one.
reset(): Reset the Evaluation.
save(output_uri): Save this Evaluation to a file.
to_json(): Serialize to a dict or list.
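The pattern behind this interface can be sketched with a toy subclass: an abstract base holds the bookkeeping (reset, to_json) and each concrete evaluation implements compute for its metric. This is an illustrative, self-contained sketch, not the Raster Vision implementation; the class and metric names here are invented for the example.

```python
from abc import ABC, abstractmethod

class EvaluationBase(ABC):
    """Toy stand-in for a keyed prediction-evaluation base class."""

    def __init__(self):
        self.reset()

    @abstractmethod
    def compute(self, ground_truth_labels, prediction_labels):
        """Compute metrics for a single scene."""

    def reset(self):
        """Reset the Evaluation."""
        self.metrics = {}

    def to_json(self):
        """Serialize to a dict."""
        return dict(self.metrics)

class AccuracyEvaluation(EvaluationBase):
    """Concrete subclass computing a single accuracy metric."""

    def compute(self, ground_truth_labels, prediction_labels):
        correct = sum(
            g == p for g, p in zip(ground_truth_labels, prediction_labels))
        self.metrics['accuracy'] = correct / len(ground_truth_labels)

ev = AccuracyEvaluation()
ev.compute(['car', 'road', 'car'], ['car', 'road', 'road'])
```

Subclasses only need to override compute; reset and to_json come from the base, which mirrors how the abstract method list above is split between per-scene computation and shared bookkeeping.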
- abstract compute(ground_truth_labels, prediction_labels)[source]#
Compute metrics for a single scene.
- Parameters:
ground_truth_labels – Ground Truth labels to evaluate against.
prediction_labels – The predicted labels to evaluate.
- merge(other: ClassificationEvaluation, scene_id: str | None = None) → None[source]#
Merge Evaluation for another Scene into this one.
This is useful for computing the average metrics of a set of scenes. The results of the averaging are stored in this Evaluation.
- Parameters:
other (ClassificationEvaluation) – Evaluation to merge into this one.
scene_id (str | None) – ID of scene. If specified, (a copy of) other will be saved and be available in to_json()’s output. Defaults to None.
- Return type:
None
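The merge semantics described above can be sketched as a running average across scenes, with an optional deep copy of each merged evaluation kept under its scene ID so it can surface in to_json() output. This is a hedged, self-contained sketch under invented names (SceneEvaluation, scene_to_eval), not the library's actual code.

```python
import copy

class SceneEvaluation:
    """Toy evaluation whose merge() averages accuracy across scenes."""

    def __init__(self, accuracy=0.0, count=0):
        self.accuracy = accuracy   # mean accuracy over merged scenes
        self.count = count         # number of scenes merged so far
        self.scene_to_eval = {}    # per-scene copies, keyed by scene_id

    def merge(self, other, scene_id=None):
        """Merge another evaluation into this one; results stay here."""
        total = self.accuracy * self.count + other.accuracy * other.count
        self.count += other.count
        self.accuracy = total / self.count
        if scene_id is not None:
            # Keep a copy so per-scene results survive later merges.
            self.scene_to_eval[scene_id] = copy.deepcopy(other)

    def to_json(self):
        """Serialize averaged and (if kept) per-scene results."""
        out = {'accuracy': self.accuracy}
        if self.scene_to_eval:
            out['per_scene'] = {
                sid: {'accuracy': e.accuracy}
                for sid, e in self.scene_to_eval.items()
            }
        return out

agg = SceneEvaluation()
agg.merge(SceneEvaluation(accuracy=0.8, count=1), scene_id='scene_1')
agg.merge(SceneEvaluation(accuracy=0.6, count=1), scene_id='scene_2')
```

Passing scene_id makes the merged copy addressable in the serialized output, which matches the documented behavior that a copy of other becomes available in to_json()'s result.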