ClassificationEvaluation#
- class ClassificationEvaluation[source]#
Bases:
ABC
Base class for representing prediction evaluations.
Evaluations can be keyed, for instance, if evaluations happen per class.
- class_to_eval_item#
Mapping from class IDs to ``ClassEvaluationItem``s.
- Type
Dict[int, ClassEvaluationItem]
- scene_to_eval#
Mapping from scene IDs to ``ClassificationEvaluation``s.
- Type
Dict[str, ClassificationEvaluation]
- conf_mat#
Confusion matrix.
- Type
Optional[np.ndarray]
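The attributes above can be pictured with a minimal, dependency-free sketch. The classes below are illustrative stand-ins, not the Raster Vision implementation: ClassEvaluationItem is reduced to two fields, and conf_mat is a nested list rather than the real np.ndarray.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ClassEvaluationItem:
    # Illustrative subset of fields; the real item carries more metrics.
    class_id: int
    precision: float
    recall: float

@dataclass
class EvaluationState:
    # class_to_eval_item: per-class metrics keyed by class ID
    class_to_eval_item: Dict[int, ClassEvaluationItem] = field(default_factory=dict)
    # scene_to_eval: per-scene evaluations keyed by scene ID
    scene_to_eval: Dict[str, 'EvaluationState'] = field(default_factory=dict)
    # conf_mat: confusion matrix (np.ndarray in the real class; a nested
    # list here to stay dependency-free)
    conf_mat: Optional[List[List[int]]] = None

state = EvaluationState()
state.class_to_eval_item[0] = ClassEvaluationItem(0, precision=0.9, recall=0.8)
state.conf_mat = [[8, 2], [1, 9]]
```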
Methods
- __init__()
- compute(ground_truth_labels, prediction_labels): Compute metrics for a single scene.
- compute_avg(): Compute average metrics over all classes.
- merge(other[, scene_id]): Merge Evaluation for another Scene into this one.
- reset(): Reset the Evaluation.
- save(output_uri): Save this Evaluation to a file.
- to_json(): Serialize to a dict or list.
- abstract compute(ground_truth_labels, prediction_labels)[source]#
Compute metrics for a single scene.
- Parameters
ground_truth_labels – Ground Truth labels to evaluate against.
prediction_labels – The predicted labels to evaluate.
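Since compute() is abstract, a concrete subclass must supply it. The sketch below is hypothetical: it assumes labels arrive as parallel sequences of class IDs (the real method takes label objects), and it fills class_to_eval_item with plain dicts instead of ClassEvaluationItem instances.

```python
from abc import ABC, abstractmethod
from typing import Sequence

class ClassificationEvaluation(ABC):
    """Skeletal stand-in for the documented base class."""
    def __init__(self):
        self.class_to_eval_item = {}

    @abstractmethod
    def compute(self, ground_truth_labels, prediction_labels):
        """Compute metrics for a single scene."""

class ToyEvaluation(ClassificationEvaluation):
    """Hypothetical subclass: labels are parallel sequences of class IDs."""
    def compute(self, ground_truth_labels: Sequence[int],
                prediction_labels: Sequence[int]) -> None:
        for class_id in set(ground_truth_labels) | set(prediction_labels):
            # True positives: positions where both labels equal class_id.
            tp = sum(1 for g, p in zip(ground_truth_labels, prediction_labels)
                     if g == p == class_id)
            pred = sum(1 for p in prediction_labels if p == class_id)
            gt = sum(1 for g in ground_truth_labels if g == class_id)
            self.class_to_eval_item[class_id] = {
                'precision': tp / pred if pred else 0.0,
                'recall': tp / gt if gt else 0.0,
            }

ev = ToyEvaluation()
ev.compute([0, 0, 1, 1], [0, 1, 1, 1])
# class 0: precision 1.0, recall 0.5; class 1: precision 2/3, recall 1.0
```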
- merge(other: ClassificationEvaluation, scene_id: Optional[str] = None) None [source]#
Merge Evaluation for another Scene into this one.
This is useful for computing the average metrics of a set of scenes. The results of the averaging are stored in this Evaluation.
- Parameters
other (ClassificationEvaluation) – Evaluation to merge into this one
scene_id (Optional[str], optional) – ID of scene. If specified, a copy of other will be saved and be available in to_json()'s output. Defaults to None.
- Return type
None
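The merge semantics above can be sketched as follows. This is a hypothetical mimic, not the library's code: it assumes merging reduces to an element-wise sum of confusion matrices (nested lists standing in for np.ndarray) plus, when scene_id is given, storing a copy of other in scene_to_eval so it can surface in to_json()'s output.

```python
import copy
from typing import Optional

class SceneEval:
    """Hypothetical mimic of the documented merge() behavior."""
    def __init__(self, conf_mat=None):
        self.conf_mat = conf_mat
        self.scene_to_eval = {}

    def merge(self, other: 'SceneEval', scene_id: Optional[str] = None) -> None:
        # Accumulate the other evaluation's confusion matrix into this one.
        if self.conf_mat is None:
            self.conf_mat = copy.deepcopy(other.conf_mat)
        else:
            self.conf_mat = [[a + b for a, b in zip(r1, r2)]
                             for r1, r2 in zip(self.conf_mat, other.conf_mat)]
        # If scene_id is given, keep a copy of the per-scene evaluation.
        if scene_id is not None:
            self.scene_to_eval[scene_id] = copy.deepcopy(other)

total = SceneEval()
total.merge(SceneEval([[5, 1], [0, 4]]), scene_id='scene_1')
total.merge(SceneEval([[3, 0], [2, 5]]), scene_id='scene_2')
# total.conf_mat is now [[8, 1], [2, 9]]
```

Storing copies keyed by scene ID is what makes the later serialization able to report both per-scene and aggregate results from a single object.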