ObjectDetectionEvaluation

class ObjectDetectionEvaluation

Bases: ClassificationEvaluation

__init__(class_config: ClassConfig, iou_thresh: float = 0.5)

Parameters

  • class_config (ClassConfig) – configuration of the classes to evaluate.

  • iou_thresh (float, optional) – IoU threshold above which a predicted box counts as a match for a ground truth box. Defaults to 0.5.

Methods

__init__(class_config[, iou_thresh])

compute(ground_truth_labels, prediction_labels)

Compute metrics for a single scene.

compute_avg()

Compute average metrics over all classes.

compute_eval_items(gt_labels, pred_labels, ...)

merge(other[, scene_id])

Merge Evaluation for another Scene into this one.

reset()

Reset the Evaluation.

save(output_uri)

Save this Evaluation to a file.

to_json()

Serialize to a dict or list.

__init__(class_config: ClassConfig, iou_thresh: float = 0.5)

Parameters

  • class_config (ClassConfig) – configuration of the classes to evaluate.

  • iou_thresh (float, optional) – IoU threshold above which a predicted box counts as a match for a ground truth box. Defaults to 0.5.
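
A minimal construction sketch (the import paths and the ClassConfig arguments below are assumptions and may differ between Raster Vision versions):

    from rastervision.core.data import ClassConfig
    from rastervision.core.evaluation import ObjectDetectionEvaluation

    # Hypothetical two-class configuration; names and colors are placeholders.
    class_config = ClassConfig(names=['building', 'car'], colors=['red', 'blue'])

    # Match predicted boxes to ground truth boxes at IoU >= 0.5 (the default).
    od_eval = ObjectDetectionEvaluation(class_config, iou_thresh=0.5)
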
compute(ground_truth_labels: ObjectDetectionLabels, prediction_labels: ObjectDetectionLabels)

Compute metrics for a single scene.

Parameters

  • ground_truth_labels (ObjectDetectionLabels) – ground truth labels for the scene.

  • prediction_labels (ObjectDetectionLabels) – predicted labels for the same scene.
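
A hedged sketch of evaluating a single scene; the ObjectDetectionLabels constructor used here (an (N, 4) array of [ymin, xmin, ymax, xmax] pixel boxes, class IDs, and optional scores) is an assumption about the current API:

    import numpy as np
    from rastervision.core.data import ObjectDetectionLabels

    # One ground truth box and one overlapping prediction for class 0.
    gt = ObjectDetectionLabels(np.array([[0, 0, 100, 100]]), np.array([0]))
    pred = ObjectDetectionLabels(
        np.array([[5, 5, 105, 105]]), np.array([0]), scores=np.array([0.9]))

    od_eval.compute(gt, pred)
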
compute_avg() → None

Compute average metrics over all classes.

Return type

None
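
If per-class results have already been filled in (for example by compute() or merge()), compute_avg() recomputes the metrics averaged over all classes; since it returns None, the result is read back through to_json() rather than from the call itself. A minimal sketch continuing the example above:

    od_eval.compute_avg()
    print(od_eval.to_json())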

static compute_eval_items(gt_labels: ObjectDetectionLabels, pred_labels: ObjectDetectionLabels, class_config: ClassConfig, iou_thresh: float = 0.5) → Dict[int, ClassEvaluationItem]

Parameters

  • gt_labels (ObjectDetectionLabels) – ground truth labels.

  • pred_labels (ObjectDetectionLabels) – predicted labels.

  • class_config (ClassConfig) – configuration of the classes to evaluate.

  • iou_thresh (float, optional) – IoU threshold above which a predicted box counts as a match for a ground truth box. Defaults to 0.5.

Return type

Dict[int, ClassEvaluationItem]
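
Because compute_eval_items() is a static method, it can also be used on its own without mutating an evaluator instance; a sketch reusing the hypothetical gt, pred, and class_config objects from the earlier examples:

    items = ObjectDetectionEvaluation.compute_eval_items(
        gt, pred, class_config, iou_thresh=0.5)
    for class_id, item in items.items():
        # Each value is a ClassEvaluationItem; see its API docs for its fields.
        print(class_id, item)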

merge(other: ClassificationEvaluation, scene_id: Optional[str] = None) → None

Merge Evaluation for another Scene into this one.

This is useful for computing the average metrics of a set of scenes. The results of the averaging are stored in this Evaluation.

Parameters
  • other (ClassificationEvaluation) – Evaluation to merge into this one

  • scene_id (Optional[str], optional) – ID of scene. If specified, (a copy of) other will be saved and be available in to_json()’s output. Defaults to None.

Return type

None
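
A sketch of averaging over several scenes by merging per-scene evaluations into an aggregate one, continuing the earlier examples; scene_to_labels is a hypothetical dict mapping scene IDs to (ground truth, prediction) label pairs:

    agg_eval = ObjectDetectionEvaluation(class_config)
    for scene_id, (scene_gt, scene_pred) in scene_to_labels.items():
        scene_eval = ObjectDetectionEvaluation(class_config)
        scene_eval.compute(scene_gt, scene_pred)
        # Passing scene_id keeps a copy of the scene's evaluation so that it
        # also appears in agg_eval.to_json().
        agg_eval.merge(scene_eval, scene_id=scene_id)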

reset()

Reset the Evaluation.

save(output_uri: str) → None

Save this Evaluation to a file.

Parameters

output_uri (str) – string URI for the file to write.

Return type

None
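
A minimal save sketch; output_uri is a string URI per the parameter description, and the plain local path below assumes a local file system is in use:

    # Writes the serialized evaluation to the given URI.
    od_eval.save('od_eval.json')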

to_json() → Union[dict, list]

Serialize to a dict or list.

Returns

Class-wise and (if available) scene-wise evaluations.

Return type

Union[dict, list]
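
Since to_json() returns plain dicts and lists, its output can be inspected or written with the standard json module; a short sketch:

    import json

    eval_data = od_eval.to_json()
    print(json.dumps(eval_data, indent=2))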