ClassEvaluationItem

class ClassEvaluationItem

Bases: EvaluationItem

A wrapper around a binary (2x2) confusion matrix of the form

[TN FP]
[FN TP]

where the TN count is not necessarily available.

Exposes evaluation metrics computed from the confusion matrix as properties.
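
A minimal usage sketch; the import path below is an assumption and should be adjusted to wherever ClassEvaluationItem lives in your install:

    # Import path is illustrative; adjust to your install.
    from rastervision.core.evaluation import ClassEvaluationItem

    # Wrap raw per-class counts. tn is optional and defaults to None.
    item = ClassEvaluationItem(class_id=1, class_name='building', tp=80, fp=20, fn=10)

    print(item.precision)  # 80 / (80 + 20) = 0.8
    print(item.recall)     # 80 / (80 + 10) ~= 0.889
    print(item.f1)         # 2 * (0.8 * 0.889) / (0.8 + 0.889) ~= 0.842
    print(item.true_neg)   # None, since tn was not provided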

class_id#

Class ID.

Type:

int

class_name#

Class name.

Type:

str

conf_mat#

Confusion matrix: [[TN, FP], [FN, TP]].

Type:

np.ndarray

extra_info#

Arbitrary extra key-value pairs that will be included in the dict returned by to_json().

Type:

dict

Attributes

f1

F1 score = 2 * (precision * recall) / (precision + recall)

false_neg

False negative count.

false_pos

False positive count.

gt_count

Positive ground-truth count.

precision

TP / (TP + FP)

pred_count

Positive prediction count.

recall

TP / (TP + FN)

sensitivity

Equivalent to recall.

specificity

TN / (TN + FP)

true_neg

True negative count.

true_pos

True positive count.


Methods

__init__(class_id, class_name, tp, fp, fn[, tn])

Constructor.

from_multiclass_conf_mat(conf_mat, class_id, ...)

Construct from a multi-class confusion matrix and a target class ID.

merge(other)

Merge with another ClassEvaluationItem.

to_json()

Serialize to a dict.

__init__(class_id: int, class_name: str, tp: int, fp: int, fn: int, tn: int | None = None, **kwargs)

Constructor.

Parameters:
  • class_id (int) – Class ID.

  • class_name (str) – Class name.

  • tp (int) – True positive count.

  • fp (int) – False positive count.

  • fn (int) – False negative count.

  • tn (int | None) – True negative count. Defaults to None.

  • **kwargs – Additional data can be provided as keyword arguments. These will be included in the dict returned by to_json().

classmethod from_multiclass_conf_mat(conf_mat: ndarray, class_id: int, class_name: str, **kwargs) → Self

Construct from a multi-class confusion matrix and a target class ID.

Parameters:
  • conf_mat (np.ndarray) – A multi-class confusion matrix.

  • class_id (int) – The ID of the target class.

  • class_name (str) – The name of the target class.

  • **kwargs – Extra keyword arguments passed through to __init__().

Returns:

A ClassEvaluationItem for the target class.

Return type:

ClassEvaluationItem
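
A sketch of building an item from a 3-class confusion matrix. The orientation used below (rows = ground truth, columns = predictions) is an assumption of this example, not something stated by the signature; verify it against how your matrix was built:

    import numpy as np

    # Assumed convention: rows index ground truth, columns index predictions.
    conf_mat = np.array([
        [50,  2,  3],
        [ 4, 40,  6],
        [ 1,  5, 30],
    ])

    item = ClassEvaluationItem.from_multiclass_conf_mat(
        conf_mat=conf_mat, class_id=1, class_name='road')

    # Under that convention: TP = 40, FP = 2 + 5 = 7, FN = 4 + 6 = 10,
    # and TN covers everything outside class 1's row and column.
    print(item.true_pos, item.false_pos, item.false_neg, item.true_neg)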

merge(other: ClassEvaluationItem) → None

Merge with another ClassEvaluationItem.

This is accomplished by summing the confusion matrices.

Parameters:

other (ClassEvaluationItem) – The ClassEvaluationItem to merge into this one.

Return type:

None
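
A sketch of accumulating two evaluations of the same class; because merging sums the confusion matrices, the metric properties afterwards reflect the combined counts:

    a = ClassEvaluationItem(class_id=1, class_name='building', tp=80, fp=20, fn=10)
    b = ClassEvaluationItem(class_id=1, class_name='building', tp=60, fp=10, fn=30)

    a.merge(b)  # sums the confusion matrices in place; returns None

    print(a.true_pos)   # 140
    print(a.precision)  # 140 / (140 + 30) ~= 0.824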

to_json() → dict

Serialize to a dict.

Return type:

dict
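
A sketch showing that extra keyword arguments passed to the constructor end up in the serialized dict; the exact key layout of the returned dict is implementation-defined and not documented here:

    item = ClassEvaluationItem(
        class_id=1, class_name='building', tp=80, fp=20, fn=10,
        conf_threshold=0.5)  # arbitrary extra key-value pair

    d = item.to_json()
    # d holds the class info and metrics plus the extra 'conf_threshold' entry;
    # where exactly that entry sits in the dict depends on the implementation.
    print(d)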

property f1: float

F1 score = 2 * (precision * recall) / (precision + recall)

property false_neg: int

False negative count.

property false_pos: int

False positive count.

property gt_count: int

Positive ground-truth count.

property precision: float

TP / (TP + FP)

property pred_count: int

Positive prediction count.

property recall: float

TP / (TP + FN)

property sensitivity: float

Equivalent to recall.

property specificity: float | None

TN / (TN + FP)
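
Specificity needs the TN count; given the float | None annotation, it presumably evaluates to None when tn was not provided. A sketch:

    with_tn = ClassEvaluationItem(
        class_id=1, class_name='building', tp=80, fp=20, fn=10, tn=890)
    print(with_tn.specificity)  # 890 / (890 + 20) ~= 0.978

    without_tn = ClassEvaluationItem(class_id=1, class_name='building', tp=80, fp=20, fn=10)
    print(without_tn.specificity)  # presumably None, since TN is unavailable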

property true_neg: int | None

True negative count.

Returns:

Count as int if available. Otherwise, None.

Return type:

int | None

property true_pos: int

True positive count.