
Metrics Catalogue

To unify metrics across all evaluations, every metric should be registered in this Metrics Catalogue. When choosing metrics for an evaluation, run through the catalogue first and reuse any metric already defined; enter a new metric only when none fits.

Use CamelCase notation for metric names - e.g. NumberOfObjectsPerHour

Metric | Datatype | Description | Example | Comments
NumberOfObjectsPerHour | integer | Number of objects that can be processed per hour | 250 | Could be used both for component evaluations on a single machine and on entire platform setups
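
For illustration, here is a minimal sketch of how a metric like NumberOfObjectsPerHour might be measured during an evaluation run. The process callable and the object list are hypothetical stand-ins for the component under evaluation, not part of the catalogue itself:

```python
import time

def number_of_objects_per_hour(process, objects):
    """Time a processing run and report throughput per hour.

    `process` stands in for the component under evaluation and is
    called once per object. Returns an integer, matching the
    datatype declared for NumberOfObjectsPerHour in the catalogue.
    """
    start = time.perf_counter()
    for obj in objects:
        process(obj)
    elapsed = time.perf_counter() - start  # wall-clock seconds
    return int(len(objects) / elapsed * 3600)
```

Whatever harness is actually used, reporting the value as an integer keeps it consistent with the datatype declared in the table above.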

Binary evaluation method

We use sensitivity and specificity as statistical measures of the performance of the binary classification test, where

Sensitivity = Σ true different / (Σ true different + Σ false similar)

and

Specificity = Σ true similar / (Σ true similar + Σ false different)

and the F-measure is calculated on this basis as shown in the table below:

[Table: F-measure calculation]

This is one suggested approach, which applies well when we test for the binary correctness of calculations, i.e. it is applicable to characterisation and QA.
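
The two measures translate directly from the definitions above into code. The sketch below treats "different" as the positive class; it also assumes the F-measure is the harmonic mean of sensitivity and specificity, which is one plausible reading given that the F-measure table is not reproduced in this version of the page, not a confirmed definition:

```python
def sensitivity(true_different, false_similar):
    # Sensitivity = Σ true different / (Σ true different + Σ false similar)
    return true_different / (true_different + false_similar)

def specificity(true_similar, false_different):
    # Specificity = Σ true similar / (Σ true similar + Σ false different)
    return true_similar / (true_similar + false_different)

def f_measure(sens, spec):
    # ASSUMPTION: harmonic mean of sensitivity and specificity.
    # The table defining the F-measure is missing from this page
    # version, so this form is an assumption.
    return 2 * sens * spec / (sens + spec)

# Illustrative counts only: 40 pairs correctly judged different,
# 5 different pairs misjudged as similar, 50 pairs correctly judged
# similar, 5 similar pairs misjudged as different.
sens = sensitivity(true_different=40, false_similar=5)
spec = specificity(true_similar=50, false_different=5)
print(sens, spec, f_measure(sens, spec))
```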
