|Evaluation seq. num.||integer||1||Use only if subsequent evaluations of the same evaluation are done in another setup than a previous one. In that case copy the Evaluation specs table and fill out a new one with a new sequence number. For the first evaluation leave this field at "1".|
|Evaluator-ID||email@example.com||Unique ID of the evaluator that carried out this specific evaluation.|
|Evaluation description||text||The IMF takes into account the quality of archived web sites. The quality is assured by a visual inspection: comparing the site on the Internet with the archived site on IMF servers.
In order to improve that process, IMF is trying to develop an application, using the Marcalizer tool developed by UPMC, which compares two images. These two images are produced by a Selenium-based framework (v2.24.1) by taking two snapshots: ideally, one is taken from the archive access and the second from the live site.
This evaluation uses screenshots taken from the IMF Web Archive at two different dates in time.
Note also that for this specific test, only one node of the platform was used.
1° Loading a pair of Web Archive pages (2 urls given)
2° Taking screenshots (Selenium)
3° Visual comparison of screenshots (Marcalizer)
4° Producing the output result file (score of the comparison)
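The four steps above can be sketched as a small Python driver. This is a minimal illustration, not the IMF application itself: the `marcalizer.jar` name, its command-line arguments, and the function name `compare_pair` are assumptions for the sake of the example.

```python
import subprocess

def compare_pair(url_a, url_b, out_path):
    """Load two archived pages, screenshot them with Selenium,
    compare the screenshots with Marcalizer, and write the
    comparison score to an output file."""
    # Imported inside the function so the sketch stays importable
    # without a browser; the evaluation used Selenium 2.24.1.
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        # Steps 1-2: load each URL and capture a screenshot
        for url, shot in ((url_a, "a.png"), (url_b, "b.png")):
            driver.get(url)
            driver.save_screenshot(shot)
    finally:
        driver.quit()

    # Step 3: visual comparison (hypothetical Marcalizer invocation)
    result = subprocess.run(
        ["java", "-jar", "marcalizer.jar", "a.png", "b.png"],
        capture_output=True, text=True, check=True)

    # Step 4: write the comparison score to the result file
    with open(out_path, "w") as f:
        f.write(result.stdout)
```

In a real run this function would be called once per URL pair from the dataset, with the scores collected into the evaluation's result file.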
Goal / Sub-goal:
Performance efficiency / Throughput
| Textual description of the evaluation and the overall goals
|Evaluation-Date||DD/MM/YYYY||01/11/2012||Date of evaluation|
|Platform||String||Platform IMF 1||Unique ID of the platform involved in the particular evaluation - see Platform page included below|
|Dataset(s)||String||Pairs of urls from IMF web archive||Link to dataset page(s) on WIKI. For each dataset that is part of an evaluation, make sure that the dataset is described here: Datasets|
|Workflow method||String||Python application wrapping and managing Selenium and the Marcalizer tool||Taverna / Commandline / Direct hadoop etc.|
|Workflow(s) involved||URL(s)|| ||Link(s) to MyExperiment if applicable|
|Tool(s) involved||URL(s)|| ||Link(s) to distinct versions of specific components/tools in the component registry if applicable|
|Link(s) to Scenario(s)||URL(s)|| ||Link(s) to scenario(s) if applicable|
|Platform-ID||String||IMF Cluster||Unique string that identifies this specific platform. Use the platform name.|
|Platform description||String||Cloudera CDH3u2, 3 dual-core low-consumption nodes||Human readable description of the platform. Where is it located, contact info, etc.|
|Number of nodes||integer||3||Number of hosts involved - could be both physical hosts as well as virtual hosts|
|Total number of physical CPUs||integer||3||Number of CPUs involved|
|CPU specs||String||Dual-core AMD G-T56N at 1600 MHz||Specification of CPUs|
|Total number of CPU-cores||integer||6 Cores (3 * 2 Cores)|| Number of CPU-cores involved
|Total amount of RAM in Gbytes||integer||24 GB (3 * 8 GB)||Total amount of RAM on all nodes|
|Average CPU-cores per node||integer||2 (6 cores / 3 nodes)||Number of CPU-cores on average across all nodes|
|Average RAM in Gbytes per node||integer||8 GB (24 GB / 3 nodes)||Amount of memory on average across all nodes|
|Operating System on nodes||String||Debian 6 squeeze (64bit)||Linux (specific distribution), Windows (specific distribution), other?|
|Storage system/layer||String||HDFS||NFS, HDFS, local files, ?|
|Network layer between nodes||String||Local copy between two nodes: 80 MB/s (640 Mbps)||Speed of network interfaces, general network speed|
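As a quick unit check on the figure above, 80 MB/s and 640 Mbps describe the same transfer rate, since one byte is eight bits:

```python
mbytes_per_s = 80
mbits_per_s = mbytes_per_s * 8  # 1 byte = 8 bits
print(mbits_per_s)  # → 640
```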
Metrics must come from / be registered in the metrics catalogue.
|Metric||Baseline definition||Baseline value||Goal||Evaluation 1 (01/11/2012)||Evaluation 2 (date)||Evaluation 3 (date)|
| ||Number of comparisons made per hour|| || || || |
|NumberOfFailedFiles||Number of image screenshots that failed in the workflow||0||0||0|| || |
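The throughput metric above ("Number of comparisons made per hour") can be derived from the number of completed comparisons and the run's wall-clock time. A minimal sketch, where the function name is illustrative rather than taken from the metrics catalogue:

```python
def comparisons_per_hour(n_comparisons, elapsed_seconds):
    # Scale the observed count to an hourly rate
    return n_comparisons * 3600.0 / elapsed_seconds

# e.g. 150 comparisons completed in 30 minutes -> 300 per hour
print(comparisons_per_hour(150, 1800))  # → 300.0
```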