
Evaluator(s)

Bolette Jurik (SB)

Evaluation points

Assessment of measurable points
| Metric | Description | Metric baseline | Metric goal | 2014 April 8th* |
| NumberOfObjectsPerHour | Performance efficiency - Capacity / Time behaviour | 18 (9th-13th November 2012) | 1000 | 204 |
| NumberOfFailedFiles | Reliability - Runtime stability | 0 | 0 | 0 |
| QAFalseDifferentPercent | Functional suitability - Correctness | 0.412 % (5th-9th November 2012) | 0.412 % | 82.76 % |
|  |  |  | 2Gb |  |

*Based on the small experiment with max split size 128 below. See explanation for the abysmal correctness score.
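The 82.76 % figure can be reproduced directly from the max split size 128 experiment below, where 48 of the 58 comparisons failed. A minimal sketch (the function name is illustrative, not part of the workflow):

```python
def qa_false_different_percent(false_different: int, total: int) -> float:
    """Percentage of file pairs wrongly judged 'different' by waveform-compare."""
    return 100.0 * false_different / total

# 48 failed comparisons out of 58 files in the max-split-size=128 run
print(round(qa_false_different_percent(48, 58), 2))  # → 82.76
```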

Small Experiments

All run on a file list of 58 files (7.2Gb in total).

| max split size | duration | launched maps | success | failure |
| 1024 | 37m, 58.593s = 2278.593s | 3, 3, 7 | 18 | 40 |
| 512 | 24m, 1.9s = 1441.9s | 6, 6, 14 | 0 | 58 |
| 256 | 18m, 17.917s = 1097.917s | 12, 12, 28 | 0 | 58 |
| 128 | 17m, 3.176s = 1023.176s | 24, 24, 57 | 10 | 48 |
| 64 | 16m, 54.703s = 1014.703s | 47, 47, 113 | 0 | 58 |
| 32 | 17m, 29.96s = 1049.96s | 93, 93, 225 | 4 | 54 |

The big question is: why do we get so many failures? The answer is that the list of pairs of files to compare is wrong. This list is created by Taverna beanshells, and we were missing a correct sort of the two output lists from the FFmpeg and mpg321 Hadoop jobs before combining them into the list of pairs used as input to the waveform-compare Hadoop job. This has now been fixed.

The exact number of MR maps does not seem to have a big influence on performance, as long as we have more than 12; that is, as long as the max split size is at most 256 on an input file list of 58 files. We note that we get approximately twice as many launched maps for the waveform-compare Hadoop job, simply because its input list is approximately twice as big, as it is a list of pairs. We could of course adjust this to get approximately the same number of maps, but it does not seem to matter for performance.

The first line of tests was to decide on the expected optimal max split size.

The next line of tests will vary the size of the input. If max-split-size=256 (bytes) gives us 2*12 maps on an input txt file listing 58 files, then 256 bytes corresponds to approx. 58/12 = 4.8333 files, so one file entry is approx. 256/4.8333 = 52.9655 bytes. If we then want approx. 2*12 maps on an input txt file listing 1000 files, we want max-split-size to be approx. 1000/12 * 52.9655 = 4413.7931 bytes.
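The scaling argument above can be sketched as a small helper. It assumes, as the text does, that the input txt file has one line per file and that the lines are roughly equal in length (the function name is illustrative):

```python
def scaled_split_size(ref_split: float, ref_files: int, ref_maps: int,
                      target_files: int) -> float:
    """Scale max-split-size so a larger file list yields the same map count."""
    # bytes per file-list entry, inferred from the reference run (≈ 52.9655)
    bytes_per_entry = ref_split / (ref_files / ref_maps)
    # split size that gives ref_maps maps on the target file list
    return target_files / ref_maps * bytes_per_entry

# 256 bytes / 12 maps / 58 files, scaled up to a 1000-file list
print(round(scaled_split_size(256, 58, 12, 1000), 4))  # → 4413.7931
```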

The current hold-up is hardware. As this job writes a very large amount of data, the Isilon disk I/O and CPU usage are being maxed out, even though we try to "play nice" and only run 24 maps concurrently. We hope this can be solved by adding more network cables.

Assessment of non-measurable points

ReliableAndStableAssessment Reliability - Runtime stability

For some evaluation points it makes most sense to give a textual description/explanation.

A note about goals-objectives omitted, and why

This evaluation covers performance, reliability and functional suitability to some extent. We did not look at the metrics MaxObjectSizeHandledInGbytes and MinObjectSizeHandledInMbytes. These measures would certainly contribute to the evaluation. Our collection (Danish Radio broadcasts, mp3) has mp3 files varying very little in size (approx. 2 hours each, average file size 118Mb, largest file 124Mb), and the workflow thus produces wav files varying very little in size (around 1.4Gb * 2 per mp3 file). The test mp3 files used during development were of course considerably smaller (around 7Mb) and produced smaller output (around 50Mb * 2 per mp3 file).

We also did not look at the metrics ThroughputGbytesPerMinute, ThroughputGbytesPerHour, or AverageRuntimePerItemInHours. These are all fairly easy to compute, though.
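As a rough illustration of how easy these omitted metrics are to derive, here is a hedged sketch using figures already on this page: 204 objects per hour (2014 April 8th) and an average mp3 size of 118Mb, taken as 0.118 Gb. The function names are illustrative, not SCAPE-defined:

```python
def throughput_gbytes_per_hour(objects_per_hour: float,
                               avg_object_size_gb: float) -> float:
    """Input data volume processed per hour, in Gb."""
    return objects_per_hour * avg_object_size_gb

def average_runtime_per_item_in_hours(objects_per_hour: float) -> float:
    """Mean wall-clock time spent per object, in hours."""
    return 1.0 / objects_per_hour

# 204 mp3 files/hour at an average of 0.118 Gb per file
print(round(throughput_gbytes_per_hour(204, 0.118), 2))  # → 24.07
print(round(average_runtime_per_item_in_hours(204), 5))  # → 0.0049
```

Note that this counts only the mp3 input; counting the wav output (around 1.4Gb * 2 per mp3 file) would give a much larger throughput figure.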

WORK IN PROGRESS

The evaluation does not cover

  • Organisational maturity
  • Maintainability
  • Planning and monitoring efficiency
  • Commercial readiness

Technical details

Remember to include relevant information, links, versions about workflow, tools, APIs (e.g. Taverna, command line, Hadoop, links to MyExperiment, link to tools or SCAPE name, links to distinct versions of specific components/tools in the component registry)

WebDAV

We would like to store sufficient information about an experiment (Hadoop program, configuration, etc.) so that we are able to rerun it. For this purpose, ONB is providing a WebDAV; if you have questions and need more information, please contact Sven or Reinhard at ONB.
Taverna workflows will still be stored on myexperiment.org.

Link: http://fue.onb.ac.at/scape-tb-evaluation

Please use the following structure for storing experiment results

Evaluation notes

Could be such things as identified issues, workarounds, data preparation, if not already included above

QAFalseDifferentPercent
