
In this testbed experiment we focus on performance. The earlier experiment [EVAL-LSDR6-1|SP:EVAL-LSDR6-1] on mp3 to wav migration and QA using xcorrSound also focused on correctness. Moving the workflow to Hadoop to prove scalability should not affect the correctness of the tool.
* *Scalability* The workflow must be able to process a large collection within reasonable time. That is, we want to be able to migrate and QA a large collection of radio broadcast mp3 files (20 TB, 175,000 files) within weeks rather than years. The goal of 1000 for _Number Of Objects Per Hour_ (or 0.28 for _number of objects per second_) would mean that we could migrate the 20TB radio broadcast mp3 collection in a week.
* *Reliability* The workflow must run reliably without failing on a large number of files, and it must be possible to restart the workflow without losing work.
* *Correctness/Scalability* We must trust to some extent that the automatic QA correctly identifies the "questionable" migrations, such that these can be checked in a manual QA process. We must however also insist that the number of migrations to check manually is minimal, as manual QA is a very resource-demanding process. The goal for _QAFalseDifferentPercent_ has been changed to 2%. This means that we would have to check 3,500 migrated 2-hour wav files manually, which is already too resource demanding. However, the poor quality of the original files is a great challenge for the content comparison tool, and it turns out this is also too much to ask\!
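As a quick sanity check of the two goal figures above, the arithmetic can be sketched in a few lines of Python (illustrative only; the collection size and the 2% goal are taken from the text above):

```python
# Sanity check of the goal figures quoted above (illustrative only).

FILES_IN_COLLECTION = 175_000      # ~20 TB of radio broadcast mp3 files

# Scalability goal: 1000 objects per hour.
objects_per_hour = 1000
hours = FILES_IN_COLLECTION / objects_per_hour      # 175 hours
print(f"{hours:.0f} hours ~= {hours / 24:.1f} days")  # roughly one week

# Equivalent objects-per-second figure.
objects_per_second = objects_per_hour / 3600
print(f"{objects_per_second:.2f} objects/second")   # ~0.28

# Correctness goal: QAFalseDifferentPercent = 2%.
manual_checks = FILES_IN_COLLECTION * 0.02
print(f"{manual_checks:.0f} files to check manually")  # 3500
```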

|| Metric || Description || Metric baseline || Metric goal || Evaluation 2014 April 8th\* || Evaluation 2014 June 17th-23rd\*\* \\ ||
| [number of objects per second|http://www.purl.org/DP/quality/measures#418] | *Performance efficiency - Capacity / Time behaviour*\\
Number of objects that can be processed per second | 0.005 | 0.28\\ | 0.0567 | 0.0619 |
| Number Of Objects Per Hour\*\*\* \\ | *Performance efficiency - Capacity / Time behaviour*\\
Number of objects that can be processed per hour | 18 (9th-13th November 2012) | 1000 | 204 \\ | 223 |
| [QAFalseDifferentPercent|http://ifs.tuwien.ac.at/dp/vocabulary/quality/measures#416] | *Functional suitability - Correctness*\\
Ratio of 'QA decided different'/'human judged same', \\
that is the ratio of content comparisons reporting original and migrated as different, \\
even though human evaluation judged original and migrated similar | 0.412 % (5th-9th November 2012) | 2% | \\ | \~8.7% |


\*\*Based on the large scale experiments from June below.

\*\*\*This measure is not defined in the [Metrics Catalogue|http://ifs.tuwien.ac.at/dp/vocabulary/quality/measures], but we have kept it as a more readable supplement to _number of objects per second_.



h4. Discussion and Conclusion

All runs used a file list of *58 files (7.2 GB in total)*.

|| max-split-size || duration \\ || launched map tasks \\
on the three Hadoop jobs ||
| 1024 \\ | 37m, 58.593s = 2278.593s \\ | 3,3,7 \\ |
| 512 \\ | 24m, 1.9s = 1441.9s \\ | 6,6,14 \\ |
| 64 \\ | 16m, 54.703s = 1014.703s \\ | 47,47,113 \\ |
| 32 \\ | 17m, 29.96s = 1049.96s \\ | 93,93,225 \\ |

The small experiments were mainly run to decide an optimal max-split-size (or an optimal number of map-reduce map tasks). A split is the part of an input file that one map task works on, and picking an appropriate split size for your job can radically change the performance of Hadoop. When working on text files, the default of letting the number of map tasks depend on the number of DFS blocks in the input files works well. The input to our map-reduce jobs, however, consists of very small text files containing only lists of paths to the audio files we actually want to work on. We thus want much smaller splits of only a few lines each. The max-split-size (*mapred.max.split.size*) is the maximum size of such a split in bytes.

The exact number of MR map tasks seems not to have a big influence on performance, as long as we have more than 2*12. That is, as long as the max split size is at most 256 bytes on an input file list of 58 files. We note that we get approximately twice as many launched maps for the waveform-compare Hadoop job, simply because its input list is approximately twice as big, as it is a list of pairs. We could of course adjust this to get approximately the same number of maps, but as the first two jobs _FfmpegMigrate_ and _Mpg321Convert_ run simultaneously, and the _WaveformCompare_ job runs alone, we actually have approximately the same number of map tasks throughout the workflow.

The large scale experiments were held up for a while due to too few connections to storage. Remember we are using the [SP:SB Hadoop Platform]. As this job writes a very large amount of data, the Isilon disk I/O and CPU use were being maxed out, even though we were trying to "play nice" and only run 28 maps concurrently. The number of connections to the 16-node Isilon storage solution at SB was two when the small scale experiments were run. It was increased to five connections before we ran the large scale experiments.



h4. Large Scale Experiments June 2014

This line of tests will focus on scalability. If max-split-size=256 (bytes) gives us 2*12 maps on an input txt file listing 58 files, this means 256 bytes is approx 58/12=4.8333 files, so one file is approx 256/4.8333=52.9655 bytes. Then if we want approx 2*12 maps on an input txt file listing 1000 files, we want max-split-size to be approx 1000/12*52.9655 = 4413.7931 bytes.
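The split-size arithmetic above can be sketched as a small helper (a hypothetical sketch; the byte counts come from the small experiments above, and the helper name is ours):

```python
# Estimate a max-split-size (in bytes) that yields a desired number of
# map tasks, given an input text file listing one audio file path per line.

def max_split_size(n_paths: int, wanted_maps: int, bytes_per_line: float) -> float:
    """Approximate value for mapred.max.split.size in bytes."""
    return n_paths / wanted_maps * bytes_per_line

# From the small experiments: a 256-byte split held ~58/12 path lines.
bytes_per_line = 256 / (58 / 12)
print(round(bytes_per_line, 4))      # ~52.9655 bytes per path line

# Scale to a 1000-file input list, still aiming for ~12 splits per job.
print(round(max_split_size(1000, 12, bytes_per_line), 4))  # ~4413.7931
```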

The jobs were run on file lists of approximately 1000 files (129GB); the max.split.size was set to 4414; and each run writes approximately 3.1TB of intermediate and output wav files (\+ some small log files).


|| date || #mp3s || total #mp3s (size) || duration || total duration || NumberOfObjectsPerHour || failures || total failures || QAFalseDifferentPercent ||
| 2014 Jun 17 | 1000 | 1000 (129GB) | 4h, 33m | 4h, 33m | 220 \\ | 63 | 63 | 6.3 \\ |
| 2014 Jun 18 | 1000 | 2000 (258GB) | 4h, 23m | 8h, 56m | 224 \\ | 111 | 174 | 8.7 \\ |
| 2014 Jun 19 | 999 \\ | 2999 (387GB) | 4h, 20m | 13h, 29m | 222 \\ | 52 | 226 | \~7.5 \\ |
| 2014 Jun 20 | 1000 | 3999 (516GB)\\ | 4h, 27m | 17h, 56m | 223 \\ | 142 | 368 | \~9.2 \\ |
| 2014 Jun 23 | 999 | 4998 (645GB) | 4h, 28m | 22h, 24m | 223 \\ | 67 | 435 | \~8.7 \\ |
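The derived columns in the table can be recomputed from the raw counts; here is a sketch for the 2014 Jun 17 row (illustrative only, using the figures from that row):

```python
# Recompute NumberOfObjectsPerHour and QAFalseDifferentPercent
# for one run (figures from the 2014 Jun 17 row above).

n_mp3 = 1000
duration_hours = 4 + 33 / 60        # 4h 33m
flagged_different = 63              # comparisons the QA flagged as different

objects_per_hour = n_mp3 / duration_hours
false_different_pct = flagged_different / n_mp3 * 100

print(round(objects_per_hour))          # ~220
print(round(false_different_pct, 1))    # 6.3
```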


h3. Assessment of non-measurable points

In the last evaluation, we did include _ReliableAndStableAssessment_ *{_}Reliability - Runtime stability{_}* in the evaluation points, and we wrote *true* both in goal and in baseline value (manual assessment: the experiment performed reliably and stably for 13 days, but then Taverna failed with java.lang.OutOfMemoryError: Java heap space due to /tmp/ being filled up; all results were however saved, and the workflow could simply be restarted with a new starting point in the input list). This measure is not part of the SCAPE metrics catalogue, but [stability judgement|http://purl.org/DP/quality/measures#108] is, and an evaluation follows here.


The experiment performed reliably and stably for around 4 hours. I will however note that this experiment was not focused on reliability, and all intermediate results are potentially lost if the workflow is killed. I will also note that we partitioned the input to the workflow, so it worked on only 1000 files at a time. This was done as the test environment had an upper limit on available storage, and the workflow produces approximately 3.1TB of output files for each 1000 input files. The workflow will fail if it does not have enough output storage. Working on only 1000 files at a time of course has the benefit that only 1000 results can be lost at a time, and as the workflow seems to run stably for this size of input, it is reliable and stable in this configuration. Using this configuration however means that for a 20TB, 175,000-file collection, I need 175 input files and a script that starts the workflow 175 times sequentially (and roughly 0.5 petabytes of available storage).
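The storage estimate at the end can be checked with the figures above (a sketch; decimal TB/PB prefixes assumed):

```python
# Storage needed to run the full collection in 1000-file batches.

total_files = 175_000
batch_size = 1000
tb_written_per_batch = 3.1     # intermediate + output wav data per batch

batches = total_files // batch_size           # sequential workflow runs
total_tb = batches * tb_written_per_batch
print(batches, f"runs, ~{total_tb / 1000:.2f} PB written in total")
```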



h3. A note about goals-objectives omitted, and why

This evaluation covers _performance_, _reliability_ and _functional suitability_ to some extent. We did not look at the metrics _[max object size handled in bytes|http://purl.org/DP/quality/measures#404]_ and _[min object size handled in bytes|http://purl.org/DP/quality/measures#405]_. These measures would certainly contribute to the evaluation. Our collection ([SP:Danish Radio broadcasts, mp3]) has mp3 files varying very little in size (approx. 2 hours, average file size 118MB, largest file: 135MB), and the workflow thus produces wav files varying very little in size (2 wav files of around 1.4GB for one 118MB mp3 file). The test mp3 files used during development were of course considerably smaller (around 7MB) and produced smaller output (around 2×50MB per mp3 file). We think that the workflow can handle larger files as well, but this was not tested. We can report that for input, _min object size handled in bytes_ is around 7MB (7000000 bytes) and _max object size handled in bytes_ is around 135MB (135000000 bytes). For output, _min object size handled in bytes_ is around 50MB (50000000 bytes) and _max object size handled in bytes_ is around 1.4GB (1400000000 bytes). This would be an interesting measure to experiment further with.


We also did not look at the metric _[throughput in bytes per second|http://purl.org/DP/quality/measures#406]_. This measure can be computed from _number of objects per second_ or _Number Of Objects Per Hour_. The evaluation 2014 June 17th-23rd gave us _Number Of Objects Per Hour_=223. To compute throughput in bytes per second, we need the throughput size. Our question here is what throughput means. We wrote that the 1000 files in input were only approximately 129GB, but they produced 3.1TB of intermediate and output wav files. Half of this (1.55TB) is output, and we will use this as the throughput size. Then for _Number Of Objects Per Hour_=223, we get 1.55/1000*223 = 0.34565TB or 0.34565×1024 = 353.9456GB of throughput per hour, that is 353945600000 / 60 / 60 = 98318222 bytes or approximately 98 MB of throughput per second.
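The throughput computation above, step by step (a sketch reproducing the steps as written, which mix a binary TB-to-GB factor of 1024 with a decimal GB-to-bytes factor; a pure-SI computation would give roughly 96 MB/s instead):

```python
# Reproduce the throughput-in-bytes-per-second computation above.

output_tb_per_1000 = 1.55    # output wav data produced per 1000 mp3 files
objects_per_hour = 223

tb_per_hour = output_tb_per_1000 / 1000 * objects_per_hour  # 0.34565 TB
gb_per_hour = tb_per_hour * 1024                            # 353.9456 "GB"
bytes_per_second = gb_per_hour * 1e9 / 3600
print(int(bytes_per_second))    # 98318222, i.e. ~98 MB per second
```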




The workflow that was used is version 4 of the _Slim Migrate And QA mp3 to Wav Using Hadoop Jobs_ workflow, available from [http://www.myexperiment.org/workflows/4080.html].


The Hadoop jobs that were used are from commit e1ec47d of the [https://github.com/statsbiblioteket/scape-audio-qa-experiments] project.

The waveform-compare tool that was used was from xcorrSound release v2.0.2 [https://github.com/openplanets/scape-xcorrsound/releases/tag/v2.0.2].
h2. Evaluation notes

_Could be such things as identified issues, workarounds, data preparation, if not already included above_

* We have (stubbornly) kept the old measure _Number Of Objects Per Hour_ in our evaluation, as it is simply easier to read when the processing time is as long as in this experiment.
* QAFalseDifferentPercent was introduced as a measure when we were working on smaller annotated datasets. When we are working on large scale real-life datasets, it is problematic. A better idea would probably be to have a _Dissimilar in Percent_ measure along with a _Correctness judgement_ based on the _Dissimilar in Percent_ measure and prior correctness evaluations on annotated data. We would then also need a discussion of the adequacy of the solution, taking into account the level of automation and the human resources still needed.