h2. Investigator(s)
Sven Schlarb
h2. Dataset
[SP:Austrian National Library Tresor Music Collection]
h2. Platform
[SP:ONB Hadoop Platform]
h2. Purpose of this experiment
The purpose of this experiment is to evaluate the performance of a scalable workflow that migrates TIFF images to the JPEG2000 format, compared with an equivalent Taverna version of the workflow that processes the data sequentially.
h2. Evaluation method
A Taverna workflow for sequential processing serves as a reference point for the large-scale execution. Subsets of increasing size are selected at random from the full Austrian National Library Tresor Music Collection data set.
The following bash pipeline prepends a random number to each file listing and then sorts on it to extract a random sample from the full data set:
{code}
# List .tif files with sizes, prefix each line with a random five-digit key,
# sort on that key, and keep the first $NUM entries (path and size,
# tab-separated; fields $10/$7 match the ls output format used here).
find . -type f -exec ls -l -sd {} + | grep "\.tif" | \
awk 'BEGIN {srand()} {printf "%05.0f %s \n", rand()*99999, $0; }' | \
sort -n | awk '{print $10 "\t" $7}' | head -n $NUM > ~/tresormusicfilepaths${NUM}_withsize.csv
{code}
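For example, running the statement with NUM=1000 produces the 1,000-file sample used in the evaluation below.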
The resulting file contains the local file paths and is used as input for the Taverna workflow presented in the next section.
Additionally, the files are uploaded to HDFS as input for the large-scale workflow execution.
In this way the sequential execution time can be compared with the large-scale processing time. Because the results depend on the size and configuration of the Hadoop cluster, the relation between sequential and large-scale processing is expressed as the "parallelisation efficiency": the sequential execution time divided by the distributed execution time, divided again by the number of nodes available for parallel processing:
{code}
e := parallelisation efficiency
d := distributed execution time in seconds
s := sequential execution time in seconds
n := number of nodes (cores) available for parallel processing
e = s/d/n;
{code}
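As a sanity check, the parallelisation efficiency can be recomputed from the measurements in the tables below. A minimal awk example for the 1,000-file sample, assuming n = 25 parallel task slots (this value of n is inferred from the reported Par.Eff. figures; it is not stated explicitly on this page):
{code}
# Worked example: s = 42376 s (sequential), d = 2746 s (distributed),
# n = 25 (inferred from the reported Par.Eff. values, not stated here).
awk 'BEGIN { s = 42376; d = 2746; n = 25; printf "e = %.3f\n", s / d / n }'
# prints: e = 0.617  (matches the Par.Eff. column for the 1000-file row)
{code}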
h2. Taverna workflow - sequential processing
The proof-of-concept version of the TIFF to JPEG2000 image migration workflow with quality assurance was created as a Taverna workflow, illustrated by the following diagram:
!TavernaWorkflow4276.png|border=1,width=235,height=786!
Diagram of the TIFF to JPEG2000 image migration workflow; the workflow is available on myExperiment at [http://www.myexperiment.org/workflows/4276.html]
The Taverna workflow reads a text file containing absolute paths to TIF image files and converts them to JP2 image files using OpenJPEG ([https://code.google.com/p/openjpeg]).
Based on the input text file, the workflow creates a Taverna list to be processed file by file. A temporary directory is created (createtmpdir) where the migrated image files and some temporary tool outputs are stored.
Before starting the actual migration, the workflow checks whether the TIF input images are valid instances of the TIFF file format using FITS ([https://code.google.com/p/fits], with JHove2 under the hood, [http://www.jhove2.org]). An XPath service is used to extract the validity information from the XML-based FITS validation report.
If the images are valid TIF images, they are migrated to the JPEG2000 (JP2) image file format using OpenJPEG 2.0 (opj_compress).
Subsequently, it is checked whether the migrated images are valid JP2 images, this time using the SCAPE tool Jpylyzer ([http://www.openplanetsfoundation.org/software/jpylyzer]). An XPath service (XPathJpylyzer) extracts the validity information from the XML-based Jpylyzer validation report.
Finally, the workflow verifies that the migrated JP2 images are valid surrogates of the original TIF images by restoring a TIF image from each converted JP2 image and checking pixel-wise whether the original and restored images are identical.
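For illustration, the per-file tool sequence corresponds roughly to the following shell commands. This is a minimal sketch assuming the standard command-line interfaces of FITS, OpenJPEG 2.0, Jpylyzer and ImageMagick; the actual Taverna components wrap these tools as services:
{code}
IN=image.tif                                   # one input file from the list

fits.sh -i "$IN" -o fits.xml                   # fitsValidation: is the TIF valid?
opj_compress -i "$IN" -o image.jp2             # opj_compress: migrate TIFF to JP2
jpylyzer image.jp2 > jpylyzer.xml              # jpylyzerValidation: is the JP2 valid?
opj_decompress -i image.jp2 -o restored.tif    # opj_decompress: restore TIFF from JP2
compare -metric AE "$IN" restored.tif null:    # compare: pixel-wise identity check
{code}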
The sequential execution of this workflow is used as a reference point for measuring the parallelisation efficiency of the scalable version, and it also shows how the processing times of the individual components compare to each other.
The following diagram shows the average execution time of each workflow component in seconds, measured on a 1,000-image sample of the Austrian National Library Tresor Music Collection:
!distribution_execution_times.PNG|border=1,width=473,height=264!
h2. SCAPE Platform workflow - distributed processing
Apache Pig was used to create a scalable version of this workflow. The different processing steps of the Taverna workflow for sequential processing are represented by Pig Latin statements.
The comment above each processing step in the script below indicates the corresponding processing component in the Taverna workflow.
{code}
REGISTER tomar-1.5.2-SNAPSHOT.jar;
DEFINE ToMarService eu.scape_project.pt.udf.ControlLineUDF();
DEFINE XPathService eu.scape_project.pt.udf.XPathFunction();
SET job.name 'Tomar-Pig-Taverna-OpenJpeg';
SET pig.noSplitCombination true;
%DECLARE toolspecs_path '/user/onbfue/alan/toolspecs';
%DECLARE xpath_exp1 '/fits/filestatus/valid';
%DECLARE xpath_exp2 '/fits/identification/identity/@mimetype';
%DECLARE xpath_exp3 '/jpylyzer/isValidJP2';
/* STEP 1: load image paths - Taverna: image_paths_from_dir */
image_pathes = LOAD '$image_pathes' USING PigStorage() AS (image_path: chararray);
/* STEP 2: validation of tiff image files using fits - Taverna: fitsValidation */
fits = FOREACH image_pathes GENERATE image_path as image_path, ToMarService('$toolspecs_path', CONCAT(CONCAT('fits stdxml --input="hdfs://', image_path), '"')) as xml_text;
/* STEP 3: extract tiff validity using xpath - Taverna: XPathJhove2 */
fits_validation_list = FOREACH fits GENERATE image_path, XPathService('$xpath_exp1', xml_text) AS node_list1, XPathService('$xpath_exp2', xml_text) AS node_list2;
fits_validation = FOREACH fits_validation_list GENERATE image_path, FLATTEN(node_list1) AS node1, FLATTEN(node_list2) AS node2;
STORE fits INTO 'output/fits';
STORE fits_validation INTO 'output/fits_validation';
/* STEP 4: migration of tiff image files to jpeg2000 - Taverna: opj_compress */
openjpeg = FOREACH fits_validation GENERATE image_path as image_path, ToMarService('$toolspecs_path',CONCAT( CONCAT( CONCAT('openjpeg image-to-j2k --input="hdfs://', image_path), '" --output="'), CONCAT( CONCAT( CONCAT('hdfs://', image_path), '.jp2'),'"'))) as ret_str;
STORE openjpeg INTO 'output/openjpeg';
/* STEP 5: validation of migrated jpeg2000 files using jpylyzer - Taverna: jpylyzerValidation */
jpylyzer = FOREACH fits_validation GENERATE image_path as image_path, ToMarService('$toolspecs_path',CONCAT(CONCAT(CONCAT('jpylyzer validate --input="hdfs://', CONCAT(image_path,'.jp2')), '" --output="'),CONCAT(CONCAT( CONCAT('hdfs://', image_path), '.jp2.xml'),'"'))) as jpy_xml;
STORE jpylyzer INTO 'output/jpylyzer';
/* STEP 6: extract jpylyzer validity using xpath - Taverna: XPathJpylyzer */
jpylyzer_validation_list = FOREACH jpylyzer GENERATE image_path, XPathService('$xpath_exp3', jpy_xml) AS jpy_node_list;
jpylyzer_validation = FOREACH jpylyzer_validation_list GENERATE image_path, FLATTEN(jpy_node_list) as node1;
STORE jpylyzer_validation INTO 'output/jpylyzer_validation';
/* STEP 7: migrate jpeg2000 image file back to tiff - Taverna: opj_decompress */
j2k_to_img = FOREACH fits_validation GENERATE image_path as image_path, ToMarService('$toolspecs_path',CONCAT( CONCAT( CONCAT('openjpeg j2k-to-image --input="hdfs://', CONCAT(image_path,'.jp2')), '" --output="'), CONCAT( CONCAT( CONCAT('hdfs://', image_path), '.jp2.tif'),'"'))) as j2k_to_img_ret_str;
STORE j2k_to_img INTO 'output/j2k_to_img';
/* STEP 8: compare original to restored image file - Taverna: compare */
imgcompare = FOREACH fits_validation GENERATE image_path as image_path, ToMarService('$toolspecs_path',CONCAT( CONCAT(CONCAT('imagemagick compare-pixelwise --inputfirst="hdfs://', image_path), CONCAT(CONCAT('" --inputsecond="hdfs://',CONCAT(image_path,'.jp2.tif')),'" --diffoutput="hdfs://')),CONCAT(image_path,'.cmp.txt"'))) as imgcompare_ret_str;
STORE imgcompare INTO 'output/imgcompare';
{code}
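The script is parameterised by the HDFS path of the input file list ($image_pathes). A hypothetical invocation, assuming the script is saved as tiff2jp2.pig and the sample file list has been uploaded to HDFS (script name and HDFS path are illustrative, not taken from the experiment):
{code}
# Hypothetical invocation; script name and HDFS path are illustrative.
pig -param image_pathes=/user/onbfue/tresormusicfilepaths1000_withsize.csv tiff2jp2.pig
{code}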
h2. Evaluation summary
Files := Size of random sample
Total GB := Total size in Gigabytes
Secs := Processing time in seconds
Mins := Processing time in minutes
Hrs := Processing time in hours
Avg.p.f. := Average processing time per file in seconds
Obj/h := Number of objects processed per hour
GB/min := Throughput in Gigabytes per minute
GB/h := Throughput in Gigabytes per hour
Err := Number of processing errors
RT/it/s := Runtime per item in seconds
Par.Eff. := Parallelisation efficiency (sequential execution time divided by the distributed execution time, divided again by the number of nodes available for parallel processing)
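The derived columns follow directly from Files, Total GB and Secs. For example, for the sequential 1,000-file run (1000 files, 71.82 GB, 42376 seconds):
{code}
# Recomputing the derived metrics for the sequential 1000-file row.
awk 'BEGIN {
  files = 1000; gb = 71.82; secs = 42376
  printf "Avg.p.f. = %.2f s\n", secs / files          # 42.38
  printf "Obj/h    = %.0f\n",   files / secs * 3600   # 85
  printf "GB/min   = %.2f\n",   gb / secs * 60        # 0.10
  printf "GB/h     = %.2f\n",   gb / secs * 3600      # 6.10
}'
{code}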
h3. Taverna Workflow - Sequential execution
| *Files* | *Total GB* | *Secs* | *Mins* | *Hrs* | *Avg.p.f.* | *Obj/h* | *GB/min* | *GB/h* | *Err* | *RT/it/s* |
| 5 | 0,31 GB | 179 | 2,98 | 0,05 | 35,80 | 101 | 0,10 | 6,22 | 0 | 36 |
| 7 | 0,89 GB | 438 | 7,30 | 0,12 | 62,57 | 58 | 0,12 | 7,29 | 0 | 63 |
| 10 | 0,90 GB | 478 | 7,97 | 0,13 | 47,80 | 75 | 0,11 | 6,80 | 0 | 48 |
| 20 | 2,23 GB | 1150 | 19,17 | 0,32 | 57,50 | 63 | 0,12 | 6,98 | 0 | 58 |
| 30 | 2,99 GB | 1541 | 25,68 | 0,43 | 51,37 | 70 | 0,12 | 6,98 | 0 | 51 |
| 40 | 3,60 GB | 1900 | 31,67 | 0,53 | 47,50 | 76 | 0,11 | 6,81 | 0 | 48 |
| 50 | 3,46 GB | 2039 | 33,98 | 0,57 | 40,78 | 88 | 0,10 | 6,10 | 0 | 41 |
| 75 | 6,05 GB | 3425 | 57,08 | 0,95 | 45,67 | 79 | 0,11 | 6,36 | 0 | 46 |
| 100 | 8,30 GB | 4693 | 78,22 | 1,30 | 46,93 | 77 | 0,11 | 6,37 | 0 | 47 |
| 200 | 15,19 GB | 9246 | 154,10 | 2,57 | 46,23 | 78 | 0,10 | 5,91 | 0 | 46 |
| 1000 | 71,82 GB | 42376 | 706,27 | 11,77 | 42,38 | 85 | 0,10 | 6,10 | 0 | 42 |
h3. Pig Workflow - Distributed Execution
| *Files* | *Secs* | *Mins* | *Hrs* | *Avg.p.f.* | *Obj/h* | *GB/min* | *GB/h* | *Err* | *RT/it/s* | *Par.Eff.* |
| 5 | 96 | 1,60 | 0,027 | 19,20 | 188 | 0,19 | 11,60 | 0 | 19,20 | 0,075 |
| 7 | 101 | 1,68 | 0,028 | 14,43 | 250 | 0,53 | 31,64 | 0 | 14,43 | 0,173 |
| 10 | 103 | 1,72 | 0,029 | 10,30 | 350 | 0,53 | 31,56 | 0 | 10,30 | 0,186 |
| 20 | 114 | 1,90 | 0,032 | 5,70 | 632 | 1,17 | 70,45 | 0 | 5,70 | 0,404 |
| 30 | 138 | 2,30 | 0,038 | 4,60 | 783 | 1,30 | 77,99 | 0 | 4,60 | 0,447 |
| 40 | 161 | 2,68 | 0,045 | 4,03 | 894 | 1,34 | 80,41 | 0 | 4,03 | 0,472 |
| 50 | 183 | 3,05 | 0,051 | 3,66 | 984 | 1,13 | 68,01 | 0 | 3,66 | 0,446 |
| 75 | 272 | 4,53 | 0,076 | 3,63 | 993 | 1,34 | 80,11 | 0 | 3,63 | 0,504 |
| 100 | 373 | 6,22 | 0,104 | 3,73 | 965 | 1,34 | 80,15 | 0 | 3,73 | 0,503 |
| 200 | 669 | 11,15 | 0,186 | 3,35 | 1076 | 1,36 | 81,73 | 0 | 3,35 | 0,553 |
| 1000 | 2746 | 45,77 | 0,763 | 2,75 | 1311 | 1,57 | 94,16 | 0 | 2,75 | 0,617 |