
Evaluator(s)

Tomasz Hoffmann (PSNC)

Evaluation points

The objective of this evaluation task was to test the efficiency of a single instance of the DICOM data receiver available within the Medical Data Center (MDC) platform at PSNC. Ingest performance was measured using the metric "number of objects per second". Other metrics gathered during the tests provide additional information: throughput in bytes per second, max object size handled in bytes, and min object size handled in bytes. The metric goal was set to 0.25 objects per second to make sure that it is possible to ingest 10 GB of data per day, which is the approximate amount of data produced by the WCPT hospital each day. The evaluation is composed of 4 tests with 5, 10, 15, and 20 concurrent threads sending DICOM files from a single computer (client). The evaluation did not include the process of copying data to the archiving system (cloud data storage).
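The 0.25 obj/s goal can be cross-checked against the stated 10 GB/day requirement and the average file size of the test data set (12 550 MB across 27 053 DICOM files, per Table 1). A minimal sketch of that derivation:

```python
# Cross-check of the metric goal: 10 GB/day expressed as objects per
# second, using the average file size of the test data set (Table 1).
SECONDS_PER_DAY = 24 * 60 * 60                 # 86 400 s

daily_volume_bytes = 10e9                      # ~10 GB/day produced by the hospital
total_set_bytes = 12_550e6                     # 12 550 MB test data set
total_set_files = 27_053                       # number of DICOM files in the set

avg_file_bytes = total_set_bytes / total_set_files           # ~464 kB per file
required_bytes_per_s = daily_volume_bytes / SECONDS_PER_DAY  # ~116 000 bytes/s
required_obj_per_s = required_bytes_per_s / avg_file_bytes   # ~0.25 obj/s

print(round(required_bytes_per_s), round(required_obj_per_s, 2))
```

This reproduces both the throughput goal (~116 000 bytes/s) and the objects-per-second goal (0.25 obj/s) from the assessment table.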

Assessment of measurable points
Metric: number of objects per second (number of ingested DICOM files* per second)
Metric goal: 0.25 [obj/s]
  June 30, 2014 [5T]:  5 x 7.01  = 35.05 [obj/s]
  July 1, 2014 [10T]:  10 x 4.50 = 45.00 [obj/s]
  July 1, 2014 [15T]:  15 x 3.08 = 46.20 [obj/s]
  June 27, 2014 [20T]: 20 x 2.36 = 47.20 [obj/s]

Metric: throughput in bytes per second
Metric goal: 116 000 [bytes/s]
  June 30, 2014 [5T]:  3 252 209 [bytes/s]
  July 1, 2014 [10T]:  2 088 492 [bytes/s]
  July 1, 2014 [15T]:  1 429 771 [bytes/s]
  June 27, 2014 [20T]: 1 091 593 [bytes/s]

Metric: max object size handled in bytes (max size of DICOM file)
Metric goal: -
  all four tests: 1 827 084 [bytes]

Metric: min object size handled in bytes (min size of DICOM file)
Metric goal: -
  all four tests: 44 090 [bytes]

Notes:

  • an object is a single DICOM file
  • xT means x sending threads
  • the results in the table are statistics per sending thread, multiplied by the number of threads to obtain the overall statistic

Metrics must be registered in the metrics catalogue

Visualisation of results

The chart below shows the relation between ingest speed (in objects per second) and the number of sending threads used in the test. The tests show that a single client can reach up to approx. 45 objects per second, which is in fact the limit of the client computer.
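The saturation effect visible in the chart can be reproduced directly from the per-thread rates in the assessment table above; a quick sketch:

```python
# Overall ingest rate per test: per-thread rate (from the assessment
# table) multiplied by the number of sending threads.
threads = [5, 10, 15, 20]
per_thread_rate = [7.01, 4.50, 3.08, 2.36]   # [obj/s] per sending thread

overall = [t * r for t, r in zip(threads, per_thread_rate)]
for t, total in zip(threads, overall):
    # The total rate grows only marginally beyond 10 threads, which is
    # consistent with the client computer being the bottleneck.
    print(f"{t:2d} threads: {total:5.2f} obj/s")
```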

[Chart: ingest speed (obj/s) vs. number of sending threads]

The tables below provide additional statistics. Table 1 presents a summary of all files and series used in the test (please note that a single series is composed of multiple DICOM files related to a single patient's examination, e.g. a CT scan or an X-ray). Tables 2 and 3 present overall statistics related to series. Tables 4 and 5 present overall statistics related to files (objects).

Table 1. All files and series stats

Parameter                   Test 1    Test 2    Test 3    Test 4
No. of sending threads      5         10        15        20
All series number           263       263       263       254
All DICOM files number      27 053    27 053    27 053    26 300
All DICOMs size [MB]        12 550    12 550    12 550    12 153
Saving time [s]             3 859     6 009     8 778     11 134
HDFS saving time [s]        3 086     3 285     7 730     9 959
HBase saving time [s]       457       500       544       646

Table 2. Maximum series stats

Parameter                         Test 1    Test 2    Test 3    Test 4
No. of sending threads            5         10        15        20
Max series size [MB]              233       233       233       233
Max series sending time [s]       67        112       163       226
Max series HDFS saving time [s]   52        91        143       198
Max series HBase saving time [s]  10        14        14        28
Max series items number           444       444       444       444

Table 3. Minimum series stats

Parameter                         Test 1    Test 2    Test 3    Test 4
No. of sending threads            5         10        15        20
Min series size [MB]              0.04      0.04      0.04      0.04
Min series sending time [s]       0.06      0.06      0.08      0.09
Min series HDFS saving time [s]   0.04      0.04      0.07      0.07
Min series HBase saving time [s]  0.01      0.01      0.01      0.01
Min series items number           1         1         1

Table 4. Maximum files stats

Parameter                       Test 1    Test 2    Test 3    Test 4
No. of sending threads          5         10        15        20
Max file size [MB]              1.82      1.82      1.82      1.82
Max file sending time [s]       3.55      3.05      2.61      3.74
Max file HDFS saving time [s]   2.58      1.92      1.42      2.35
Max file HBase saving time [s]  1.10      1.20      1.00      1.49

Table 5. Minimum files stats

Parameter                       Test 1    Test 2    Test 3    Test 4
No. of sending threads          5         10        15        20
Min file size [MB]              0.04      0.04      0.04      0.04
Min file sending time [s]       0.06      0.06      0.08      0.07
Min file HDFS saving time [s]   0.04      0.04      0.07      0.05
Min file HBase saving time [s]  0.01      0.01      0.01      0.01

Raw log files

All log files created during the evaluation are available online at:

https://git.man.poznan.pl/stash/projects/SCAP/repos/test-scripts/browse/dicom-test-results/ingest/without_sftp/test01_30_06_2014.log

https://git.man.poznan.pl/stash/projects/SCAP/repos/test-scripts/browse/dicom-test-results/ingest/without_sftp/test02_1_07_2014.log

https://git.man.poznan.pl/stash/projects/SCAP/repos/test-scripts/browse/dicom-test-results/ingest/without_sftp/test03_1_07_2014.log

https://git.man.poznan.pl/stash/projects/SCAP/repos/test-scripts/browse/dicom-test-results/ingest/without_sftp/test04_27_06_2014.log

Technical details

Workflow

The experiment is composed of the following steps:

  1. prepare a data set of DICOM files (~10 GB, which is the amount of data produced by the WCPT hospital in one day),
  2. send the data to the MDC server ([dicomSender.py]) and log the following information on the server:
    1. size of the received DICOM file,
    2. time of saving the data in:
      1. HDFS,
      2. HBase,
  3. parse the server log file in order to evaluate the metrics ([resultsParser.py]; for the metrics description please see the previous point).

Scripts used to execute evaluation

https://git.man.poznan.pl/stash/projects/SCAP/repos/test-scripts/browse/python/DICOM-tests/dicomSender.py

https://git.man.poznan.pl/stash/projects/SCAP/repos/test-scripts/browse/python/DICOM-tests/resultsParser.py

Execution commands

 python dicomSender.py ~/tmp/dicom/
 python resultsParser.py ~/tests/dicom_tests/1_07_2014/dcmreceiverStats.log test01_1_07_2014.txt

Notes:

  • the scripts are written for Python 2.7
  • dcmreceiverStats.log is a log file produced by the dcmrcv tool
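The parsing step can be sketched as follows. This is NOT the actual resultsParser.py, and the log line format below is a hypothetical assumption used only for illustration.

```python
# Illustrative log parser (NOT the actual resultsParser.py). It assumes
# hypothetical receiver log lines of the form:
#   RECEIVED size=123456 hdfs_time=0.42 hbase_time=0.01
import re

LINE_RE = re.compile(
    r"RECEIVED size=(?P<size>\d+) "
    r"hdfs_time=(?P<hdfs>[\d.]+) hbase_time=(?P<hbase>[\d.]+)"
)

def parse_stats(lines):
    sizes, hdfs, hbase = [], [], []
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue  # skip unrelated log lines
        sizes.append(int(m.group("size")))
        hdfs.append(float(m.group("hdfs")))
        hbase.append(float(m.group("hbase")))
    saving_time = sum(hdfs) + sum(hbase)
    return {
        "objects": len(sizes),
        "max_object_bytes": max(sizes),
        "min_object_bytes": min(sizes),
        # rate metrics computed over the total saving time
        "obj_per_s": len(sizes) / saving_time,
        "bytes_per_s": sum(sizes) / saving_time,
    }
```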