

Tomasz Hofmann (PSNC)

Evaluation points

The main goal of this evaluation was to analyse the number of abnormal laboratory examination results for given disease codes in a given period. The investigated period and the list of ICD10 codes are the input parameters of the analysis algorithm. Statistics were gathered using the PSNC Hadoop cluster and the map-reduce approach. The number of objects processed per second has been selected as the evaluation metric (an object is defined as a single HL7 file stored in HDFS).

Assessment of measurable points
Metric                        Description                               Metric baseline  Metric goal  July 21, 2014 [Test 1]  July 28, 2014 [Test 2]  July 30, 2014 [Test 3]
number of objects per second  number of HL7 files processed per second  -                -            4.196 [obj/s]           4.761 [obj/s]           4.979 [obj/s]

Note: a single HL7 file is treated as one object.

Metrics must be registered in the metrics catalogue

Visualisation of results

The chart below presents the results of the analysis for Test 2. Colours indicate different ICD10 disease codes. The test was performed for patients who visited the WCPT hospital between 01-01-2013 and 31-12-2013. Each column indicates the number of abnormal laboratory examination results across all patients. The ICD10 disease codes investigated in this analysis are as follows:

  • A15.0 - Tuberculosis of lung, confirmed by sputum microscopy with or without culture
  • A15.1 - Tuberculosis of lung, confirmed by culture only
  • J85.1 - Abscess of lung with pneumonia 


Additional information

Table 1 presents the processing time of the whole job per test. Tables 2 and 3 provide information on the execution time and the number of processed rows for the map and reduce tasks respectively. The execution times (and performance) of the three tests are similar because, regardless of the analysed period, all HL7 files stored in the cluster must be processed.

Table 1. Overall statistics

Parameter        Test 1               Test 2                Test 3
Analyzed period  1.07.2012-1.07.2014  1.01.2013-31.12.2013
Processing time  80 [m]               71 [m]                68 [m]

Table 2. Statistics for map task

Parameter                          Test 1   Test 2   Test 3
Processing time (for all records)  80 [m]   71 [m]   68 [m]
Number of records                  20 141   20 285   20 315
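
The reported throughput metric can be cross-checked against Table 2: dividing the number of map records by the processing time reproduces the [obj/s] figures from the assessment table. A quick sanity check (not part of the original evaluation scripts):

```python
# Cross-check: objects per second = map records / processing time in seconds.
# Record counts and times are taken from Tables 1-2.
tests = {
    "Test 1": (20141, 80),  # (number of records, processing time in minutes)
    "Test 2": (20285, 71),
    "Test 3": (20315, 68),
}

for name, (records, minutes) in tests.items():
    throughput = records / (minutes * 60)
    print(f"{name}: {throughput:.3f} obj/s")
# Test 1: 4.196 obj/s, Test 2: 4.762 obj/s, Test 3: 4.979 obj/s
```

The computed values agree with the assessment table to within rounding (4.762 vs. the reported 4.761 for Test 2).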

Table 3. Statistics for reduce task

Parameter                          Test 1   Test 2   Test 3
Processing time (for all records)  30 [s]   26 [s]   24 [s]
Number of records                  -        -

Technical details


The experiment is composed of the following steps (according to the MapReduce schema):

  1. the map task:
    1. for each HL7 file stored on HDFS:
      1. parse the document to find abnormal laboratory results, count them, and add the following pair to the context: Key=ICD10 code, Value=count of abnormal results
  2. the reduce task:
    1. for each ICD10 code, accumulate the count of abnormal results
    2. produce the result pair: Key=ICD10 code, Value=number of abnormal results
  3. statistics are gathered by downloading and parsing log files
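
The map and reduce steps above can be sketched in plain Python as a minimal, Hadoop-free simulation. The input structure and parsing are assumptions for illustration only: real input would be HL7 v2 messages read from HDFS, not pre-parsed tuples, and this is not the actual PSNC implementation.

```python
from collections import defaultdict

# Simulated input: each "file" is (icd10 code, list of lab results),
# where a result is (examination code, is_abnormal). This structure is
# an assumption standing in for parsed HL7 files.
hl7_files = [
    ("A15.0", [("RDW", True), ("RDW", False), ("RDW", True)]),
    ("A15.1", [("RDW", True)]),
    ("A15.0", [("RDW", True)]),
]

def map_task(icd10, results):
    # Emit (Key=ICD10 code, Value=count of abnormal results) per file.
    abnormal = sum(1 for _code, is_abnormal in results if is_abnormal)
    return icd10, abnormal

def reduce_task(pairs):
    # Accumulate the abnormal-result counts per ICD10 code.
    totals = defaultdict(int)
    for icd10, count in pairs:
        totals[icd10] += count
    return dict(totals)

mapped = [map_task(code, results) for code, results in hl7_files]
print(reduce_task(mapped))  # {'A15.0': 3, 'A15.1': 1}
```

In the actual Hadoop job these functions would be a Mapper and a Reducer writing key/value pairs through the job context rather than returning them directly.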

Scripts used to execute evaluation

Execution commands

./ -admission 20090601 -destination ./test3/laboratory.png -discharge 20140710 -hospital wcpit -icd10s J85.1 -icd10s A15.0 -icd10s A15.1 -laboratory RDW -width 800 -height 600
./ laboratory

-admission : date of patient admission to hospital
-discharge : date of patient discharge from hospital
-destination : folder for Hadoop job results (use a unique folder per job execution)
-width : width of the chart in pixels
-height : height of the chart in pixels
-icd10s : list of ICD10 codes
-laboratory : laboratory examination code (example: RDW, or another code from the HL7 files)

Important note: please change the -destination for each job execution.
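
As an illustration of how the flags documented above could be parsed, the sketch below uses Python's argparse with the example command's arguments. This is a hypothetical wrapper, not the actual launcher script (whose name is not shown above); the defaults are assumptions.

```python
import argparse

# Hypothetical parser mirroring the documented flags.
parser = argparse.ArgumentParser(description="Abnormal laboratory results analysis")
parser.add_argument("-admission", required=True, help="date of patient admission (YYYYMMDD)")
parser.add_argument("-discharge", required=True, help="date of patient discharge (YYYYMMDD)")
parser.add_argument("-destination", required=True, help="folder for Hadoop job results (unique per run)")
parser.add_argument("-hospital", required=True, help="hospital code, e.g. wcpit")
parser.add_argument("-icd10s", action="append", required=True, help="ICD10 code (repeat flag for a list)")
parser.add_argument("-laboratory", required=True, help="laboratory examination code, e.g. RDW")
parser.add_argument("-width", type=int, default=800, help="chart width in pixels")
parser.add_argument("-height", type=int, default=600, help="chart height in pixels")

# Arguments taken from the example command above.
args = parser.parse_args([
    "-admission", "20090601", "-discharge", "20140710",
    "-destination", "./test3/laboratory.png", "-hospital", "wcpit",
    "-icd10s", "J85.1", "-icd10s", "A15.0", "-icd10s", "A15.1",
    "-laboratory", "RDW", "-width", "800", "-height", "600",
])
print(args.icd10s)  # ['J85.1', 'A15.0', 'A15.1']
```

Note how `action="append"` collects the repeated -icd10s flag into a list, matching the way the example command passes several ICD10 codes.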

Hadoop job
