
This page describes the installation of the SCAPE platform in a High Performance Computing centre. The hardware platform is described here.

Overview

Deploying the SCAPE platform in large data centres or in cloud computing environments raises specific challenges, such as dynamic allocation of components to compute nodes, monitoring of the platform, and quality of service. Automated cluster provisioning and platform deployment is achieved by integrating and/or extending a set of specific tools:

  • Node deployment (Cobbler): lets administrators dynamically allocate nodes. Systems can be added to and removed from the node deployment and configuration management systems on the fly, on both bare-metal computing hardware and virtualized computing resources.
  • Configuration management (Puppet): on-the-fly software deployment using a Puppet setup customized to SCAPE needs. It supports the evolution of SCAPE software packages through "high-level" recipes that describe the tools and the relations between them, and it enables dynamic allocation of SCAPE components to computing resources with minimal human intervention, yielding a more deterministic deployment process in which software is deployed exactly as the developers intended.
  • Monitoring/quality measures: Puppet natively supports integrating with and deploying the Nagios monitoring solution, which allows operators and administrators to provide a better quality of service (QoS).
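The Puppet/Nagios integration mentioned above typically relies on Puppet's exported resources: each managed node exports its own Nagios host and service definitions, and the monitoring server collects them. A minimal sketch, assuming hypothetical class names (the actual SCAPE modules may differ):

```puppet
# Sketch only: class names are illustrative, not the actual SCAPE modules.
# Each SCAPE node exports a Nagios host entry and a basic SSH check.
class scape::monitored {
  @@nagios_host { $::fqdn:
    ensure  => present,
    address => $::ipaddress,
    use     => 'generic-host',
  }

  @@nagios_service { "check_ssh_${::fqdn}":
    ensure              => present,
    host_name           => $::fqdn,
    check_command       => 'check_ssh',
    service_description => 'SSH',
    use                 => 'generic-service',
  }
}

# On the Nagios server, collect everything the nodes exported.
class scape::nagios_server {
  Nagios_host    <<| |>>
  Nagios_service <<| |>>
}
```

With this pattern, adding a node to the cluster automatically adds it to monitoring on the next Puppet run of the Nagios server, with no manual Nagios configuration.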
Cloud Deployment Toolkit for SCAPE Platform

The toolkit provides the software components and corresponding Puppet modules for deploying critical SCAPE components in cloud environments. The Proof of Concept (PoC) demonstrates the operation of selected SCAPE Platform components in cloud environments, focusing on Eucalyptus and Amazon EC2, and exploits their scalability to provide on-demand computing capacity. The toolkit work comprises:

  • a GUI (web-based portal) for managing SCAPE Platform deployments on Eucalyptus-based clouds and, in later stages, on Amazon Web Services EC2
  • integration with the Puppet and PuppetDB REST APIs
  • an abstraction layer over the EC2 and Eucalyptus APIs, providing a uniform programming environment and thereby ensuring portability

More details, together with a user guide for the SCAPE Cloud Toolkit, can be found on Bitbucket. To orchestrate the deployment of the different components we use the Puppet configuration management system, customized to SCAPE needs. More details on the modules used within the SCAPE project are given in this Bitbucket project. Below are Puppet recipes for the most common components and tools of the SCAPE platform:
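As an illustration of what such a recipe looks like, here is a minimal sketch in the usual package/config/service pattern. The module, package, and service names are assumptions for the example, not the contents of the actual SCAPE modules:

```puppet
# Illustrative sketch of a component recipe; names are hypothetical.
class scape::hadoop::datanode {
  package { 'hadoop-hdfs-datanode':
    ensure => installed,
  }

  file { '/etc/hadoop/conf/hdfs-site.xml':
    ensure  => file,
    source  => 'puppet:///modules/scape/hdfs-site.xml',
    require => Package['hadoop-hdfs-datanode'],
  }

  service { 'hadoop-hdfs-datanode':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/hadoop/conf/hdfs-site.xml'],
  }
}
```

The `require` and `subscribe` relations are what make the deployment deterministic: the configuration file is only managed after the package is installed, and the service is restarted whenever the configuration changes.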

Another tool provides integration between the Hadoop Distributed File System (HDFS) and more 'classical' protocols such as FTP. It is an Apache MINA based FTP server that exposes the HDFS filesystem to local and remote clients lacking HDFS capabilities. One of its main use cases is to facilitate data staging between legacy HPC systems and Hadoop-based computing clusters. See the Bitbucket project for source code and installation instructions.
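The FTP-to-HDFS bridge can itself be rolled out with a Puppet recipe in the same style as the other components. The sketch below is hypothetical throughout: the package name, configuration path, and parameter names are assumptions for illustration; consult the Bitbucket project for the actual installation steps.

```puppet
# Hypothetical deployment sketch for the HDFS-FTP bridge;
# package, paths, and parameters are assumptions, not the real artifact names.
class scape::hdfs_ftp (
  $namenode_uri = 'hdfs://namenode.example.org:8020',
  $listen_port  = 2121,
) {
  package { 'scape-hdfs-ftp':
    ensure => installed,
  }

  file { '/etc/scape-hdfs-ftp/server.conf':
    ensure  => file,
    content => "namenode=${namenode_uri}\nport=${listen_port}\n",
    require => Package['scape-hdfs-ftp'],
  }

  service { 'scape-hdfs-ftp':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/scape-hdfs-ftp/server.conf'],
  }
}
```

Once such a service is running, a legacy HPC system can stage data into the Hadoop cluster with any standard FTP client, with no HDFS libraries installed on the HPC side.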

PSNC Data Center services

Based on the scenarios identified by WCPT, it was agreed to implement and deploy several services at PSNC that integrate with the WCPT working environment. The following services are needed to execute specific scenarios at WCPT:

  • DICOM download service - responsible for providing access to all anonymized DICOM files stored at PSNC. It is necessary for executing the scenario named large-scale access at hospital, because the working environment at WCPT needs the stored DICOM files in order to present them to all interested WCPT users.
  • DICOM HDFS-enabled server - provides the ability to upload anonymized DICOM files to PSNC's HDFS cluster. It is necessary for executing the scenario named large-scale ingest of medical data, as the working environment at WCPT needs to transfer anonymized DICOM files to the PSNC Data Center.
  • HL7 HDFS-enabled gateway - 