
Status

Active

Contact

Pavel Smrz, BUT

User Story

Today’s digital cinematography, the game industry, advanced robotics, and many other fields take advantage of data-intensive video content analysis and processing. One of the key tasks involved consists in large-scale 3D reconstruction from photographic images/video sequences and special remote sensing devices such as LiDAR. Many-core CPU and GPU clusters available in data centres provide a natural platform for such a task.

This story focuses on a preservation scenario dealing with large-scale video processing data and all related processes. It aims at preserving interlinks among raw and derived data, created models, and metadata, and it deals with information quality management procedures within the whole process. It also employs advanced workflows and preservation actions concerned with preserving contextual information of the processed datasets: enhanced capturing/harvesting of information, a meta-representation framework for the analysis models, data reuse models, etc.

Three levels of preservation components need to be combined to cope with user needs in this context (a sketch of a record combining them follows the list):

  • consistency checking and quality assurance of analysis results;
  • preservation of the static context and pre-defined links among data, and of semantic relationships between the raw data and derived knowledge components;
  • selection of characteristics profiling actual runs of particular tasks and influencing their results, and preservation of log components and data-centre performance characteristics.
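
The following sketch (Python, purely illustrative) shows one possible shape of a record that ties these three levels together: links between raw inputs and derived outputs, their semantic relationships, and the run log and node characteristics of the job that produced them. All field names, file names, and values are assumptions made for illustration, not a project-defined schema.

    # Purely illustrative sketch of a combined link record; not a project-defined schema.
    from dataclasses import dataclass
    from typing import Dict, List


    @dataclass
    class PreservationLinkRecord:
        """Ties one derived result back to its raw inputs and the run that produced it."""
        raw_inputs: List[str]                 # identifiers of source images, video shots, LiDAR scans
        derived_outputs: List[str]            # identifiers of reconstructed models or renderings
        semantic_relations: Dict[str, str]    # e.g. {"derivedFrom": "...", "partOfScene": "..."}
        run_log: str                          # reference to the captured task log
        node_characteristics: Dict[str, str]  # e.g. node name, GPU type, average load during the run


    record = PreservationLinkRecord(
        raw_inputs=["video/shot_0042.mp4", "lidar/scan_0042.las"],
        derived_outputs=["models/area_12/mesh.ply"],
        semantic_relations={"derivedFrom": "video/shot_0042.mp4"},
        run_log="logs/run_0042.json",
        node_characteristics={"node": "gpu-07", "avg_load": "0.83"},
    )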

Preservation strategies for data centres also need to take into account the specific characteristics of the platform provider / customer setting. The centres usually operate independently of the particular application domain and are accessed remotely to process a specific set of data. Thus, voluminous input data needs to be transferred first and the results sent back to the task owner. Preservation has to reflect the distinct roles of the data owner and the platform provider and pay attention to access rights and security in general.

A particular story that will define the basis for specific preservation experiments aims at building detailed 3D models of a large area. The algorithms involved take advantage of the GPU cluster available at the Timisoara data centre. The quality of the results (e.g., the coverage of the area in focus) generally depends on the task dispatching mechanism, the actual performance characteristics of individual nodes, and the overall load during processing. These features therefore need to be logged and kept as part of the preserved links between the raw data and the results.
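
As an illustration of the kind of logging meant here, the sketch below (Python; Unix-only because of os.getloadavg) captures basic per-node load figures during a run and appends them to a per-task log that can later be linked to the results. The log format, field names, and paths are assumptions, not a fixed design.

    # Illustrative only: one way to capture per-node performance characteristics
    # during a reconstruction run so they can be preserved alongside the results.
    # Paths and field names are assumptions, not part of any project-defined format.
    import json
    import os
    import socket
    import time


    def capture_node_snapshot():
        """Record basic node identity and load at a point in the run."""
        load1, load5, load15 = os.getloadavg()
        return {
            "node": socket.gethostname(),
            "timestamp": time.time(),
            "load_1min": load1,
            "load_5min": load5,
            "load_15min": load15,
        }


    def append_run_log(task_id, snapshot, log_path="run_log.jsonl"):
        """Append one snapshot to a per-task log that is later linked to the results."""
        entry = {"task_id": task_id, **snapshot}
        with open(log_path, "a") as log:
            log.write(json.dumps(entry) + "\n")


    append_run_log("reconstruct-area-12", capture_node_snapshot())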

The input data will first be transferred from BUT servers to the UVT data centre. It takes the form of image and video files as well as LiDAR measurements. The input then needs to be pre-processed before entering the main 3D reconstruction and rendering component; for example, a video file needs to be split into individual shots. To preserve the links between the input and the results in a consistent form, it is therefore necessary to validate the data transfer, the formats of input files, and the results of their pre-processing. These processes will take advantage of the map-reduce schema and its Hadoop implementation running on the standard servers available in the data centre.
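
A minimal sketch of such a validation step is shown below: a Hadoop-streaming-style mapper written in Python that checks transferred files against a checksum manifest. The manifest format, and the assumption that the listed files are readable from the node running the mapper, are illustrative choices rather than a description of the actual workflow.

    #!/usr/bin/env python
    # Minimal Hadoop-streaming-style mapper sketch: verify that transferred files
    # match their expected checksums before pre-processing. The manifest format and
    # file locations are assumptions for illustration, not the project's actual setup.
    import hashlib
    import sys


    def sha256_of(path, chunk_size=1 << 20):
        """Compute the SHA-256 digest of a file in fixed-size chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()


    def main():
        # Each input line: "<path>\t<expected_sha256>" taken from the transfer manifest.
        for line in sys.stdin:
            path, expected = line.rstrip("\n").split("\t")
            try:
                status = "OK" if sha256_of(path) == expected else "CHECKSUM_MISMATCH"
            except OSError:
                status = "MISSING"
            print("%s\t%s" % (path, status))


    if __name__ == "__main__":
        main()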

An initial setting of the large-scale experiment (referred to as the first phase in the DoW) will involve a simple distribution schema of the 3D reconstruction and rendering tasks over subsets of the data and the individual nodes available. The processing will be defined as specific Taverna workflows that will be compared in terms of their effectiveness in reaching a defined result.
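
Only as an illustration of what such a simple distribution schema might look like, the snippet below assigns input subsets (e.g., video shots) to the available nodes in a round-robin fashion; the node names and the policy itself are assumptions, and the actual schema will be expressed in the Taverna workflows.

    # A minimal sketch of a simple distribution schema: round-robin assignment of
    # input subsets (e.g. video shots) to available nodes. Names are illustrative.
    def round_robin_assign(items, nodes):
        """Assign each item to a node in turn; returns {node: [items...]}."""
        assignment = {node: [] for node in nodes}
        for i, item in enumerate(items):
            assignment[nodes[i % len(nodes)]].append(item)
        return assignment


    shots = ["shot_%04d" % i for i in range(10)]
    nodes = ["gpu-01", "gpu-02", "gpu-03"]
    print(round_robin_assign(shots, nodes))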

The final version of the preservation experiment will involve more advanced workflows and preservation actions focusing on the semantics-aware dynamic context of the processed datasets, sophisticated dispatching techniques based on harvested information, and a thorough analysis of the results aiming at their low-barrier reuse.

Specific User Story Definition

As a researcher dealing with SLAM (simultaneous localization and mapping) and visual geo-localization, I need to preserve the results of large-scale scene reconstruction and rendering, together with the related source objects and the parameters of the computation process, so that the results will be available for (re-)use and further refinement in the long term.

User Requirements/Components

A suite of tools for preservable large-scale scene reconstruction and annotation is needed:

  • Toolkit for preserving large-scale experiments, providing functions divided into the following groups (a minimal interface sketch follows this list):
  1. Functions for preserving data centre environment details
  2. Functions for creating and analysing interlinks
  3. Functions and tools for preserving input and output files
  4. Functions for checking metadata consistency
  • Experimental video processing applications:
  1. Application for distributed 3D scene reconstruction equipped with the preservation toolkit
  2. Application for large-scale scene annotation and localization
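
The following Python skeleton is one possible, non-binding way of organising the four toolkit function groups behind a single interface; all class and method names are assumptions made for illustration.

    # Purely illustrative skeleton of the toolkit's four function groups; not an agreed API.
    class PreservationToolkit:
        """Groups the functions listed above behind one entry point."""

        def preserve_environment(self, node_ids):
            """Capture data centre environment details (hardware, software versions, load)."""
            raise NotImplementedError

        def link(self, source_id, target_id, relation):
            """Create an interlink between a raw object and a derived object."""
            raise NotImplementedError

        def preserve_files(self, paths):
            """Store input and output files together with fixity information."""
            raise NotImplementedError

        def check_metadata_consistency(self, record_ids):
            """Verify that metadata records and interlinks are mutually consistent."""
            raise NotImplementedError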

Experiments


Developer Notes

Space for discussion, suggested solutions, links to other user stories, etc.

Related Documents

Scenarios, case studies, etc. that provide background to this story.
