Definition of top-10 goals and objectives

Top-10 goals and objectives will be defined by testbed WP-leads and reviewed by SP-leads. The following two documents were used in the process of defining the overall goals and objectives.

  • An overview of scenarios (now named user stories) and how they relate to work packages in this matrix
  • An overview of goals, objectives and suggested metrics defined by TB.WP4 and reviewed by SP-leads in this document

Goals and objectives at the component and platform level will be mapped to the SQUARE software quality model; the corresponding diagram can be found in D14.1.
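
As an illustration only, such a mapping could be kept as a simple lookup from goal to the sub-goals it is evaluated against. This is a minimal sketch in Python: the pairings below merely restate the goal/sub-goal columns of the table further down, not the normative mapping given in D14.1.

    # Hypothetical sketch: goals from the top-10 table paired with the
    # sub-goals/SQUARE sub-characteristics they are evaluated against.
    # The authoritative mapping is the one in D14.1; this dict only
    # illustrates how it could be recorded programmatically.
    GOAL_TO_SQUARE = {
        "Performance efficiency": ["Capacity", "Resource utilization", "Time behaviour"],
        "Reliability": ["Stability indicators", "Runtime stability"],
        "Functional suitability": ["Completeness", "Correctness"],
        "Maintainability": ["Reusability", "Organisational fit"],
    }

    # Example lookup for one goal.
    print(GOAL_TO_SQUARE.get("Reliability", []))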

A number of project-wide objectives are defined in the Description of Work, Part B, pages 9-11:

  • DoW-1: Addressing the problem of scalability in four dimensions: number of objects, size of objects, complexity of objects, and heterogeneity of collections
  • DoW-2: Introducing automation and scalability in the areas of (2a) Preservation actions, (2b) Quality assurance, (2c) Technical watch, and (2d) Preservation planning
  • DoW-3: Answering the question, what tools and technologies are optimal for scalable preservation actions, given a defined set of institutional policies?
  • DoW-4: Providing a methodology and tools for capturing contextual information across the entire digital object lifecycle
  • DoW-5: Producing a reliable, robust integrated preservation system prototype within the timeframe of the project
  • DoW-6: Validating and demonstrating the scalability and reliability of this system against large collections from three different Testbeds
  • DoW-7: Developing a skills-base through training
  • DoW-8: Ensuring a viable future for the results of this and other successful digital preservation projects and engaging with users, vendors, and stakeholders from outside the digital preservation community
  • DoW-9: Providing insight into remaining barriers to take-up, clarifying the business cases for preservation, and investigating models for the provision of scalable preservation services
  • DoW-10: Increasing the variety of SCAPE deployments to include data center environments and new hardware facilities
  • DoW-11: Extending the functionality of SCAPE services to ensure the integrity and privacy of data that is preserved by remote and third-party institutions
  • DoW-12&13: Extending the SCAPE user base with large-scale preservation scenarios from domain scientists and data center customers
  • DoW-14: Increasing publication and dissemination activities, in particular beyond the preservation community

The specific evaluations in the testbed evaluation methodology will be linked to these where appropriate.

Top-10 goals and objectives

The table below describes the 10 goals and objectives that have been chosen as topics for evaluating experiments. The last three columns are solely for overview and should be filled in when experiments and evaluations are performed. Each experiment will select the goals and objectives that are relevant for that particular experiment.
More information about this topic can be found in deliverable D18.1, the first deliverable of the Evaluation of Results work package (TB.WP4).

No | Goal | Sub-goal | Objective | Comments | DoW objectives | Relevant user stories | Evaluations
1 | Performance efficiency | Capacity; Resource utilization; Time behaviour | Improve DP technology to handle large preservation actions within a reasonable amount of time on a multi-node cluster | Evaluates different kinds of performance, e.g. throughput, time per MB, time per sample, memory per sample, maximum number of files (see the sketch below the table) | | |
2 | Reliability | Stability indicators | Package tools with known methods and run development with good open source practices | Support available, release cycle, active community. Not directly relevant for testbeds, but components developed in SCAPE in connection with scenarios in all testbeds could be used to evaluate this | | |
3 | Reliability | Runtime stability | Improve DP technology (platform and tools) to run automated with proper error handling and fault tolerance | E.g. ability to handle invalid input, error codes | | |
4 | Functional suitability | Completeness | Improve the number of file formats correctly identified within a heterogeneous corpus | Identification, Automated Watch | | |
5 | Functional suitability | Correctness | Develop and improve components to perform preservation actions more correctly | Valid and well-formed objects from action tools; QA accuracy (e.g. correct similarity between two files); Automated Watch: correct information | | |
6 | Organisational maturity | Dimensions of maturity: Awareness and Communication; Policies, Plans and Procedures; Tools and Automation; Skills and Expertise; Responsibility and Accountability; Goal Setting and Measurement | Improve the capabilities of organisations to monitor and control preservation operations to a point where SCAPE methods, models and tools enable a best-practice organisation to be on level 4 | This is the compound effect of policy-based planning and watch, cf. the vision described in the paper at ASIST-AM 2011 | | |
7 | Maintainability | Reusability | Increase the number of tools registered in the components catalogue, making them discoverable | This is more of a platform/watch evaluation, not directly linked to any specific scenarios or components | | |
8 | Maintainability | Organisational fit | Ensure SCAPE technology fits organisational needs and competences | How does it fit in an organisation? How easy is it to integrate with existing infrastructure and processes? Should be implicit in all we're doing rather than an explicit testbed requirement. We should be able to evaluate this in any solutions actually implemented in real organisations within the project | | |
9 | Planning and monitoring efficiency | Information gathering and decision making effort | Drastically reduce the effort required to create and maintain a preservation plan | cf. the metrics described in the paper at ASIST-AM 2011 | | |
10 | Commercial readiness | | Evaluate to what extent SCAPE technology is going in a direction that makes it ready for commercial exploitation | | | |
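
For goal 1, the comments above list several candidate figures (throughput, time per MB, time per sample, memory per sample). The following is a minimal sketch of how such figures could be derived from basic measurements of a single experiment run; the function, the field names and the example numbers are hypothetical and not measurements from any testbed.

    # Hypothetical sketch: deriving goal 1 performance-efficiency figures
    # from basic measurements of one experiment run on a cluster.
    def performance_metrics(total_bytes, num_objects, wall_clock_seconds,
                            peak_memory_bytes_per_node, num_nodes):
        """Return throughput, per-MB, per-sample and memory-per-sample figures."""
        megabytes = total_bytes / 1e6
        return {
            "throughput_mb_per_s": megabytes / wall_clock_seconds,
            "objects_per_hour": num_objects * 3600 / wall_clock_seconds,
            "time_per_mb_s": wall_clock_seconds / megabytes,
            "time_per_sample_s": wall_clock_seconds / num_objects,
            # Crude assumption: memory per sample taken as total peak memory
            # across the cluster divided by the number of objects processed.
            "memory_per_sample_mb": (peak_memory_bytes_per_node * num_nodes)
                                    / num_objects / 1e6,
        }

    # Example run (invented figures): 2 TB across 5 million files,
    # 8 hours wall-clock time on a 20-node cluster, 4 GB peak per node.
    print(performance_metrics(2e12, 5_000_000, 8 * 3600, 4e9, 20))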

Example table

No | Goal | Sub-goal | Objective | Comments | DoW objectives | Relevant user stories | Evaluations
EXAMPLE-1 | Performance efficiency | Capacity | Improve DP technology to handle large preservation actions within a reasonable amount of time on a multi-node cluster | This is an example - all definitions and figures are examples | DoW-1: number of objects | | EVAL-EX-1
EXAMPLE-2 | Functional suitability | Completeness | Improve DP technology to identify the majority of web content files with the correct MIME type | This is an example - all definitions and figures are examples | DoW-1: heterogeneity of collections | | EVAL-EX-2
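
As a complement to the example rows, the sketch below shows how one such row could be captured as a structured record so that the last three columns can be filled in when an evaluation is performed. The dataclass and its field names are assumptions made for illustration, not a defined SCAPE schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EvaluationRecord:
        # One row of the goals-and-objectives table; the last three
        # fields stay empty until experiments and evaluations are run.
        no: str
        goal: str
        sub_goal: str
        objective: str
        comments: str = ""
        dow_objectives: List[str] = field(default_factory=list)
        relevant_user_stories: List[str] = field(default_factory=list)
        evaluations: List[str] = field(default_factory=list)

    example_1 = EvaluationRecord(
        no="EXAMPLE-1",
        goal="Performance efficiency",
        sub_goal="Capacity",
        objective="Improve DP technology to handle large preservation actions "
                  "within a reasonable amount of time on a multi-node cluster",
        comments="This is an example - all definitions and figures are examples",
        dow_objectives=["DoW-1: number of objects"],
        evaluations=["EVAL-EX-1"],
    )
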
  1. Jun 29, 2012

    We have to be very careful with the context of this hierarchy - SQUARE only describes software component quality. For example, performance efficiency - time behaviour refers to the SYSTEM runtime. You can't use it to describe a decision making process.

    SQUARE is perfect for components, workflows and platform issues, but not necessarily for human control processes such as planning.

    1. Jun 29, 2012

      I have added two goals for PW. I haven't removed goal 2, though (but it is now subsumed in #9).

    2. Jul 05, 2012

      I totally agree - we should only use SQUARE measures where it actually makes sense.

  2. Jun 29, 2012

    If we go for top-10 goals, I find it easier to define goals in general, not specifically for year 2 (#9 and #10 certainly are general; I can break them up further, but then it's more than 2 ;)

    How do we plan to relate overall 42-months goals with goals for months 13-24?

    1. Jul 05, 2012

      I don't think we need to (or should) call the goals we select for the first round year-2 goals. So we could just call them overall SCAPE goals.

      The relation between what we do for the year-2 evaluation report and the year-4 evaluation is hopefully contained within this structure. We should continue to measure the things we define for the year-2 evaluation (at least until our defined goal for a specific metric is reached), so I see this as continuous development: we keep adding goals while developing solutions and evaluating these within the framework. So we should move goals from the backlog section to the top-evaluation section whenever we feel ready to prove results and improvement for a specific goal.

      1. Jul 05, 2012

        Totally agree. It is difficult (if possible at all) to apply all the same goals uniformly across all testbeds because the testbeds are very different. In reality, different testbeds face different challenges, so it is reasonable to come up with different priorities.

        Although the evaluation goals can evolve throughout the phases of the project and differ between TBs, I think it is also important to bear the original SCAPE objectives in mind so that it is possible that we reach coherent sets of results at the end.

  3. Jul 25, 2012

    I am not sure I understand how the relevance mapping between objectives and testbeds has been decided and how relevance is judged. It looks like we have some misunderstandings in these mappings...

    For example, it is quite clear that information gathering and decision making effort are objectives in all testbeds. In RDST, this is judged relevant for the year-1 workflows, but not relevant to year 2. Similarly, LSDR does not do anything about either Planning or Watch now? I doubt that :) There are a number of scenarios that are very relevant to this, and the other way around. As Erica rightfully pointed out, "it is also important to bear the original SCAPE objectives in mind so that it is possible that we reach coherent sets of results at the end."