
| *Title* \\ | Determine render-ability of displayable web objects |
| *Detailed description* | Whether a digital object can be rendered depends on standards, agreements, and understandings in interfaces and hardware, and there are strong interdependencies between these conditions. Because of these technical dependencies, the content of the web archive might not be render-able. \\
Generally, in order to preserve web content, it is necessary to be aware of new hardware expectations as they arise. For example, as WebGL becomes standard, a 3D accelerator card will probably be required for some sites. Similarly, if multi-touch is added to the standard, there will be a dependency on having multi-touch hardware, and so on. |
| *Scalability Challenge* \\ | \\ |
| *Issue champion* | [Maureen Pennock] (BL) |
| *Other interested parties* \\ | SB: <comment_missing> \\
KB: Low priority. This approach might be of help to improve tools for identification (scenario 1). \\
ONB: Would be interesting, but low priority \\
BL: <comment_missing> |
| *Possible Solution approaches* | * BL:
** One idea would be to apply a brute-force approach where every file is simply passed through every available renderer, in order to check whether the renderer can technically execute the object. If the process fails, this can at least serve as an indicator of difficulties with that object that require a closer look.
* EXL:
** I'm not clear as to how we could know if a file was rendered correctly. An application can open a file successfully but the contents might not be what we expect.
** A mapping between formats and possible renderers can be established, so that not all renderers must be tested. It is not clear how to detect correct rendering automatically: the process may not fail, yet the rendering can still be incorrect.
** Watch can contribute to the solution with the following triggers:
*** Monitor new browsers, browser versions, browser plugins, or usage trends
*** Monitor changes in standards (e.g. new web standards)
*** Monitor changes in standard adoption by different browsers and browser versions (e.g. Acid3)
*** Monitor agreements and understandings in interfaces and hardware \(?)
*** Monitor new renderers, to be added to the brute-force tests |
| *Context* | \\ |
| *Lessons Learned* | _Notes on Lessons Learned from tackling this Issue that might be useful to inform the development of Future Additional Best Practices, Task 8 (SCAPE TU.WP.1 Dissemination and Promotion of Best Practices)_ \\ |
| *Training Needs* | _Is there a need for providing training for the Solution(s) associated with this Issue? Notes added here will provide guidance to the SCAPE TU.WP.3 Sustainability WP._ \\ |
| *Datasets* | \\ |
| *Solutions* | |
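BL's brute-force approach, combined with EXL's suggestion of a format-to-renderer mapping, could be sketched roughly as below. This is a minimal illustration, not part of the issue as recorded: the `RENDERERS` mapping, the renderer commands, and the timeout value are hypothetical placeholders, and, as EXL notes, a clean exit code still does not prove the rendering was visually correct.

```python
import mimetypes
import subprocess

# Hypothetical mapping from MIME type to candidate renderer commands;
# a real deployment would populate this from a format registry so that
# only plausible renderers are tested per format.
RENDERERS = {
    "text/html": [["firefox", "--headless", "--screenshot"]],
    "application/pdf": [["pdftoppm", "-f", "1", "-l", "1"]],
}

def check_renderability(path, timeout=30):
    """Pass the file to every candidate renderer for its format.

    A non-zero exit status or a timeout is only an *indicator* of
    trouble that warrants a closer look; a clean exit does not prove
    the object was rendered correctly.
    """
    mime, _ = mimetypes.guess_type(path)
    results = []
    for cmd in RENDERERS.get(mime, []):
        try:
            proc = subprocess.run(cmd + [path], capture_output=True,
                                  timeout=timeout)
            results.append((cmd[0], proc.returncode == 0))
        except (subprocess.TimeoutExpired, FileNotFoundError):
            # Renderer hung or is not installed: flag for inspection.
            results.append((cmd[0], False))
    return results
```

The per-format mapping keeps the brute-force pass tractable at scale, and each `(renderer, passed)` pair could feed the Watch triggers listed above, e.g. re-running the tests whenever a new renderer is added.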

h1. Evaluation

| *Objectives* | _Which SCAPE objectives do this issue and a future solution relate to? e.g. scalability, robustness, reliability, coverage, preciseness, automation_ |
| *Success criteria* | _Describe the success criteria for solving this issue - what are you able to do? - what does the world look like?_ |
| *Automatic measures* | _What automated measures would you like the solution to give to evaluate the solution for this specific issue? which measures are important?_ \\
_If possible specify very specific measures and your goal - e.g._ \\
_&nbsp;\* process 50 documents per second_ \\
_&nbsp;\* handle 80Gb files without crashing_ \\
_&nbsp;\* identify 99.5% of the content correctly_ \\ |
| *Manual assessment* | _Apart from the automated measures that you would like to get, do you foresee any necessary manual assessment to evaluate the solution of this issue?_ \\
_If possible specify measures and your goal - e.g._ \\
_&nbsp;\* Solution installable with basic linux system administration skills_ \\
_&nbsp;\* User interface understandable by non developer curators_ \\ |
| *Actual evaluations* | _Links to actual evaluations of this Issue/Scenario_ |