Checklist for Evaluation Participation

From Federation of Earth Science Information Partners

Note: this document is in draft and does not necessarily reflect the final policies and procedures.

Checklist for Project Evaluation

This document describes the technical requirements for participation in an ESIP (P&S) technical evaluation. The situation is similar to a testbed accepting only projects at TRL 4/5 or above; here, however, we are concerned with specific features and functionality that must be present before any effective evaluation can occur.

In some cases, your project may consist of multiple research object types, each of which can be evaluated: for example, a project consisting of both a web application and an ontology. Currently, this requires multiple assessment activities. Alternatively, you can limit the assessment to just one of the research objects, for example by requesting an assessment of the ontology only.

If your project doesn’t meet one or more of these requirements, ESIP won’t accept the project into the formal evaluation process; however, we can provide an informal assessment offering guidance on moving your project forward.

Any Evaluation

For any external assessment, the evaluators need some information about what the system is expected to do. Consider: if the evaluator has no information on what the system, or some part of the system, is meant to do, the evaluator will necessarily make assumptions during testing, and those assumptions may cast the software in a negative light simply because the evaluator is assessing the wrong thing for lack of information about the right one. Most, if not all, of this information can be provided through one of the documentation modes already encouraged for supporting software, so this should not be considered an additional burden on the PI for participation in the evaluation process. Indeed, code and project documentation are often key components of the evaluation criteria.

Software Evaluation

  1. Security Constraints
    1. If you are hosting the system to be evaluated (it can’t be run locally by an evaluator or on a test platform by ESIP), please ensure that temporary credentials can be provided to the evaluators for the duration of the assessment process and that those credentials can and will be revoked on completion of the test process.
    2. If ESIP or the evaluators are hosting the system, provide documentation for the safe installation and administration of access credentials. Should the project under evaluation be part of a containerized system provided by another group, provide enough information about that external system for ESIP or the evaluators to locate the appropriate documentation.
  2. Access to Code
    1. The full software evaluation process is predicated on evaluators having access to the codebase. However, this may not be possible in all cases and does not automatically exclude the project from evaluation. It does change the nature of the evaluation to one that is evaluating reusability from the end user perspective.
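The credential lifecycle described above (issue temporary credentials for the assessment window, then revoke them on completion) can be sketched as follows. This is a minimal illustration only, using a hypothetical in-memory token store; ESIP does not prescribe any particular credential mechanism.

```python
import secrets
from datetime import datetime, timedelta, timezone


class TemporaryCredentialStore:
    """Illustrative store for time-limited evaluator credentials."""

    def __init__(self):
        # Maps each issued token to its expiry timestamp.
        self._tokens = {}

    def issue(self, valid_days=30):
        """Issue a token that expires after the assessment window."""
        token = secrets.token_urlsafe(32)
        self._tokens[token] = datetime.now(timezone.utc) + timedelta(days=valid_days)
        return token

    def is_valid(self, token):
        """A token is valid only if it exists and has not expired."""
        expiry = self._tokens.get(token)
        return expiry is not None and datetime.now(timezone.utc) < expiry

    def revoke(self, token):
        """Revoke the token on completion of the test process."""
        self._tokens.pop(token, None)
```

A host might issue one token per evaluator at the start of the assessment and revoke them all when the final report is delivered.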

INSERT: outline cases when withholding code is allowed.

  3. Documentation
    1. If no documentation is provided, whether in the code itself, as API documentation, or through other common documentation practices, it can be difficult for the evaluators to effectively assess the project as a whole. As noted above, this can undermine the validity of the final assessment, to the detriment of both evaluators and PIs. ESIP strongly encourages you to add relevant documentation before continuing with the full evaluation process, but will consider an informal assessment if both parties agree.
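As one example of documentation in the code itself, a docstring that states a function's inputs, defaults, and return value gives evaluators the context they need to assess behavior against intent. The function name and parameters here are purely illustrative, not part of any ESIP-specified API.

```python
def reproject_granule(granule_path, target_crs="EPSG:4326"):
    """Reproject a data granule to a target coordinate reference system.

    Parameters
    ----------
    granule_path : str
        Path to the input granule file.
    target_crs : str
        EPSG code of the target CRS (default "EPSG:4326").

    Returns
    -------
    str
        Path to the reprojected output file.
    """
    ...  # implementation omitted; the docstring is the point of this sketch
```

With documentation at this level, an evaluator can test the function against its stated contract rather than guessing at the intended behavior.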

Other Research Object Evaluation

This section will be updated as new RO evaluation types are added.