P&S Telecon - January 19, 2016

From Federation of Earth Science Information Partners
Revision as of 23:51, 10 February 2016 by Graybeal (Talk | contribs)

Google Doc Notes

Webex Recording Streaming Access

Webex Recording Download (note: to play the downloaded recording, you will need to install the Webex ARF Player.)

Technology Evaluation Framework Final Report

The TEF final report is out; John Graybeal presented recommendations for the TEF.

Presentation Slides (pdf)

Final Report

TEF TRL Spreadsheet

TEF TRL Development Spreadsheet - an archival copy of a development path (with Fitness questions) not chosen

Please read the provided documents and watch the first 20 minutes of the archived recording for the recommendations, which are based on the development of the spreadsheet and on feedback from the evaluation teams.

The final outcome was that the evaluation teams felt the TEF-generated TRL score matched what they considered an appropriate TRL for the project. Feedback from the evaluation teams indicated a need for some customization based on the kind of technology evaluated, and for guidance on handling situations where certain information (such as source code) isn't, or can't be, provided by the PIs, or where the evaluator felt a question was outside their knowledge or expertise. The larger question is finding a balance between repeatability and customization.

There was no specific discussion of modifying the TEF for the next AIST evaluation round.

AIST updates from the Winter 2016 meeting

An update from Annie Burgess on the AIST evaluation session.

From Mike Little: there's room to improve, but he was impressed with the first evaluation round, and the process will continue for another AIST evaluation round.

There was a lot of discussion in the room and a good mix of people: computer scientists, managers, etc. A question was raised about whether we could create a template for software evaluation; there was interest in creating and refining one, as well as in using one. There was also interest in comparing other sources of evaluation recommendations against those used for the TEF (the sources are listed in the report document).

New or Ongoing Evaluation Efforts

P&S awarded two new testbed projects in December 2015, one through the Disaster cluster and one through the Semantic Web Committee.

We discussed the evaluation needs of both. For the Semantic Web testbed, they will be looking at current practices for evaluating ontologies and P&S and ESIP staff will be working with them to coordinate an evaluation process for those knowledge artifacts. The Disaster project is interested in data evaluation and possible use of the Data Maturity Matrix from the Data Stewardship committee.

Data Maturity Matrix References:

Summer 2015 Session

template on figshare

Long-term maintenance and improvements for the evaluation process, whether for the TEF or for the outcomes of the testbed projects above, will take additional effort. We should see who in ESIP is interested in these topics and in helping to identify evaluation needs and processes. ESIP hasn't been a long-term maintainer of projects previously, but will need to identify someone to handle the ongoing maintenance of the TEF, at least through the upcoming AIST evaluation rounds.

Ken Keiser noted that EarthCube is also working to develop a testbed activity and evaluation processes for that community. There's interest in building off of lessons learned and products developed by ESIP for that effort with additional evaluation needs targeted at interoperability and re-use.

In addition, there's earlier work from ESIP members on infusion potential evaluation, a self-assessment process from 4-5 years ago involving Peter Fox and Chris Lynnes, as well as efforts from ESDSWG (Karen Moe). (References to come.) See also Bob Downs' work on reuse readiness levels presented at the winter meeting.

We (Soren Scott & Anne Wilson) are discussing software evaluation rubrics and progressions through the Science Software cluster and BESSIG. John Graybeal noted that no such rubrics currently exist. For BESSIG, that discussion will take place at the February meeting. We may also want to consider a cross-cluster working group, starting with members of the Science Software cluster, the Disasters cluster (which has stated an interest in evaluating services), and Data Stewardship.

OGC 12/13 Testbed Update

The initial discussion came out of the Testbed session at the 2016 Winter meeting (Christine White and George Percivall (OGC)). ESIP members can participate in OGC's Testbed 12, but the RFP deadline has passed, so that participation is unfunded. Funding is available for participation in Testbed 13.

We will continue the discussions, particularly as they relate to OGC Testbed 13, in early February.

Action Items

  1. Find resources for earlier infusion potential work;
  2. Continue discussions with the Semantic Web and Disaster testbed participants about their evaluation needs and process development;
  3. Look for other ESIP members interested in participating in a cross-cluster effort related to software/technology evaluation;
  4. Continue discussions on OGC Testbed 12 & 13 participation and potential ESIP testbed integrations (early February);
  5. Co-chair nominations and election.