Past Testbed Tasks
|ESIP Sponsoring Group
|Addressing an Immediate Need: Establishing the Multi-State Fleet Response Working Group C-COP to Accelerate Geospatial Data Testing Across Public and Private Sectors
|Addresses the need to connect disparate agencies and organizations for sharing real-time data during a disaster event. The project will use GeoCollaborate™ to access public- and private-sector data sources that help operations professionals locate, and be routed to, open places of business providing supplies and services during a disaster or prolonged power outage. StormCenter and the All Hazards Consortium (AHC) will lead collaborative decision-making sessions with Fleet Response Working Group members to gather feedback on how this approach may affect their decision-making environment.
|Expanding a Collaborative Common Operating Picture (CCOP) to Accelerate Geospatial Data Testing
|The Disaster Lifecycle Cluster seeks to build upon the previously funded proposal that established a Collaborative Common Operating Picture (CCOP) as a platform for sharing geospatial data in a collaborative environment. By increasing the number of instances available to the initial group of users/participants within the ESIP Disaster Lifecycle Cluster, two additional organizations can conduct their own testing and build collaborative connections among ESIP member data providers and potential users supporting the disaster lifecycle and end-user communities.
|Evaluating Prototypes in ESIP’s Testbed Ecosystem (FastTrack)
|Responds to the ESIP Fast-Track solicitation and addresses the need for product evaluation criteria and mechanisms suitable for internal and community use. We propose an analysis that surveys and consolidates existing evaluation strategies for community products and recommends an approach suitable for submissions to ESIP's Testbed process. While targeting initial entry and exit criteria for products at ESIP Prototype status, the proposal anticipates a framework that can be developed and rolled out incrementally and later applied to Testbed products at increasing readiness levels.
|Products & Services
|Disaster Life Cycle Testbed - An ESIP Product & Service Testbed Proposal: Establishing a Collaborative Common Operating Picture (C-COP)
|Starts a framework for addressing recent Presidential Executive Orders (PEOs) on the importance of building resilience in the face of a changing climate, both nationally and internationally. The testbed will provide a forum in which ESIP members can not only improve their products but also share best practices with other members considering how they, too, might offer data products to the disaster-response community.
|Connect, Share and Discover ESIP Research and Expertise using VIVO Technology
|The ESIP community needs a searchable database cataloguing the research and expertise of ESIP members to promote integrated and interdisciplinary research. VIVO is a semantic-web-based research and expertise discovery tool. This work uses VIVO to research and develop a testbed system for collecting and discovering ESIP research and expertise, including extending the VIVO ontology with an ESIP research-and-expertise ontology.
|ESIP Testbed Web Support
|An entity linking service for documents and datasets in Earth and environmental sciences
|(1) Engage member organizations of ESIP to use the services and to share their ontologies and vocabularies to build the knowledge base; and (2) design, build, and deploy online a service that supports entity linking in Earth and environmental sciences.
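At its simplest, entity linking of this kind maps surface forms in a document to concept URIs from a shared vocabulary. The sketch below assumes a toy dictionary-based linker; the vocabulary entries and URIs are illustrative, not the actual ESIP ontologies.

```python
# Minimal dictionary-based entity linking sketch (hypothetical vocabulary;
# a real deployment would load terms from ESIP member ontologies).
import re

# Toy knowledge base: surface form -> concept URI (illustrative URIs only)
VOCAB = {
    "sea surface temperature": "http://example.org/sweet/SeaSurfaceTemperature",
    "aerosol": "http://example.org/sweet/Aerosol",
    "precipitation": "http://example.org/sweet/Precipitation",
}

def link_entities(text):
    """Return (surface form, URI, offset) for each vocabulary term found in text."""
    matches = []
    lowered = text.lower()
    for term, uri in VOCAB.items():
        for m in re.finditer(re.escape(term), lowered):
            matches.append((term, uri, m.start()))
    return sorted(matches, key=lambda t: t[2])

doc = "This dataset reports sea surface temperature and aerosol optical depth."
for term, uri, offset in link_entities(doc):
    print(offset, term, "->", uri)
```

A production service would add tokenization, disambiguation across overlapping terms, and exposure as a web endpoint; this sketch only shows the core lookup step.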
|ToolMatch Service Testbed Project Proposal to Expand Community Engagement
|In order to make further progress on the viability and robustness of the ToolMatch service, much more instance data needs to be added to the knowledge store... Testing the service by means of an online hackathon should also make the service more broadly known... In-depth analysis of the types of data collections, visualization tools, and technologies used by these data catalogs and registries.
|ToolMatch - Semantic Web Cluster
|ToolMatch Service Testbed Project
|Addresses two use cases by building out the ToolMatch service: 1) it is difficult to know what tools can be used on a dataset, and 2) the converse, it is difficult to know what datasets a tool is capable of working on. The ToolMatch service will have, at its foundation, a simple ontology and a set of rules describing which kinds of tools work with which kinds of datasets. For both use cases, a simple user interface for user interaction and a simple RESTful web service for use by applications and data portals will give clients access to the ToolMatch knowledge base, with the same goal of matching tools with data.
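The two use cases above can be sketched as rule-based matching over simple type descriptions. The tool/dataset entries and the format-overlap rule below are assumptions for illustration, not the actual ToolMatch ontology.

```python
# Sketch of the two ToolMatch use cases as rule-based matching over
# format descriptions (entries are illustrative, not real ToolMatch data).
TOOLS = {
    "Panoply": {"netCDF", "HDF5"},   # formats each tool can open
    "GDAL":    {"GeoTIFF", "netCDF"},
}
DATASETS = {
    "MODIS L3 SST":  {"HDF5"},       # formats each dataset is distributed in
    "Landsat scene": {"GeoTIFF"},
}

def tools_for_dataset(name):
    """Use case 1: which tools can be used on this dataset?"""
    formats = DATASETS[name]
    return sorted(t for t, accepts in TOOLS.items() if accepts & formats)

def datasets_for_tool(name):
    """Use case 2 (the converse): which datasets can this tool work on?"""
    accepts = TOOLS[name]
    return sorted(d for d, fmts in DATASETS.items() if fmts & accepts)
```

In the real service these lookups would be answered by reasoning over the ontology and exposed through the RESTful interface; the set-intersection rule stands in for that rule base.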
|Semantic Web Cluster, Energy & Climate Cluster
|Michael Huhns, Line Pouchard
|Evaluating the ESIP Ontologies for Mapping and Reconciliation
|Many organizations, groups, and individual scientists are developing ontologies to specify the semantics of their domains of interest in the environmental sciences. The ontologies are useful, but largely exist in isolation. There are major benefits to be gained by relating the ontologies to each other and reconciling their differing specification languages. The objective of our effort is to develop a semi-automated means of curating ontologies and reconciling their representations. The result will be greatly improved accessibility and usability of the ontologies, which will help accelerate research in the environmental sciences.
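One common first pass in semi-automated reconciliation is to propose candidate equivalences between class labels by string similarity and leave the final decision to a human curator. The term lists below are made up for the sketch; real input would be classes drawn from the ESIP ontologies.

```python
# Semi-automated mapping pass: propose label-similarity matches between two
# ontologies' term lists for a curator to confirm (terms are illustrative).
from difflib import SequenceMatcher

def propose_mappings(terms_a, terms_b, threshold=0.8):
    """Return (term_a, term_b, score) candidates above the similarity threshold."""
    proposals = []
    for a in terms_a:
        for b in terms_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                proposals.append((a, b, round(score, 2)))
    return sorted(proposals, key=lambda p: -p[2])

onto1 = ["SeaSurfaceTemperature", "AtmosphericPressure"]
onto2 = ["sea_surface_temperature", "air_pressure"]
for a, b, score in propose_mappings(onto1, onto2):
    print(f"{a}  <->  {b}   ({score})")
```

Label similarity alone is a weak signal, which is exactly why the effort is semi-automated: structural evidence and curator review would filter these proposals.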
|Semantic Web Cluster
ESIP Testbed Task Archive
Below is an archive of past Testbed activities with a short description for each.
|Consuming and Reusing Semantic Geoscience Data
|Geoscience data is an underrepresented component of the Linked Data cloud. Facilitating the publication and consumption of geoscience Linked Data, and promoting ontology reuse, has become a central tenet of the Semantic Web Cluster's long-term plan. This project will provide initial feedback on the benefits of Ontology Design Patterns (ODPs) in geoscience data publication and consumption by analyzing the existing ontologies from the ESIP ontology portal, creating and evaluating an ODP for direct geoscience data access, and finally comparing the ODP-based approach with the World Wide Web Consortium (W3C) approach to publishing, retrieving, and reasoning over large amounts of geoscience data.
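The "consumption" side of Linked Data reduces to querying triples by pattern. The tiny sketch below shows that core operation over an in-memory triple list; the data and prefixed names are made up for illustration, and a real consumer would use a SPARQL endpoint instead.

```python
# Tiny illustration of consuming Linked Data: triples as tuples and a single
# wildcard pattern query (data and "ex:" names are invented for the sketch).
TRIPLES = [
    ("ex:stationA", "ex:observes",  "ex:Temperature"),
    ("ex:stationA", "ex:locatedIn", "ex:Alaska"),
    ("ex:stationB", "ex:observes",  "ex:Salinity"),
]

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What does station A observe?"
for _, _, obj in match(s="ex:stationA", p="ex:observes"):
    print(obj)
```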
|Expert Skills Database
|The Federation collectively includes an exceptionally wide range of expertise among its participating members. These expert skills will be categorized in a knowledge base and offered as a service. We use the master ESIP email list of over 700 names, together with Drupal tools, to enable any member to associate their name with a skill and an expertise level. Currently the skill list consists of 60 information technology (IT) skills, but members can add categories. A GUI enables users to search the skill list by multiple criteria. Ultimate Benefit: promotion of the expert skills available within the Federation.
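The multi-criteria search described above can be sketched as filtering skill records; the field names, members, and levels below are hypothetical, since the actual database schema is not given.

```python
# Sketch of multi-criteria search over a skills knowledge base
# (records, field names, and levels are invented for illustration).
MEMBERS = [
    {"name": "A. Smith", "skill": "Python", "level": 3},
    {"name": "B. Jones", "skill": "Python", "level": 1},
    {"name": "C. Lee",   "skill": "GIS",    "level": 2},
]

def search(skill=None, min_level=0):
    """Return members matching every supplied criterion (None = no constraint)."""
    return [m for m in MEMBERS
            if (skill is None or m["skill"] == skill) and m["level"] >= min_level]

print([m["name"] for m in search(skill="Python", min_level=2)])
```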
|Unique Data Identifiers
|The Preservation and Stewardship Cluster and the NASA Technology Infusion Working Group have been considering permanent identifier schemes for data products (see http://wiki.esipfed.org/index.php/Preservation_and_Stewardship). These identifiers can serve as references in journal articles as well as inventory nodes in data archives, and must include representations for versions of the entity being identified. Many identifier options have been proposed for different kinds of data, but the best choices for Earth science data require careful examination. For example, two datasets may differ only in format, byte order, data type, access method, etc., creating distinctions between them that may not be addressed adequately by identifier schemes used for typical "published" items such as books and journals. Last year's activity included a recommendation on identifier schemes to use for Earth science data, but did not address the implementation issues that arise with the schemes considered. The next task for this work is to examine several different kinds of Federal datasets; assign identifiers from up to nine identifier schemes considered in the previously mentioned paper; evaluate and compare the implementation implications and other practical considerations associated with each identifier scheme; and develop recommendations. Practical considerations may include the need to integrate with other metadata schemes such as ISO, and application to data citation formats and practices.
Ultimate Benefit: Permanent, unique names for Federation data products and recommendations for practice based on testbed experience.
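The format/byte-order problem above is concrete: the "same" values serialized two ways are different byte streams. A content-based identifier (one of many possible schemes, sketched here with a plain SHA-256 hash) distinguishes such variants, where a citation-style identifier issued once per "published" dataset would conflate them.

```python
# Why identifier granularity matters: identical values, different serializations.
# A content hash tells the two apart; this is a sketch, not a recommended scheme.
import hashlib
import json
import struct

values = [1, 2, 3]

as_json   = json.dumps(values).encode()   # text serialization: b"[1, 2, 3]"
as_binary = struct.pack("<3i", *values)   # little-endian int32 serialization

def content_id(blob: bytes) -> str:
    """Toy content-based identifier from the serialized bytes."""
    return "hash:" + hashlib.sha256(blob).hexdigest()[:12]

# Same values, different byte streams -> different content-based identifiers.
print(content_id(as_json) != content_id(as_binary))
```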
|Zhipeng Gui, Qunying Huang, Kai Liu, Jizhe Xia
|Semantic Registration of Data and Services
|The Semantic Web Cluster has been developing ontologies for data services, data types, and science concepts. The testbed enables providers to register their products and services semantically, providing more precise descriptions of their offerings. Ultimate Benefit: better classification and discovery of specialized Federation products and services.
|The Air Quality Working Group has been developing an inventory of air quality data and data services. Other GEOSS Societal Benefit Areas could benefit from a similar capability to highlight offerings from Federation members. For this task, the Air Quality portal has been cloned for use by other application areas. Initially, a Water portal has been developed. Ultimate Benefit: better marketing of targeted Federation products and services.
|Chaowei (Phil) Yang
|Cloud Computing Resource Calculator
|Many scientists and geospatial application providers are considering moving their current computing infrastructure to clouds (IaaS and PaaS); however, selecting the most suitable cloud platforms and configuration solutions is a big challenge for cloud novices and even for experienced cloud users. The Cloud Computing Resource Calculator meets this need by providing an advisory tool for: 1) helping cloud novices understand the basic concepts and potential applications of cloud computing providers, services, and technologies; 2) assisting cloud computing early adopters in easily and effectively selecting the best solutions for their unique application requirements; and 3) periodically collecting/updating information on mainstream cloud platforms and building an expert system and database.
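The advisory logic in item 2 can be sketched as scoring candidate configurations against user requirements. The offerings and prices below are invented for illustration; a real calculator would draw them from the periodically updated platform database described in item 3.

```python
# Illustrative advisory step: pick the cheapest offering that satisfies the
# user's requirements (offerings and prices are made up, not real quotes).
OFFERINGS = [
    {"name": "small",  "cpus": 2,  "ram_gb": 4,   "usd_per_hr": 0.10},
    {"name": "medium", "cpus": 8,  "ram_gb": 32,  "usd_per_hr": 0.40},
    {"name": "large",  "cpus": 32, "ram_gb": 128, "usd_per_hr": 1.60},
]

def recommend(min_cpus, min_ram_gb):
    """Cheapest offering meeting the requirements, or None if nothing qualifies."""
    ok = [o for o in OFFERINGS
          if o["cpus"] >= min_cpus and o["ram_gb"] >= min_ram_gb]
    return min(ok, key=lambda o: o["usd_per_hr"]) if ok else None

choice = recommend(min_cpus=4, min_ram_gb=16)
print(choice["name"] if choice else "no suitable offering")
```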
|Abdelmounaam Rezgui, Zhipeng Gui, Min Sun, Chaowei Yang
|Data and Information Quality
|An automatic classification/annotation system that assesses, monitors, and accurately reports on the quality of ESIP data and services. The project sought to include: (1) a quality model and classification engine that establishes a set of quality metrics for data and services and automatically derives the quality of ESIP products and services; (2) work on metadata quality, which is not usually addressed; and (3) accounting for feedback from users to help rate the quality of data and services.
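One plausible metric from such a quality model is metadata completeness: the fraction of recommended fields that are populated. The field list below is an assumption for the sketch, not the project's actual metric set.

```python
# Sketch of a single quality metric: metadata completeness
# (the recommended-field list is hypothetical).
RECOMMENDED = ["title", "abstract", "spatial_extent", "temporal_extent", "contact"]

def completeness(metadata: dict) -> float:
    """Fraction of recommended fields present and non-empty, in [0, 1]."""
    filled = sum(1 for field in RECOMMENDED if metadata.get(field))
    return filled / len(RECOMMENDED)

record = {"title": "MODIS L3 SST", "abstract": "Monthly sea surface temperature."}
print(f"completeness = {completeness(record):.1f}")
```

A full engine would combine many such metrics (accuracy, lineage, service uptime) with user feedback into an overall rating; completeness is simply the easiest to automate.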
|Open Search and Discovery
|The Discovery cluster provides a medium for Federation members to coordinate on the development, deployment, and creation of interoperable specifications for Discovery services such as OpenSearch, DataCasting, and ServiceCasting. The initial vision of the Discovery Testbed was to support the following items:
The Esri Geoportal Server was used in this case to provide such an interface. References: media:StateOfTheArt_ESIP_Discovery_Testbed-20120307rev1.pdf, http://wiki.esipfed.org/index.php/Discovery_Testbed_Work_Plan, http://18.104.22.168:8080/geoportal
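The OpenSearch convention the cluster coordinates on centers on a description document containing a URL template whose `{parameters}` are filled in per query. The sketch below shows that substitution step; the endpoint URL is hypothetical.

```python
# Filling an OpenSearch URL template (endpoint is hypothetical; parameter
# names searchTerms/startIndex/count come from the OpenSearch 1.1 convention).
from urllib.parse import quote

TEMPLATE = ("http://example.org/opensearch"
            "?q={searchTerms}&start={startIndex}&count={count}")

def fill(template, **params):
    """Substitute URL-encoded values for each {name} placeholder."""
    url = template
    for key, value in params.items():
        url = url.replace("{" + key + "}", quote(str(value)))
    return url

print(fill(TEMPLATE, searchTerms="sea ice", startIndex=1, count=10))
```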
|Greg Janee and Nancy Hoebelheinrich
|The datasets to be addressed will include a relatively simple image collection and a second collection containing granule-level data objects, such as a long time series from multiple sensors/satellites. The project tasks include: (1) preparing, transforming, and performing quality control on the metadata for each dataset in a storage environment that can be queried and appended, to add the identifiers from each scheme to each entity in the two datasets; (2) mapping the existing metadata for each dataset to the metadata requirements of each identifier scheme for the purposes of identification and citation; (3) tracking and discussing the implementation issues associated with each task per the questions previously identified by the Data Stewardship & Preservation cluster (see the initial list on the ESIP wiki at http://wiki.esipfed.org/index.php/Implementation_Issues_to_be_addressed), and others as they arise; (4) bringing implementation issues to the Data Stewardship cluster as needed for discussion and resolution/decision; (5) developing a list of practical considerations for each identifier scheme; and (6) developing a draft set of best practices for discussion at future ESIP Federation meetings.
|Eric Rozell, Tom Narock
|Linked Open Research Data for Earth and Space Science Informatics
|The ability to discover the technical competencies of other researchers in the Earth and Space Science Informatics (ESSI) community can help in the discovery of collaborators. In addition to collaboration discovery, social network information can be used to analyze trends in the field, which will help project managers identify irrelevant, well-established, and emerging technologies and specifications. This information will help keep projects focused on the technologies and standards that are actually being used, making them more useful to the ESSI community. This problem was addressed with a solution involving two components: a pipeline for generating structured data from AGU-ESSI abstracts and ESIP member information, and an API and Web application for accessing the generated data.
|Jerry Yun Pan, Nigel Banks
|Re-usable Metadata Editor
|Develop a generic, reusable software system to facilitate support for multiple metadata standards and their variations. The tool will be flexible and reusable across metadata standards, allowing an administrator to design and tailor the metadata authoring tool/editor to the targeted metadata schema without writing new code. The core of the tool suite consists of two parts: (1) a designing tool for "super" users who are responsible for designing the metadata editors, and (2) a rendering engine that makes use of a pre-made metadata editor definition. The designing tool defines a metadata editor based on user inputs and saves the definition for reuse. The rendering engine makes use of the definitions to facilitate metadata authoring and editing. The "editor-of-editors" is schema driven. The design tool allows for the selection of a subset of a whole schema (a "profile") to form an editor, or the selection of an extension of a schema. The editor definitions can be exported and shared among multiple installations. Ultimate Benefit: a general-purpose metadata authoring and editing tool that is easily shareable across organizations. The code is open source for free use.
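The "editor-of-editors" split above can be sketched as a saved editor definition (the designing tool's output) consumed by a generic rendering step. The definition structure and field names below are assumptions for illustration, not the project's actual file format.

```python
# Sketch of the schema-driven split: a saved editor definition drives a
# generic renderer (definition structure is hypothetical).
EDITOR_DEF = {  # produced by the designing tool, consumed by the rendering engine
    "standard": "ISO 19115 (profile)",
    "fields": [
        {"name": "title",    "label": "Title",    "required": True},
        {"name": "abstract", "label": "Abstract", "required": False},
    ],
}

def render(definition):
    """Render an editor definition as plain-text form lines (stand-in for a GUI)."""
    lines = [f"Editor for {definition['standard']}"]
    for field in definition["fields"]:
        marker = "*" if field["required"] else " "
        lines.append(f"{marker} {field['label']} [{field['name']}]: ____")
    return "\n".join(lines)

print(render(EDITOR_DEF))
```

Because the renderer never hard-codes a schema, supporting a new standard or profile means authoring a new definition, not new code, which is the reuse the project aims for.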