Main Page/Start here

From Federation of Earth Science Information Partners
Revision as of 20:39, 8 September 2014

Test page for the Editor's Roundtable - an Initiative for Best Practices for Data Publication

What is the Editor's Roundtable?

The Editor's Roundtable is a community-based initiative, conceived by editors, publishers, and operators of data facilities at a series of workshops organized by IEDA (Integrated Earth Data Applications) and EarthChem at major scientific conferences. It is an effort to foster and facilitate communication and knowledge exchange among editors and publishers of Earth Science journals as well as data facilities. Our goal is to develop and promote best practices for scholarly publishing, with an emphasis on data publication in support of open access policies.

The Editor's Roundtable builds on a successful initiative, started in 2007 by EarthChem, to develop and promote best practices for the reporting of geochemical data in scholarly articles and data systems. That initiative resulted in the Policy Recommendation "Requirements for the Publication of Geochemical Data" (Goldstein et al. 2014, doi:10.1594/IEDA/100426), which was endorsed by all major scientific journals that publish geochemical data and has since guided policies for the disclosure and documentation of geochemical data.

Goals and Objectives

The Roundtable seeks to develop and promote best practices for:

  1. Archiving and curating data sets;
  2. Setting references and identifiers;
  3. Linking datasets to publications;
  4. Integrating with emerging data citation practices and bibliometrics for data;
  5. Complying with interoperability standards.

Recommended practices for data submission

  • Data Accessibility and Format
 Access to the complete data is a fundamental requirement for the reproducibility of scientific results. 

All NEW geochemical data used in a publication must be made available for future use by:

  1. submission to an accessible, persistent source such as a public database or data archive (for example, personal web sites are not persistent data archives), if it exists for the specific data type, or by
  2. listing the data explicitly in a data table associated with the publication.
 The data must be available in downloadable format. 

For chemical abundance data of samples, elemental or oxide abundance data must be given unless a compelling reason can be provided; elemental abundance ratios are acceptable only if the compositional data do not exist. Isotope ratios are, of course, acceptable.

 Data should be reported in a tabular format. 

Data must always be available as a downloadable file in a format that can be easily converted into spreadsheet format (for example, .csv, .txt). The file should include units for the listed measured values. This means that if a publication contains a data table in the main text or a pdf or image version of the data table as an electronic supplement, the data in the table(s) must also be available in a downloadable form that can be easily converted to a spreadsheet.
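As a minimal sketch of the downloadable-format requirement, the following writes a small data table, with units included in the column headers, to CSV using only the standard library. The sample identifiers and abundance values are made up for illustration:

```python
import csv
import io

# Illustrative data table: oxide abundances with units in the headers.
# Sample names and values are hypothetical, not real measurements.
rows = [
    {"sample": "IGSN:ABC000001", "SiO2 (wt%)": 49.8, "MgO (wt%)": 8.1},
    {"sample": "IGSN:ABC000002", "SiO2 (wt%)": 51.2, "MgO (wt%)": 7.4},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["sample", "SiO2 (wt%)", "MgO (wt%)"])
writer.writeheader()       # header row carries the units for each column
writer.writerows(rows)
csv_text = buf.getvalue()  # plain .csv text, trivially loaded into a spreadsheet
```

The same rows could just as easily be written to a `.txt` file; the point is that the table exists as machine-readable text, not only as a PDF or image.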


  • Data Quality Information
 Proper documentation of data quality is essential.

Proper documentation of data quality is fundamental for comparison of research results and estimation of uncertainty. Authors must provide sufficient information (metadata) about the analytical process and reproducibility of measurement in order that the data quality can be properly evaluated. Correction procedures need to be clearly presented. This information is necessary to allow for scholarly reproduction of the results. Basic metadata such as analytical technique, lab, and values measured on reference materials need to accompany the data. If possible, metadata should be provided in standardized tabular format to facilitate access to this information for editors, reviewers, readers, and data managers.

 Analytical metadata must be provided for each measured parameter. 

If a parameter has been analyzed by more than one method, each method must be documented separately. If possible, this information should be provided in a tabular format.

 General analytical metadata include:
  1. Analytical technique (e.g. ICP, XRF, EMP)
  2. Laboratory (name of department/lab & institution)
  3. Analytical accuracy & reproducibility
a. Name(s) and measured value(s) of (internationally recognized) reference standard(s) measured as unknown sample
b. Estimated uncertainty of reference standard measurement, and, if applicable, number of measurements
 Method specific metadata must include, as appropriate to the method:
  1. Fractionation correction
  2. Standardization (Normalization)
  3. Total procedural blank
  4. Detection limit
  5. Calibration
 The list below identifies some of the metadata sets that are relevant for geochemical data:

I. Bulk Elemental Analyses (e.g. AAS, HPLC, ICPAES, ICPMS, INAA, XRF)

  1. Standardization (Normalization)
  2. Total procedural blank
  3. Detection limit

II. In-situ Elemental Analyses (e.g. EMP, SIMS, LA-ICPMS)

  1. Standardization (Normalization)
  2. Detection limit
  3. Calibration

III. Bulk Isotopic Analyses (e.g. TIMS, MC-ICPMS)

  1. Standardization (Normalization)
  2. Fractionation correction
  3. Total procedural blank
  4. Detection limit

IV. In-situ Isotope Analyses (e.g. SIMS, LA-MC-ICPMS, LA-ICPMS)

  1. Standardization (Normalization)
  2. Fractionation correction
  3. Detection limit
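The recommendation to report analytical metadata in standardized tabular form, one record per measured parameter and method, can be sketched as below. The field names, laboratories, and values are illustrative assumptions, not a prescribed schema:

```python
# Hypothetical metadata records following the general fields listed above:
# technique, lab, reference standard run as unknown, and uncertainty.
# All names and numbers are invented for illustration.
analytical_metadata = [
    {
        "parameter": "SiO2",
        "technique": "XRF",
        "laboratory": "Example Geochemistry Lab, Example University",
        "reference_standard": "BHVO-2",
        "measured_value": 49.6,    # wt%, standard measured as unknown
        "uncertainty_1sd": 0.3,    # estimated reproducibility
        "n_measurements": 10,
    },
    {
        "parameter": "87Sr/86Sr",
        "technique": "TIMS",
        "laboratory": "Example Isotope Lab, Example University",
        "reference_standard": "NBS 987",
        "measured_value": 0.710248,
        "uncertainty_1sd": 0.000012,
        "n_measurements": 25,
    },
]

# A parameter analyzed by more than one method gets one record per method,
# so grouping by parameter recovers the list of techniques used for it.
by_parameter = {}
for rec in analytical_metadata:
    by_parameter.setdefault(rec["parameter"], []).append(rec["technique"])
```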


  • Sample Information

The geochemical data addressed in this policy are tied to samples. Essential information about the samples must be provided in order to allow for proper identification of their origin and type, and to trace their analytical history.

 Sample specific metadata should include, if available:
  1. Sample name or globally unique identifier: globally unique identifiers such as the International Geo Sample Number (IGSN) can be unambiguously referenced to a sample. The IGSN is a globally unique, 9-digit alphanumeric identifier provided and administered by SESAR (System for Earth Sample Registration). It is used together with a person's or institution's sample name to ensure unambiguous identification of a sample. IGSNs can be obtained from SESAR by submitting sample metadata. This allows a complete analytical profile of a sample to be established that includes data generated at different times or in different labs, and reported in different publications.
  2. Sample location: all natural samples for which data are reported require, if possible, information about the sample location, including latitude and longitude (if these are unknown, approximate coordinates obtained by using Google Earth would suffice). Marine samples require a depth below sea level. If applicable, the position of a sample within a stratigraphic section or within a core should be reported.
  3. Sample classification: samples should be classified (e.g. lithology for rocks and sediments, species for minerals and fossils, and age).
  4. Sampling information such as the cruise or field program (if applicable)
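A minimal sketch of a sample metadata record covering the four items above, with an unregistered, purely illustrative IGSN and made-up location and classification values:

```python
# Hypothetical sample record: the IGSN, coordinates, lithology, and
# field-program name below are invented for illustration only.
sample = {
    "igsn": "ABC000001",          # 9-digit alphanumeric IGSN (example format)
    "name": "Field name 07-KG-14",  # person's or institution's sample name
    "latitude": 19.42,
    "longitude": -155.29,
    "elevation_m": 1200,          # marine samples instead need depth below sea level
    "classification": "basalt",   # lithology for rocks
    "sampling_program": "Example 2007 field campaign",
}

def has_required_fields(record):
    """Check the fields this sketch treats as required for identification."""
    required = ("igsn", "latitude", "longitude", "classification")
    return all(record.get(k) is not None for k in required)
```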


Recommended practices for citations

The core required elements of a citation:

  1. Author(s)--the people or organizations responsible for the intellectual work to develop the data set. The data creators.
  2. Release Date--when the particular version of the data set was first made available for use (and potential citation) by others.
  3. Version-- the precise version of the data used. Careful version tracking is critical to accurate citation.
  4. Title-- the formal title of the data set
  5. Archive and/or Distributor-- the organization distributing or caring for the data, ideally over the long term.
  6. Locator/Identifier-- this could be a URL but ideally it should be a persistent service, such as a DOI, Handle or ARK, that resolves to the current location of the data in question.
  7. Access Date and Time-- because data can be dynamic and changeable in ways that are not always reflected in release dates and versions, it is important to indicate when on-line data were accessed.
  Additional fields can be added as necessary to credit other people and institutions. It is also important to provide a scheme for users to indicate the precise subset of the data that was used: this could be the temporal and spatial range of the data, the types of files used, a specific query id, or another description of how the data were subsetted.
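The core elements above can be assembled into a human-readable citation string; the dataclass and function names below are illustrative, not a standard API, and the field values mirror the Cline et al. example given later on this page:

```python
from dataclasses import dataclass

@dataclass
class DataCitation:
    """Core required citation elements (editor fields omitted for brevity)."""
    authors: str
    release_date: str
    title: str
    version: str
    archive: str
    identifier: str   # ideally a persistent resolver URL (DOI, Handle, ARK)
    access_date: str  # when the online data were accessed

def format_citation(c: DataCitation) -> str:
    # Assemble the elements in the order used by the example citation.
    return (f"{c.authors}. {c.release_date}. {c.title} ver. {c.version}. "
            f"{c.archive}. Data set accessed {c.access_date} at {c.identifier}")

cline = DataCitation(
    authors="Cline, D., R. Armstrong, R. Davis, K. Elder, and G. Liston",
    release_date="2002, Updated 2003",
    title="CLPX-Ground: ISA snow depth transects and related measurements",
    version="2.0",
    archive="National Snow and Ice Data Center",
    identifier="http://dx.doi.org/10.5060/D4MW2F23z",
    access_date="2008-05-14",
)
```

Keeping the elements in a structured record rather than a bare string is what makes the citation machine-actionable: the same fields can be rendered in any journal's reference style or exported as metadata.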


Data citation is an evolving but increasingly important scientific practice. We see several important purposes of data citation:

  1. To aid scientific reproducibility through direct, unambiguous reference to the precise data used in a particular study.
  2. To provide fair credit for data creators or authors, data stewards, and other critical people in the data production and curation process.
  3. To ensure scientific transparency and reasonable accountability for authors and stewards.
  4. To aid in tracking the impact of a data set and the associated data center through references in the scientific literature.
  5. To help data authors verify how their data are being used.
  6. To help future data users identify how others have used the data.
The Force 11 Data Citation Principles cover the purpose, function, and attributes of citations. These principles recognize the dual necessity of creating citation practices that are both human-understandable and machine-actionable:

  1. Importance: Data should be considered legitimate, citable products of research. Data citations should be accorded the same importance in the scholarly record as citations of other research objects, such as publications.
  2. Credit and attribution: Data citations should facilitate giving scholarly credit and normative and legal attribution to all contributors to the data, recognizing that a single style or mechanism of attribution may not be applicable to all data. For example, a data citation should provide sufficient information to identify the cited data within the publication's reference list.
  3. Evidence: In scholarly literature, whenever and wherever a claim relies upon data, the corresponding data should be cited; e.g., citations should be in close proximity to the claims relying on the data.
  4. Unique identification: A data citation should include a persistent method for identification that is machine-actionable, globally unique, and widely used by a community.
  5. Access: Data citations should facilitate access to the data themselves and to such associated metadata, documentation, code, and other materials as are necessary for both humans and machines to make informed use of the referenced data.
  6. Persistence: Unique identifiers, and metadata describing the data and its disposition, should persist, even beyond the lifespan of the data they describe.
  7. Specificity and verification: Data citations should facilitate access to the data themselves and to associated metadata, documentation, and code; citation metadata should therefore include additional information that can help identify the specific portion of the data supporting a claim. For example, version or time-slice information should be supplied with any updated or dynamic dataset.
  8. Flexibility and interoperability: Data citation practices should be sufficiently flexible to accommodate variant practices among communities, but should not differ so much that they compromise the interoperability of data citation practices across communities.

Example reference citations:

Author, year, article title, journal, publisher, DOI
Author, year, book title, publisher, ISBN


Example data citations:

Cline, D., R. Armstrong, R. Davis, K. Elder, and G. Liston. 2002, Updated 2003. CLPX-Ground: ISA snow depth transects and related measurements ver. 2.0. Edited by M. Parsons and M. J. Brodzik. National Snow and Ice Data Center. Data set accessed 2008-05-14 at http://dx.doi.org/10.5060/D4MW2F23z
Author, year, dataset title, data repository or archive, version, globally persistent identifier
An example of an in-text data reference: The plots in Figure X show the distribution of selected measures from the main data [author(s), year, portion of subset used]


Existing data facilities

  1. EarthChem
  2. GEOROC
  3. PANGAEA
  4. DataCite
  5. EarthCube: Council of Data Facilities
  6. NAVDAT
  7. GANSEKI
  8. USGS publication warehouse
  9. NASA
  10. NOAA
  11. Smithsonian
  12. NERC data centers, including:
      • Centre for Environmental Data Archival
      • National Geoscience Data Centre


Existing Resources and Guidelines

  1. ESIP Data Citation Guidelines
  2. Force 11 Data Citation Principles
  3. GANSEKI data policy
  4. NERC data policy
  5. USGS
  6. DataCite
  7. CODATA Task Group on Data Citation Standards and Practices in collaboration with the International Council for Scientific and Technical Information and the National Academy of Sciences.
  8. DataVerse Network Project - an approach from the social science community using a Handle locator and “Universal Numerical Fingerprint” as a unique identifier.
  9. Digital Curation Center


Main Publishers for Earth and Environmental Sciences

  1. Elsevier
  2. Wiley
  3. AGU publications
  4. Nature
  5. Science
  6. GSA publications
  7. Springer
  8. eEarth
  9. EGU publications
  10. Oxford Journals
  11. ICDP