P&S Monthly April 25, 2006
- 1 Four Main Quality Dimensions
- 2 Subdimensions of each dimension
- 3 Use of the Quality Assessments
- 4 Building the Assessments Database
- 5 Strategy for getting the assessments entered
- 6 Edits to Discussion from March 28, 2006
- 7 Action
Four Main Quality Dimensions
- instrument accuracy
- spacecraft accuracy
- environmental effects
- data processing
Subdimensions of each dimension
- Focus on end use--how well is it received (by ESIP Cluster, decision maker)
- What words should be used to break each Main Dimension down into a second-tier ten-point scale?
- How much will this depend on the context of the data collection and use?
- Ease of use by decision support systems
- Issues of availability, accessibility, reliability
- Educational ratings may be separate (age appropriateness, curriculum, etc)
One simple set of criteria
I saw the following in reference to astronomical data quality, but it could be equally applicable:
- A = Data are fully calibrated, fully documented, and suitable for professional research.
- B = Data are calibrated and documented, but calibration quality is inconsistent. Users are advised to check data carefully and recalibrate.
- C = Data are uncalibrated.
- U = Data quality is unknown. If a resource does not provide a data quality assessment, class U should be assumed.
Reference: Resource Metadata for the Virtual Observatory, Version 1.01 G. Major
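The A/B/C/U scheme above is simple enough to sketch in code. This is purely illustrative: the class names and descriptions come from the quoted scheme, but the function and its behavior are an assumed implementation, not part of any standard.

```python
from enum import Enum

class CalibrationClass(Enum):
    """Quality classes from the VO resource-metadata scheme quoted above."""
    A = "Fully calibrated, fully documented, suitable for professional research"
    B = "Calibrated and documented, but calibration quality is inconsistent"
    C = "Uncalibrated"
    U = "Data quality is unknown"

def quality_class(declared):
    """Map a declared class letter to a CalibrationClass.

    Per the scheme, a resource with no quality assessment defaults to U.
    """
    if declared is None:
        return CalibrationClass.U
    return CalibrationClass[declared.upper()]

print(quality_class(None).name)  # no assessment provided -> U
print(quality_class("a").name)   # A
```

The useful property of this scheme is the explicit default: absence of an assessment is itself a (lowest) rating, so nothing is silently treated as trustworthy.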
Use of the Quality Assessments
Is it risk or degree of confidence?
- Related to use in making decisions--the decision maker needs to determine the appropriate level of risk aversion, which relates to the cost to a business of making a "bad" decision
Maybe we can provide guides-to-use for these assessments
Might be useful even to have a simple five-star system. This might be all that is needed for one category of user--whereas other areas of research might want much more.
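A minimal sketch of what the simple five-star idea could mean in practice: the headline number is just the mean of user star ratings, rounded to the nearest star. The function and its rounding choice are assumptions for illustration, not a decided design.

```python
def star_rating(ratings):
    """Average a list of 1-5 star ratings into one headline star count."""
    if not ratings:
        return None  # no ratings yet -> no stars claimed
    return round(sum(ratings) / len(ratings))

print(star_rating([5, 4, 4, 3]))  # 4
```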
Building the Assessments Database
- Under what conditions was it tested? Lab, over time, multiple sites and situations
Possible sources:
- Surveys
- Peer review
- Recording voluntary user responses
- Statistics (variability, covariance across studies, Bayesian causality) (Bruce's work)
Strategy for getting the assessments entered
Simple assessments can come from the providers themselves
- How do we come up with incentives and tools to ensure that the provider will (and can) do this?
- Could we use some measure of data quality to determine how aggressively to promote certain products?
- Might be sufficient to simply set the criteria by which a provider receives the ratings (stars or ranking from 1-10)
Assessments might also come from the users
- Might be related to number of responses
- We could simply report numbers of positive and negative comments
- Survey of users: accessibility, usability
- If you require people to comment, you may lose users.
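The "simply report numbers of positive and negative comments" idea above could be as small as a tally. The comment format here is an assumption chosen for illustration.

```python
def tally(comments):
    """Count positive vs negative user comments for a product.

    Assumes each comment is a dict with a "rating" key of
    "positive" or "negative" -- a hypothetical format.
    """
    pos = sum(1 for c in comments if c["rating"] == "positive")
    neg = sum(1 for c in comments if c["rating"] == "negative")
    return {"positive": pos, "negative": neg}

print(tally([{"rating": "positive"},
             {"rating": "negative"},
             {"rating": "positive"}]))  # {'positive': 2, 'negative': 1}
```

Reporting raw counts rather than a single score sidesteps the question of how to weight comments, at the cost of leaving interpretation to the user.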
Survey of ESIPs
- might even pay for people to review
Data access vs reliability/quality issues
- Federation might only talk about reliability and leave quality to provider
Role of the Clusters
- This may be the driver for ESIP participation
- The Societal Benefit Clusters might take over the rating in each domain
Edits to Discussion from March 28, 2006
Create a common set of data quality metrics across all Federation data products. Data providers can supply measures for their own products, and 3rd parties can provide their own ratings. Quality can refer to accuracy, completeness, and consistency; it is not yet clear how to measure consistency. It is desirable to provide quality assurance.
We would like to create a 1-10 data quality scale, where:
- 1 = no accuracy claimed
- 10 = fully reliable data that has withstood the test of time
This measure can be applied to any of the quality dimensions:
- Sensor/Instrument (well calibrated, stable, checked across instruments, V&V)
- Spacecraft (locational and communication accuracy)
- Environment Issues (contamination from clouds, rainfall, ground, sea, dirt, etc.)
- Data Processing (accuracy of interpolation, algorithms, ancillary source data)
Create a 1-10 scale for each dimension. We will work with Federation members to associate a quality description with each value.
- Quality assurance (someone tags it as valid)
- Useful metadata provided?
- Instrument Verification and Validation
- Data processing
- Re-processing tag and notification
- input errors and forcings
- missing data
- Usage issues
- High enough resolution?
- Valid inference about what is measured
- Chain of Custody (for legal use)
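A hypothetical sketch of what a per-dimension assessment record might look like, using the four dimensions and the 1-10 scale from the notes above. The function name and validation behavior are assumptions, not an agreed design.

```python
# Dimension names taken from the notes; everything else is illustrative.
DIMENSIONS = ("sensor/instrument", "spacecraft", "environment", "data processing")

def make_assessment(scores):
    """Return a validated record mapping each dimension to a 1-10 score.

    1 = no accuracy claimed; 10 = fully reliable, time-tested data.
    A product need not be scored on every dimension.
    """
    record = {}
    for dim, score in scores.items():
        if dim not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dim}")
        if not (isinstance(score, int) and 1 <= score <= 10):
            raise ValueError(f"{dim}: score must be an integer 1-10, got {score}")
        record[dim] = score
    return record

print(make_assessment({"sensor/instrument": 8, "data processing": 6}))
```

Keeping the dimensions separate (rather than averaging them into one number) preserves the point made earlier: different user communities may care about different dimensions.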
Completeness: can we come up with categories of data completeness?
3rd party ratings
- NCDC Certified data (only states that it is in the archive -- designates as official, not a quality statement)
- Dataset docs use FGDC quality section, with different levels of detail
- DIF records have some minimum required fields to accept
  - then have a text field to describe quality
- "measured parameters" from ECS model
  - QA percent cloud cover; missing pixels
- CLASS/Climate Data Record
  - Maturity Model approach for data (John Bates's application from software maturity)
  - Level of maturity (five levels of improved treatment)
  - See CDR Maturity paper
  - Whole section on quality, text only
- Peer review
  - Is this a measure of quality?
  - Depends on the stated offering from the provider; e.g., if they claim the data are complete and they are not
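The "missing pixels" QA field mentioned above suggests one crude, computable completeness metric. This sketch is illustrative only; representing missing pixels as `None` is an assumption, and real products would use format-specific fill values.

```python
def percent_missing(pixels):
    """Percentage of missing pixels in a flat pixel list.

    An empty product is treated as 100% missing, matching the
    earlier principle that absent data gets the worst rating.
    """
    if not pixels:
        return 100.0
    missing = sum(1 for p in pixels if p is None)
    return 100.0 * missing / len(pixels)

print(percent_missing([0.2, None, 0.4, None]))  # 50.0
```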
Assertions about datasets
We may want some standard for claiming and measuring how valid a claim may be
- What common data quality standards can the Federation offer within the Earth Information Exchange?
- How can we enforce these standards within the Earth Information Exchange?
- Are there similar ratings for "data services"?
Rob will send an advertisement to the whole group for next month's meeting.