AQ 2007 10 31 Discussion

back to 2007-10-31 Workshop page


What is the particular niche for this group?

Comparison to science meeting

  • Was this more like a science meeting?
  • Can we segment topics so all parties participate in all discussions?
  • Is there a way to capture all the discussion that happened around presentations?

Special opportunities to build "knowledge base"

  • Journal articles may not capture all available discussion
  • Could we test tools that would allow us to elaborate on topics and apportion their significance? Perhaps we could use a wiki for this.
  • The comment was made that our measurements are "unstable", which elicited the response, "Then what are we doing here?" How do we avoid discrediting all our data with details? These satellite differences are certainly significant; how can we state that while still questioning the details of our understanding?

Opportunity to reach across communities

  • Clearly an advantage of the ESIP Federation is the opportunity to put technology people together with data people, scientists, educators, and applied science people. How can we take best advantage of this opportunity?
  • It might be that we could use discussion tools to help create an effective knowledge base that communicates appropriately to different audiences. Policy makers need to see that there is a human effect, not that there is still some question about which effects are most significant.

Data/tool Decision Tree

  • Wonder if we could set up a table comparing products (a minimal sketch follows below).
  • Might organize it by data source, visualization tool, processing tool, etc.
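
A minimal sketch, in Python, of how such a product-comparison table might be represented. The field names and the single placeholder entry are illustrative assumptions, not an agreed schema or real product recommendations:

  # Hypothetical comparison table: one record per data product or tool.
  # Field names (source, visualization_tool, processing_tool) are assumptions.
  from dataclasses import dataclass

  @dataclass
  class ProductEntry:
      name: str                # product or tool name
      source: str              # originating data source or archive
      visualization_tool: str  # tool commonly used to view it
      processing_tool: str     # tool commonly used to process it
      notes: str = ""          # caveats, e.g. known registration issues

  # Placeholder entry only; values are not real recommendations.
  table = [
      ProductEntry(
          name="example NO2 column product",
          source="satellite archive (placeholder)",
          visualization_tool="viewer of choice (placeholder)",
          processing_tool="regridding/compositing tool (placeholder)",
          notes="verify against ground data before policy use",
      ),
  ]

  # Filtering along one axis, e.g. all entries from a given source:
  matches = [entry for entry in table if "satellite" in entry.source]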

Usage considerations

Legal

  • EPA always has to defend its judgments in court
  • Courts need a "preponderance of evidence" or proof "beyond a reasonable doubt"
    • This is quite different from 95% statistical certainty.

Science

  • Needs detailed information about sources and models used in "correcting" data

Education

  • Special considerations for "real-time" data
  • Special considerations for hiding and introducing complexity

Data lineage tracking

  • Do we need to coordinate conventions for tracking provenance, even in readme files?
  • How do we track the sources and magnitude of variance within and across datasets throughout the processing chain? (A minimal sketch of one approach follows this list.)
    • Take as an example the case where the x and y variances are not the same.
    • Which processing tools compound error? How do we account for it?
  • Can we list spurious sources of variance that must be taken into consideration as we visualize or composite datasets?
    • Cloud cover
    • Registration issues
    • Instrument issues
      • resolution
      • interference (NOx)
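
A minimal sketch, in Python, of one way to carry provenance and an error budget alongside a dataset as it moves through the processing chain. The record structure and the independent-error assumption (variances add when error sources are independent) are illustrative, not an agreed convention:

  import math
  from dataclasses import dataclass, field

  @dataclass
  class Provenance:
      source: str                                     # original data source
      steps: list = field(default_factory=list)       # processing steps applied, in order
      variances: dict = field(default_factory=dict)   # named error sources -> variance

      def add_step(self, name: str, variance: float = 0.0):
          # Record a processing step and any variance it introduces.
          self.steps.append(name)
          if variance:
              self.variances[name] = variance

      def total_sigma(self) -> float:
          # Combined 1-sigma uncertainty, ASSUMING independent, additive errors:
          # sigma_total = sqrt(sum of variances). Correlated errors would need
          # a covariance treatment instead.
          return math.sqrt(sum(self.variances.values()))

  # Illustrative use; the numbers are placeholders, not real instrument figures.
  prov = Provenance(source="hypothetical NO2 column product")
  prov.add_step("cloud screening", variance=0.10)
  prov.add_step("regridding/registration", variance=0.05)
  prov.add_step("compositing with a second dataset", variance=0.20)
  print(prov.steps, round(prov.total_sigma(), 3))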

Data quality considerations

  • What is the best way to determine "best available evidence"?
    • How do we know if a remote sensing product has been verified with ground data, and under conditions comparable to its intended use?
  • How and why might we tag a dataset as "bad data"? (A minimal tagging sketch follows below.)
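
A minimal sketch of how such a quality tag might be recorded as metadata; the flag values and fields are assumptions made for illustration, not an existing standard:

  from dataclasses import dataclass
  from enum import Enum

  class QualityFlag(Enum):
      GOOD = "good"
      SUSPECT = "suspect"   # usable, but with documented caveats
      BAD = "bad"           # should not be used for analysis

  @dataclass
  class QualityTag:
      flag: QualityFlag
      reason: str                      # why the flag was assigned
      validated_against_ground: bool   # checked against in-situ data?

  # Illustrative example only:
  tag = QualityTag(
      flag=QualityFlag.SUSPECT,
      reason="not yet verified against ground monitors under comparable conditions",
      validated_against_ground=False,
  )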

Use Case

How much NO2 is man-made?

  • How would we provide data to justify the statement, "Man-made emissions of nitrogen oxides dominate total emissions"? (A minimal sketch of the underlying arithmetic follows below.)
    • See http://www.apis.ac.uk/overview/pollutants/overview_NOx.htm
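
A minimal sketch of the arithmetic such a justification would rest on: compare anthropogenic NOx emission totals against natural totals from a documented inventory (such as the source linked above) and compute the man-made share. The category names are typical examples, and the zero placeholders must be replaced with real, cited inventory values:

  # Hypothetical inventory totals; units and values are placeholders only.
  anthropogenic = {"road transport": 0.0, "power generation": 0.0, "industry": 0.0}
  natural = {"soils": 0.0, "lightning": 0.0, "wildfires": 0.0}

  total_anthropogenic = sum(anthropogenic.values())
  total_natural = sum(natural.values())
  total = total_anthropogenic + total_natural

  # The statement "man-made emissions dominate" is justified when this fraction
  # exceeds 0.5, with the inventory and its uncertainties cited as evidence.
  man_made_fraction = total_anthropogenic / total if total else float("nan")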

Discuss Earnie's Thessolonika slide