References Cited
1. Husar, R.B., et al. The Asian Dust Events of April 1998. J. Geophys. Res. Atmos. 106, 18317-18330, 2001. Event website: http://capita.wustl.edu/Asia-FarEast/
2. Wayland, R.A.; Dye, T.S. AIRNOW: America's Resource for Real-Time and Forecasted Air Quality Information. Environmental Manager, September 2005, 19-27.
3. {NAAMS} National Ambient Air Monitoring Strategy.
4. {TG2005} Thorvaldsson, H.D.; Goldman, K.J. Architecture and Execution Model for a Survivable Workflow Transaction Infrastructure. Washington University Department of Computer Science and Engineering, Technical Report TR-2005-61, December 2005.
5. {GSMAS1995} Goldman, K.J.; Swaminathan, B.; McCartney, T.P.; Anderson, M.D.; Sethuraman, R. The Programmers' Playground: I/O Abstraction for User-Configurable Distributed Applications. IEEE Transactions on Software Engineering, 21(9):735-746, September 1995.
6. {SGM2005} Pallemulle, S.L.; Goldman, K.J.; Morgan, B.E. Supporting Live Development of SOAP and CORBA Servers. In Proceedings of the 25th IEEE International Conference on Distributed Computing Systems (ICDCS'05), pages 553-562, Washington, DC, 2005.
7. {PEMDG2000} Parwatikar, J.K.; Engebretson, A.M.; McCartney, T.P.; Dehart, J.D.; Goldman, K.J. Vaudeville: A High Performance, Voice Activated Teleconferencing Application. Multimedia Tools and Applications, 10(1):5-22, January 2000.
8. Husar, R.; Poirot, R. DataFed and Fastnet: Tools for Agile Air Quality Analysis. Environmental Manager, September 2005, 39-41.

Biographical Sketches
Project Summary
[VERY ROUGH] The research and management of air quality is addressed by several diverse communities. Pollutant emissions are determined by environmental engineers, atmospheric transport and removal processes are mainly in the domain of meteorologists, pollutant transformations are the purview of atmospheric chemists and air quality analysts, and the effects of air pollution are assessed by health scientists, ecologists, and economists. The most complex and intertwined multi-disciplinary linkage is between atmospheric chemistry and meteorology. The goal of this project is to enhance the link between these communities through an effective cyberinfrastructure. In air quality, a recently developed cyberinfrastructure (DataFed) provides access to over 30 distributed AQ datasets (emissions, concentrations, depositions) and web-based processing tools, and it has been shown to support both research and AQ management. In meteorology, the extant infrastructure, spearheaded by Unidata and its community, supports research, teaching, and decision-making by matching end-user tools to the needed observational data.
Unfortunately, there is a significant gap between the meteorological and air quality communities, particularly when it comes to sharing data and tools. Thus, AQ researchers and analysts wishing to combine the data face considerable hurdles, especially where the needed data involve aggregations of observed and simulated information from multiple sources. Data filtering, aggregation and fusion are becoming more difficult as end-user needs increase in complexity, data volumes grow, and data types evolve. Consequently, cross-community data use between air quality and meteorology is very low, because the above difficulties are greatly compounded by interdisciplinary variations in tool usage and the attendant data syntax and semantics. The AQO will leverage, augment and integrate DataFed and Unidata in a prototype cyberinfrastructure component that better serves researchers, decision-makers and teachers of air quality, meteorology, and related fields by overcoming the listed difficulties. The research team at Washington University and Unidata has decades of experience in building information infrastructures and applying them to air quality analysis, meteorology, environmental engineering and computer science.
The underpinning for these advances will be an end-to-end system that exhibits advanced functional and cyberinfrastructure design. The functional design of the system will incorporate responding to AQ events in real time, delivering needed information to decision makers (AQ managers, the public) and overcoming syntactic and semantic impedances. The cyberinfrastructure design of the AQO will accommodate a variety of data sources, respond to different types of AQ events, offer simplicity via a Common Data Model, foster interoperability through standard protocols, provide user-programmable components for data filtering, aggregation and fusion, employ a service-oriented architecture for linking loosely coupled web services, and facilitate the creation of user-configurable monitoring consoles. The software application framework and the prototype will also serve as a test bed for advanced computer science ideas on ... The AQO prototype will demonstrate these features through use cases involving active researchers, analysts and managers of air quality: (1) quantifying natural and anthropogenic pollution transport from Asia, Africa and Central America to the US; (2) forecasting, observing and responding to exceptional air quality events such as forest-fire smoke, windblown dust and regional pollution; (3) analyzing and explaining the Midwestern Nitrate Anomaly.
Intellectual merit. The overarching intellectual and technological contribution of this project is to advance cross-community interoperability among cyberinfrastructure components that are critical in contemporary environmental observation. This includes semantic homogenization of diverse environmental data models, the development of a mediated peer-to-peer service-based processing architecture and the ?? of robust, evolvable distributed systems.
Broader Impact. The AQO is expected to have broader impact in several areas. Builders of cyberinfrastructure will benefit from the infusion of novel web service architectures, a robust distributed application framework and technologies for semi-automatic service-wrapping of legacy data. The observatory will support federal and state AQ managers in performing status and trend analysis, managing exceptional events and assessing the effectiveness of the existing monitoring networks. The AQO will also contribute to general atmospheric science by providing real-time data and tools for international field studies and by aiding chemical model evaluation and data assimilation. AQO will also be a near-term application of GEOSS.
[[[KJG: NSF requires separate identifiable sections in the project summary, in addition to within the proposal, on 'intellectual merit' and 'broader impact']]]
Intellectual merit: [[[need summary]]]
Broader impact: [[[need summary]]]
NSF and Related Projects
CAPITA NSF small ITR 2001-2004
Unidata NSF projects?
Together 2-4 paragraphs??
Intellectual and Technical Merit
The overarching technological contribution of this project is to advance cross-community interoperability among cyberinfrastructure components that are critical in contemporary environmental observation. The tangible outcomes will include a prototype observatory that provides genuine end-to-end services needed by two distinct communities of users and simultaneously advances the state of the art in designing observatories for multidisciplinary communities of users. Each of the communities participating in this study has operational systems that will be leveraged to create the prototype, but the marriage of their systems presents significant design challenges within which to study important interoperability questions.
Specifically, the joining of the air-quality and meteorology communities will require (1) effective global access to distinct but overlapping, heterogeneous data streams and data sets; (2) use of these data in distinct but overlapping sets of tools and services, to meet complex needs for analysis, synthesis, display and decision support; and (3) new combinations of these data and (chained) services such as can be achieved only in a distributed, service-oriented architecture that exhibits excellence of functional and technical design, including means to overcome the semantic differences that naturally arise when communities develop with distinct (though overlapping) motivations and interests.
Interoperable Data Access Methods. Regarding effective global access to heterogeneous data streams and data sets, the contributions of this research will advance the state of interdisciplinary data use. Specific emphasis will be placed on mediating access to diverse types of data from remote and in-situ observing systems, combined with simulated data from sophisticated, operational forecast models. The observational sources will include satellite- and surface-based air quality and meteorological measurements, emission inventories and related data from the remarkably rich arrays of resources presently available via Unidata and DataFed.
The tangible output of this research component will be an extended Common Data Model (including the associated metadata structures), realized in the form of (interoperable) Web services that will meet the data-access needs of both communities and that can become generally accepted standards.
Interoperable Data-Processing Services. Interoperability among Web services at the physical and syntactic level is, of course, assured by the underlying Internet protocols, though semantic interoperability is not. In the case of SOAP-based services, WSDL descriptions permit syntax checking, but higher-level meanings of data exchange are inadequately described in the schema. Hence, SOAP-based services developed by different organizations for different purposes are rarely interoperable in a meaningful way. The research contribution on this topic will include--within key contexts for environmental data use--development of Web-service adapters that provide loosely coupled, snap-together interfaces for Web services created autonomously in distinct communities.
Distributed Applications. The Service Oriented Architecture (SOA) movement, among others, indicates ongoing intellectual interest in the (unmet) challenges of distributed computing. Our team’s experience with SOA in recent years has demonstrated that useful applications can be built via Web-service chaining, but our current prototypes--including DataFed--operate within a context where service interoperability is assured by internal, community-specific conventions. As the AQO evolves into a fully networked system, distributed applications built upon its infrastructure will need to be robust, evolvable and linkable to the system of systems (Ken Goldman) that is the Web.
This project will advance understanding about distributed, service-oriented architectures in the presence of semantic impedance. Particular emphasis will be placed on designing for simplicity and extensibility through abstract data typing, service adaptors and polymorphism. Each of these builds upon the above paragraphs and leverages prior work as follows:
- Abstract Data Typing - A crucial factor in the successes of Unidata and DataFed has been the generalization of data access methods through the use of data models. Note, for example, Unidata's development of the netCDF data model in 1988, which deeply informed much follow-on work, such as OpenDAP. Such models are equivalent to very high-level abstract data types, and they set the stage for syntactically and, potentially, semantically type-checked interfaces and interoperability.
- Service Adaptors - Though theoretically possible, it is entirely unrealistic to expect data providers or the builders of tools and services to adopt high-level data abstractions or retrofit their data-access interfaces to use them. Such expectations are especially unrealistic with the passage of time, as new users create new applications of older data and tools. The approach envisioned in this project is to make extensive use of Web services that function as data wrappers, transformers, aggregators and so forth, each of which exploits data abstraction as a framework in which to project one data type onto another, i.e., to perform semantic impedance matching along with other useful data-synthesis functions.
- As described above (see Interoperable Data-Processing Services), prototype service adaptors will be among the outcomes of this project. These will leverage prior work, especially THREDDS and GALEON, where experience has been gained, for example, in
- creating virtual aggregations of data from distributed servers;
- cross-projecting data types and coordinate systems to yield semantic interoperability between the 4-dimensional atmospheric modeling community and the GIS community.
- Polymorphism - [planned to be written by Dave Fulker (Dave.Fulker)]
Significant simplicity is gained through this design approach. If the air-quality and meteorology communities jointly have interest in applying N distinct classes of tools and services to M distinct classes of data, then a straightforward approach to universal access and usability requires order NxM code-development efforts. In contrast--though the challenges of abstraction and polymorphism are great--the intended outcome of this project is a prototype that requires order N+M code-development efforts, roughly one each for creating service adaptors that project the semantics of a given data type, tool or service onto the Common Data Model.
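To make the N+M argument concrete, the sketch below illustrates the intended pattern with hypothetical Java interfaces (the type and method names are illustrative, not DataFed's or Unidata's actual APIs): tools are written once against a common-data-model interface, and each data source needs only a single adapter.

 // Hypothetical sketch of the N+M adapter pattern; all names are illustrative.
 import java.util.Collections;
 import java.util.List;

 /** The role played by the Common Data Model: one abstract view of any dataset. */
 interface CommonDataset {
     /** Query by a latitude-longitude window and a time range (ISO 8601 strings). */
     List<double[]> query(double west, double south, double east, double north,
                          String startTime, String endTime);
 }

 /** One adapter per data source (M adapters for M sources). */
 class AirnowPointAdapter implements CommonDataset {
     public List<double[]> query(double w, double s, double e, double n, String t0, String t1) {
         // ...translate the uniform request into the AIRNOW-specific access call...
         return Collections.emptyList();
     }
 }

 class ForecastGridAdapter implements CommonDataset {
     public List<double[]> query(double w, double s, double e, double n, String t0, String t1) {
         // ...subset the gridded forecast output for the same space-time window...
         return Collections.emptyList();
     }
 }

 /** One binding per tool (N bindings for N tools); every tool then works with every source. */
 class TimeSeriesPlotter {
     void plot(CommonDataset source) {
         List<double[]> values = source.query(-125, 24, -66, 50, "2005-09-01", "2005-09-30");
         // ...render the values; the plotter never sees source-specific formats...
     }
 }

With this structure, adding a new data source (one more adapter) immediately serves all N tools, and adding a new tool immediately works with all M sources.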
Broader Impact
The Air Quality Observatory, through its technologies and applications, will have broader impact on the evolving cyberinfrastructure, air quality management and atmospheric science.
Impact on Cyberinfrastructure
Infusion of Web Service Technologies. The agility and responsiveness of the evolving cyberinfrastructure is accomplished through loose coupling and user-driven dynamic rearrangement of its components. Service orientation and web services are the key architectural and technological features of the AQO. The proposing team has applied these new paradigms for several years, generating applications and best-practice procedures that use these new approaches. Through collaborative activities, multi-agency workgroups and formal publications, the web-service-based approach will be infused into the broader earth science cyberinfrastructure. [support letter?]
Technologies for Wrapping Legacy Datasets. The inclusion of legacy datasets in the cyberinfrastructure necessitates wrapping them with formal interfaces for programmatic access, i.e. turning data into services. In the course of developing these interfaces to a wide variety of air quality, meteorology and other datasets, the proposing team has developed an array of data wrapping procedures and tools. Wide distribution of these wrappers will assure the rapid growth of the content shareable through the cyberinfrastructure and the science and societal benefits resulting from the "network effect".
Common Data Models for Multiple Disciplines. The major resistance to the horizontal diffusion of data through the cyberinfrastructure arises from the variety of physical and semantic data structures used in earth science applications. Common data models are emerging that allow uniform queries and standardized, self-describing returned data types. Through the development, promotion and extensive application of these common, cross-disciplinary data models, the AQO will contribute to interoperability within the broader earth science community.
Impact on Air Quality Management
Federal and State Air Quality Status and Planning. DataFed has already been used extensively by federal and state agencies to prepare status and trend analysis and to support various planning processes. The new air quality observatory with the added meteorological data and tools will more effectively ???
Exceptional Air Quality Events. AQ management is becoming increasingly responsive to the detection, analysis and management of short-term events. The combined DataFed-Unidata system and the extended cyberinfrastructure of the AQO will be able to support these activities with increased effectiveness through the just-in-time delivery of actionable knowledge to decision makers in AQ management organizations as well as to the general public. [support letter?]
Monitoring Network Assessment. A current revolution in remote and surface-based sensing of air pollutants is paralleled by a bold new National Ambient Air Monitoring Strategy {NAAMS}. The effectiveness of the new strategy will depend heavily on cyberinfrastructure for data collection, distribution and analysis for a variety of applications. The cyberinfrastructure will also be needed to assess the overall effectiveness of the monitoring system, which now includes data from multiple agencies, disciplines, media and global communities. [support letter?]
Impact on Atmospheric Science and Education
Chemical Model Evaluation and Augmentation. Dynamic air quality models are driven by emissions data and/or scenarios and by a module that includes air chemistry and meteorology to calculate the source-receptor relationships. The chemistry models themselves can be embedded in larger earth system models, and they can serve as inputs into models for health, ecological and economic effects. The air quality observatory will provide homogenized data resources for model validation and also for assimilation into advanced models. A good example is the assimilation of satellite-based smoke emission estimates into near-term forecast models. [support letter?]
International Air Chemistry Collaboration. A significant venue for advancing global atmospheric chemistry is international collaborative projects that bring together the global research community to address complex new issues such as intercontinental pollutant transport. The AQO will be able to support these scientific projects with real-time, global-scale data resources, user-configurable processing chains and user-defined virtual dashboards. [support letter?]
Near-Term Application of GEOSS. A deeper understanding of the earth system is now being pursued by the Global Earth Observation System of Systems (GEOSS, ref), which includes the cooperation of over sixty nations. Air quality was identified as one of the near-term opportunities for demonstrating GEOSS through real examples. The AQO prototype can serve as a test bed for GEOSS demonstrations. [support letter?]
Long Term Sustainability.
Project Description: Air Quality Observatory (AQO) - Rough!
Introduction
Traditionally, air quality analysis was a slow, deliberate investigative process occurring months or years after the monitoring data had been collected. Satellites, real-time pollution detection and the World Wide Web have changed all that. Analysts and managers can now observe air pollution events as they unfold. They can 'congregate' through the Internet in ad hoc virtual work-groups to share their observations and collectively create the insights needed to elucidate the observed phenomena. Air quality analysis is becoming much more agile and responsive to the needs of air quality managers, the public and the scientific community. In April 1998, for example, a group of analysts keenly followed and documented on the Web, in real time, the trans-continental transport of Asian dust from the Gobi desert {Husar et al., 2001} and its impact on air quality over the Western US, and provided a real-time qualitative explanation of the unusual event to managers and to the public. The high value of qualitative real-time air quality information to the public is well demonstrated by EPA's successful AIRNOW program {Wayland and Dye, 2005}.
In recent years, the air quality management process has also changed. The old command-and-control style is giving way to a more participatory approach that includes the key stakeholders and encourages the application of more science-based 'weight of evidence' approaches to controls. Air quality regulations now emphasize short-term monitoring, while at the same time long-term air quality goals are set to glide toward 'natural background' levels over the next decades. In response to these and other developments, EPA has undertaken a major redesign of the monitoring system that provides the main sensory data input for air quality management. The new National Ambient Air Monitoring Strategy {NAAMS}, through its multi-tier integrated monitoring system, is geared to provide more relevant and timely data for these complex management needs. The data from surface-based air pollution monitoring networks now routinely provide high-grade spatio-temporal and chemical patterns throughout the US for the most serious air pollutants, fine particles (PM2.5) and ozone. Satellite sensors with global coverage and kilometer-scale spatial resolution now provide real-time snapshots that depict the pattern of haze, smoke and dust in stunning detail, and new sensors also show the pattern of gaseous compounds such as ozone and nitrogen dioxide. The generous sharing of data and tools now leads to faster knowledge creation through collaborative analysis and management. The emergence of a new cooperative spirit is exemplified by the Global Earth Observation System of Systems (GEOSS, with over 60 member nations), where air quality is identified as one of the near-term opportunities for demonstrating the benefits of GEOSS.
Information technologies offer outstanding opportunities to fulfill the information needs for the new agile air quality management system. The ‘terabytes’ of data from these surface and remote sensors can now be stored, processed and delivered in near-real time. The instantaneous ‘horizontal’ diffusion of information via the Internet now permits, in principle, the delivery of the right information to the right people at the right place and time. Standardized computer-computer communication protocols and Service-Oriented Architectures (SOA) now facilitate the flexible processing of raw data into high-grade ‘actionable’ knowledge.
The increased data supply and the demand for higher-grade AQ information products present a grand challenge for both the environmental science and information science communities. From an environmental science and engineering point of view, air quality is a highly multidisciplinary topic that includes air chemistry, atmospheric physics, meteorology, health science, ecology and others. The range of data needed for analysis and interpretation is now much richer, including high-resolution satellite data on PM concentrations, emissions, meteorology, and effects. Meteorological and air quality simulation and forecast models now also require more input verification and augmentation. The "data deluge" problem is especially acute for analysts interested in aerosol pollution, since aerosols are so inherently complex and since there are so many different kinds of relevant data.
The AQ data need to be 'metabolized' into higher-grade knowledge by the AQ analysis systems, but the value-adding chain that turns raw AQ data into 'actionable knowledge' for decision making consists of many steps, including human 'processors'. The data processing nodes are distributed among different organizations (EPA, NOAA, NASA, regional and state agencies, academia, etc.), each organization being both a producer and a consumer of AQ-related information. The system must deliver relevant information to a broad range of stakeholders (federal, state, local, industry, international). Furthermore, the type of data, the level of aggregation and filtering, and the frequency at which sensory data are provided to the air quality management system differ greatly depending on whether they are applied to policy, regulatory or operational decisions. The information technology infrastructure needs to support both real-time, 'just-in-time' data analysis and traditional in-depth post-analysis.
While the current AQ science and management systems do work, their efficiency and effectiveness are hampered by marginal support from a suitable information flow infrastructure. [stove-pipes]
Air Quality Observatory to the rescue!!!
The goal of this project is to build an infrastructure to support the science, management and education related to air quality. This goal is to be achieved through an Air Quality Observatory based on a modular, service-based infrastructure. By making many spatio-temporal data sources available through a single web interface and in a consistent format, the DataFed and Unidata tools allow anyone to view, process, overlay, and display many types of data to gain insight into atmospheric physical and chemical processes.
A goal of the Observatory is to encourage use of these tools by a broad community of air pollution researchers and analysts, so that a growing group of empowered analysts may enhance the rate at which our collective knowledge of air pollution grows. The current challenge is to incorporate the support of the AQO into the air quality management process in a more regular and robust way.
A particular goal is to develop and demonstrate the benefits of a mid-tier cyberinfrastructure that can benefit virtually all components of the air quality information system: the data producers, processors, human refiners, and the knowledge-consuming decision makers. ....Internet II, cyber initiatives at NSF, NASA, NOAA, EPA as well as industry.... [from info stovepipes to open networking]
Interoperability Infrastructure
Current Interoperability Infrastructure
Conceptual : [CDM], [DataFed-Box], [Distr. Computing]
The Common Data Model (CDM) is a unification of the data models of OpenDAP, netCDF, and HDF5.
Implementation/Testing: [THREDDS], [DataFed ], [Distr. Computing]
DataFed is an infrastructure for real-time integration and web-based delivery of distributed monitoring data. The federated data system, DataFed (http://datafed.net), aims to support air quality management and science by more effective use of relevant data. Building on the emerging pattern of the Internet itself, DataFed assumes that datasets and new data processing services will continue to emerge spontaneously and autonomously on the Internet, as shown schematically in Figure 1. Example data providers include the AIRNOW project, modeling centers and the NASA Distributed Active Archive Centers (DAAC).
DataFed is not a centrally planned and maintained data system but a facility to harness the emerging resources through powerful dynamic data integration technologies and a collaborative federation philosophy. The key roles of the federation infrastructure are to (1) facilitate registration of the distributed data in a user-accessible catalog; (2) ensure data interoperability based on the physical dimensions of space and time; (3) provide a set of basic tools for data exploration and analysis. The federated datasets can be queried by simply specifying a latitude-longitude window for spatial views, a time range for time views, etc. This universal access is accomplished by 'wrapping' the heterogeneous data, a process that turns data access into a standardized web service, callable through well-defined Internet protocols.
The result of this ‘wrapping’ process is an array of homogeneous, virtual datasets that can be queried by spatial and temporal attributes and processed into higher-grade data products.
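As a minimal sketch of what such a wrapper does internally (the table and column names below are hypothetical, not DataFed's actual implementation), a point-data wrapper can translate the uniform space-time query into a backend-specific request, here a SQL query against a monitoring database:

 // Hypothetical point-data wrapper: projects a uniform space-time query onto a SQL backend.
 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.Timestamp;

 class PointDataWrapper {
     private final Connection db;

     PointDataWrapper(Connection db) { this.db = db; }

     /** Uniform query exposed by the wrapper service: lat-lon window plus time range. */
     ResultSet query(double west, double south, double east, double north,
                     Timestamp start, Timestamp end) throws Exception {
         String sql = "SELECT site_id, lat, lon, obs_time, pm25 FROM observations "
                    + "WHERE lon BETWEEN ? AND ? AND lat BETWEEN ? AND ? "
                    + "AND obs_time BETWEEN ? AND ?";
         PreparedStatement stmt = db.prepareStatement(sql);
         stmt.setDouble(1, west);  stmt.setDouble(2, east);
         stmt.setDouble(3, south); stmt.setDouble(4, north);
         stmt.setTimestamp(5, start); stmt.setTimestamp(6, end);
         return stmt.executeQuery(); // rows are then reformatted into the standard return type
     }
 }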
WCS GALEON here
The Service Oriented Architecture (SOA) of DataFed is used to build web applications by connecting web service components (e.g. services for data access, transformation, fusion, rendering, etc.) in Lego-like assembly. The generic web tools created in this fashion include catalogs for data discovery, browsers for spatial-temporal exploration, multi-view consoles, animators, multi-layer overlays, etc. (Figure 2).
A good illustration of the federated approach is the real-time AIRNOW dataset (Wayland and Dye, 2005). The AIRNOW data are collected from the states, aggregated by the federal EPA and used for informing the public (Figure 1) through the AIRNOW website. In addition, the hourly real-time O3 and PM2.5 data are also made accessible to DataFed, where they are translated on the fly into a uniform format. Through the DataFed web interface, any user can access and display the AIRNOW data as time series and spatial maps, perform spatial-temporal filtering and aggregation, generate spatial and temporal overlays with other data layers and incorporate these user-generated data views into their own web pages. As of early 2005, over 100 distributed air quality-relevant datasets have been 'wrapped' into the federated virtual database. About a dozen satellite and surface datasets are delivered within a day of the observations, and two model outputs provide PM forecasts.
This is about data protocols, discovery, access processing services
DataFed
Unidata
Extending Current Infrastructure
DataFed wrappers - data access and homogenization
Standards Based Interoperability
Data and service interoperability among Air Quality Observatory (AQO) participants will be fostered through the implementation of accepted standards and protocols. Adherence to standards will foster interoperability not only within the AQO but also with other observatories, cyberinfrastructure projects, and the emerging GEOSS efforts.
Standards for finding, accessing, portraying and processing geospatial data are defined by the Open Geospatial Consortium (OGC). The AQO will implement many of the OGC specifications for discovering and interacting with its data and tools. The OGC specifications we expect to use in developing the AQO prototype are described in Table X.
The most well-established OGC specification is the Web Map Service (WMS) for exchanging map images, but the Web Feature Service (WFS) and Web Coverage Service (WCS) are gaining wider implementation. While these standards are rooted in the geospatial domain, they are being extended to support non-geographic data "dimensions." For example, WCS is being revised to support coverage formats other than grids.
Specification | Description | AQO Use |
---|---|---|
WMS | Web Map Services support the creation, retrieval and display of registered and superimposed map views of information that can come simultaneously from multiple sources. | DataFed supports WMS both as a server and client. By serving WMS layers, other map viewers can access and interact with air quality data. As a WMS client, DataFed is able to make use of the numerous WMS servers available. WMS requests include x,y,bounding box and time. |
WFS | The Web Feature Service defines interfaces for accessing discrete geospatial data encoded in GML. | Within AQO, WFS will allow users to build queries to retrieve point monitoring data in table formats (GML, CSV, etc.). The WFS revision working group is presently revising the specification to include support for time. |
WCS | Web Coverage Services allow access to multi-dimensional data that represent coverages, such as grids and point data of spatially continuous phenomena. | The early phases of AQO development will actively explore the use of the WCS specification. Advances made from GALEON will be incorporated and the WCS will be extended to provide a powerful interface for building multi-dimensional queries to monitoring, model, and satellite data. WCS supports x,y,z and time dimensions and allows server-defined "dimensions" for other parameters. |
CSW | Catalog services support publishing and searching collections of metadata, services, and related information objects. Metadata in catalogs represent resource characteristics that can be queried and presented for humans and software. | The DataFed and THREDDS catalogs have not implemented the OGC catalog service. The CSW specification offers an approach for DataFed and THREDDS to interoperate at the catalog level by exchanging metadata. An AQO catalog service would provide an interface to other catalogs that have implemented the specification, such as Geospatial One Stop. Catalog services are currently limited to geographic dimensions. |
SWE | Specifications emerging from the Sensor Web Enablement activity include SensorML for describing instruments, Observations & Measurements for describing sensor data, the Sensor Observation Service for retrieving data, and the Sensor Planning Service for managing sensors. | Ground-based environmental monitors are a key consideration in the development of these "sensor" specifications. Much of the data accessible through THREDDS and DataFed originates from monitoring networks; by building on the accomplishments of the SWE activity, AQO could apply the SWE specifications in developing data models for describing and exchanging these data. The current draft of SOS supports x, y, time and parameter dimensions. |
WPS | The proposed Web Processing Service offers geospatial operations, including traditional GIS processing and spatial analysis algorithms, to clients across networks. | The proposed AQO plans to include services not only for accessing and visualizing data but also for conducting data analysis. The ongoing WPS specification effort could serve as a useful resource for developing these analysis services and, if WPS is adopted as a specification, would provide another interoperable connection to the broader environmental and geospatial communities. |
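As an illustration of how an AQO client would exercise one of these interfaces, the sketch below issues a WMS 1.1.1 GetMap request and saves the returned map image; the endpoint and layer name are hypothetical, while the key-value parameters are those defined by the WMS specification.

 // Illustrative WMS GetMap client; the server endpoint and layer name are hypothetical.
 import java.io.InputStream;
 import java.net.URL;
 import java.nio.file.Files;
 import java.nio.file.Paths;

 public class WmsGetMapExample {
     public static void main(String[] args) throws Exception {
         String request = "http://example.datafed.net/wms?"          // hypothetical endpoint
                 + "SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap"
                 + "&LAYERS=AIRNOW_PM25&STYLES="                      // hypothetical layer
                 + "&SRS=EPSG:4326&BBOX=-125,24,-66,50"               // lon-lat window over the US
                 + "&WIDTH=800&HEIGHT=400"
                 + "&TIME=2005-09-15T18:00:00Z"                       // the hour of interest
                 + "&FORMAT=image/png";
         try (InputStream in = new URL(request).openStream()) {
             Files.copy(in, Paths.get("pm25_map.png"));               // save the returned map image
         }
     }
 }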
The success of the OGC specifications has led to efforts to develop interfaces between them and other common data access protocols (e.g. OPeNDAP, THREDDS). For example, the GALEON Interoperability Experiment, led by Ben Domenico at Unidata, is developing a WCS interface to netCDF datasets that maps the multi-dimensional atmospheric model output into the three-dimensional geospatial world.
Use of OGC specifications and interaction with OGC during the development of the AQO prototype will be facilitated by Northrop Grumman IT TASC (NG). NG has developed a GeoEnterprise Architecture approach for developing integrated solutions by leveraging the large body of geospatial standards, specifications, architectures, and services. As part of this effort, Northrop Grumman has participated in Technology Interoperability Experiments (TIEs) where multiple organizations collaborate to test the ability to exchange distributed geospatial information using standards.
Achieving interoperability among the components of AQO will involve close interaction among its participants. Interoperability testing and prototyping will be conducted through service compliance and standards gap analysis.
Service compliance. Northrop Grumman acts as the Chair of the OGC Compliance and Interoperability Subcommittee and is nearing completion of an open-source compliance engine soon to be adopted by the OGC. Through compliance testing, NG's technical team has validated the interoperability of various platforms and diverse geospatial data warehouses. Providing compliance testing to AQO components early in the prototype development process will ensure faster and more complete interoperability and establish an AQO infrastructure that can be networked with other infrastructure developments.
Standards gap analysis. To fully exploit the multi-dimensional nature of the data (x, y, z, time(s), multi-parameters), query statements and portrayal services would need to extend beyond their traditional GIS origins. Current OGC specifications lay a solid foundation upon which to add these capabilities. AQO development will extend and customize standards as needed and will forward these modifications to OGC for consideration in future versions of the specifications. The AQO development team has extensive experience in evaluating and enhancing geospatial standards. For example, Northrop Grumman is presently involved in a National Technology Alliance project testing and extending OGC specifications to more fully support the temporal dimension.
Unidata provides THREDDS middleware for data discovery and use, along with test beds that assure the data exchange is indeed interoperable, e.g. the Unidata-OGC GALEON Interoperability Experiment/Network.
[much more Unidata stuff here] [Stefan OGC W*S standards] [CAPITA data wrapping, virtual SQL query for point data]
New Activities Extending the Infrastructure
Common Data Model [How about Stefano Nativi's semantic mediation]
Networking. [Semantic mediation of distributed data and services] [Jeff Ullman Mediator-as-view] [Purposeful pursuit of maximizing the Network Effect] [Value chains, value networks]
The novel technology development will focus on the framework for building distributed data analysis applications using loosely coupled web service components. With these technologies, applications will be built by dynamically 'orchestrating' the information processing components. .....[to perform an array of user-defined processing applications]. The user-configurable applications will include Analyst Consoles for real-time monitoring and analysis of air pollution events, workflow programs for more elaborate processing and tools for intelligent multi-sensory data fusion. Most of these technologies are already part of the CAPITA DataFed access and analysis system, developed through support from NSF, NASA, EPA and other agencies. Similarly, an increasing array of web service components is now being offered by various providers. However, a crucial missing piece is the testing of service interoperability and the development of the necessary service adapters that will facilitate interoperability and service chaining...... [more on evolvable, fault-tolerant web apps ..from Ken Goldman here] [also link to Unidata LEAD project here]
[[[Ken Goldman added the following Friday, January 20, 2006:
The proposed Observatory will consist of a large collection of independent data services, as well as applications that operate on those data services in order to collect, analyze and disseminate information. Applications will be created by multiple organizations and will require interaction with the data and applications created by other organizations. Furthermore, individual applications will need to be modified over time without disruption of the other applications that depend upon them.
To support this high degree of interoperability and dynamic change, we plan to leverage our ongoing research efforts on the creation of a shared infrastructure for the execution of distributed applications [TG2005]. Important goals for this work include support for installation, execution, and evolution (live upgrades) of long-running distributed applications. For those applications that require high levels of robustness, the infrastructure will provide strong guarantees that installed applications will continue to execute correctly in spite of failures and attacks, and that they will not interfere with one another. To support widespread sharing of resources and information, the computing infrastructure is being designed in a decentralized way, with computing resources provided by a multitude of independently administered hosts with independent security policies.
The execution model for this infrastructure captures a wide class of applications and supports integration of legacy systems, including applications written using SOAP. The execution model consists of an interconnected graph of data repositories and the work flow transactions that access them. By separating repositories from computation, the model simplifies the creation of applications that span multiple organizations. For example, one organization might install an application that reads from the data repository of a second organization and writes into a repository used by a third organization. Each application will specify its own security policies and fault-tolerance requirements.
Building on prior experience in constructing distributed systems infrastructure [GSMAS1995, SGM2005, PEMDG2000], work is underway to design and implement algorithms, protocols, and middleware for a practical shared computing infrastructure that is incrementally deployable. The architecture will feature dedicated data servers and transaction servers that communicate over the Internet, that run on heterogeneous hosts, and that are maintained and administered by independent service providers. For applications that require fault-tolerance, the servers will participate in replica groups that use an efficient Byzantine agreement protocol to survive arbitrary failures, provided that the number of faulty replicas in a group is less than one third of the total number of replicas (i.e., a group of at least 3f + 1 replicas is needed to tolerate f simultaneous faults). Consequently, the infrastructure will provide guarantees that once information enters the system, it will continue to be processed to completion, even though processing spans multiple applications and administrative domains.
The Observatory will be able to benefit from this infrastructure in several ways, most notably ease of application deployment and interoperability. In addition, the infrastructure will provide opportunities for reliable automated data monitoring. For example, we anticipate that ongoing computations, such as those that perform "gridding" operations on generated data points, will be installed into the system and happen automatically. Moreover, some of the data analysis that is currently performed on demand could be installed into the system for ongoing periodic execution. This will result in the availability of shared data repositories not only for raw data, but also for information that is the result of computational synthesis of data obtained from multiple data sources. Researchers will be able to install applications into the infrastructure to make further use of these derived data sources, as well as the raw data sources, as input to their applications. The fact that installation in this infrastructure is managed as a computation graph provides additional structure for certifying the source and derivation of information. Knowing the sources and destinations of each information flow in the computation graph could enable, for example, the construction of a computation trace for a given result. This could be useful for verifying the legitimacy of the information, since the trace would reveal the source of the raw data and how the result was computed.
---end of Ken Goldman’s text]]]
The AQO will support networked community interactions by creating web-based communication channels, aid cooperation through the sharing and reuse of multidisciplinary (air chemistry, meteorology, etc.) AQ data, services and tools, and provide infrastructure support for group coordination among researchers and managers pursuing common objectives such as research, management and educational projects. [Unidata community support] The exploratory data analysis tools built on top of this infrastructure will seamlessly access these data, facilitate data integration and fusion operations and allow user configuration of the analysis steps. ...[including simple diagnostic AQ models driven by data in the Unidata system]. The resulting insights will help develop AQ management responses to observed phenomena and contribute to their scientific elucidation. [cyberinfrastructure - long end-to-end value chain, many players]
Prototype Air Quality Observatory
The prototype observatory would be greater than the sum of its parts, enabling access to the data and functionality of both existing systems.
Unidata Technologies: real-time push technologies, cross-disciplinary desktop visualization tools, forecast model outputs, mechanisms for tracking events, standards-based remote access, and so forth.
DataFed Technologies:
Complementary technologies: more complete search systems, expertise in air quality research and education, etc.
DataFed and Unidata Technologies
DataFed
Unidata Technologies
IDD and LDM. The Unidata community of over 150 universities is building a system for disseminating near real-time earth observations via the Internet. Unlike other systems, which are based on data centers where the information can be accessed, the Unidata IDD is designed so that a university can request that certain data sets be delivered to computers at its site as soon as they are available from the observing system. The IDD system also allows any site with access to specialized observations to inject a dataset into the IDD for delivery to other interested sites. Unlike most other data systems, there is no data center in the IDD. The participating departments use the Unidata Local Data Manager (LDM) to relay data to one another, so the system scales indefinitely and source nodes for new data streams can be created when needed. At present, in the aggregate, IDD sources are injecting about 2 GB per hour into the system, which delivers the data to nearly 200 sites. The individual products range from the output of numerical forecast models from NCEP, to a variety of satellite imagery, to NEXRAD radar data (Level II and Level III), to traditional weather station observations from around the globe. Some of the datasets (e.g., lightning strike data and observations taken on commercial aircraft) are only available for research and education purposes.
Current operational status of the IDD
Available Data. The mission of the Unidata Program is to help researchers and educators acquire and use earth-related data. Most of the data are provided in "real time" or "near-real time" -- that is, the data are sent to participants almost as soon as the observations are made. Unidata is a data facilitator, not a data archive center. We provide a mechanism whereby educators and researchers, by participating in our Internet Data Distribution (IDD) system, may subscribe to streams of current data that interest them. We also provide topical mailing lists for discussing the contents of our data streams. In addition, Unidata provides mechanisms for accessing some archived data sets and case studies, and some Unidata sites do archive our data streams in raw, encoded form. There are guidelines for using these data. The primary datasets available in real-time via the IDD.
LDM. The Unidata Local Data Manager (LDM) is a collection of cooperating programs that select, capture, manage, and distribute arbitrary data products. The system is designed for event-driven data distribution, and is currently used in the Unidata Internet Data Distribution (IDD) project. The LDM system includes network client and server programs and their shared protocols. An important characteristic of the LDM is its support for flexible, site-specific configuration.
The Unidata LDM software acquires data and shares them with other networked computers. A data product is treated as an opaque unit, thus nearly any data can be relayed. In particular, the LDM can handle data from National Weather Service "NOAAport channel 3" data streams, including gridded data from the numerical forecast models. It also handles NEXRAD radar data, lightning data from the National Lightning Detection Network, and GOES satellite imagery.
Data can either be ingested directly from a data source by a client ingester, or the LDM server can talk to other LDM servers to either receive or send data. Ingesters scan the data stream, determine product boundaries, and extract products, passing those products on to the server product queue. These data, in turn, can be processed locally and/or passed on to other LDM servers.
Data passed to the LDM server are processed in a variety of ways; how specific data are processed is determined by data identifiers and a configuration file. Processing actions include placing the data in files and running arbitrary programs on the data. Decoders are also available from Unidata that interface with the LDM and convert data into the forms required by various applications.
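For illustration only, LDM processing of the kind described above is declared in the pqact configuration file; the entries below are hypothetical sketches (tab-separated feed type, product-identifier pattern, action and arguments), and actual patterns, paths and decoder commands are site-specific.

 # Illustrative pqact.conf-style entries; fields are tab-separated, patterns and paths are hypothetical.
 # Feedtype     Product-ID pattern     Action    Arguments
 IDS|DDPLUS     ^SAUS.*                FILE      data/surface/metar_raw.txt
 NNEXRAD        ^SDUS5.*               PIPE      decoders/radar2nc data/radar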
THREDDS Data Server. The THREDDS Data Server (TDS) is a web server that provides metadata and data access for scientific datasets, building on and extending a number of existing technologies:
- THREDDS Dataset Inventory Catalogs are used to provide virtual directories of available data and their associated metadata. These catalogs can be generated dynamically or statically (a minimal illustrative catalog is sketched after this list).
- The Netcdf-Java library reads NetCDF, OpenDAP, and HDF5 datasets, as well as other binary formats such as GRIB and NEXRAD, into a "Common Data Model" (CDM). This is an abstract data model toward which the netCDF (Unidata), HDF5 (NCSA) and OPeNDAP (University of Rhode Island) developers are converging their respective data models. The CDM also adds "Georeferencing Coordinate Systems" and specialized "Scientific Data Type" layers, which provide the semantics needed to convert datasets to other protocols and formats, such as those required by GIS systems. The library adds this information by parsing well-known "attribute conventions" and by using THREDDS metadata to supply missing coordinate system information and other metadata.
- An integrated server provides OpenDAP access to any datasets that can be read through the Netcdf-Java library. OpenDAP is a widely used, subsetting data access method built on the HTTP (web) protocol.
- An integrated server provides bulk file access through the HTTP protocol.
- An integrated server provides data access through the OpenGIS Consortium (OGC) Web Coverage Service (WCS) protocol for any "gridded" dataset whose coordinate system information is complete. Users can add missing information to a dataset where needed, in order to make this work.
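As a minimal illustration of such a catalog (dataset names, paths and the service base are hypothetical; the real schema offers many more elements), a THREDDS catalog pairs service definitions with dataset entries:

 <?xml version="1.0" encoding="UTF-8"?>
 <!-- Minimal illustrative THREDDS catalog; names, paths and the service base are hypothetical. -->
 <catalog name="AQO sample catalog"
          xmlns="http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0">
   <service name="odap" serviceType="OpenDAP" base="/thredds/dodsC/"/>
   <dataset name="PM2.5 forecast (example)" ID="aqo/pm25_forecast">
     <access serviceName="odap" urlPath="aqo/pm25_forecast.nc"/>
   </dataset>
 </catalog>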
The THREDDS Data Server is implemented in 100% Java, and is contained in a single war file, which allows very easy installation into the open-source Tomcat web server. Configuration is made as simple and as automatic as possible, and we have made the server as secure as possible.
- Download the latest stable version (3.6).
- THREDDS Data Server documentation
THREDDS Internet Data Distribution (IDD) Server. Much of the realtime data available over the Unidata Internet Data Distribution (IDD) is available through a THREDDS Data Server hosted at Unidata on motherlode.ucar.edu. You are welcome to browse and access these meteorological datasets. If you have an IDD running, you can run your own THREDDS/IDD Server. We have written standard THREDDS/IDD pqact files that have corresponding catalogs. You should download and install the TDS 3.3 version, and follow these instructions.
NetCDF. NetCDF (network Common Data Form) is a set of interfaces for array-oriented data access and a freely-distributed collection of data access libraries for C, Fortran, C++, Java, and other languages. The netCDF libraries support a machine-independent format for representing scientific data. Together, the interfaces, libraries, and format support the creation, access, and sharing of scientific data. NetCDF data is:
- Self-Describing. A netCDF file includes information about the data it contains.
- Portable. A netCDF file can be accessed by computers with different ways of storing integers, characters, and floating-point numbers.
- Direct-access. A small subset of a large dataset may be accessed efficiently, without first reading through all the preceding data.
- Appendable. Data may be appended to a properly structured netCDF file without copying the dataset or redefining its structure.
- Sharable. One writer and multiple readers may simultaneously access the same netCDF file.
- Archivable. Access to all earlier forms of netCDF data will be supported by current and future versions of the software.
netCDF Java. The netCDF-Java library incorporates many features that are not yet available in the other implementations of netCDF.
Common Data Model. One of the most important new features in netCDF-Java is the use of a Common Data Model (CDM), a generalization of the netCDF, OpenDAP and HDF5 data models. NetCDF-Java 2.2 (aka nj22) is also a prototype for the NetCDF-4 project, which provides a C-language API for the "data access layer" of the CDM on top of the HDF5 file format. Nj22 is a 100% Java framework for reading other file formats into netCDF, where the actual writing of the netCDF file is optional. Alpha versions of the code and documentation are available now, but some of the APIs are not yet stable.
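A minimal sketch of reading data through the CDM with netCDF-Java follows; the OPeNDAP URL and variable name are hypothetical, and error handling is omitted.

 // Minimal netCDF-Java / CDM read sketch; the dataset URL and variable name are hypothetical.
 import ucar.ma2.Array;
 import ucar.nc2.Variable;
 import ucar.nc2.dataset.NetcdfDataset;

 public class CdmReadExample {
     public static void main(String[] args) throws Exception {
         // Any location the CDM can read: a local netCDF/HDF5/GRIB file or an OPeNDAP URL.
         String location = "http://example.edu/thredds/dodsC/aqo/pm25_forecast.nc";
         NetcdfDataset ncd = NetcdfDataset.openDataset(location);
         try {
             Variable pm25 = ncd.findVariable("pm25");   // hypothetical variable name
             if (pm25 != null) {
                 Array values = pm25.read();             // same call regardless of the underlying format
                 System.out.println("Read " + values.getSize() + " values");
             }
         } finally {
             ncd.close();
         }
     }
 }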
UML diagrams of the common data model
Decoders. Many datasets are transported by IDD/LDM and THREDDS technologies in forms that are not immediately useful in applications programs. These "decoder" packages perform some of the common transformations needed to get the datasets into more useful forms. Most of the decoders can be triggered automatically by the LDM on the arrival of specified types of data.
The Integrated Data Viewer. The Integrated Data Viewer (IDV) from Unidata is a Java(TM)-based software framework for analyzing and visualizing geoscience data. The IDV brings together the ability to display and work with satellite imagery, gridded data, surface observations, balloon soundings, NWS WSR-88D Level II and Level III RADAR data, and NOAA National Profiler Network data, all within a unified interface.
To get a sense of the sorts of analysis and visualization that are possible with the IDV, there is a gallery of images, animations and movies.
The Air Quality Observatory Network
AQO Architecture. The architectural design of the Air Quality Observatory is that of a network, as illustrated in Figure ???. The network will consist of nodes that can act both as servers and as clients of air quality-relevant data. Since the nodes belong to different organizations and serve a variety of communities, they have different data needs. There is also considerable heterogeneity in how the nodes internally conduct their business of finding, accessing, transforming, and delivering data. In Figure ??? the Unidata and DataFed network nodes are shown in more detail. [Unidata node description…]. The DataFed system accesses air quality-relevant data in the form of tables, images, grids, etc. from a variety of sources.
Other server nodes in the AQO network will include the NASA Goddard DAAC server, which provides access to the immense volume of satellite data contained in its "data pool." At that server, a WCS interface is being implemented that will facilitate its joining the AQO network. The managers of the Goddard DAAC node have also expressed strong interest in accessing air quality data from DataFed and meteorological data from Unidata. Similarly, there are good prospects of adding an EPA node to the AQO that will be able to serve air quality model forecasts as well as provide access to an array of monitoring data. The participation of these additional AQO nodes is to be arranged and performed independently. However, this NSF AQO prototype project will provide the architectural framework for such networking and connectivity tools such as adapters, and it can also serve as the testbed for the expanding set of AQO nodes. Attracting new nodes and guiding their inclusion will be pursued by the project team members through multiple venues: membership in workgroups, ESIP, training workshops....
Interoperability. The interoperability of these heterogeneous systems can be achieved through the adoption of a uniform query protocol such as the OGC Web Coverage Service. Interaction through this service takes the form of a client-server 'conversation' initiated by the client, which requests the server's list of capabilities (e.g. available datasets). Next, the client requests a more detailed description of a desired dataset, including the choice of return data formats. In the third call, the client sends a specific data request to the server, formulated in terms of a physical bounding box (X, Y, H) and a time range. Following such universal data requests, the server can deliver the data in the desired format, such that further processing in the client environment can proceed in a seamless manner. To achieve this level of interoperability, a significant role is assigned to adapter services that can translate data formats and make other adjustments to the data syntax. These services can be provided by the server, by the client or by third-party mediators, such as DataFed [and Unidata?]. This type of 'loose coupling' between clients and servers will allow the creation of dynamic, user-defined processing chains. Currently the first two steps of the conversation are performed by humans, but it is hoped that new technologies will aid the execution of 'find' and 'bind' operations for distributed data and services.
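For illustration, the three-step conversation can be expressed as WCS 1.0.0 key-value-pair requests such as the following (the host and coverage names are hypothetical; the parameter names follow the specification):

 # 1. What does the server offer?
 http://example.gov/wcs?SERVICE=WCS&VERSION=1.0.0&REQUEST=GetCapabilities

 # 2. Describe a particular coverage, including its grid, dimensions and supported formats.
 http://example.gov/wcs?SERVICE=WCS&VERSION=1.0.0&REQUEST=DescribeCoverage&COVERAGE=pm25_forecast

 # 3. Request the data for a bounding box and time range, in a chosen return format.
 http://example.gov/wcs?SERVICE=WCS&VERSION=1.0.0&REQUEST=GetCoverage&COVERAGE=pm25_forecast
     &CRS=EPSG:4326&BBOX=-125,24,-66,50&TIME=2005-09-15T00:00:00Z/2005-09-16T00:00:00Z
     &WIDTH=400&HEIGHT=200&FORMAT=NetCDF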
AQO Catalog Services. The AQO will be connected through a mediated peer-to-peer network. The mediation will be performed by centralized or virtually centralized catalog services, which enable the Publish and Find operations needed for loosely coupled web-service-based networking. The Bind (i.e., data access) operation will be executed directly through a protocol-driven peer-to-peer approach. The Unidata THREDDS system performs the meteorological data brokering, while the DataFed Catalog serves the same purpose for the AQ data. Other candidate AQO nodes are currently brokered through their own catalog services, e.g. ECHO for the NASA Goddard DAAC. The unification (physical or virtual) of these distributed catalog services will be performed using the best available emerging web service brokering technologies.
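As a rough illustration of the mediated Publish-Find-Bind pattern, the sketch below shows a unified catalog that aggregates entries published by member nodes and answers Find queries, while the Bind step proceeds directly against the endpoint of the serving node. All class names, dataset identifiers, and URLs here are hypothetical and do not represent the actual THREDDS, DataFed, or ECHO interfaces.

<pre>
# Illustrative sketch (not an actual AQO interface) of a unified catalog that
# mediates Publish and Find, while Bind (data access) goes directly to the
# serving node via its access endpoint.
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    dataset_id: str       # e.g., "datafed:pm25_daily" (hypothetical)
    node: str             # owning AQO node, e.g., "DataFed", "THREDDS", "ECHO"
    endpoint: str         # access URL used for the Bind step
    keywords: set = field(default_factory=set)

class UnifiedCatalog:
    """Virtually centralized catalog aggregating entries from member nodes."""
    def __init__(self):
        self._entries = {}

    def publish(self, entry: CatalogEntry):
        self._entries[entry.dataset_id] = entry

    def find(self, keyword: str):
        return [e for e in self._entries.values() if keyword in e.keywords]

# Example: two nodes publish; a client finds aerosol datasets, then binds
# directly to the returned endpoints (e.g., with the WCS calls sketched above).
catalog = UnifiedCatalog()
catalog.publish(CatalogEntry("datafed:pm25_daily", "DataFed",
                             "http://example.org/datafed/wcs", {"aerosol", "pm25"}))
catalog.publish(CatalogEntry("thredds:nam_winds", "THREDDS",
                             "http://example.org/thredds/wcs", {"wind", "forecast"}))
for entry in catalog.find("aerosol"):
    print(entry.node, entry.endpoint)
</pre>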
Combined Air Quality - Meteorology Tools
"Today the boundaries between all disciplines overlap and converge at an accelerating pace. Progress in one area seeds advances in another. New tools can serve many disciplines, and even accelerate interdisciplinary work." ( Rita R. Colwell, Director, NSF, February 2003). Many in the AQ and the meteorological communities have had a longstanding desire to create new knowledge together, but with marginal success. It is hoped that a the AQ Observatory with its shared data and tools will increase their combined creativity and productivity.
The development of a cyberinfrastructure that brings together real-time air quality data through DataFed and meteorological data through the Unidata system offers the possibility of creating powerful new synergistic tools, such as the Combined Aerosol Trajectory Tool, CATT [CATT ref]. In the CATT tool, air quality and pollutant transport (trajectory) data are combined in an exploration program that highlights the source regions from which high or low pollutant concentrations originate. Advanced data fusion algorithms applied in CATT have already contributed to the characterization of unexpected aerosol source regions, such as the nitrate source over the Upper Midwest (Poirot et al.).
Figure 3 illustrates the current capabilities of the tool by highlighting in red the airmass trajectories that carry the highest concentrations of sulfate over the Eastern US on a particular day. Currently, the real-time application of such a diagnostic tool is not possible since the necessary IT infrastructure for bringing together and fusing the AQ and transport data does not exist.
Figure 3. CATT diagnostic tool for dirty and clean air source regions.
With the IT infrastructure of the Air Quality Observatory, which seamlessly links real-time AQ monitoring data to current and forecast meteorology, the CATT tool could be a significant addition to the toolbox of air quality analysts and meteorologists.
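The core fusion step of a CATT-style analysis can be sketched as follows: each back-trajectory is tagged with the concentration observed at its receptor site and arrival time, and the trajectories above a chosen percentile are flagged for highlighting (the red "dirty" trajectories of Figure 3). This is only a simplified illustration under assumed data structures; it is not the actual CATT algorithm or its parameters.

<pre>
# Hedged sketch of the CATT-style fusion step: associate each back-trajectory
# with the sulfate concentration observed at its receptor site and arrival
# time, then flag the high-concentration trajectories for highlighting.
# The data structures and the 90th-percentile threshold are illustrative.

def classify_trajectories(trajectories, observations, percentile=90):
    """
    trajectories: list of dicts {"site": str, "arrival": str, "path": [(lon, lat), ...]}
    observations: dict mapping (site, arrival) -> concentration (ug/m3)
    Returns (dirty, clean) trajectory lists, split at the given percentile.
    """
    tagged = []
    for traj in trajectories:
        conc = observations.get((traj["site"], traj["arrival"]))
        if conc is not None:
            tagged.append((conc, traj))

    if not tagged:
        return [], []

    values = sorted(c for c, _ in tagged)
    idx = min(int(len(values) * percentile / 100), len(values) - 1)
    cutoff = values[idx]

    dirty = [t for c, t in tagged if c >= cutoff]   # drawn in red
    clean = [t for c, t in tagged if c < cutoff]    # drawn in blue/grey
    return dirty, clean
</pre>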
Other synergistic tools include Analyst Consoles, which consist of an array of maps and charts similar to the 'meteorological wall' on which forecasters post relevant current and forecast information. The view-based data processing and rendering system of DataFed is well suited to create such data views from the distributed data streams. Early prototyping has shown that Virtual Analyst Consoles are indeed feasible. However, considerable development is required to make the preparation of user-defined data views easy and fast. Facilities are also needed to lay out the views in a console according to the user's needs, as sketched below.
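One way to make console preparation easy would be to describe a console declaratively, as in the hypothetical layout below, which maps named data views to cells of a display grid. The view names, dataset identifiers, and parameters are placeholders and do not correspond to actual DataFed view definitions.

<pre>
# Hypothetical, simplified console layout: an analyst declares which data
# views to show and where. All identifiers are illustrative placeholders.
console_layout = {
    "title": "Eastern US Haze Console",
    "grid": (2, 2),  # 2 rows x 2 columns of views
    "views": [
        {"cell": (0, 0), "view": "map",        "dataset": "pm25_daily", "region": "eastern_us"},
        {"cell": (0, 1), "view": "map",        "dataset": "modis_aod",  "region": "eastern_us"},
        {"cell": (1, 0), "view": "timeseries", "dataset": "pm25_daily", "site": "STL"},
        {"cell": (1, 1), "view": "map",        "dataset": "nam_winds",  "level": "850mb"},
    ],
}
</pre>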
Exciting new synergism possibilities are offered through the advanced high-resolution forecast modeling of the LEAD project of Unidata. [Hey Ben, did I hear that with LEAD a user could set up and run a [nested?] local forecast model over a user-defined spatial window? If so, it would be a terrific new way to derive smoke emissions from forest fires! Let's compare notes on that.]
Figure 3. Smoke emission estimation framework.
The challenges of unpredictable smoke emission modeling and the opportunities arising from rich real-time fire location and smoke data suggest an observation-based smoke emission monitoring strategy. This section outlines a possible framework for smoke emission estimation. The approach, depicted schematically in Figure 2, uses a smoke dispersion model to simulate emissions. The model is driven by the best available observed fire locations, land and fuel conditions, and transport winds. However, the pattern of smoke emission rates is derived from the observed smoke data assimilated into the smoke model; this is an inverse modeling approach to emission estimation. The smoke observations include satellite data from MODIS, GOES, AVHRR, TOMS, and other emerging sensors. The surface observations include continuous PM2.5 (EPA AIRNOW), surface visibility (NWS ASOS), chemical measurements from the IMPROVE and STN networks, and miscellaneous sensors such as sun photometers (AERONET), lidar (e.g. MPLNet), etc. Each dataset is contributed by a different agency. A key aspect of the framework is the information system that brings together, homogenizes, and fuses the numerous observational and model data on various aspects of smoke.
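The inverse-modeling idea can be summarized in a schematic loop: run the dispersion model forward with a first-guess emission field, compare the modeled smoke to the fused observations, and adjust the emissions to reduce the mismatch. The sketch below illustrates this with a crude multiplicative correction; the forward model, data structures, and convergence criterion are stand-ins, and a real implementation would use formal data assimilation of the MODIS/GOES/ASOS/PM2.5 observations.

<pre>
# Schematic of the observation-based (inverse) emission estimation loop.
# The forward dispersion model and datasets are illustrative stand-ins.
import numpy as np

def estimate_emissions(forward_model, observed_smoke, initial_emissions,
                       iterations=20, eps=1e-6):
    """
    forward_model(emissions) -> predicted smoke field (same shape as observed_smoke)
    observed_smoke: gridded smoke observations (satellite and surface, fused)
    initial_emissions: first-guess emission rates at the observed fire locations
    Iteratively rescales the emissions so that modeled smoke matches observations.
    """
    observed = np.asarray(observed_smoke, dtype=float)
    emissions = np.asarray(initial_emissions, dtype=float).copy()
    for _ in range(iterations):
        predicted = np.asarray(forward_model(emissions), dtype=float)
        # Crude multiplicative correction; a stand-in for formal inverse
        # modeling / data assimilation of the smoke observations.
        ratio = observed.sum() / max(predicted.sum(), eps)
        emissions *= ratio
    return emissions
</pre>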
Networking: Connecting DataFed, THREDDS, and other nodes to make a System of Systems
Service Interoperability and Chaining
Processing applications in novel ways [Loose Coupling, Service Adapters]...
We plan to investigate multiple paradigms for the construction of interoperable distributed applications. One promising approach builds upon a separation of data and process. In particular, data repositories are seen entirely as passive entities that are acted upon by processes or transactions that are separately installed into the infrastructure. In this way, a variety of organizations can contribute to the Observatory by installing a mixture of data and processing into the infrastructure. Some repositories may contain raw data, while others may be derived from ongoing computation. Data repositories and application processes may be installed to physically execute on the same host, but logically separating them provides a design advantage over traditional workflow models. In particular, when computation is not tied to a particular data server, it becomes easier to construct applications that span multiple organizations. Processes may be installed into the infrastructure as independent entities for ongoing execution, without integrating the code for those processes into the data server implementations.
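The separation of passive data repositories from separately installed processes might be illustrated as follows. The classes and the hourly-averaging process are purely illustrative sketches under assumed names, not the proposed middleware API.

<pre>
# Sketch of the data/process separation described above: repositories are
# passive stores; processes are installed separately and act on them.
class Repository:
    """Passive data store; it performs no computation of its own."""
    def __init__(self, name):
        self.name = name
        self.records = []

    def read(self):
        return list(self.records)

    def write(self, record):
        self.records.append(record)

class Infrastructure:
    """Hosts repositories and separately installed processes."""
    def __init__(self):
        self.repositories = {}
        self.processes = []

    def add_repository(self, repo):
        self.repositories[repo.name] = repo

    def install_process(self, process):
        # A process is just a callable acting on repositories; it may be
        # contributed by a different organization than the data it uses.
        self.processes.append(process)

    def run(self):
        for process in self.processes:
            process(self.repositories)

# Example: a derived repository produced by an ongoing computation.
def hourly_average(repos):
    raw, derived = repos["raw_pm25"], repos["avg_pm25"]
    values = [r["value"] for r in raw.read()]
    if values:
        derived.write({"hourly_avg": sum(values) / len(values)})

infra = Infrastructure()
infra.add_repository(Repository("raw_pm25"))
infra.add_repository(Repository("avg_pm25"))
infra.install_process(hourly_average)
infra.run()
</pre>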
Use Cases for Prototype Demonstration
The proposed Air Quality Observatory prototype will demonstrate the benefits of the IIT through three use cases that are integrating [cross-cutting], make a true contribution to AQ science and management, and place significant demands on the IIT. Use case selection is driven by user needs: [letters EPA, LADCO, IGAC]. Not by coincidence, these topics are areas of active research in atmospheric chemistry and transport at CAPITA and other groups. The cases will be end-to-end, connecting real data producers and mediators as well as decision makers. The prototype will demonstrate seamless data discovery and access, flexible analysis tools, and delivery.
[use future IT scenarios to illustrate the contribution of the advanced AQP IT]
1) Intercontinental Pollutant Transport. Saharan dust over the Southeast; Asian dust and pollution. [20+ JGR papers facilitated on the Asian Dust Events of April 1998 - now more can be done, faster and better with AQO] [letter from Terry Keating?]
2) Exceptional Events. The second AQO use case will be a demonstration of a real-time data access/processing/delivery/response system for Exceptional Events (EE). Exceptional AQ events include smoke from natural and some anthropogenic fires, windblown dust events, volcanoes, and long-range pollution transport events from sources such as other continents. A key feature of exceptional events is that they tend to be episodic, with very high short-term concentrations. The AQO prototype information system will provide real-time characterization and near-term forecasting that can be used to trigger preventive actions, such as warnings to the public. Exceptional events are also important for long-term AQ management since EE samples can be flagged for exclusion from the National Ambient Air Quality Standards calculations. The IIT is supported by both state agencies and the federal government... [need a para on the IIT support to global science, e.g. IGAC projects]. During extreme air quality events, the stakeholders need more extensive 'just in time analysis', not just qualitative air quality information.
3) Midwestern Nitrate Anomaly. Over the last two years, a mysterious pollutant source has caused pollutant levels to rise in excess of the AQ standard over much of the Upper Midwest in winter/spring. Nitrogen sources are suspected, since a sharp rise in nitrate aerosol is a key component of these episodes. The phenomenon has eluded detection and quantification since the area was not monitored, but recent intense sampling campaigns have implicated NOx and ammonia release from agricultural fields during snow melt. This AQO use case will integrate and facilitate access to data on soil quality, agricultural fertilizer concentration and flow, snow chemistry, surface meteorology, and air chemistry.
Observatory Guiding Principles, Governance, Personnel
Guiding Principles: openness, networking, 'harnessing the winds' [of change in technology, attitudes]
[everybody needs to show off their hats and feathers here, don't be too shy] The AQO project will be led by Rudolf Husar and Ben Domenico. Husar is Professor of Mechanical Engineering and director of the Center for Air Pollution Impact and Trend Analysis (CAPITA) and brings 30+ years of experience in AQ analysis and environmental informatics to the AQO project. Ben Domenico is Deputy Director of Unidata. Since its inception in 1983, Domenico has been an engine that turned Unidata into one of the earliest examples of successful cyberinfrastructure, providing data, tools, and general support to the meteorological research and education community. CAPITA and Unidata, with their rich history and the experience of their staff, will be the pillars of the AQO. The active members of the AQO network will come from the ranks of data providers, data users, and value-adding mediators-analysts. The latter group will consist of existing AQ research projects funded by EPA, NASA, NOAA, and NSF that have data, tools, or expertise to contribute to the shared AQO pool. The communication venue for the AQO will be the Earth Science Information Partners (ESIP), as part of the Air Quality Cluster [agency/organization neutral].
The governance of the Observatory ... representatives from data providers (NASA/NOAA), users (EPA), and AQ science projects.
Use the agency-neutral ESIP/AQ cluster as the interaction platform -- AQO Project wiki on ESIP. Use ESIP meetings to hold AQO project meetings. [Ben/Dave could use help here on governance]....
Stefan Falke is a systems engineer with NG and will lead their participation in the AQO. He has made use of DataFed web services through Northrop Grumman projects. Dr. Falke is a part-time research professor of Environmental Engineering at Washington University. He is co-PI with Dr. Husar on the REASoN project and PI on an EPA-funded cyberinfrastructure project focused on air emissions databases. Dr. Falke is co-lead for the ESIP air quality cluster. At Washington University, he teaches an environmental spatial data analysis course in which the AQO could be used by students in their semester projects. From 2000-2002, Dr. Falke was an AAAS Science & Technology Policy fellow at the EPA, where he served as liaison with OGC in the initial development of the Sensor Web Enablement activity.
DataFed is a community-supported effort. While the data integration web services infrastructure was initially supported by specific information technology grants from NSF and NASA, the data resources are contributed by autonomous providers. The application of the federated data and tools is in the hands of users as part of specific projects. Just as data quality improves as data passes through many hands, the analysis tools will also improve with use and feedback from data analysts. A partial list of projects is at http://datafed.net/projects. At this time the DataFed-FASTNET user community is small, but substantial efforts are under way to encourage and facilitate broader participation through larger organizations such as the Earth Science Information Partners (ESIP) Federation (NASA, NOAA, and EPA are its main member agencies) and the Regional Planning Organizations (RPOs) for regional haze management.
Unidata is a diverse community of education and research institutions vested in the common goal of sharing data, tools to access the data, and software to use and visualize the data. Governing committees provide guidance and peer leadership. Successful cooperative endeavors have been launched through Unidata and its member institutions to enrich the geosciences community. Unidata's governing committees facilitate consensus building for future directions for the program and establish standards of involvement for the community.
Northrop Grumman (NG) is a Principal Member of the OGC and has been helping to define an interoperable open infrastructure that is shared across user communities. Through its development of OGC's Compliance Testing Tools, NG leads the geospatial community in insight into service providers' and GIS vendors' compliance with OGC standards. Northrop Grumman actively supports the US Geospatial Intelligence Foundation (USGIF) and has linked corporate partners, their tools, capabilities, and technologies to show the power of standards-based web services. NG has been providing geospatial applications, architectures, and enterprise-wide solutions to U.S. Government, military, and homeland security organizations.
Kenneth J. Goldman is an Associate Professor in the Washington University Department of Computer Science and Engineering. Goldman brings to this project over 20 years of research experience in the areas of distributed systems and programming environments. His recent work includes JPie, a novel visual programming environment that supports live construction of running applications. In addition, Goldman is currently working on algorithms and middleware for a fault-tolerant shared infrastructure that supports evolvable long-running distributed applications. Goldman is also committed to education. He has been active in outreach activities and was named "Professor of the Year" by the 2005 graduating senior class of the Washington University School of Engineering and Applied Science.
Activity Schedule
Infrastructure
Prototype
Use Cases
Glossary of acronyms
References Cited
1. Husar, R.B., et al. The Asian Dust Events of April 1998; J. Geophys. Res. Atmos. 106, 18317-18330, 2001. See event website: http://capita.wustl.edu/Asia-FarEast/
2. Wayland, R.A.; Dye, T.S. AIRNOW: America's Resource for Real-Time and Forecasted Air Quality Information; Environmental Manager, September 2005, 19-27.
3. {NAAMS} National Ambient Air Monitoring Strategy.
4. {TG2005} Thorvaldsson, Haraldur D.; Goldman, Kenneth J. "Architecture and Execution Model for a Survivable Workflow Transaction Infrastructure." Washington University Department of Computer Science and Engineering, Technical Report TR-2005-61, December 2005.
5. {GSMAS1995} Kenneth J. Goldman, Bala Swaminathan, T. Paul McCartney, Michael D. Anderson, and Ram Sethuraman. “The Programmers' Playground: I/O Abstraction for User-Configurable Distributed Applications.” IEEE Transactions on Software Engineering, 21(9):735-746, September 1995.
6. {SGM2005} Sajeeva L. Pallemulle, Kenneth J. Goldman, and Brandon E. Morgan. Supporting Live Development of SOAP and CORBA Servers. In Proceedings of the 25th IEEE International Conference on Distributed Computing Systems (ICDCS’05), pages 553-562, Washington DC, 2005.
7. {PEMDG2000} Jyoti K. Parwatikar, A. Maynard Engebretson, T. Paul McCartney, John D. Dehart, and Kenneth J. Goldman. “Vaudeville: A High Performance, Voice Activated Teleconferencing Application,” Multimedia Tools and Applications, 10(1): 5-22, January 2000.
8. Husar, R.; Poirot, R. DataFed and Fastnet: Tools for Agile Air Quality Analysis; Environmental Manager, September 2005, 39-41.