




Air Quality Observatory (AQO)


Proposal in a Nutshell:
Topic: Air Quality Observatory (AQO): Prototype based on Modular Service-based Infrastructure
IT Infrastructure: Standard Data & Tools Sharing || Orchestration of Distributed Apps || Communication, Cooperation, Coordination
AQO Prototype: DataFed extensions || THREDDS extensions || DataFed/THREDDS fusion || Link to GIS || Community work space
Use Cases: Real-time AQ Event Detection, Analysis and Response || Intercontinental Transport || Midwest Nitrate Mystery
Management: CAPITA/Unidata, Collaborators Team || Multi-agency and Project Participation || ESIP Facilitation

Introduction

Traditionally, air quality analysis was a slow, deliberate investigative process occurring months or years after the monitoring data had been collected. Satellites, real-time pollution detection and the World Wide Web have changed all that. Analysts and managers can now observe air pollution events as they unfold. They can ‘congregate’ through the Internet in ad hoc virtual work-groups to share their observations and collectively create the insights needed to elucidate the observed phenomena. Air quality analysis is becoming much more agile and responsive to the needs of air quality managers, the public and the scientific community. In April 1998, for example, a group of analysts keenly followed and documented on the Web, in real time, the trans-continental transport of Asian dust from the Gobi desert (Husar et al., 2001) and its impact on air quality over the Western US, and provided real-time qualitative explanations of the unusual event to managers and the public. The high value of qualitative real-time air quality information to the public is well demonstrated by EPA’s successful AIRNOW program (Wayland and Dye, 2005).

In recent years, the air quality management process has also changed. The old command-and-control style is giving way to a more participatory approach that includes the key stakeholders and encourages the application of more science-based ‘weight of evidence’ approaches to controls. Air quality regulations now emphasize short-term monitoring, while at the same time long-term air quality goals are set to glide toward ‘natural background’ levels over the next decades. In response to these and other developments, EPA has undertaken a major redesign of the monitoring system that provides the main sensory data input for air quality management. The new National Ambient Air Monitoring Strategy (NAAMS), through its multi-tier integrated monitoring system, is geared to provide more relevant and timely data for these complex management needs. The surface-based air pollution monitoring networks now routinely provide high-grade spatio-temporal and chemical patterns throughout the US for the most serious air pollutants, fine particles (PM2.5) and ozone. Satellite sensors with global coverage and kilometer-scale spatial resolution now provide real-time snapshots that depict the pattern of haze, smoke and dust in stunning detail, and new sensors also show the pattern of gaseous compounds such as ozone and nitrogen dioxide. The generous sharing of data and tools now leads to faster knowledge creation through collaborative analysis and management. The emergence of a new cooperative spirit is exemplified in the Global Earth Observation System of Systems (GEOSS, with 60+ member nations), where air quality is identified as one of the near-term opportunities for demonstrating the benefits of GEOSS.

Information technologies offer outstanding opportunities to fulfill the information needs for the new agile air quality management system. The ‘terabytes’ of data from these surface and remote sensors can now be stored, processed and delivered in near-real time. The instantaneous ‘horizontal’ diffusion of information via the Internet now permits, in principle, the delivery of the right information to the right people at the right place and time. Standardized computer-computer communication protocols and Service-Oriented Architectures (SOA) now facilitate the flexible processing of raw data into high-grade ‘actionable’ knowledge.

The increased data supply and the demand for higher-grade AQ information products present a grand challenge for both the environmental science and information science communities. From an environmental science and engineering point of view, air quality is a highly multidisciplinary topic that includes air chemistry, atmospheric physics, meteorology, health science, ecology and others. The range of data needed for analysis and interpretation is now much richer, including high-resolution satellite data on PM concentrations, emissions, meteorology, and effects. Meteorological and air quality simulation and forecast models now also require more input data, verification, and augmentation. The “data deluge” problem is especially acute for analysts interested in aerosol pollution, since aerosols are inherently complex and there are so many different kinds of relevant data.

The AQ data need to be ‘metabolized’ into higher-grade knowledge by AQ analysis systems, but the value-adding chain that turns raw AQ data into 'actionable knowledge' for decision making consists of many steps, including human 'processors'. The data processing nodes are distributed among different organizations (EPA, NOAA, NASA, regional and state agencies, academia, etc.), each organization being both a producer and a consumer of AQ-related information. The system must deliver relevant information to a broad range of stakeholders (federal, state, local, industry, international). Furthermore, the type of data, the level of aggregation and filtering, and the frequency at which sensory data are provided to the air quality management system differ greatly depending on whether the data are applied to policy, regulatory or operational decisions. The IIT needs to support both real-time, ‘just-in-time’ data analysis and the traditional in-depth post-analysis.

While the current AQ science and management systems do work, their efficiency and effectiveness are hampered by marginal support from a suitable information flow infrastructure. [stove-pipes]

Air Quality Observatory to the rescue!!!

The goal of this project is to build an infrastructure to support the science, management and education related to air quality. This goal is to be achieved through an Air Quality Observatory based on a modular, service-based infrastructure. By making many spatio-temporal data sources available through a single web interface and in a consistent format, the DataFed and Unidata tools allow anyone to view, process, overlay, and display many types of data to gain insight into atmospheric physical and chemical processes.

A goal of the Observatory is to encourage use of these tools by a broad community of air pollution researchers and analysts, so that a growing group of empowered analysts may soon enhance the rate at which our collective knowledge of air pollution grows. The current challenge is to incorporate the support of the AQO into the air quality management process in a more regular and robust way.

A particular goal is to develop and demonstrate the benefits of a mid-tier cyberinfrastructure that can benefit virtually all components of the air quality information system: the data producers, processors, human refiners, and the knowledge-consuming decision makers. ....[Internet II, cyberinfrastructure efforts at NSF, NASA, NOAA, EPA as well as industry]....[from info stovepipes to open networking]

Infrastructure for Sharing AQ Data, Services and Tools

Current Infrastructure

DataFed is an infrastructure for real-time integration and web-based delivery of distributed monitoring data. The federated data system, DataFed (http://datafed.net), aims to support air quality management and science through more effective use of relevant data. Building on the emerging pattern of the Internet itself, DataFed assumes that datasets and new data processing services will continue to emerge spontaneously and autonomously on the Internet, as shown schematically in Figure 1. Example data providers include the AIRNOW project, modeling centers and the NASA Distributed Active Archive Centers (DAACs).

DataFed is not a centrally planned and maintained data system but a facility to harness the emerging resources by powerful dynamic data integration technologies and through a collaborative federation philosophy. The key roles of the federation infrastructure are to (1) facilitate registration of the distributed data in a user-accessible catalog; (2) ensure data interoperability based on physical dimensions of space and time; (3) provide a set of basic tools for data exploration and analysis. The federated datasets can be queried, by simply specifying a latitude-longitude window for spatial views, time range for time views, etc. This universal access is accomplished by ‘wrapping’ the heterogeneous data, a process that turns data access into a standardized web service, callable through well-defined Internet protocols.

The result of this ‘wrapping’ process is an array of homogeneous, virtual datasets that can be queried by spatial and temporal attributes and processed into higher-grade data products.
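
To make this concrete, the sketch below shows how such a wrapped, virtual dataset might be queried by a latitude-longitude window and a time range. This is a minimal sketch: the endpoint, dataset name and parameter names are illustrative placeholders, not DataFed's actual query interface.

```python
# Minimal sketch of querying a 'wrapped' dataset by a space-time window.
# The endpoint and parameter names are illustrative placeholders, not the
# actual DataFed query interface.
import urllib.parse
import urllib.request

def query_wrapped_dataset(endpoint, dataset, bbox, time_range, fmt="csv"):
    """Request a spatial-temporal slice from a hypothetical wrapped dataset.

    bbox:       (min_lon, min_lat, max_lon, max_lat)
    time_range: (start_iso, end_iso)
    """
    params = {
        "dataset": dataset,
        "bbox": ",".join(str(v) for v in bbox),
        "time_start": time_range[0],
        "time_end": time_range[1],
        "format": fmt,
    }
    url = endpoint + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

# Hypothetical example: hourly PM2.5 over the Upper Midwest for one week.
# data = query_wrapped_dataset("http://example.org/datafed_query", "AIRNOW_PM25",
#                              (-98.0, 40.0, -82.0, 49.0),
#                              ("2005-02-01T00:00:00Z", "2005-02-07T23:00:00Z"))
```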

WCS GALEON here

The Service Oriented Architecture (SOA) of DataFed is used to build web applications by connecting web service components (e.g. services for data access, transformation, fusion, rendering, etc.) in a Lego-like assembly. The generic web tools created in this fashion include catalogs for data discovery, browsers for spatial-temporal exploration, multi-view consoles, animators, multi-layer overlays, etc. (Figure 2).
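
As a rough illustration of this Lego-like assembly (a Python sketch, not actual DataFed components), the example below composes a few local stand-ins for access, aggregation and rendering services into a single application:

```python
# A minimal sketch of 'Lego-like' assembly of service components.
# The component functions are simple local stand-ins for real data access,
# transformation and rendering web services; names are illustrative only.
from functools import reduce

def chain(*components):
    """Compose components left-to-right into one processing application."""
    return lambda data: reduce(lambda d, step: step(d), components, data)

def access(records):
    """Data access stand-in: pass through (station, hour, value) records."""
    return list(records)

def temporal_aggregate(records):
    """Aggregation stand-in: average the values per station."""
    sums, counts = {}, {}
    for station, _, value in records:
        sums[station] = sums.get(station, 0.0) + value
        counts[station] = counts.get(station, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

def render_text(averages):
    """Rendering stand-in: format the aggregated values as text."""
    return "\n".join(f"{s}: {v:.1f}" for s, v in sorted(averages.items()))

# Assemble and run a tiny 'application' from the chained components.
app = chain(access, temporal_aggregate, render_text)
print(app([("STL", 1, 12.0), ("STL", 2, 18.0), ("CHI", 1, 9.0)]))
```

The point of the loose coupling is that any compliant component can be swapped or reordered without touching the others.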

A good illustration of the federated approach is the real-time AIRNOW dataset described by Wayland and Dye (2005). The AIRNOW data are collected from the States, aggregated by the federal EPA and used to inform the public (Figure 1) through the AIRNOW website. In addition, the hourly real-time O3 and PM2.5 data are made accessible to DataFed, where they are translated on the fly into a uniform format. Through the DataFed web interface, any user can access and display the AIRNOW data as time series and spatial maps, perform spatial-temporal filtering and aggregation, generate spatial and temporal overlays with other data layers, and incorporate these user-generated data views into their own web pages. As of early 2005, over 100 distributed air quality-relevant datasets have been ‘wrapped’ into the federated virtual database. About a dozen satellite and surface datasets are delivered within a day of the observations, and two model outputs provide PM forecasts.

This is about data protocols, discovery, access and processing services

DataFed

Unidata

Extending Current Infrastructure

DataFed wrappers - data access and homogenization

Standards Based Interoperability

Data and service interoperability among Air Quality Observatory (AQO) participants will be fostered through the implementation of accepted standards and protocols. Adherence to standards will foster interoperability not only within the AQO but also with other observatories, cyberinfrastructure projects, and the emerging GEOSS efforts.

Standards for finding, accessing, portraying and processing geospatial data are defined by the Open Geospatial Consortium (OGC). The AQO will implement many of the OGC specifications for discovering and interacting with its data and tools. The OGC specifications we expect to use in developing the AQO prototype are described in Table X.

Table X: OGC Specifications
WMS (Web Map Service): supports the creation, retrieval and display of registered and superimposed map views of information that can come simultaneously from multiple sources.
WFS (Web Feature Service): defines interfaces for accessing discrete geospatial data encoded in GML.
WCS (Web Coverage Service): allows access to multi-dimensional data that represent coverages, such as grids and point data of spatially continuous phenomena.
CSW (Catalog Service for the Web): supports publishing and searching collections of metadata, services, and related information objects. Metadata in catalogs represent resource characteristics that can be queried and presented for humans and software.
SWE (Sensor Web Enablement): specifications emerging from this activity include SensorML for describing instruments, Observations & Measurements for describing sensor data, the Sensor Observation Service for retrieving data, and the Sensor Planning Service for managing sensors.
WPS (Web Processing Service): the proposed specification offers geospatial operations, including traditional GIS processing and spatial analysis algorithms, to clients across networks.

The best-established OGC specification is the Web Map Service for exchanging map images, but the Web Feature Service and Web Coverage Service are gaining wider implementation. While these standards are rooted in the geospatial domain, they are being extended to support non-spatial aspects of geospatial data. For example, the WFS revision working group is presently revising the specification to include support for time, and WCS is being revised to support coverage formats other than grids.
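
For illustration, the sketch below assembles typical WMS 1.1.1 GetMap and WCS 1.0.0 GetCoverage requests, including the optional TIME parameter discussed above. The server URL and the layer/coverage names are placeholders rather than actual AQO services, and the coverage output format a WCS server accepts (e.g. NetCDF) is server-specific.

```python
# Sketch of building standard OGC requests against a hypothetical AQO endpoint.
# Parameter sets follow WMS 1.1.1 GetMap and WCS 1.0.0 GetCoverage; the server
# URL and layer/coverage names are placeholders, not actual AQO services.
from urllib.parse import urlencode

AQO_SERVER = "http://example.org/ogc"   # placeholder endpoint

def wms_getmap(layer, bbox, time=None, size=(800, 400)):
    """Build a WMS 1.1.1 GetMap URL for a map image of one layer."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "", "SRS": "EPSG:4326",
        "BBOX": ",".join(map(str, bbox)),
        "WIDTH": size[0], "HEIGHT": size[1], "FORMAT": "image/png",
    }
    if time:                        # optional temporal dimension
        params["TIME"] = time
    return AQO_SERVER + "?" + urlencode(params)

def wcs_getcoverage(coverage, bbox, time, fmt="NetCDF"):
    """Build a WCS 1.0.0 GetCoverage URL for a multi-dimensional data slice."""
    params = {
        "SERVICE": "WCS", "VERSION": "1.0.0", "REQUEST": "GetCoverage",
        "COVERAGE": coverage, "CRS": "EPSG:4326",
        "BBOX": ",".join(map(str, bbox)), "TIME": time,
        "WIDTH": 360, "HEIGHT": 180, "FORMAT": fmt,
    }
    return AQO_SERVER + "?" + urlencode(params)

# Hypothetical layer/coverage names, for illustration only.
print(wms_getmap("airnow_pm25", (-125, 24, -66, 50), time="2005-09-01T18:00:00Z"))
print(wcs_getcoverage("forecast_dust", (-125, 24, -66, 50), "2005-09-01"))
```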

The success of the OGC specifications has led to efforts to develop interfaces between them and other common data access protocols. For example, the GALEON Interoperability Experiment, led by Ben Domenico at Unidata, is developing a WCS interface to netCDF. [Ben, anything more to add??]

Use of OGC specifications and interaction with OGC during the development of the AQO prototype will be facilitated by Northrop Grumman IT TASC (NG). NG has developed a GeoEnterprise Architecture approach for building integrated solutions by leveraging the large body of geospatial standards, specifications, architectures, and services. As part of this effort, Northrop Grumman has participated in Technology Interoperability Experiments (TIEs), in which multiple organizations collaborate to test the ability to exchange distributed geospatial information using standards.

Achieving interoperability among the components of AQO will involve close interaction among its participants. Interoperability testing and prototyping will be conducted through service compliance and standards gap analysis.

Service compliance: Northrop Grumman chairs the OGC Compliance and Interoperability Subcommittee and is nearing completion of an open-source compliance engine soon to be adopted by the OGC. Through compliance testing, NG's technical team has validated the interoperability of various platforms and diverse geospatial data warehouses. Providing compliance testing for AQO components early in the prototype development process will ensure faster and more complete interoperability and establish an AQO infrastructure that can be networked with other infrastructure developments.

Standards gap analysis: To fully exploit the multi-dimensional nature of the data (x, y, z, time, multiple parameters), query statements and portrayal services need to support more than the traditional GIS map-focused perspective. Current OGC specifications lay a solid foundation upon which to add these capabilities. AQO development will extend and customize standards as needed and will forward these modifications to OGC for consideration in future versions of the specifications. The AQO development team has extensive experience in evaluating and enhancing geospatial standards. For example, Northrop Grumman is presently involved in a National Technology Alliance project testing and extending OGC specifications to more fully support the temporal dimension.

[Figure: AQO network diagram (AQO Network.gif)]

Unidata THREDDS middleware for data discovery and use; and test beds that assure the data exchange is indeed interoperable, e.g. Unidata-OGC GALEON Interoperability Experiment/Network. [much more Unidata stuff here] [Stefan OGC W*S standards] [CAPITA data wrapping, virtual SQL query for point data]

New Activities Extending the Infrastructure

Common Data Model [How about Stefano Nativi's semantic mediation]

Networking. [Semantic mediation of distributed data and services] [Jeff Ullman Mediator-as-view] [Purposeful pursuit of maximizing the Network Effect] [Value chains, value networks]

The novel technology development will focus on a framework for building distributed data analysis applications using loosely coupled web service components. With these technologies, applications will be built by dynamically 'orchestrating' the information processing components .....[to perform an array of user-defined processing applications]. The user-configurable applications will include Analyst Consoles for real-time monitoring and analysis of air pollution events, workflow programs for more elaborate processing, and tools for intelligent multi-sensory data fusion. Most of these technologies are already part of the CAPITA DataFed access and analysis system, developed with support from NSF, NASA, EPA and other agencies. Similarly, an increasing array of web service components is now being offered by various providers. However, a crucial missing piece is the testing of service interoperability and the development of the necessary service adapters that will facilitate interoperability and service chaining...... [more on evolvable, fault-tolerant web apps ..from Ken Goldman here] [also link to Unidata LEAD project here]
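
As a sketch of what such a service adapter might look like (hypothetical names and field conventions, not an actual DataFed or Unidata interface), a thin wrapper can map an existing component's call convention onto the common signature used for service chaining:

```python
# Sketch of a service adapter: a thin wrapper that maps a component with an
# incompatible interface onto a common signature usable in a service chain.
# The 'legacy' service, its method and its field names are hypothetical.

class LegacyPointDataService:
    """Stand-in for an existing service with its own call convention."""
    def fetch(self, station_id, start_hour, end_hour):
        return [{"sid": station_id, "hr": h, "val": 10.0 + h}
                for h in range(start_hour, end_hour)]

def adapt_legacy_point_service(service):
    """Return a chain-compatible component: query dict in, record list out."""
    def component(query):
        raw = service.fetch(query["station"], query["start"], query["end"])
        # Translate the legacy field names into the common record schema.
        return [(r["sid"], r["hr"], r["val"]) for r in raw]
    return component

# The adapted component can now be dropped into a service chain like any other.
adapted = adapt_legacy_point_service(LegacyPointDataService())
print(adapted({"station": "STL", "start": 0, "end": 3}))
```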

[[[Ken Goldman added the following Friday, January 20, 2006:

The proposed Observatory will consist of a large collection of independent data services, as well as applications that operate on those data services in order to collect, analyze and disseminate information. Applications will be created by multiple organizations and will require interaction with the data and applications created by other organizations. Furthermore, individual applications will need to be modified over time without disruption of the other applications that depend upon them.

To support this high degree of interoperability and dynamic change, we plan to leverage our ongoing research efforts on the creation of a shared infrastructure for the execution of distributed applications [TG06]. Important goals for this work include support for installation, execution, and evolution (live upgrades) of long-running distributed applications. For those applications that require high levels of robustness, the infrastructure will provide strong guarantees that installed applications will continue to execute correctly in spite of failures and attacks, and that they will not interfere with one another. To support widespread sharing of resources and information, the computing infrastructure is being designed in a decentralized way, with computing resources provided by a multitude of independently administered hosts with independent security policies.

The execution model for this infrastructure captures a wide class of applications and supports integration of legacy systems, including applications written using SOAP. The execution model consists of an interconnected graph of data repositories and the work flow transactions that access them. By separating repositories from computation, the model simplifies the creation of applications that span multiple organizations. For example, one organization might install an application that reads from the data repository of a second organization and writes into a repository used by a third organization. Each application will specify its own security policies and fault-tolerance requirements.

Building on prior experience in constructing distributed systems infrastructure [ADD CITATIONS], work is underway to design and implement algorithms, protocols, and middleware for a practical shared computing infrastructure that is incrementally deployable. The architecture will feature dedicated data servers and transaction servers that communicate over the Internet, that run on heterogeneous hosts, and that are maintained and administered by independent service providers. For applications that require fault-tolerance, the servers will participate in replica groups that use an efficient Byzantine agreement protocol to survive arbitrary failures, provided that the number of faulty replicas in a group is less than one third of the total number of replicas. Consequently, the infrastructure will provide guarantees that once information enters the system, it will continue to be processed to completion, even though processing spans multiple applications and administrative domains.

The Observatory will be able to benefit from this infrastructure in several ways, most notably ease of application deployment and interoperability. In addition, the infrastructure will provide opportunities for reliable automated data monitoring. For example, we anticipate that ongoing computations, such as those that perform “gridding” operations on generated data points, will be installed into the system and happen automatically. Moreover, some of the data analysis that is currently performed on demand could be installed into the system for ongoing periodic execution. This will result in the availability of shared data repositories not only for raw data, but also for information that is the result of computational synthesis of data obtained from multiple data sources. Researchers will be able to install applications into the infrastructure to make further use of these data sources, as well as the raw data sources, as input to their applications. The fact that installation in this infrastructure is managed as a computation graph provides additional structure for certifying the source and derivation of information. Knowing the sources and destinations of each information flow in the computation graph could enable, for example, the construction of a computation trace for a given result. This could be useful for verifying the legitimacy of the information, since the trace would reveal the source of the raw data and how the result was computed.

---end of Ken Goldman’s text]]]
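
As a rough illustration of the execution model described above (our sketch, not taken from [TG06]), the example below represents data repositories and workflow transactions as a graph, recovers a provenance trace for a derived repository, and notes the standard Byzantine-agreement bound that a replica group needs at least 3f + 1 members to tolerate f faulty ones. All names are hypothetical.

```python
# Illustrative sketch of the repository/transaction execution model and the
# provenance trace it enables. Classes and names are hypothetical, not the
# actual middleware interfaces.

class Repository:
    def __init__(self, name, owner):
        self.name, self.owner = name, owner
        self.inputs = []          # transactions that write into this repository

class Transaction:
    def __init__(self, name, reads, writes):
        self.name, self.reads, self.writes = name, reads, writes
        for repo in writes:
            repo.inputs.append(self)

def provenance(repo, depth=0):
    """Trace back through transactions to the sources of a repository."""
    lines = ["  " * depth + f"{repo.name} (owner: {repo.owner})"]
    for txn in repo.inputs:
        lines.append("  " * depth + f"  <- {txn.name}")
        for src in txn.reads:
            lines.extend(provenance(src, depth + 2))
    return lines

def min_replicas(max_faults):
    """Byzantine agreement tolerates f faulty replicas with at least 3f + 1 total."""
    return 3 * max_faults + 1

# Example: one organization's gridding workflow reads another's raw repository.
raw = Repository("AIRNOW_hourly_raw", owner="EPA")
gridded = Repository("PM25_gridded", owner="CAPITA")
Transaction("gridding_workflow", reads=[raw], writes=[gridded])
print("\n".join(provenance(gridded)))
print("Replicas needed to survive 1 Byzantine fault:", min_replicas(1))
```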

The Observatory will support networked community interactions by creating web-based communication channels, aid cooperation through the sharing and reuse of multidisciplinary (air chemistry, meteorology, etc.) AQ data, services and tools, and provide infrastructure support for group coordination among researchers and managers pursuing common objectives such as research, management and educational projects. [Unidata community support] The exploratory data analysis tools built on top of this infrastructure will seamlessly access these data, facilitate data integration and fusion operations, and allow user configuration of the analysis steps ...[including simple diagnostic AQ models driven by data in the Unidata system]. The resulting insights will help develop AQ management responses to observed phenomena and contribute to their scientific elucidation. [cyberinfrastructure - long end-to-end value chain, many players]

Prototype Air Quality Observatory

Extending Current Prototype

DataFed

THREDDS

New Prototyping Activities

Networking: Connecting DataFed, THREDDS and other data nodes to make a System of Systems

Service Interoperability and Chaining

Processing Applications, novel ways [Loose Coupling, Service Adapters, KenGoldman stuff]...

Cross-Cutting Use Cases for Prototype Demonstration

The proposed Air Quality Observatory prototype will demonstrate the benefits of the IIT through three use cases that are integrating [cross-cutting], make a true contribution to AQ science and management, and place significant demands on the IIT. Use case selection is driven by user needs: [letters from EPA, LADCO, IGAC]. Not by coincidence, these topics are areas of active research in atmospheric chemistry and transport at CAPITA and other groups. The cases will be end-to-end, connecting real data producers and mediators as well as decision makers. The prototype will demonstrate seamless data discovery and access, flexible analysis tools, and delivery.

1) Intercontinental Pollutant Transport. Sahara dust over the Southeast, Asian dust, pollution, [20+ JGR papers facilitated on the Asian Dust Events of April 1998 - now more can be done, faster and better with AQO] [letter from Terry Keating?]

2) Exceptional Events. The second AQO use case will be a demonstration of a real-time data access/processing/delivery/response system for Exceptional Events (EE). Exceptional AQ events include smoke from natural and some anthropogenic fires, windblown dust events, volcanoes, and long-range pollution transport events from sources such as other continents. A key feature of exceptional events is that they tend to be episodic, with very high short-term concentrations. The AQO prototype system will provide real-time characterization and near-term forecasting that can be used to trigger preventive actions, such as warnings to the public (a minimal event-flagging sketch follows this use case list). Exceptional events are also important for long-term AQ management, since EE samples can be flagged for exclusion from the National Ambient Air Quality Standards calculations. The IIT support by both state agencies and the federal government...[need a para on the IIT support to global science, e.g. IGAC projects] During extreme air quality events, the stakeholders need more extensive 'just-in-time analysis', not just qualitative air quality information.


3) Midwestern Nitrate Anomaly. Over the last two years, a mysterious pollutant source has caused pollutant levels to rise in excess of the AQ standard over much of the Upper Midwest in winter/spring. Nitrogen sources are suspected, since a sharp rise in nitrate aerosol is a key component of the episodes. The phenomenon has eluded detection and quantification since the area was not well monitored, but recent intense sampling campaigns have implicated NOx and ammonia release from agricultural fields during snow melt. This AQO use case will integrate and facilitate access to data on soil quality, agricultural fertilizer concentration and flow, snow chemistry, surface meteorology and air chemistry.
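
The event-flagging sketch referenced in the Exceptional Events use case is given below. It simply marks hours whose concentrations stand far above a site's recent baseline for analyst review; the thresholds and the absolute floor are illustrative only and do not represent a regulatory EE determination procedure.

```python
# Minimal sketch of flagging candidate Exceptional Event (EE) hours for review:
# an hour is flagged when its value exceeds an illustrative absolute floor and
# a multiple of the site's recent baseline. Thresholds are illustrative only.
from statistics import median

def flag_exceptional_hours(hourly_values, ratio=3.0, floor=65.0):
    """Return indices of hours far above the baseline (median) of the series."""
    baseline = median(hourly_values)
    return [i for i, v in enumerate(hourly_values)
            if v > floor and v > ratio * baseline]

# Example: a short smoke-impact spike in an otherwise clean PM2.5 record (ug/m3).
pm25 = [8, 9, 7, 10, 12, 95, 140, 60, 15, 11]
print(flag_exceptional_hours(pm25))   # -> [5, 6]
```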

Observatory Guiding Principles, Governance, Personnel

Guiding Principles: openness, networking, 'harnessing the winds' [of change in technology, attitudes]

The governance of the Observatory ... representatives from data providers (NASA/NOAA), users (EPA), and AQ science projects. Use the agency-neutral ESIP/AQ cluster as the interaction platform -- AQO Project wiki on ESIP. Use ESIP meetings to hold AQO project meetings. [Ben/Dave could use help here on governance]....

[everybody needs to show off their hats and feathers here, don't be too shy] The AQO project will be led by Rudolf Husar and Ben Domenico. Husar is Professor of Mechanical Engineering and Director of the Center for Air Pollution Impact and Trend Analysis (CAPITA), and brings 30+ years of experience in AQ analysis and environmental informatics to the AQO project. Ben Domenico is Deputy Director of Unidata. Since its inception in 1983, Domenico has been an engine that turned Unidata into one of the earliest examples of successful cyberinfrastructure, providing data, tools and general building support to the meteorological research and education community. CAPITA and Unidata, with their rich history and the experience of their staff, will be the pillars of the AQO. The active members of the AQO network will come from the ranks of data providers, data users and value-adding mediator-analysts. The latter group will consist of existing AQ research projects funded by EPA, NASA, NOAA and NSF that have data, tools, or expertise to contribute to the shared AQO pool. The communication venue for the AQO will be the Earth Science Information Partners (ESIP), as part of the Air Quality Cluster [agency/organization neutral].

DataFed is a community-supported effort. While the data integration web services infrastructure was initially supported by specific information technology grants from NSF and NASA, the data resources are contributed by the autonomous providers. The application of the federated data and tools is in the hands of users as part of specific projects. Just as data quality improves by passing through many hands, the analysis tools will also improve with use and feedback from data analysts. A partial list is at http://datafed.net/projects. At this time the DataFed-FASTNET user community is small, but substantial efforts are under way to encourage and facilitate broader participation through larger organizations such as the Earth Science Information Partners (ESIP) Federation (NASA, NOAA and EPA are the main member agencies) and the Regional Planning Organizations (RPOs) for regional haze management.

For over 40 years, TASC has been a leader in the geospatial industry, providing geospatial applications, architectures and enterprise-wide solutions to the U.S. Government, military and homeland security organizations. Its comprehensive suite of GeoEnterprise Solutions™ ranges from developing industry-recognized commercial off-the-shelf (COTS) software to establishing interoperable standards-based geospatial information architectures to deploying comprehensive, customized geospatial solutions. Northrop Grumman is a Principal Member of the OGC and has been helping to define an interoperable open infrastructure that is shared across user communities. Through its development of OGC’s Compliance Testing Tools, TASC leads the geospatial community in insight into service providers’ and GIS vendors’ compliance with OGC standards. Northrop Grumman actively supports the US Geospatial Intelligence Foundation (USGIF) and led the development of the USGIF Interoperability demonstration that linked corporate partners, their tools, capabilities, and technologies to show the power of standards-based web services.

TASC is located in St. Louis in close proximity to CAPITA, thereby fostering close coordination and management of the AQO objectives.

Stefan Falke is a tactical architect with NG and will lead their participation in the AQO. He has made use of DataFed web services through Northrop Grumman projects. Dr. Falke is a part-time research professor of Environmental Engineering at Washington University. He is co-PI with Dr. Husar on the REASoN project and PI on an EPA-funded cyberinfrastructure project focused on air emissions databases. Dr. Falke is co-lead of the ESIP air quality cluster. At Washington University, he teaches an environmental spatial data analysis course in which the AQO could be used by students in their semester projects. From 2000-2002, Dr. Falke was an AAAS Science & Technology Policy fellow at the EPA, where he served as liaison with OGC in the initial development of the Sensor Web Enablement activity.


Broader Impacts of the Air Quality Observatory

Impact on data providing agencies [letter from NASA?]

Impact on user agencies [letter from EPA, RPOs?]

International, Earth Science Process [letter from IGAC?]

Activity Schedule

Infrastructure

Prototype

Use Cases

References Cited

Husar, R.B., et al. The Asian Dust Events of April 1998; J. Geophys. Res. Atmos. 2001, 106, 18317-18330. See event website: http://capita.wustl.edu/Asia-FarEast/

Husar, R.; Poirot, R. DataFed and Fastnet: Tools for Agile Air Quality Analysis; Environmental Manager 2005, September, 39-41.

Wayland, R.A.; Dye, T.S. AIRNOW: America’s Resource for Real-Time and Forecasted Air Quality Information; Environmental Manager 2005, September, 19-27.

National Ambient Air Monitoring Strategy (NAAMS)

Biographical Sketches

Collaborators and Other Personnel