===Data Content: Primary & Secondary (Storage)===
===Data Content: Raw, Derived===
===Selected Server Topics===

====Design Issues====
* WCS version 1.0, 1.1.2, or 2.0?
* Combining WMS, WFS, WCS?

====Server for Different Data Types====
* Grid data (model, emissions, satellite)
* Point/station data (surface networks)
* Other data types?

====Server Maintenance and Support====
* SourceForge, documentation guides
* Server code governance

====Server Performance====
* Remote access or cache (??)
* Extraction of vertical levels (how to implement this in a useful, WCS 1.1/2.0-compliant way)
* Streaming concept:
** The idea is to speed delivery of netCDF files by avoiding a local copy action before the download (if store=false).
** Requires separation of header creation and variable data (this is probably possible in the C API, but we are not sure about the Python API).
** It would be nice to know the output file/stream size before creation (to show download progress, estimate download time, etc.).
** Might not require "real streaming formats" such as [http://www.unidata.ucar.edu/software/netcdf-java/stream/NcStream.html ncstream]. (ncstream is a new format that differs from the current netCDF 3/4 formats, so users would have to convert locally from ncstream to "normal" netCDF if they want a netCDF file in the end. Therefore, ncstream may be useful for specific client applications, but perhaps not for general file download.)
* Metadata request caching (store response XML so that it is available for a subsequent request with the same parameters)
* File caching
** Input file caching: keep input files open (for a while) so that subsequent reads from them are quicker (no need to parse the netCDF header again, etc.)
*** This seems to have worked on Windows but failed on Linux; more work is needed.
** Output file caching:
*** If there is still a result file in the temp directory that fits the request, deliver that instead of generating a new one.
**** This might collide with the streaming approach, at least for the store=false parameter.
* Maybe limit the maximum dataset size requested with the store=true parameter, to avoid excessive local copy operations on the server side:
** Requires a reliable output file size estimator.
** The server would return an exception if the estimated size is over a given threshold.
** This would force people to use store=false for large datasets, so that data could be streamed without a local copy.
** It should not violate the WCS 1.1 standard, as only store=false is mandatory.
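The size-estimator idea above can be sketched roughly as follows: for a rectangular subset, the result size is essentially the product of the requested dimensions times the dtype size, plus some header overhead. This is only an illustrative sketch, assuming flat (uncompressed, classic-format) netCDF output; the function names, the overhead constant, and the threshold are all hypothetical, not taken from any real server code.

```python
# Hypothetical sketch of the "reliable output file size estimator" idea:
# estimate the size of a netCDF result from the requested variable shapes
# and dtypes, and refuse store=true requests over a threshold.
# All names and constants here are illustrative assumptions.

from functools import reduce

HEADER_OVERHEAD = 64 * 1024          # rough allowance for the netCDF header
MAX_STORE_BYTES = 500 * 1024 * 1024  # example threshold for store=true

DTYPE_SIZES = {"int8": 1, "int16": 2, "int32": 4, "float32": 4, "float64": 8}


class SizeLimitExceeded(Exception):
    """Raised instead of serving a store=true request that is too large."""


def estimate_size(variables):
    """variables: list of (shape tuple, dtype name) for the requested subset."""
    total = HEADER_OVERHEAD
    for shape, dtype in variables:
        n_values = reduce(lambda a, b: a * b, shape, 1)
        total += n_values * DTYPE_SIZES[dtype]
    return total


def check_store_request(variables):
    """Guard a store=true request; WCS 1.1 only mandates store=false."""
    size = estimate_size(variables)
    if size > MAX_STORE_BYTES:
        raise SizeLimitExceeded(f"estimated {size} bytes > {MAX_STORE_BYTES}")
    return size
```

The same estimate could also back the download-progress idea mentioned under the streaming bullets, since it predicts the stream size before creation.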
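The metadata-request-caching bullet above amounts to keying stored response XML by the normalized request parameters. A minimal in-memory sketch, assuming KVP-style requests with case-insensitive parameter names (the class and function names are illustrative, not from the actual server):

```python
# Minimal sketch of the metadata request cache: remember response XML
# (e.g. for GetCapabilities/DescribeCoverage) keyed by normalized request
# parameters, so an identical later request is answered from memory.
# Names are illustrative assumptions.

def make_cache_key(params):
    """Normalize a request's KVP parameters into a hashable cache key."""
    return tuple(sorted((k.lower(), v) for k, v in params.items()))


class MetadataCache:
    def __init__(self):
        self._store = {}

    def get_or_build(self, params, build_response):
        key = make_cache_key(params)
        if key not in self._store:
            # First time we see these parameters: build and remember the XML.
            self._store[key] = build_response(params)
        return self._store[key]
```

A real server would also need an invalidation rule (e.g. drop entries when the underlying datasets change), which the bullet list does not yet address.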
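The output-file-caching bullet above could look roughly like this: before generating a new result, check whether a sufficiently fresh file for the same request already sits in the temp directory. This sketch assumes requests are reduced to a hash string used as the file name; the function name, the `.nc` suffix, and the age limit are hypothetical.

```python
# Sketch of the output file cache check: if a recent result file for the
# same request already exists in the temp directory, serve it instead of
# generating a new one. Names and the age limit are illustrative.

import os
import time


def cached_result(temp_dir, request_hash, max_age_s=3600):
    """Return the path to a fresh cached result file, or None if absent."""
    path = os.path.join(temp_dir, request_hash + ".nc")
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < max_age_s:
        return path
    return None
```

As noted in the list, this clashes with the store=false streaming path, where no complete result file may ever exist on disk.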
===Selected Network Topics===

====AQ Network Design Issues====
* Balance between autonomy and interoperability
* Network catalog(s)

====AQ Community Catalog====
* Domain/application catalog(s)

====Network Metadata Issues====
* Discovery metadata for AQ
* Provenance, quality, security

====Network Operation and Maintenance====
* Governance, legitimacy

===Selected Client Topics===

====Client Applications====
* Regulations/directives
* Air quality/composition science
* Informing the public

====Client Design Issues====
* Desktop vs. web-based
* Workflow? Mashups?

====Community Tools and Methods====
* Tools …
* Etc.