back to EnviroSensing Cluster main page

[Figure: Documentation of sensor installation, maintenance, and related systems is critical to long-term data usability.]
Overview
Automated observation systems need to be managed for optimal performance. Maintenance of the overall sensor system includes anything from repairs, replacements, and changes to the general infrastructure, to deployment and operation of individual sensors, to seasonal or event-driven site cleanup activities. Any of these activities in the field may affect the data being collected. Therefore, consistent and uniform records of maintenance, service, and changes to field instrumentation and supporting infrastructure serve as metadata for long-term quality control and evaluation of the sensor data.
In this chapter, we describe the types of management records that should be kept and the various methods for collecting and maintaining this information, communicating it, and connecting it to the data. It is important to create tracking and documentation protocols early on, because these protocols will support and guide communication and work between field and data management personnel.
Real-time monitoring of system health and alerting systems are discussed in the middleware, quality control, and transmission sections of this document. Although some of these parameters do not affect the actual data quality, tracking these system performance diagnostics may help detect patterns, prevent future data loss, support remote intervention, and make site visit scheduling more effective. Calibration procedures and schedules, maintenance activities, and replacement schedules are hardware-specific and will not be covered here in detail.
Introduction
Data are collected to detect changes in the environment, effects of treatments, disturbances, etc., and in all data collection great care is taken not to mask the signature of events of interest with impacts from unavoidable, sampling-related disturbances. Field notes are usually associated with the raw data so that a natural event of interest can be discerned from a management event. Data collection approaches using automated sensing networks are becoming more complex, with many people involved in the data gathering, management, and interpretation activities, and communication among all involved parties is becoming more important and more challenging. Field notes can be a useful vehicle for this communication. Everyone who has used older long-term data knows the value of field notebooks in helping to understand and interpret a dataset. Field notes are equally valuable to future users of a sensor data stream, particularly if the notes are interpreted such that the information is integrated with the data via data-qualifying flags and method description codes.
Currently there are no standards for flag code sets, for defining which events should be flagged, or for how to efficiently communicate such events to data users. Here we attempt to present a list of events that are useful to track and that have been helpful in the past in guiding data users in the interpretation and evaluation of the data. To manage this information, the concepts of a ‘logical sensor’, a ‘physical sensor’, a ‘method’, and ‘event codes’ have proven useful.
A ‘logical sensor’, or sensor data stream, can be defined by a location, height/depth, and measurement parameter, regardless of which exact physical sensor or hardware is used to log measurements. An example would be ‘air temperature at 3 m above the ground at site A’. However, over time the ‘physical sensor’ will have to be calibrated and eventually replaced, and a new type of sensor may be chosen to provide more accurate measurements. If hardware is swapped out for technical reasons, the data stream still represents the site location for that measurement, and the notion of a ‘logical sensor’ allows identification of a consistent data stream over time.
Changes in the type of sensor, or ‘method’, might be tracked with a method code associated with the logical sensor. Of course, should a replacement sensor be significantly different, such that the past and new data streams are not comparable, a new logical sensor stream should be initiated. Events such as routine calibration might be flagged with an ‘event code’ rather than a change in ‘method’, even if the event has lasting effects on the data (i.e., more accurate data). An event code may serve as a means to link individual field notes to the event. ‘Physical sensors’ should also be individually identifiable, tracked by location and through a calibration or replacement schedule.
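The following sketch shows one way these concepts might be represented in code. All names (LogicalSensor, PhysicalSensor, the method and event codes) are hypothetical illustrations of the ideas above, not a prescribed implementation:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class PhysicalSensor:
    """A specific piece of hardware, individually identifiable by serial number."""
    serial_number: str
    make: str
    model: str
    last_calibrated: Optional[datetime] = None

@dataclass
class LogicalSensor:
    """A data stream defined by location, height/depth, and parameter,
    independent of whichever physical sensor is currently deployed."""
    site: str
    parameter: str
    height_m: float              # height above (or depth below) ground
    method_code: str             # changes when the measurement method changes
    deployed: Optional[PhysicalSensor] = None

@dataclass
class Event:
    """A field event tied to a logical sensor via an event code."""
    sensor: LogicalSensor
    event_code: str              # e.g., 'CAL' for a routine calibration
    timestamp: datetime
    notes: str = ""

# 'Air temperature at 3 m above the ground at site A'
air_temp_a = LogicalSensor(site="A", parameter="air_temperature",
                           height_m=3.0, method_code="M1")

# Swapping hardware changes the physical sensor but keeps the
# logical sensor (and therefore the data stream) intact.
air_temp_a.deployed = PhysicalSensor("SN-1234", "Acme", "T-100")
cal = Event(air_temp_a, "CAL", datetime(2014, 4, 1, 10, 0),
            "routine two-point calibration")
```

A sensor replacement that changes the measurement method would update method_code instead; only an incomparable data stream would warrant starting a new LogicalSensor entirely.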
Methods
What should be tracked
Basic information on the site and hardware configuration needs to be recorded at installation time. During normal operations, event tracking can be done at several levels of granularity with respect to a research program: for example, at the level of the entire infrastructure, at a site, or at a sub-component of a site. The information about each event needs to be propagated or connected to all relevant data streams. Following are examples of what should be tracked at each of the above levels, in terms of impact on the recorded data:
Documentation at setup time
- Location: latitude, longitude, elevation (and/or depth), direction (e.g., camera facing north), and position relative to a reference point (e.g., tower base)
- Site description
- Site photos with metadata, photos of procedures (how to change ...), and photos of sensors (so others can easily recognize them)
- Manufacturer's specs and IDs of instruments (make, model, serial number, range, precision, detection limit, calibration coefficients)
- Instrumentation wiring diagrams (e.g., datalogger, multiplexer, sensor); these should also appear in the logger program comments as a header section describing the wiring channel by channel
- Power wiring diagrams (e.g., how many solar panels there are, whether they are wired in series or parallel, etc.)
- Network topology and IP addresses
- Software used for calculating measurements (other than datalogger)
- Instrumentation deployment date (the “go live” date)
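As an illustration, the setup checklist above could be captured in a single structured record stored alongside the data. Every field name and value below is a hypothetical example, not a prescribed schema:

```python
import json

# Hypothetical setup record; the field names mirror the checklist above,
# and all values are invented examples.
site_setup = {
    "site_id": "A",
    "location": {"lat": 44.2119, "lon": -122.2565, "elevation_m": 430.0,
                 "reference_point": "2 m east of tower base"},
    "site_description": "forested headwater catchment, south-facing slope",
    "photos": ["site_overview.jpg", "sensor_closeup.jpg"],
    "instruments": [{"make": "Acme", "model": "T-100", "serial": "SN-1234",
                     "range_c": "-40 to 60", "precision_c": 0.1}],
    "wiring": {"logger_channel_1": "air temperature sensor, 3 m"},
    "power": {"solar_panels": 2, "configuration": "parallel"},
    "network": {"ip": "192.0.2.15", "topology": "point-to-point radio"},
    "software": "in-house conversion scripts, v1.2",
    "go_live": "2014-04-01",
}

# Storing the record as JSON next to the raw data keeps it findable.
with open("site_A_setup.json", "w") as f:
    json.dump(site_setup, f, indent=2)
```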
Infrastructure events to track
- Changes to dataloggers, multiplexers, or datalogger programs (datalogger programs may be archived)
- Power problems, including battery voltage
- Enclosure temperature and humidity
- Platform maintenance (e.g., tower inspection, tramline leveling, etc.)
- Sampling protocol changes (e.g., timing, routine changing or upgrading of sensor parts, instrument change or replacement)
- RF/network performance degradation (prevents some/all data from being transmitted; track health/status of IP network devices using SNMP streams to Nagios, etc.)
Site level events to track
- Site disturbance (e.g., animal, human, weather caused)
- Site visits (presence of people may change measurements)
- Site maintenance (e.g., cutting brush, cutting trees, etc.)
- Changes to sensor network design, including additions or deletions of sensors
Sub-component events to track
Here, we include components like individual telemetry, power systems, instruments, sensor components, etc. While an individual component may not affect the whole system, it may still influence the interpretation of the measurements. To track individual components, a system of IDs may be developed for all components and supported by barcodes, geo-location tags, and microchip-encoded sensors.
- Sensor failures
- Sensor calibrations
- Sensor removal
- Sub-sensor addition, removal, or change (pluggable sub-sensor positions within the main sensor need to be noted and kept consistent)
- Sensor installation (replacement)
- Sensor maintenance (cleaning, change of parts)
- Sensor firmware upgrades
- Enclosure temperature and humidity
- Repositioning of sensor (e.g., move up during winter to be above the snowline)
- Normal (non-extreme) disturbances as they are noted and removed (e.g., sticks in weirs)
- Methodology changes (e.g., temperature radiation shield change)
How to track the information
Minimal documentation or logging of site events or problems might take a table structure such as:
| SiteID | DataloggerID | SensorID | date/time begin | date/time end | category | notes | person |
|--------|--------------|----------|-----------------|---------------|----------|-------|--------|
|        |              |          |                 |               | controlled vocabulary | | |
However, usually much more is recorded at each site visit; see the use cases. A controlled vocabulary is very important for categorizing events for later interpretation and flagging in the data set, and it should be established as early as possible with project-specific terms. Several database structures that maintain this information and connect it to the actual data are currently being proposed; these are discussed below in the use cases.
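A minimal sketch of such an event log as a SQLite schema, written here in Python. Table and column names are illustrative, not a prescribed design; the category column references the project's controlled vocabulary:

```python
import sqlite3

conn = sqlite3.connect("site_events.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS event_categories (      -- controlled vocabulary
    category TEXT PRIMARY KEY                      -- e.g., 'calibration'
);
CREATE TABLE IF NOT EXISTS site_events (
    event_id        INTEGER PRIMARY KEY,
    site_id         TEXT NOT NULL,
    datalogger_id   TEXT,
    sensor_id       TEXT,
    datetime_begin  TEXT NOT NULL,                 -- ISO 8601
    datetime_end    TEXT,
    category        TEXT NOT NULL REFERENCES event_categories(category),
    notes           TEXT,
    person          TEXT
);
""")

# Example entries; all values are invented for illustration.
conn.execute("INSERT OR IGNORE INTO event_categories VALUES ('calibration')")
conn.execute(
    "INSERT INTO site_events (site_id, datalogger_id, sensor_id, "
    "datetime_begin, datetime_end, category, notes, person) "
    "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("A", "CR1000-01", "SN-1234", "2014-04-01T10:00", "2014-04-01T10:30",
     "calibration", "two-point calibration; pre/post readings recorded", "JD"),
)
conn.commit()
```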
Best Practices
Establish and document procedures and protocols for site visits, installation of new sensors, maintenance activities, calibrations, and communication between field and data personnel. Such protocols may include pre-designed field sheets or software applications on field data entry devices, both of which should be synchronized with a central storage system to which all parties have access. Observations in the field may also be made and recorded by researchers and field personnel not directly involved in sensor system maintenance, and provisions should be made to capture that information and communicate it to responsible staff members.
In addition to capturing the field events mentioned above, it is good practice for data management staff to regularly monitor the data and confer with the field crew when anomalies are noticed. This will frequently bring up additional information that needs to be recorded in the field. It is also good practice for data management staff to visit the sites and periodically assist with field maintenance activities, in order to better understand and interpret field notes and to interact with the field staff generally.
All physical sensors should be uniquely identifiable. This may be achieved by recording a serial number, attaching a barcode, or using intelligent sensors that store their own metadata, which can be accessed upon connection. This is particularly important for sensors that are moved around or are pulled for mass calibration and redeployed. Sensor locations and calibration schedules should be tracked for each sensor by ID.
Document specific information during normal operations
- Either a pre-designed field sheet or a data entry app on a field device (tablet, laptop, etc.) helps ensure that every detail is recorded. It is also helpful to define a list of terms to describe the most common problems in a consistent way for later analysis.
- Document site ID, date, time, person(s), site conditions, tasks performed every time a site is visited.
- When updating datalogger programs, use a new program name for every change. It is advisable to save old datalogger programs.
- Use a changelog section in the datalogger program's comment header to note the date, author, and description of differences from the previous program (i.e., versioning/revision control).
- For sensor-specific events, note the sensor ID (barcodes, geo-location tags, microchip-encoded sensors such as NEON's 'Grape', or intelligent sensors that store and provide their own metadata upon connection).
Maintaining the records and linking to affected datastreams
As mentioned earlier, this record keeping is an exercise in communication between field and data personnel, as well as a way of communicating events to future data users. Hence, a good practice is to permanently link this information to the dataset. This may be achieved at different levels: a description in a metadata document, an indicator of a method for a data series or for each data value, or a flag indicating a one-time event at a certain data value. At a minimum, affected data should be flagged in a separate column within the data table.
Following the concept of a logical sensor, certain events should trigger the start of a new ‘method’ description when the data stream is affected by more than regular corrections can accommodate (e.g., a new sensor using a different method of measurement). In this case it is good practice to run the old and the new sensor side by side for a while to compare them. No hard and fast guidelines are available for deciding when a method change occurs versus when a whole new logical sensor stream (i.e., a different data set or data table) should be started. These concepts are well implemented in the CUAHSI ODM; please see those documents for further discussion.
Most events, however, can be handled by well documented flags (sensor calibration, site maintenance activities, disturbances, etc.). For documentation, flags in the data file should link to a database with more extensive explanations of the events.
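As a small illustration of that linkage, the sketch below uses a hypothetical flag code ('E17') in a data file excerpt that keys into a table of event explanations; in practice the lookup would be against the event database described above:

```python
import csv
import io

# Hypothetical excerpt of a published data file: each record carries a
# flag column, and non-empty flags key into fuller event explanations
# (e.g., rows in the site_events table sketched earlier).
raw = """datetime,air_temp_c,flag
2014-04-01T09:50,8.4,
2014-04-01T10:00,8.6,E17
2014-04-01T10:10,8.5,E17
"""

event_explanations = {"E17": "sensor calibration in progress; see field notes"}

for row in csv.DictReader(io.StringIO(raw)):
    note = event_explanations.get(row["flag"], "")
    print(row["datetime"], row["air_temp_c"], row["flag"], note, sep="\t")
```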
Managing sensor configurations
A number of sensors provide core measurements but also offer the ability to expand the sensor via one or more pluggable ports. When a sub-sensor is connected, its data are usually added to the main datastream as a voltage measurement that is converted to the measurement parameter's units post-transmission. Track both the number of sub-sensors and their port positions, since a change to either may cause problems in processing the data stream in middleware applications. For instance, a water quality instrument such as a CTD may provide ports for connecting sub-sensors for dissolved oxygen or turbidity measurements. Document, for example, that the DO sub-sensor is always connected to voltage port 1, the turbidity sensor always to voltage port 2, and voltage port 3 is empty.
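A configuration sketch for the CTD example is shown below; the port assignments, units, and conversion factors are invented for illustration. The point is that middleware consults an explicit port map before converting raw voltages, so any change to sub-sensor count or position must be reflected there:

```python
# Hypothetical port map for a CTD with pluggable sub-sensor ports.
# Changing either the number of occupied ports or their assignments
# without updating this map would break post-transmission processing.
PORT_MAP = {
    "voltage_port_1": {"parameter": "dissolved_oxygen", "unit": "mg/L",
                       "volts_to_units": lambda v: v * 10.0},   # example factor
    "voltage_port_2": {"parameter": "turbidity", "unit": "NTU",
                       "volts_to_units": lambda v: v * 100.0},  # example factor
    "voltage_port_3": None,  # documented as deliberately empty
}

def convert(port, volts):
    """Convert a raw voltage reading to parameter units using the port map."""
    cfg = PORT_MAP[port]
    if cfg is None:
        raise ValueError(port + " is documented as empty; unexpected reading")
    return cfg["parameter"], cfg["volts_to_units"](volts)

print(convert("voltage_port_1", 0.84))  # -> ('dissolved_oxygen', 8.4)
```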
See also middleware capabilities and QA/QC procedure documentation in those respective sections.
Case Studies
Case study: Data model for tracking sensors and sensor maintenance at the Utah Water Research Laboratory (J. Horsburgh, September 2013)
The database design diagram depicts the data model as it is used at the Utah Water Research Laboratory, Utah State University. It was developed by J. Horsburgh and his research team. Currently efforts are underway to extend the CUAHSI ODM to store this kind of metadata based on the experience with this data model.
Case study: Two example field sheets from the HJ Andrews Experimental Forest (http://andrewsforest.oregonstate.edu/) in Oregon.
- HJ Andrews stream gage check sheet (HJA_streamgage_check_sheet.pdf)
- HJ Andrews watershed check sheet (HJA_watershed_check_sheet.pdf)