= Tech Dive Webinars =
'''[[Interoperability and Technology/Past Tech Dive Webinar Series|Past Tech Dive Webinars]] (2015-2022)'''
  
==July 11th: Update on OGC GeoZarr Standards Working Group==
  
[https://www.briannapagan.com/ Dr. Brianna Rita Pagán]
  
Zarr is a cloud-native data format for n-dimensional arrays that enables access to data in compressed chunks of the original array. Zarr facilitates portability and interoperability on both object stores and hard disks.
  
As a generic data format, Zarr has increasingly become popular to use for geospatial purposes. As such, in June 2022, OGC endorsed Zarr V2.0 as an OGC Community Standard. The purpose of the GeoZarr SWG is to have an explicitly geospatial Zarr Standard (GeoZarr) adopted by OGC that establishes flexible and inclusive conventions for the Zarr cloud-native format that meet the diverse requirements of the geospatial domain. These conventions aim to provide a clear and standardized framework for organizing and describing data that ensures unambiguous representation.
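To make the chunked-access idea concrete, here is a minimal sketch of reading a slice from a Zarr store (the bucket path and variable name are hypothetical, and this assumes the <code>zarr</code> and <code>s3fs</code> Python packages):

<syntaxhighlight lang="python">
import zarr  # pip install zarr s3fs

# Open a hypothetical Zarr store on object storage; only metadata is read here.
store = zarr.open("s3://example-bucket/sst.zarr", mode="r")
sst = store["sea_surface_temperature"]  # hypothetical array name
print(sst.shape, sst.chunks)            # chunk layout comes from the store's metadata

# Slicing fetches (and decompresses) only the chunks that cover the request.
subset = sst[0, 100:200, 100:200]
</syntaxhighlight>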
[[File:ITI_July_2024.png|thumb|IT&I July 2024]]
'''<u>Recording</u>''':
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/l3o11uLdm7E?si=cOBaSNFpzuYjQU3P" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</html>
  
==June 13th: "Evaluation and recommendation of practices for publication of reproducible data and software releases in the USGS"==
  
[https://www.usgs.gov/centers/community-for-data-integration-cdi/science/evaluation-and-recommendation-practices#overview Alicia Rhoades, Dave Blodgett, Ellen Brown, Jesse Ross.]
  
USGS Fundamental Science Practices recognize data and software as separate information product types. In practice (e.g., in model applications), data are rarely complete without workflow code, and workflows are often treated as software that includes data. This project assembled a cross-mission-area team to build an understanding of current practices and develop a recommended path. The project conducted 27 interviews with USGS employees in a wide range of staff roles from across the bureau. The project also analyzed existing data and software releases to establish an evidence base of current practices for implemented information products. The project team recommends that a workshop be held at the next Community for Data Integration face-to-face meeting or another venue. The workshop should consider the sum total of this project's findings and plan specific actions the Community can take, or recommendations the Community can bring to the Fundamental Science Practices Advisory Council or others.
[[File:ITI_June_2024.png|thumb|IT&I June 2024]]
'''<u>Recording</u>''':
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/orjBINgaXag?si=a41TWK1vZZsXLyph" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</html>
  
==May 9th: "Achieving FAIR water quality data exchange thanks to international OGC water standards"==
  
[https://orcid.org/0000-0001-7656-1830 Sylvain Grellet (BRGM)]
  
Leveraging international standards (OGC, ISO), the OGC/WMO Water Quality Interoperability Experiment aims to bridge the gap in water quality data exchange (surface water and groundwater). This presentation will also give feedback on the methodology applied along the way: how to build on existing international standards (OGC/ISO 19156 Observations, Measurements and Samples; OGC SensorThings API) while answering domain needs and maximizing the community effect.
  
[[File:ITI_May_2024.png|thumb|IT&I FAIR water quality data]]
'''<u>Recording</u>''':
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/AlYnSNWJYy0?si=nQfGtfJ51cJM60v8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</html>
  
'''<u>Slides</u>'''
[[File:FAIR_water_quality_data_OGC_Grellet-compressed.pdf|Slides from May 2024 IT&I]]
  
'''<u>Minutes:</u>'''
  
*Emphasis on international water data standards.
*Introduced OGC – international standards with contributions from public, private, and academic stakeholders.
*The Hydrology Domain Working Group has been around since circa 2007.
**This presentation is about its latest activity, the Water Quality Interoperability Experiment.
*Relying on a baseline of conceptual and implementation modeling from the Hydro Domain Working Group and more general community work like Observations, Measurements and Samples.
*Considering both in-situ (sample observations) and ex-situ (laboratory) observations.
*The core data models support everything the IE has needed, with some key extensions; the models are designed to support extension.
*In terms of FAIR access, SensorThings is very capable for observational data, and OGC API - Features supports geospatial needs well.
*Introduced a separation between "sensor" and "procedure" – the sensor is the thing you use, the procedure is the thing you do.
  
==April 11th: "A Home for Earth Science Data Professionals - ESIP Communities of Practice"==
[https://www.esipfed.org/about/people/#people_bios-1-4 Allison Mills]

Earth Science Information Partners (ESIP) is a nonprofit funded by cooperative agreements with NASA, NOAA, and USGS. To empower the use and stewardship of Earth science data, we support twice-annual meetings, virtual collaborations, microfunding grants, graduate fellowships, and partnerships with 170+ organizations. Our model is built on an ever-evolving quilt of collaborative tools: guest speaker Allison Mills will share insights on the behind-the-scenes IT structures that support our communities of practice.
[[File:ITI_April2024.png|thumb|IT&I ESIP Communities of Practice]]
  
'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/6loiBWpgMGE?si=yvIMfhKNbDrX_cn_" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</html>
  
'''<u>Minutes:</u>'''
*Going to talk about the IT infrastructure behind the ESIP cyber presence.
*Shared the ESIP Vision and Mission – BIG goals!!
*Played a video about what ESIP is as a community.
*But how do we actually "build a community"?
*Virtual collaborations need digital tools.
*<nowiki>https://esipfed.org/collaborate</nowiki>
**Needs a front door and a welcome mat!
**"It doesn't matter how nice your doormat is if your porch is rotten."
**Tools: homepage, Slack, update email, and people directory.
**"We take easy collaboration for granted."
*<nowiki>https://esipfed.org/lab</nowiki>
**Microfunding – build in time for learning objectives.
**RFP system, GitHub, figshare, people directory.
**"Learning objectives are a key component of an ESIP Lab project."
*<nowiki>https://esipfed.org/meetings</nowiki>
**Website, agendas, Eventbrite, QiqoChat + Zoom, Google Docs.
  
Problem: our emails bounce! Needed to get in the weeds of DNS and "DMARC" policies.
  
Domain-based Message Authentication, Reporting, and Conformance (DMARC)
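As an illustration (not part of the talk): a DMARC policy is just a DNS TXT record at <code>_dmarc.&lt;domain&gt;</code>. A minimal sketch of checking one with the <code>dnspython</code> package, using a placeholder domain:

<syntaxhighlight lang="python">
import dns.resolver  # pip install dnspython

# Look up the DMARC policy TXT record for a placeholder domain.
answers = dns.resolver.resolve("_dmarc.example.org", "TXT")
for rdata in answers:
    txt = b"".join(rdata.strings).decode()
    if txt.startswith("v=DMARC1"):
        # e.g. "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.org"
        print(txt)
</syntaxhighlight>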
  
Problem: Twitter is now X

Decided to focus on platforms where engagement is higher.
  
Problem: Old MediaWiki pages are way outdated.
  
Focus on creating new web pages that replace, update, and maintain community content.

Problem: "I can't use platform XYZ"

Try to go the extra mile to adapt so that these issues are overcome.
  
==March 15th: "Creating operational decision ready data with remote sensing and machine learning."==
  
[https://www.voyagersearch.com/ Brian Goldin]
  
[[File:ITI_March2024.png|thumb|IT&I Operational Remote Sensing 2024]]
As organizations grapple with information overload, timely and reliable insights remain elusive, particularly in disaster scenarios. Voyager's participation in the OGC Disaster Pilot 2023 aimed to address these challenges by streamlining data integration and discovery processes. Leveraging innovative data conditioning and enrichment techniques, alongside machine learning models, Voyager transformed raw data into actionable intelligence. Through operational pipelines, we linked diverse datasets with machine learning models, automating the generation of new observations to provide decision-makers with timely insights during critical moments. This presentation will explore Voyager's role in enhancing disaster response capabilities, showcasing how innovative integration of technology along with open standards can improve decision-making processes on a global scale.
  
'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/TFGLnVljAlY?si=LzpWoMWZx_3YMk0H" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</html>
  
'''<u>Minutes:</u>'''
Providing insights from the OGC Disaster Pilot 2023.

The goal of the work is to provide timely and reliable insights based on huge volumes of data.

"Overcome information overload in critical moments"
  
Example: 2022 Callao Oil Spill in Peru
  
A tsunami hit an oil tanker transferring oil to land.
  
Possibly useful data from many remote sensing products, but it is hard to combine them all in the moment of responding to an oil spill. (A slide shows dozens of data sources.)

Goal: build a centralized and actionable inventory of data resources.
  
#Connect and read data,
#build pipelines to enrich data sources,
#populate a registry of data sources (a sketch of one such registry record follows this list),
#construct a processing framework that can operate over the registry,
#build a user experience framework that can execute the processing framework.
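To make step 3 concrete, here is a minimal sketch of what one registry record might look like; the field names are illustrative assumptions, not Voyager's actual schema:

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass
class DataSourceRecord:
    """One entry in a hypothetical registry of data sources."""
    source_id: str                 # stable identifier within the registry
    title: str
    access_url: str                # where the processing framework reads from
    formats: list[str] = field(default_factory=list)
    keywords: list[str] = field(default_factory=list)

# Hypothetical entry for imagery relevant to the Callao oil spill example below.
record = DataSourceRecord(
    source_id="callao-oil-spill-imagery",
    title="Satellite imagery over the 2022 Callao oil spill",
    access_url="https://example.org/stac/collections/callao",
    formats=["COG"],
    keywords=["oil spill", "Peru", "disaster response"],
)
</syntaxhighlight>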
  
Focus is on an adaptable processing framework for model execution.
  
At this scale and for this purpose, it's critical to have a receipt of what was completed, with basic results, in a searchable registry. This allows model results to trigger notifications or be searched against a record of previous model runs.
  
For the pilot: focused on wildfire, drought, oil spill, and climate.

"What indicators do decision makers need to make the best decisions?"
  
What remote sensing processing models can be run in operations to provide these indicators?
  
Fire Damage Assessment
  
Detected building footprints using a remote sensing building detection model.

Can run a fire detection model in real time, cross-referenced with building footprints.
  
Need for stronger / more consistent "model metadata"
  
Need data governance/fitness for use metadata
  
Need better standards that provide linkages between systems.
  
Need better public-private partnerships.

Need a better data licensing and sharing framework.
  
"This is not rocket science, it's really just building a good metadata registry."
  
==February 15th: "Creating Great Data Products in the Cloud"==
  
[https://radiant.earth/about/ Jed Sundwall]
[[File:ITI_Feb2024.png|thumb|IT&I Cloud Data Products 2024]]
  
Competition within the public cloud sector has reliably led to reductions in object storage costs, continual improvements in performance, and a commodification of services that have made cloud-based object storage a viable way to share almost any volume of data. Assuming that this is true, what are the best ways to create data products in a cloud environment? This presentation will include an overview of lessons learned at Radiant Earth as they've advocated for the adoption of cloud-native geospatial data formats and best practices.
  
'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/4cWGJcOcAEA?si=NYWSSB7DiGK2nrMN" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</html>
  
'''<u>Minutes:</u>'''
  
Jed is executive director of Radiant Earth; the focus is on human cooperation on a global scale.
  
Two major initiatives – the Cloud-Native Geospatial Foundation and Source Cooperative.

Cloud-native geospatial is about adoption of efficient approaches.

Source is about providing easy and accessible infrastructure.
  
What does "cloud native" mean? https://guide.cloudnativegeo.org/ – partial reads, parallel reads, easy access to metadata.
  
Leveraging market pressure to make object stores cheaper and more scalable.
  
"Pace Layering" – https://jods.mitpress.mit.edu/pub/issue3-brand/release/2
  
Observation: Software is getting cheaper and cheaper to build – it gets harder to create software monopolies in the way Microsoft or ESRI have.
  
This leads to a lot of diversity and a proliferation of "primitive" standards and de facto interoperability arrangements.

'''Source Cooperative'''

Borrowed a lot from GitHub architecturally.

Repository with a README.
  
Browse the contents in the browser.
  
Within this, what makes a great data product?
  
"Our data model is the Web"
  
People will deal with messy data if it's super valuable.
  
Case in point: IRS 990 data on non-profits was shared in a TON of XML schemas. People came together to organize it and work with it.
  
Story about a building footprint dataset released in the morning that had been matched up into at least four products by the end of the day.

Shout out to:
https://www.geoffmulgan.com/ and https://jscaseddon.co/

https://jscaseddon.co/2024/02/science-for-steering-vs-for-decision-making/

"We don't have institutions that are tasked with producing great data products and making them available to the world!"
  
https://radiant.earth/blog/2023/05/we-dont-talk-about-open-data/
[[File:Meme hackathons.png|thumb]]
  
"There's a server somewhere where there's some stuff" – This is very different from a local hard drive where everything is indexed.  
  
A cloud native approach puts the index (metadata) up front in a way that you can figure out what you need.
  
A file's metadata gives you the information you need to ask for just the part of the file that you actually need.

But there are other files where you don't need to do range requests. Instead, the file is broken up into many, many objects that are indexed.

In both cases, the metadata is a map to the content. Figuring out the right size of the content's bits is kind of an art form.
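A minimal sketch of the first pattern, an HTTP range request that pulls only the header bytes of a large object (the URL and byte count are hypothetical):

<syntaxhighlight lang="python">
import requests

# Fetch just the first 16 KiB of a hypothetical cloud-optimized file --
# enough to read its internal index without downloading the whole object.
url = "https://example-bucket.s3.amazonaws.com/big-image.tif"
resp = requests.get(url, headers={"Range": "bytes=0-16383"})
assert resp.status_code == 206  # 206 Partial Content: the server honored the range
header_bytes = resp.content     # parse this to learn where the tiles/chunks live
</syntaxhighlight>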
https://www.goodreads.com/en/book/show/172366
  
Q: > I was thinking of your example of Warren Buffett's daily spreadsheet (gedanken experiment)... How do you see data quality or data importance (incl. data provider trustworthiness) being effectively conveyed to users?
  
A: We want to focus on verification of who people are and relying on reputational considerations to establish importance.  
  
Q: > I agree with you about the importance of social factors in how people make decisions. What do you think the implications are of this for metadata for open data on the cloud?
  
A: Tracking data's impact and use is important. Using metadata as a concrete record of observations and how they have been used is where this becomes important.
  
Q: > What about the really important kernels of information that we use to, say, calibrate remote sensing products, that are really small but super important? How do we make sure those don't get drowned out?

A: We need to be careful not to overemphasize "everything is open" if we can't keep really important datasets in the spotlight.
  
==January 11th: "Using Earth Observations for Sustainable Development"==
"Using Earth Observation Technologies when Assessing Environmental, Social, Policy and Technical factors to Support Sustainable Development in Developing Countries"

[https://www.media.mit.edu/people/shariful/overview/ Sharif Islam]
  
Earth Observation (EO) technologies, such as satellites and remote sensing, provide a comprehensive view of the Earth's surface, enabling real-time monitoring and data acquisition. Within the environmental domain, EO facilitates tracking land use changes, deforestation, and biodiversity, thereby supporting evidence-based conservation efforts. Social factors, encompassing population dynamics and urbanization trends, can be analyzed to inform inclusive and resilient development strategies. EO also assumes a crucial role in policy formulation by furnishing accurate and up-to-date information on environmental conditions, thereby supporting informed decision-making. Furthermore, technical aspects, like infrastructure development and resource management, benefit from EO's ability to provide detailed insights into terrain characteristics and natural resource distribution. The integration of Earth Observation across these domains yields a comprehensive understanding of the intricate interplay between environmental, social, policy, and technical factors, fostering a more sustainable and informed approach to development initiatives. In this presentation, I will discuss our lab's work in Bangladesh, Angola, and other countries, covering topics such as coastal erosion, drought, and air pollution.

'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/PhEg9bTd1JU?si=EUfOaz3nzEFdOOsb" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</html>
  
'''<u>Minutes:</u>'''
  
Plan to share data from NASA and USGS that was used in his PhD work.
  
Applied the EVDT (Environment, Vulnerability, Decision, Technology) framework.
  
Studied a variety of hazards – coastal erosion, air pollution, drought, deforestation, etc.

'''Coastal Erosion in Bangladesh:'''
*Displacement, loss of land, major economic drain
*Studied the situation in the Bay of Bengal
*Used Landsat to study coastal erosion from the 1980s to the present
*Coastal erosion rates upwards of 300 m/yr!
*Combined survey data and Landsat observations
  
'''Air Pollution and mortality in South Asia'''
  
*Able to show change in air pollution over time using remote sensing
  
'''Drought in Angola and Brazil'''
  
Used SMAP (Soil Moisture Active Passive)  
  
Developed the same index as the US Drought Monitor
  
Able to apply SMAP observations over time.

Applied a social vulnerability model using these data to identify vulnerable populations.

'''Deforestation in Ghana'''
  
Used Landsat to identify land converted from forest to mining and urban uses.

Significant amounts of land converted to mining (gold mining and others).
  
'''Water hyacinth in a major fishery lake in Benin.'''
  
Impact on fishery and transportation
  
Rotting hyacinth is a big issue
  
Helped develop a decision support system (DSS) to guide management practices.

'''Mangrove loss in Brazil'''

Combined information from economic impacts, urban plans, and remote sensing to help build a decision support tool.
  
==November 9th: "Persistent Unique Well Identifiers: Why does California need well IDs?"==  
[[File:ITI_Nov_Wells.png|thumb|IT&I CA Wells November 2023]]
  
[https://cawaterdata.org/teams/hannah-ake/ Hannah Ake]
  
Groundwater is a critical resource for farms, urban and rural communities, and ecosystems in California, supplying approximately 40 percent of California's total water supply in average water years and, in some regions of the state, up to 60 percent in dry years. Regardless of water year type, some communities rely entirely on groundwater for drinking water supplies year-round. However, California lacks a uniform well identification system, which has real impacts on those who manage and depend upon groundwater. Clearly identifying wells, both existing and newly constructed, is vital to maintaining a statewide well inventory that can be more easily monitored to ensure the wellbeing of people, the environment, and the economy, while supporting the sustainable use of groundwater. A uniform well ID program has not yet been accomplished at a scale like California's, but it is achievable, as evidenced by great successes in other states. Learn more about why a well ID program will be so important to tackle in California and offer your thoughts about how to untangle some of the particularly thorny technical challenges.
  
'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/dvxOHh86QVQ?si=GtgSG62nbj2aVMR0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</html>
  
'''<u>Minutes:</u>'''
*Groundwater is 40-60% of California's water supply.
*~2 million groundwater wells!
*As many as 15k new wells are constructed each year.

The Sustainable Groundwater Management Act frames groundwater sustainability agencies that develop groundwater sustainability plans.
  
There is a need to account for groundwater use to ensure the plans are achieved.
  
Problem: There is no dedicated funding (or central coordinator) to create and maintain a statewide well inventory.
*Department of Water Resources develops standards
*State Water Resources Control Board has statewide ordinance
*Cities and local districts adopt local ordinance
*Local enforcement agency administers and enforces ordinance
  
There are a lot of IDs in use: five different identifiers can be used for the same well.
  
Solution: Create a statewide well inventory built on a compound ID (a single ID that stands in for many others) drawn from multiple ID systems – a meaningless identifier that links the others to each other.
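A minimal sketch of the "meaningless compound ID" idea, assuming a simple in-memory registry (the agency ID fields are illustrative, not the actual program design):

<syntaxhighlight lang="python">
import uuid

# Hypothetical in-memory registry: opaque statewide ID -> known agency IDs.
registry: dict[str, dict[str, str]] = {}

def mint_well_id(agency_ids: dict[str, str]) -> str:
    """Mint an opaque statewide ID that links existing identifiers for one well."""
    well_id = uuid.uuid4().hex[:12]  # carries no embedded meaning, by design
    registry[well_id] = dict(agency_ids)
    return well_id

# One physical well known by several (illustrative) identifiers.
wid = mint_well_id({
    "dwr_wcr": "WCR2020-004321",
    "local_permit": "KERN-18-0099",
    "usgs_site": "323456118123401",
})
print(wid, "->", registry[wid])
</syntaxhighlight>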
  
There are a number of states with well ID programs.
*Trying to learn from what other states have done.
Going forward with some kind of identifier system that spans all local and federal identifier systems.
*Q: Will this include federal wells? – Yes!
*Q: Will this actually be a new well identifier minted by someone? – Yes.
*Q: If someone drills a well, do they have to register it? – Yes, but it's the local enforcing agency that collects the information.
*Q: What if a well is deepened? Do we update the ID? – This has caused real problems in the past. We end up with multiple IDs for the same hole through time.
**Seems to make sense to mint a new one to keep things simple.
  
Link mentioned early in the talk:
https://groundwateraccounting.org/
  
Referenced during Q&A:
  
https://docs.ogc.org/per/20-067.html#_cerdi_vvg_selfie_demonstration
  
==October 26th: "Improving standards and documentation publishing methods: Why can’t we cross the finish line?"==
  
[[File:ITI_Oct_ogc.png|thumb|IT&I OGC October 2023]]
  
[https://www.ogc.org/about/team/scott-simmons/ Scott Simmons]
  
OGC and the rest of the Standards community have been promising for YEARS that our Standards and supporting documentation will be more friendly to the users that need this material the most. Progress has been made on many fronts, but why are we still not finished with a promise made in 2015 that all OGC Standards will be available in implementer-friendly views first, ugly virtual printed paper second? This topic bugs me as much as it bugs our whole community. Some of the problems are institutional (often from our Government members across the globe), others are due to lack of resources, but I think that most are due to a lack of clear reward to motivate people to do things differently. Major progress is being made in some areas. The OGC APIs have landing pages that include focused and relevant content for users/implementers, and it takes some effort to find the owning Standard. OGC Developer Resources are growing quickly with sample code, running examples, and multiple views of API resources in OpenAPI, Swagger, and ReDoc.

[https://portal.ogc.org/files/?artifact_id=106445 Slides]

'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/HJ7TbhcVs-U?si=hcUmpoIaTz4zNodo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</html>
  
'''<u>Minutes:</u>'''
(missing first ~15 minutes of recording -- apologies)

Circa 2015 OGC GeoRabble
  
*Took a critical look at the status of publishing standards.
*Couldn't we format these specs in a kind of tutorial form?
  
*Lots of snippets and tutorial content in the specs.
**E.g. http://opengeospatial.github.io/e-learning/index.html
*Multiple representations of specifications – that OGC staff could maintain
  
9 years later
  
*What makes this hard?
**Standards must be unambiguous AND procurable.
**The modular specification is a model for this balance.
  
Standards are based around testable requirements that relate to conformance classes.
  
SwaggerHub and ReDoc as ways to show a richer collection of information for multiple users.

Specifications are much more modular (core and extensions).
  
Developer website: https://developer.ogc.org/

Going to be including persistent demonstrators (example implementations) that are "in the wild".

https://www.ogc.org/initiatives/open-science/

Moving to an "OGC Building Blocks" model, registered across multiple platforms and linked to lots of examples.

Building blocks are richly described and nuanced, but linked back to specific requirements in a specification.

https://blocks.ogc.org/

https://sn80uo0zmbg.typeform.com/to/gcwDDNB6?typeform-source=blocks.ogc.org

A lot of this focused on APIs – what about data models?

*Worked on APIs first because it was current. Also thinking about how to apply similar concepts to data models.
==September 14th: "Water data standardization: Navigating the AntiCommons" ==
[[File:ITI_Sept_IoW_Kyle-Onda.png|thumb|IT&I IoW September 2023]]
[https://internetofwater.org/about/people/kyle-onda/ Kyle Onda]

We all know interoperability rests on data standards and API standards. Many open standards are less prominent in the open water data space than proprietary solutions. This is because proprietary data management solutions are often bundled with very easy-to-use implementing software and, more importantly, client software that addresses basic use cases. We're giving people blueprints when they need houses. Community standards-making processes should invest in end-user tools if they want to gain traction. The good news is that some of the newest generation of standards is much easier to develop around, which has led to reference implementations that are much easier to build end-user tools around than before.
 +
 
 +
'''<u>Recording</u>''':<br />
 +
<html>
 +
<iframe width="560" height="315" src="https://www.youtube.com/embed/miFwXB-E1V8?si=Xr2SF_okCxLv2lL2" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
 +
</html>
 +
 
 +
'''<u>Minutes:</u>'''
 +
 
 +
AntiCommons – name comes from social science background
 +
 
 +
Tragedy of the commons - two solutions, enclose (privatize) or regulate
 +
 
 +
Tragedy of the anticommons - as opposed to common resources, these are resources that don't get used up – as in open data. Inefficiency and under utilization is common.
 +
 
 +
Two solutions. expropriation (like imminent domain or public data), incentivize
 +
 
 +
Example – consolidate urban sprawl into higher density housing to get more open space and room for business.
 +
 
 +
Introducing the Internet of Water.
 +
 +
Noting that in PNW, there are >800 USGS stream gages and >400 from other organizations. Only USGS are very broadly known about.
 +
 
 +
Thinking about open data as an anticommons – environmental data is normally publically available but only in ways that are convenient to data providers and the software that they use.
 +
 
 +
Discussion of the variety of standardized vs bespoke modes of data dissemination.
 +
 
 +
Example of Nebraska – GUI with download and separate custom API
 +
USGS has the same basic scheme where an ETL goes from data management software to a custom web service system.
 +
 
 +
What's going on here? Limited resources lead to focus on existing users and needs and administration ease.
 +
 
 +
Tools that meet this need tend to not focus on the needs of new user and standardization.
 +
 
 +
Most organizations don't need standards – they need software. Both server and CLIENT software.
 +
 
 +
New specs and efforts ARE heading in this direction.
 +
OGC-API, SensorThings, etc.
 +
 
 +
Promising developments around proxying non standard APIs and in use of structured data "decoration" to make documentation more standard.
 +
 
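As a small illustration of why this newer generation of standards is easier to build client tools against, here is a minimal sketch of querying an OGC SensorThings API service with plain HTTP from Python. The base URL is a placeholder rather than a service from the talk; any SensorThings v1.1 endpoint exposes the same entity model and OData-style query options.

<syntaxhighlight lang="python">
# Minimal sketch: list monitoring sites and their datastreams from a
# SensorThings API service. The endpoint URL below is hypothetical.
import requests

BASE = "https://example.org/sta/v1.1"  # placeholder SensorThings endpoint

resp = requests.get(
    f"{BASE}/Things",
    params={"$top": 5, "$expand": "Datastreams($select=name,unitOfMeasurement)"},
    timeout=30,
)
resp.raise_for_status()
for thing in resp.json()["value"]:
    print(thing["name"], [ds["name"] for ds in thing.get("Datastreams", [])])
</syntaxhighlight>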

== August 10th: "Learning to love the upside down: Quarto and the two data science worlds" ==

[[File:ITI_August_Quarto.png|thumb|IT&I Quarto August 10th]]

[https://cscheid.net/v2/ Carlos Scheidegger]

There are two wonderful data science worlds. You can be a jupyter expert: you work on jupyter notebooks, with access to myriad Julia, Python, and R packages, and excellent technical documentation systems. You can also be a knitr and rmarkdown expert: you work on rmarkdown notebooks, with access to myriad Julia, Python, and R packages, and excellent technical documentation systems.

But what if your colleague works on the wrong side of the fence? What if you spent years learning one of them, only to find that the job you love is in an organization that uses the other? In this talk, I'm going to tell you about quarto, a system for technical communication (software documentation, academic papers, websites, etc.) that aspires to let you choose any of these worlds.

If you're one to worry about Conway's law and what this two-worlds situation does to an organization's talent pool, or if you live in one side of the world and want to be able to collaborate with folks on the other side, I think you'll find something of value in what I have to say.

I'm also going to complain about software, mostly the one I write. Mostly.

Slides: https://cscheid.net/static/2023-esip-quarto-talk/

'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/uQ3yZjM1bj8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</html>

'''<u>Minutes:</u>'''

Carlos was in a tenured computer science position at the University of Arizona.

Hating bad software makes a software developer a good developer.

Two data science worlds:

tidyverse (with R and RMarkdown):
*Cohesive; hard to run things out of order.
*Doesn't store output.

Jupyter (Python and notebooks):
*Notebook saves intermediate outputs.
*State can be messed up easily – cells aren't linear steps.

Quarto:
*Acts as a compatibility layer for the tidyverse and jupyter ecosystems.
*Emulates RMarkdown with multi-language support.

Rant:

Quarto gets you a webpage and PDF output.
– note that the PDF requirement is not great.

Quarto is kind of just a huge wrapper around pandoc.

Quarto documentation is intractably hard to build out.

Consider Conway's Law – an organization that creates a large system will create a system that is a copy of the organization's communication structure.
– Quarto is meant to allow whole organizations with different technical tools to exist in the same communication structure (same system).

Quarto tries to make kinda hard things easy while not making really hard things impossible.

Quarto can convert jupyter notebooks (with cached outputs) into markdown and vice versa.

The issue is, you need to know a variety of other languages (YAML, CSS, JavaScript, LaTeX, etc.)
– "unavoidable but kinda gross"

You can edit Quarto in RStudio, VS Code, or any text editor.

For collaboration, Quarto projects can use jupyter or knitr engines. E.g., in a single website, you can build one page with jupyter and another page with knitr (see the sketch after these minutes).
– you can embed an ipynb cell in a notebook.

Orchestrating computation is hard – quarto has to take input from existing computation – which can be awkward / complex.

Quarto is extensible – CSS themes, OJS for interactive webpages, Pandoc extensions.

Can also write your own shortcodes.
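To make the engine-per-page idea concrete, here is a minimal sketch of a Quarto page that runs under the jupyter engine; the file name and cell contents are invented for illustration. A sibling page in the same website project could set <code>engine: knitr</code> in its front matter instead, and Quarto would stitch both into one site.

<syntaxhighlight lang="markdown">
---
title: "A Python page"   # e.g. python-page.qmd (hypothetical file name)
jupyter: python3         # this page renders with the jupyter engine
---

Some narrative text, then an executable cell:

```{python}
#| echo: true
import statistics
print(statistics.mean([1, 2, 3]))
```
</syntaxhighlight>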
==July 13th 2023: "Tools to Assist Simulation Based Researchers in Deciding What Project Outputs to Preserve and Share"==

[[File:ITI_July_EarthCube.png|thumb|IT&I EarthCube Model RCN July 13th]]

[https://staff.ucar.edu/users/schuster Doug Schuster]

This presentation will highlight findings from the NSF EarthCube Research Coordination Network project titled “What About Model Data? - Best Practices for Preservation and Replicability” (https://modeldatarcn.github.io/), which suggest that most simulation-based research projects only need to preserve and share selected model outputs, along with the full simulation experiment workflow, to communicate knowledge. Challenges related to meeting community open science expectations will also be highlighted.

Slides available here: [[File:ModelDataRCN-2023-07-13-ESIP-IT&I_v2.pdf|thumb]]

https://modeldatarcn.github.io/

Rubric: https://modeldatarcn.github.io/rubrics-worksheets/Descriptor-classifications-worksheet-v2.0.pdf

''Open science expectations for simulation based research. Frontiers in Climate, 2021. https://doi.org/10.3389/fclim.2021.763420''

'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/ulk0mQSQNzQ" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</html>

'''<u>Minutes:</u>'''

Primary motivation: What are data management requirements for simulation projects?

Project ran May 2020 to Jul 2022.

We clearly shouldn't preserve ALL data / output from projects. It's just too expensive.

Project broke down the components of data associated with a project: forcings, code/documentation, selected outputs.

But what outputs to share?!?

Project developed a rubric of what to preserve / share:

"Is your project a data production project or a knowledge production project?"

"How hard is it to rerun your workflow?"

"How much will it cost to store and serve the data?"

Rubric gives guidance on how much of a project's outputs should be preserved.

So this is all well and good, but it falls onto PIs and funding agencies.

What are the ethical and professional considerations of these trade-offs?

What are the incentives in place currently? Sharing is not necessarily seen as a benefit to the author.
==June 8 2023: "Reproducible Data Pipelines in Modern Data Science: what they are, how to use them, and examples you can use!"==

[[File:ITI_June_Pipeline.png|thumb|IT&I Reproducible Pipelines June 8th]]

[https://www.usgs.gov/staff-profiles/julie-padilla Julie Padilla]

Modern scientific workflows face common challenges, including accommodating growing volumes and complexity of data and the need to update analyses as new data becomes available or project needs change. The use of better practices around reproducible workflows and automated data analysis pipelines can help overcome these challenges and more efficiently translate open data to actionable scientific insights. These data pipelines are transparent, reproducible, and robust to changes in the data or analysis, and therefore promote efficient, open science. In this presentation, participants will learn what makes a reproducible data pipeline and what differentiates it from a workflow, as well as the key organizational concepts for effective pipeline development.

'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/K8EOY_HLlho" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</html>

'''<u>Minutes:</u>'''

Motivation:
*What if we find bad data in an input?
*What if we need to rerun something with new data?
*Can we reproduce findings from previous work?

Need to be able to "trace" what we did, and the way we do it needs to be reliable.

A "workflow" is a sequence of steps going from start to finish of some activity or process.

A "pipeline" is a programmatic implementation of a workflow that requires little to no interaction.

In a pipeline, if one workflow step or input gets changed, we can track what is "downstream" of it.

Note that different steps of the workflow may be influenced by different people. So a given step of a pipeline could be contributed by different programmers. But each person would be contributing a component of a consistent pipeline.

There is a difference between writing scripts and building a reproducible pipeline.
Better to break it into steps: script -> organize -> encapsulate into functions -> assemble pipeline.

Focus is on R targets – snakemake is the equivalent in Python (a minimal sketch follows).
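For readers on the Python side, here is a minimal snakemake sketch of the same idea; all file and script names are invented for illustration. Like targets, snakemake reruns only the outputs that are downstream of a change.

<syntaxhighlight lang="python">
# Snakefile – a minimal pipeline sketch (paths and scripts are hypothetical).
# snakemake compares file timestamps and reruns only what is "downstream"
# of an input that changed.

rule all:
    input: "results/summary.csv"

rule clean_data:
    input: "data/raw.csv"
    output: "results/clean.csv"
    shell: "python scripts/clean.py {input} {output}"

rule summarize:
    input: "results/clean.csv"
    output: "results/summary.csv"
    shell: "python scripts/summarize.py {input} {output}"
</syntaxhighlight>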
Key concepts for going from script to workflow:
*Functions stored separately from the workflow script.
*Steps clearly organized in the script.
*Steps can be wrapped in pipeline steps to track them.

Pipeline software keeps track of whether things have changed and what needs to be rerun.
It allows visualization of the workflow inputs, functions, and steps.

How do steps of the pipeline get related to each other?
They are named, and the target names get passed to downstream targets.

Chat questions about branching:
Dynamic branching lets you run the same target for a list of inputs in a map/reduce pattern.

Pipelines can have outputs that are reports that render pipeline results in a nice form.

Pipeline templates:
A pipeline can start from a predetermined standard template.
Helps enforce best practices and gives a quick and easy starting point.

Note that USGS data science has a template for a common pattern.

What's a best practice for tracking container function and reproducibility?
Versioned Git / Docker for code and environment.
For data, it is context dependent. Generally, try to pull from citeable / persistent sources. If sources are not persistent, you can cache inputs for later reuse / reproducibility.

Data change detection / caching is a really tricky thing, but many people are working on the problem. https://cboettig.github.io/contentid/, https://dvc.org/

https://learning.nceas.ucsb.edu/2021-11-delta/session-3-programmatic-metadata-and-data-access.html#reproducible-data-access
==11 May 2023: "Software Procurement Has Failed Us Completely, But No More!"==

[[File:ITI_May_Software.png|thumb|IT&I Software Procurement May 11th]]

[https://waldo.jaquith.org/ Waldo Jaquith]

The way we buy custom software is terrible for everybody involved, and has become a major obstacle to agencies achieving their missions. There are solutions, if we would just use them! By combining the standard practices of user research, Agile software development, open source, modular procurement, and time & materials contracts, we can make procurement once again serve the needs of government.

Slides available here: [[File:2023-05-Jaquith.pdf|thumb]]

'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/V4-3WZ5hN5k" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</html>

'''<u>Minutes:</u>'''

Recognizing that software procurement is one of the primary ways that software / IT systems advance, Waldo went into trying to understand that space as a software developer.

'''Healthcare.gov'''

Contract given to CGI Federal for $93M – cost ~$1.7B by launch.<br>
Only low single-digit numbers of people actually made it through the system.<br>
Senior leaders were given the impression that things were in good shape.<br>
The developers working on the site knew it wasn't going to work out (per the IG report).<br>
*strategic misrepresentation – things are represented as rosier as you go up the chain of command<br>
On launch, things went very badly, but the recovery was actually quite quick and positive.<br>

Waldo recommends reading the IG report on healthcare.gov.
This article ''("The OIG Report Analyzing Healthcare.gov's Launch: What's There And What's Not", Health Affairs Blog, February 24, 2016. https://dx.doi.org/10.1377/hblog20160224.053370<nowiki/>)'' provides a path to the IG report ''(HealthCare.gov - CMS Management of the Federal Marketplace: An OIG Case Study (OEI-06-14-00350), https://oig.hhs.gov/oei/reports/oei-06-14-00350.pdf<nowiki/>)'' and additional perspective.

'''Rhode Island Unified Health Infrastructure'''

($364M to Deloitte) "Big Bang" deployment – they let the people running the old systems go on the day of the new system launch.
They "outsourced" a mission-critical function to a contractor.

We don't tend to hear about relatively smaller projects because they are less likely to fail and garner less attention.

Outsourcing as it started in the ~90s was one thing when the outsourcing was for internal agency software. It's different when the systems are actually public interfaces to stakeholders or are otherwise mission critical.

'''[[File:2023-05-Jaquith.pdf|See slides with big numbers and study sources!!]]'''

It's common for software to meet contract requirements but NOT meet end-user needs.

Requirements complexity is fractal. There is no complete / comprehensive set of requirements.

… federal contractors interpreting requirements like children trying to resist getting out the door ...

There is little to no potential to update or improve requirements due to contract structure.

'''Demos not memos!'''

Memorable statements:
*Outsourced ability to accomplish the agency's mission.
*Load-bearing software systems on which the agency depends to complete its mission.
*The mission of many agencies is mediated by technology.

But no more! – approach developed by 18F

System of six parts –

1. User-centered design<br>
2. Agile software development<br>
3. Product ownership<br>
4. DevOps<br>
5. Building out of loosely coupled parts<br>
6. Modular contracting<br>

[[File:Agil-control-model.png|frame|Roles for government and vendors in agile contracting]]

"You don't know what people need till you talk to them."

The basic premise of agile is good. Focus is on finished software being delivered every two weeks.

Constantly delivering a usable product... e.g., a skateboard is more usable than a car part.

Key roles for government staff around operations are too often overlooked.

The product team needs to include an Agency Product Owner. This allows government representation in software development iteration.

Build out of loosely coupled / interchangeable components. This allows you to do smaller things and form big coherent systems that can evolve.

Modular contracts allow big projects to be delivered through many small task orders or contracts. The contract document is kind of a fill-in-the-blank template and doesn't have to be hard.

The Westrum typology of organizational cultures is relevant: http://dx.doi.org/10.1136/qshc.2003.009522
==13 April 2023: "Evolution of open source geospatial python"==

[[File:Iti april geospatialpython 720.png|thumb|IT&I Python Open Source April 13th]]

[https://github.com/tomkralidis Tom Kralidis]

Free and Open Source Software for Geospatial (i.e., FOSS4G) ecosystems play a key role in geospatial systems and services. Python has become the lingua franca for scientific and geospatial software and tooling. This rant and rave will provide an overview of the evolution of FOSS4G and Python, focusing on popular projects in support of Open Standards.

Slides: https://geopython.github.io/presentation

'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/HTouLSzKGto" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</html>

'''<u>Minutes:</u>'''

Mapserver has been around for 23 years!

Why Python for geospatial?
*Ubiquity.
*Cross-OS compatible.
*Legible and easy to understand what it's doing.
*Strong support ecosystem (PyPI, etc.).
*Balance of performance and ease of implementation.
*Python is fast enough, and fast in human time -- more intensive workloads can glue to C/C++.

The new generation of OGC services is based on JSON, so the API interoperates with client environments / objects at a much more direct level.

The geopython ecosystem has a number of low-level components that are used across multiple projects.

pygeoapi is an OGC API reference implementation and an OSGeo project.
E.g. https://github.com/developmentseed/geojson-pydantic

pygeoapi implements OGC API - Environmental Data Retrieval (EDR): https://ogcapi.ogc.org/edr/overview.html

pygeoapi has a plugin architecture.
https://pygeoapi.io/
https://code.usgs.gov/wma/nhgf/pygeoapi-plugin-cookiecutter
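As a small illustration of how directly these JSON-based services map onto client code, here is a hedged sketch of pulling features from an OGC API - Features endpoint with nothing but requests. The demo URL and collection id are assumptions for illustration; any conforming server exposes the same /collections/{id}/items pattern.

<syntaxhighlight lang="python">
# Minimal sketch: fetch features as GeoJSON from an OGC API - Features service.
import requests

base = "https://demo.pygeoapi.io/master"   # assumed demo endpoint
url = f"{base}/collections/obs/items"      # "obs" is an assumed collection id

resp = requests.get(url, params={"f": "json", "limit": 10}, timeout=30)
resp.raise_for_status()
for feature in resp.json()["features"]:
    print(feature["id"], feature["properties"])
</syntaxhighlight>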
pycsw is an OGC CSW and OGC API - Records implementation.
It works with pygeometa for metadata creation and maintenance.
https://geopython.github.io/pygeometa/

There's a real trade-off between "the shiny object" and the long-term sustainability of an approach. Geopython has generally erred on the side of "does it work in a virtualenv out of the box".

How does pycsw work with STAC and other catalog APIs?
pycsw can convert between various representations of the same basic metadata resource.

"That's a pattern… People can implement things the way they want."

'''<u>Chat Highlights:</u>'''

*You can also write a C program that is slower than Python if you aren't careful =).
*https://www.ogc.org/standards/ has lots of useful details.
*For anyone interested in geojson API development in Python, I just recently came across this: https://github.com/developmentseed/geojson-pydantic
*OGC API - Environmental Data Retrieval (EDR): https://ogcapi.ogc.org/edr/overview.html
*Our team has a pygeoapi plugin cookiecutter that we are hopeful others can get some mileage out of: https://code.usgs.gov/wma/nhgf/pygeoapi-plugin-cookiecutter
*I'm going to post this here and run: https://twitter.com/GdalOrg/status/1613589544737148944
**''100% agreed. That's unfortunate, but PyPI is not designed to deal with binary wheels of beasts like me which depend on ~80 direct or indirect other native libraries. The best (or least worst) solution, depending on each one's view, is "conda install -c conda-forge gdal".''
*General question here - you mentioned getting away from GDAL in a previous project. What are your thoughts on GDAL's role in geospatial python moving forward, and how will pygeoapi accommodate that?
*Never, ever works with the wheels!
*Kitware has some pre-compiled wheels as well: https://github.com/girder/large_image
*In the pangeo.io project, our go-to tools are geopandas for tabular geospatial data, xarray/rioxarray for n-dimensional array data, dask for parallelization, and holoviz for interactive visualization. We use the conda-forge channel pretty much exclusively to build out environments.
*If you work on Windows, good luck getting the Python gdal/geos-based tools installed without Conda.
*Data formats and standards are what make it difficult to get away from GDAL -- it just supports so many different backends! Picking those apart and cutting legacy formats, or developing more modular tools to deal with each of those things "natively" in python, would be required to get away from the large dependency on something like GDAL.
*For sustainability and maintainability, it's always good to ask yourself "how easy will it be to replace this dependency when it no longer works?"
*No one should build gdal alone (unless it is winter and you need a source of heat). Join us at https://github.com/conda-forge/gdal-feedstock
==9 Mar 2023: "Meeting Data Where it Lives: the power of virtual access patterns"==

[https://github.com/mikejohnson51 Mike Johnson] (Lynker, NOAA-affiliate) will rant and rave about the VRT and VSI (curl and S3) virtual data access patterns and how he's used them to work with LCMAP and 3DEP data in integrated climate and data analysis workflows.

'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/auK_gPR-e7M" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</html>

'''<u>Minutes:</u>'''

*VRT stands for "ViRTual"
*VSI stands for "Virtual System Interface"
*Framed by FAIR

LCMAP – requires fairly complex URLs to access specific data elements.

3DEP – need to understand the tiling scheme to access data across domains.

Note some large packages (zip files) where only one small file is actually desired.

NWM datasets are in NetCDF files that change name (with time step) daily as they are archived.

Implications for Findability, Accessibility, and Reuse – note that interoperability is actually pretty good once you have the data.

VRT: an XML "metadata" wrapper around one or more tif files.

Use case 1: download all of the 3DEP tiles and wrap them in a VRT xml file.

*VRT has an overall aggregated grid "shape".
*Includes references to all the individual files.
*Can access the dataset through the vrt wrapper to work across all the tiles.
*Creates a seamless collection of subdatasets.
*Major improvement to accessibility.
If you have to download the data, is that "reuse" of the data??

VSI: allows virtualization of data from remote resources available over a few protocols (S3/http/compressed).

A wide variety of GDAL utilities can access VSI files – zip, tar, 7zip.

Use case 2: Access a tif file remotely without downloading all the data in the file.

*Uses vsi to access a single tif file.

Use case 3: Use vsi within a vrt to remotely access the contents of remote tif files.

*Note that the vrt file doesn't actually have to be local itself.
*If the tiles that the vrt points to update, the vrt will update by default.
*Can easily access and reuse data without actually copying it around.
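A minimal sketch of use cases 1-3 with the GDAL Python bindings; the URLs are placeholders, and any HTTP-hosted GeoTIFFs would work the same way.

<syntaxhighlight lang="python">
from osgeo import gdal

gdal.UseExceptions()

# Use case 2: open one remote tif through /vsicurl/; GDAL issues HTTP range
# requests for the header and only the blocks that are actually read.
src = gdal.Open("/vsicurl/https://example.com/tiles/dem_tile_1.tif")
print(src.RasterXSize, src.RasterYSize)

# Use cases 1 and 3: wrap several remote tiles in a VRT. The VRT is just XML
# metadata, so no pixel data moves until something reads through it.
tiles = [f"/vsicurl/https://example.com/tiles/dem_tile_{i}.tif" for i in (1, 2, 3)]
vrt = gdal.BuildVRT("mosaic.vrt", tiles)
vrt = None  # dereference to flush the VRT to disk

mosaic = gdal.Open("mosaic.vrt")
window = mosaic.ReadAsArray(0, 0, 256, 256)  # fetches only the needed blocks
</syntaxhighlight>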
Use case 4: OGR using vsi to access a shapefile in a tar.gz file remotely.

*Can create a nested url pattern to access the contents of the tar.gz remotely.
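A sketch of that nested pattern with OGR, again with a placeholder URL and layer name:

<syntaxhighlight lang="python">
from osgeo import ogr

# /vsitar/ chained onto /vsicurl/ reads the shapefile out of the remote
# tar.gz without downloading the whole archive first (paths are hypothetical).
path = "/vsitar//vsicurl/https://example.com/archive/basins.tar.gz/basins.shp"
ds = ogr.Open(path)
layer = ds.GetLayer(0)
print(layer.GetName(), layer.GetFeatureCount())
</syntaxhighlight>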
Use case 5: NWM short-range forecast of streamflow in a netcdf file.

*Appending "HDF5:" to the front of a vsicurl url allows access to a netcdf file directly.
*The access url pattern is SUPER tricky to get right.
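The pattern, sketched with placeholders (the URL and variable name are invented, and, as noted above, the exact quoting takes some trial and error):

<syntaxhighlight lang="python">
from osgeo import gdal

# HDF5 driver subdataset syntax wrapped around a /vsicurl/ URL.
url = 'HDF5:"/vsicurl/https://example.com/nwm/shortrange.channel_rt.nc"://streamflow'
ds = gdal.Open(url)
print(ds.RasterXSize, ds.RasterCount)
</syntaxhighlight>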
Use case 6: "flat catalogs"

*Stores a flat (denormalized) table of data variables with the information required to construct URLs.
*Can search based on rudimentary metadata within the catalog.
*Can access and reuse data from any host in the same workflow.

Use case 7: access NWM current and archived data from a variety of cloud data stores.

*Leverages the flat catalog content to fix up urls and data access nuances.

The flat catalog improves findability down at the level of individual data variables.

Take-aways / discussion:

Question about the flat catalog:

"Minimal set of shortcuts" to get at this fast access mechanism.

Is the flat catalog manually curated?

More or less – all are automated, but some custom logic is required to add additional content.

Would be great to systematize creation of this flat catalog more broadly.

Question: Could some "examples" be posted either in this doc or elsewhere (or links to examples), for a beginner to copy/paste some code and see for themselves, and begin to think about how we'd use this? Something super basic please.

GDAL documentation is good but doesn't have many examples.

climateR has a workflow that shows how the catalog was built.

What about authentication issues?

*S3 is handled at a session level.
*Earthengine can be handled similarly.

How much word of mouth or human-to-human interaction is required for the catalog?

*If there is a stable entrypoint (an S3 bucket, for example), some automation is possible.
*If entrypoints change, configuration needs to be changed based on human intervention.
== 9 Feb 2023: "February 2023 - Rants & Raves"==

The conversation built on the "rants and raves" session from the 2023 January ESIP Meeting, starting with very short presentations and an in-depth discussion on interoperability and the Committee's next steps.

'''<u>Recording</u>''':<br />
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/cS7TrLmSu5U" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</html>

'''<u>Minutes:</u>'''

*Mike Mahoney: Make Reproducibility Easy
*Dave Blodgett: FAIR Data and Science Data Gateways
*Doug Fils: Web Architecture and the Semantic Web
*Megan Carter: Opening Doors for Collaboration
*Yuhan (Douglas) Rao: Where are we for AI-ready data?

I had a couple of major take-aways from the Winter Meeting:

*We have come a long way in IT interoperability, but most of our tools are based on tried-and-true fundamentals. We should all know more about those fundamentals.
*There are a TON of unique entry points to things that, at the end of the day, do more or less the same thing. These are opportunities to work together and share tools.
*The “shiny object” is a great way to build enthusiasm and trigger ideas, and we need to better capture that enthusiasm and grow a shared knowledge base.

So with that, I want to suggest three core activities:

#We seek out presentations that explore foundational aspects of interoperability. I want to help build an awareness of the basics that we all kind of know but either take for granted, haven’t learned yet, or straight up forgot.
#We ask speakers to explore how a given solution fits into multiple domains’ information systems and to discuss the tension among the diversity of use cases that are accommodated by an IT solution targeted at interoperability. We are especially interested to learn about the expense / risk of adopting dependencies vs. the efficiency that can be gained from adopting pre-built dependencies.
#We look for opportunities to take small but meaningful steps to record the core aspects of these sessions in the form of web resources like the ESIP wiki or even Wikipedia. On this front, we will aim to construct a summary wiki page from each meeting, assembled from a working notes document and the presenting authors’ contributions.
__FORCETOC__

Latest revision as of 07:19, July 24, 2024

Past Tech Dive Webinars (2015-2022)

July 11th: Update on OGC GeoZarr Standards Working Group

Dr. Brianna Rita Pagán

Zarr is a cloud-native data format for n-dimensional arrays that enables access to data in compressed chunks of the original array. Zarr facilitates portability and interoperability on both object stores and hard disks.

As a generic data format, Zarr has increasingly become popular to use for geospatial purposes. As such, in June 2022, OGC endorsed Zarr V2.0 as an OGC Community Standard. The purpose of the GeoZarr SWG is to have an explicitly geospatial Zarr Standard (GeoZarr) adopted by OGC that establishes flexible and inclusive conventions for the Zarr cloud-native format that meet the diverse requirements of the geospatial domain. These conventions aim to provide a clear and standardized framework for organizing and describing data that ensures unambiguous representation.
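A minimal sketch of what chunked access looks like from Python, assuming xarray, zarr, and an fsspec HTTP backend are installed; the store URL and variable name are placeholders.

    # Open a Zarr store lazily over HTTP and read one slice; only the chunks
    # that intersect the selection are fetched. URL/variable are hypothetical.
    import xarray as xr

    ds = xr.open_zarr("https://example-bucket.s3.amazonaws.com/analysis.zarr")
    subset = ds["temperature"].sel(time="2024-07-01").load()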

IT&I July 2024

Recording:

June 13th: "Evaluation and recommendation of practices for publication of reproducible data and software releases in the USGS"

Alicia Rhoades, Dave Blodgett, Ellen Brown, Jesse Ross.

USGS Fundamental Science Practices recognize data and software as separate information product types. In practice (e.g., in model application), data are rarely complete without workflow code, and workflows are often treated as software that includes data. This project assembled a cross-mission-area team to build an understanding of current practices and develop a recommended path. The project conducted 27 interviews with USGS employees in a wide range of staff roles from across the bureau. The project also analyzed existing data and software releases to establish an evidence base of current practices for implemented information products. The project team recommends that a workshop be held at the next Community for Data Integration face-to-face meeting or another venue. The workshop should consider the sum total of the findings of this project and plan specific actions that the Community can take, or recommendations that the Community can advocate to the Fundamental Science Practices Advisory Council or others.

IT&I June 2024

Recording:

May 9th: "Achieving FAIR water quality data exchange thanks to international OGC water standards"

(Sylvain Grellet - BRGM)

Leveraging international standards (OGC, ISO), the OGC/WMO Water Quality Interoperability Experiment aims to bridge the gap in water quality data exchange (surface and ground water). This presentation will also give feedback on the methodology applied on this journey: how to build on existing international standards (OGC/ISO 19156 Observations, Measurements and Samples; OGC SensorThings API) while answering domain needs and maximizing community effect.

IT&I FAIR water quality data

Recording:

Slides

File:FAIR water quality data OGC Grellet-compressed.pdf

Minutes:

  • Emphasis on international water data standards.
  • Introduced OGC – international standards with contribution from public, private, and academic stakeholders.
  • Hydrology Domain Working Group around since circa 2007
    • This presentation is about the latest activity, the Water Quality Interoperability Experiment
  • Relying on a baseline of conceptual and implementation modeling from the Hydro Domain Working Group and more general community works like Observations Measurements and Samples.
  • Considering both in-situ (sample observations) and ex-situ (laboratory).
  • Core data models have supported everything the IE has needed, with some key extensions; the models are designed to be extended.
  • In terms of FAIR access, SensorThings is very capable for observational data, and OGC API - Features supports geospatial needs well.
  • Introduced a separation between "sensor" and "procedure" – the sensor is the thing you used, the procedure is the thing you do.

April 11th: "A Home for Earth Science Data Professionals - ESIP Communities of Practice"

(Allison Mills)

Earth Science Information Partners (ESIP) is a nonprofit funded by cooperative agreements with NASA, NOAA, and USGS. To empower the use and stewardship of Earth science data, we support twice-annual meetings, virtual collaborations, microfunding grants, graduate fellowships, and partnerships with 170+ organizations. Our model is built on an ever-evolving quilt of collaborative tools: Guest speaker Allison Mills will share insights on the behind-the-scenes IT structures that support our communities of practice.

IT&I ESIP Communities of Practice

Recording:

Minutes:

  • Going to talk about the IT infrastructure behind the ESIP cyber presence.
  • Shared ESIP Vision and Mission – BIG goals!!
  • Played a video about what ESIP is as a community.
  • But how do we actually "build a community"?
  • Virtual collaborations need digital tools.
  • https://esipfed.org/collaborate
    • Needs a front door and a welcome mat!
    • "It doesn't matter how nice your doormat is if your porch is rotten."
    • Tools: Homepage, Slack, Update email, and people Directory.
    • "We take easy collaboration for granted."
  • https://esipfed.org/lab
    • Microfunding – build in time for learning objectives.
    • RFP system, github, figshare, people directory.
    • "Learning objectives are a key component of an ESIP lab project."
  • https://esipfed.org/meetings
    • Web site, agendas, eventbrite, QigoChat + Zoom, Google Docs.

Problem: our emails bounce! Needed to get in the weeds of DNS and "DMARC" policies.

Domain-based Message Authentication, Reporting, and Conformance (DMARC)

Problem: Twitter is now X

Decided to focus on platforms where engagement is higher.

Problem: Old wikimedia pages are way way outdated.

Focus on creating new web pages that replace, update and maintain community content.

Problem: "I can't use platform XYZ"

Try to go the extra mile to adapt so that these issues are overcome.

March 15th: "Creating operational decision ready data with remote sensing and machine learning."

(Brian Goldin)

IT&I Operational Remote Sensing 2024

As organizations grapple with information overload, timely and reliable insights remain elusive, particularly in disaster scenarios. Voyager's participation in the OGC Disaster Pilot 2023 aimed to address these challenges by streamlining data integration and discovery processes. Leveraging innovative data conditioning and enrichment techniques, alongside machine learning models, Voyager transformed raw data into actionable intelligence. Through operational pipelines, we linked diverse datasets with machine learning models, automating the generation of new observations to provide decision-makers with timely insights during critical moments. This presentation will explore Voyager's role in enhancing disaster response capabilities, showcasing how innovative integration of technology along with open standards can improve decision-making processes on a global scale.

Recording:

Minutes:

Providing insights from the OGC Disaster Pilot 2023

Goal with work is to provide timely and reliable insights based on huge volumes of data.

"Overcome information overload in critical moments"

Example: 2022 Callao Oil Spill in Peru

Tsunami hit an oil tanker transferring oil to land.

Possibly useful data from many remote sensing products but hard to combine them all together in the moment of responding to an oil spill. (slide shows dozens of data sources)

Goal: build a centralized and actionable inventory of data resources.

  1. Connect and read data,
  2. build pipelines to enrich data sources,
  3. populate a registry of data sources,
  4. construct processing framework that can operate over the registry,
  5. build a user experience framework that can execute the processing framework.

Focus is on an adaptable processing framework for model execution.

At this scale and for this purpose, it's critical to have a receipt of what was completed with basic results in a registry that is searchable. Allows model results to trigger notifications or be searched based on a record of model runs that have been run previously.

For the pilot: focused on wildfire, drought, oil spill, and climate.

"What indicators do decision makers need to make the best decisions?"

What remote sensing processing models can be run in operations to provide these indicators?

Fire Damage Assessment

Detected building footprints using a remote sensing building detection model.

Can run fire detection model in real time cross referenced with building footprints.

Need for stronger / more consistent "model metadata"

Need data governance/fitness for use metadata

Need better standards that provide linkages between systems.

Need better public private partnerships.

Need better data licensing and sharing framework.

"This is not rocket science, it's really just building a good metadata registry."

February 15th: "Creating Great Data Products in the Cloud"

(Jed Sundwall)

IT&I Cloud Data Products 2024

Competition within the public cloud sector has reliably led to reduction in object storage costs, continual improvement in performance, and a commodification of services that have made cloud-based object storage a viable solution to share almost any volume of data. Assuming that this is true, what are the best ways to create data products in a cloud environment? This presentation will include an overview of lessons learned from Radiant Earth as they’ve advocated for adoption of cloud-native geospatial data formats and best practices.

Recording:

Minutes:

Jed is executive director of Radiant Earth – Focus is on human cooperation on a global scale.

Two major initiatives – Cloud Native Geospatial foundation and Source Cooperative

Cloud Native Geospatial is about adoption of efficient approaches; Source is about providing easy and accessible infrastructure.

What does "Cloud Native" mean? https://guide.cloudnativegeo.org/ partial reads, parallel reads, easy access to metadata

Leveraging market pressure to make object stores cheaper and more scalable.

"Pace Layering" – https://jods.mitpress.mit.edu/pub/issue3-brand/release/2

Observation: Software is getting cheaper and cheaper to build – it gets harder to create software monopolies in the way Microsoft or ESRI have.

This leads to a lot of diversity and a proliferation of "primitive" standards and defacto interoperability arrangements.

Source Cooperative

Borrowed a lot from github architecturally.

Repository with a README

Browse repository contents in the browser.

Within this, what makes a great data product?

"Our data model is the Web"

People will deal with messy data if it's super valuable.

Case in point, IRS 990 data on non-profits was shared in a TON of xml schemas. People came together to organize it and work with it.

Story about a building footprint data released in the morning – had been matched up into at least four products by the end of the day.

Shout out to: https://www.geoffmulgan.com/ and https://jscaseddon.co/

https://jscaseddon.co/2024/02/science-for-steering-vs-for-decision-making/

"We don't have institutions that are tasked with producing great data products and making them available to the world!"

https://radiant.earth/blog/2023/05/we-dont-talk-about-open-data/


"There's a server somewhere where there's some stuff" – This is very different from a local hard drive where everything is indexed.

A cloud native approach puts the index (metadata) up front in a way that you can figure out what you need.

A file's metadata gives you the information you need to ask for just the part of a file that you actually need.

But there are other files where you don't need to do range requests. Instead, the file is broken up into many many objects that are indexed.

In both cases, the metadata is a map to the content. Figuring out the right size of the content's bits is kind of an art form.
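A minimal sketch of the partial-read primitive underneath all of this: an HTTP Range request that pulls only the first 16 KiB of a large object (the URL is a placeholder).

    import requests

    url = "https://example-bucket.s3.amazonaws.com/big-file.parquet"
    resp = requests.get(url, headers={"Range": "bytes=0-16383"}, timeout=30)
    print(resp.status_code)   # 206 Partial Content on servers that honor Range
    print(len(resp.content))  # 16384 bytes, not the whole object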

https://www.goodreads.com/en/book/show/172366

Q: > I was thinking of your example of Warren Buffett's daily spreadsheet (gedanken experiment)... How do you see data quality or data importance (incl. data provider trustworthiness) being effectively conveyed to users?

A: We want to focus on verification of who people are and relying on reputational considerations to establish importance.

Q: > I agree with you about the importance of social factors in how people make decisions. What do you think the implications are of this for metadata for open data on the cloud?

A: Tracking data's impact and use is an important thing to keep track of. Using metadata as concrete records of observations and how it has been used is where this becomes important.

Q: > What about the really important kernels of information that we use to, say, calibrate remote sensing products, that are really small but super important? How do we make sure those don't get drowned?

A: We need to be careful not to overemphasize "everything is open" if we can't keep really important datasets in the spotlight.

January 11th: "Using Earth Observations for Sustainable Development"

"Using Earth Observation Technologies when Assessing Environmental, Social, Policy and Technical factors to Support Sustainable Development in Developing Countries"

Sharif Islam

Earth Observation (EO) technologies, such as satellites and remote sensing, provide a comprehensive view of the Earth's surface, enabling real-time monitoring and data acquisition. Within the environmental domain, EO facilitates tracking land use changes, deforestation, and biodiversity, thereby supporting evidence-based conservation efforts. Social factors, encompassing population dynamics and urbanization trends, can be analyzed to inform inclusive and resilient development strategies. EO also assumes a crucial role in policy formulation by furnishing accurate and up-to-date information on environmental conditions, thereby supporting informed decision-making. Furthermore, technical aspects, like infrastructure development and resource management, benefit from EO's ability to provide detailed insights into terrain characteristics and natural resource distribution. The integration of Earth Observation across these domains yields a comprehensive understanding of the intricate interplay between environmental, social, policy, and technical factors, fostering a more sustainable and informed approach to development initiatives. In this presentation, I will discuss our lab's work in Bangladesh, Angola, and other countries, covering topics such as coastal erosion, drought, and air pollution.

Recording:

Minutes:

Plan to share data from NASA and USGS that was used in his PhD work.

Applied the EVDT (Environment, Vulnerability, Decision, Technology) framework.

Studied a variety of hazards – coastal erosion, air pollution, drought, deforestation, etc.

Coastal Erosion in Bangladesh:

  • Displacement, loss of land, major economic drain
  • Studied the situation in the Bay of Bengal
  • Used LANDSAT to study coastal erosion from the 80s to the present
  • Coastal erosion rates upwards of 300m/yr!
  • Combined survey data and landsat observations

Air Pollution and mortality in South Asia

  • Able to show change in air pollution over time using remote sensing

Drought in Angola and Brazil

Used SMAP (Soil Moisture Active Passive)

Developed the same index as the US Drought Monitor

Able to apply SMAP observations over time

Applied a social vulnerability model using these data to identify vulnerable populations.

Deforestation in Ghana

Used LANDSAT to identify land converted from forest to mining and urban.

Significant amounts of land to mining (gold mining and others)

Water hyacinth in a major fishery lake in Benin.

Impact on fishery and transportation

Rotting hyacinth is a big issue

Helped develop a DSS to guide management practices

Mangrove loss in Brazil

Combined information from economic impacts, urban plans, and remote sensing to help build a decision support tool.

November 9th: "Persistent Unique Well Identifiers: Why does California need well IDs?"

IT&I CA Wells November 2023

Hannah Ake

Groundwater is a critical resource for farms, urban and rural communities, and ecosystems in California, supplying approximately 40 percent of California's total water supply in average water years, and in some regions of the state, up to 60 percent in dry years. Regardless of water year type – some communities rely entirely on groundwater for drinking water supplies year-round. However, California lacks a uniform well identification system, which has real impacts on those who manage and depend upon groundwater. Clearly identifying wells, both existing and newly constructed, is vital to maintaining a statewide well inventory that can be more easily monitored to ensure the wellbeing of people, the environment, and the economy, while supporting the sustainable use of groundwater. A uniform well ID program has not yet been accomplished at a scale like California, but it is achievable, as evidenced by great successes in other states. Learn more about why a well ID program will be so important to tackle in California and offer your thoughts about how to untangle some of the particularly thorny technical challenges.

Recording:

Minutes:

  • Groundwater is 40-60% of California's Water supply
  • ~2 Million groundwater wells!
  • As many as 15k new wells are constructed each year

The Sustainable Groundwater Management Act frames groundwater sustainability agencies that develop groundwater sustainability plans.

There is a need to account for groundwater use to ensure the plans are achieved.

Problem: There is no dedicated funding (or central coordinator) to create and maintain a statewide well inventory.

  • Department of Water Resources develops standards
  • State Water Resources Control Board has statewide ordinance
  • Cities and local districts adopt local ordinance
  • Local enforcement agency administers and enforces ordinance

There are a lot of IDs in use. 5 different identifiers can be used for the same well.

Solution: Create a statewide well inventory keyed by a compound ID (a single ID that stands in for many others) spanning multiple ID systems – a meaningless identifier that links the others to each other.
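A toy sketch of that crosswalk idea in Python; all identifiers shown are invented for illustration.

    import uuid

    crosswalk = {}

    def register_well(*legacy_ids):
        well_id = uuid.uuid4().hex[:12]       # opaque, carries no meaning
        crosswalk[well_id] = set(legacy_ids)  # one statewide ID -> many local IDs
        return well_id

    wid = register_well("DWR-123456", "USGS-340000117000001", "COUNTY-88-0042")
    print(wid, crosswalk[wid])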

There are a number of states with well id programs.

  • Trying to learn from what other states have done.

Going forward with some kind of identifier system that spans all local and federal identifier systems.

  • Q: Will this include federal wells? – Yes!
  • Q: Will this actually be a new well identifier minted by someone? – Yes.
  • Q: If someone drills a well do they have to register it? – Yes, but it's the local enforcing agency that collects the information.
  • Q: What if a well is deepened? Do we update the ID? – This has caused real problems in the past. We end up with multiple IDs for the same hole that go through time.
    • Seems to make sense to make a new one to keep things simple.

Link mentioned early in the talk:

https://groundwateraccounting.org/

Reference during Q&A

https://docs.ogc.org/per/20-067.html#_cerdi_vvg_selfie_demonstration

October 26th: "Improving standards and documentation publishing methods: Why can’t we cross the finish line?"

IT&I OGC October 2023

Scott Simmons

OGC and the rest of the standards community have been promising for YEARS that our standards and supporting documentation will be more friendly to the users that need this material the most. Progress has been made on many fronts, but why are we still not finished with a promise made in 2015 that all OGC Standards will be available in implementer-friendly views first, ugly virtual printed paper second? This topic bugs me as much as it bugs our whole community. Some of the problems are institutional (often from our government members across the globe), others are due to lack of resources, but I think that most are due to a lack of a clear reward to motivate people to do things differently. Major progress is being made in some areas. The OGC APIs have landing pages that include focused and relevant content for users/implementers, and it takes some effort to find the owning Standard. OGC Developer Resources are growing quickly with sample code, running examples, and multiple views of API resources in OpenAPI, Swagger, and ReDoc.

Slides


Recording:

Minutes:

(missing first ~15 minutes of recording -- apologies)

Circa 2015 OGC GeoRabble

  • Took a critical look at the status of publishing standards.
  • Couldn't we format these specs in a kind of tutorial form?

9 years later

  • What makes this hard?
    • Standards must be unambiguous AND procurable.
    • The modular specification is a model for this balance.

Standards are based around testable requirements that relate to conformance classes.

Swaggerhub and ReDoc as a way to show a richer collection of information for multiple users.

Specifications are much more modular (core and extensions).

Developer website: https://developer.ogc.org/

Going to be including persistent demonstrators (example implementations) that are "in the wild".

https://www.ogc.org/initiatives/open-science/

Moving to an "OGC Building Blocks" model that are registered across multiple platforms and linked to lots of examples.

Building blocks are richly described and nuanced but linked back to specific requirements in a specification.

https://blocks.ogc.org/

https://sn80uo0zmbg.typeform.com/to/gcwDDNB6?typeform-source=blocks.ogc.org

A lot of this focused on APIs – what about data models?

  • Worked on APIs first because it was current. Also thinking about how to apply similar concepts to data models.

September 14th: "Water data standardization: Navigating the AntiCommons"

IT&I IoW September 2023

Kyle Onda

We all know interoperability rests on data standards and API standards. Many open standards are less prominent in the open water data space than proprietary solutions. This is because proprietary data management solutions are often bundled with very easy-to-use implementing software and, more importantly, client software that addresses basic use cases. We're giving people blueprints when they need houses. Community standards-making processes should invest in end-user tools if they want to gain traction. The good news is that some of the newest generation of standards is much easier to develop around, which has led to some reference implementations that are much easier to create end-user tools around than previously.

Recording:

Minutes:

AntiCommons – name comes from social science background

Tragedy of the commons - two solutions, enclose (privatize) or regulate

Tragedy of the anticommons - as opposed to common resources, these are resources that don't get used up – as in open data. Inefficiency and under utilization is common.

Two solutions: expropriation (like eminent domain or public data) or incentives.

Example – consolidate urban sprawl into higher density housing to get more open space and room for business.

Introducing the Internet of Water.

Noting that in PNW, there are >800 USGS stream gages and >400 from other organizations. Only USGS are very broadly known about.

Thinking about open data as an anticommons – environmental data is normally publicly available, but only in ways that are convenient to data providers and the software that they use.

Discussion of the variety of standardized vs bespoke modes of data dissemination.

Example of Nebraska – a GUI with download and a separate custom API. USGS has the same basic scheme, where an ETL goes from data management software to a custom web service system.

What's going on here? Limited resources lead to focus on existing users and needs and administration ease.

Tools that meet this need tend not to focus on the needs of new users or on standardization.

Most organizations don't need standards – they need software. Both server and CLIENT software.

New specs and efforts ARE heading in this direction. OGC-API, SensorThings, etc.

Promising developments around proxying non standard APIs and in use of structured data "decoration" to make documentation more standard.

August 10th: "Learning to love the upside down: Quarto and the two data science worlds"

IT&I Quarto August 10th

Carlos Scheidegger

There are two wonderful data science worlds. You can be a jupyter expert: you work on jupyter notebooks, with access to myriad Julia, Python, and R packages, and excellent technical documentation systems. You can also be a knitr and rmarkdown expert: you work on rmarkdown notebooks, with access to myriad Julia, Python, and R packages, and excellent technical documentation systems.

But what if your colleague works on the wrong side of the fence? What if you spent years learning one of them, only to find that the job you love is in an organization that uses the other? In this talk, I’m going to tell you about quarto, a system for technical communication (software documentation, academic papers, websites, etc) that aspires to let you choose any of these worlds.

If you’re one to worry about Conway’s law and what this two-worlds situation does to an organization’s talent pool, or if you live in one side of the world and want to be able to collaborate with folks on the other side, I think you’ll find something of value in what I have to say.

I’m also going to complain about software, mostly the one I write. Mostly.

Slides: https://cscheid.net/static/2023-esip-quarto-talk/

Recording:

Minutes:

Carlos was in a tenured computer science position at the University of Arizona.

Hating bad software makes a software developer a good developer.

Two data science worlds:

tidyverse (with R and markdown)

  • Cohesive, hard to run things out of order.
  • Doesn't store output.

Jupyter (python and notebooks)

  • Notebook saves intermediate outputs.
  • State can be messed up easily – cells aren't linear steps.

Quarto:

  • Acts as a compatibility layer for tidyverse and jupyter ecosystems.
  • Emulates RMarkdown with multi language support.

Rant:

Quarto gets you a webpage and PDF output.

– note that the PDF requirement is not great.

Quarto is kind of just a huge wrapper around pandoc.

Quarto documentation is intractably hard to build out.

Consider Conway's Law – an organization that creates a large system will create a system that is a copy of the organization's communication structure.

– Quarto is meant to allow whole organizations with different technical tools to exist in the same communication structure (same system).

Quarto tries to make kinda hard things easy while not making really hard things impossible.

Quarto can convert jupyter notebooks (with cached outputs) into markdown and vice versa.

Issue is, you need to know a variety of other languages (YAML, CSS, Javascript, LaTeX, etc.)

– "unavoidable but kinda gross"

You can edit Quarto in RStudio or VS Code, or any text editor.

For collaboration, Quarto projects can use jupyter or knitr engines. E.g. in a single web page, you can build one page with jupyter and another page with knitr.

– you can embed an ipynb cell in a notebook.

Orchestrating computation is hard – quarto has to take input from existing computation – which can be awkward / complex.

Quarto is extensible – CSS themes, OJS for interactive webpages, Pandoc extensions.

Can also write your own shortcodes.

July 13th 2023: "Tools to Assist Simulation Based Researchers in Deciding What Project Outputs to Preserve and Share"


Doug Schuster

This presentation will highlight findings from the NSF EarthCube Research Coordination Network project titled “What About Model Data? - Best Practices for Preservation and Replicability” (https://modeldatarcn.github.io/), which suggest that most simulation based research projects only need to preserve and share selected model outputs, along with the full simulation experiment workflow, to communicate knowledge. Challenges related to meeting community open science expectations will also be highlighted.

Slides available here: File:ModelDataRCN-2023-07-13-ESIP-IT&I v2.pdf

https://modeldatarcn.github.io/

Rubric: https://modeldatarcn.github.io/rubrics-worksheets/Descriptor-classifications-worksheet-v2.0.pdf

Open science expectations for simulation based research. Frontiers in Climate, 2021. https://doi.org/10.3389/fclim.2021.763420

Recording:

Minutes:

Primary motivation: What are data management requirements for simulation projects?

Project ran May 2020 to Jul 2022

We clearly shouldn't preserve ALL data / output from projects. It's just too expensive.

The project broke down the components of data associated with a simulation project: forcings, code/documentation, selected outputs.

But what outputs to share?!?

Project developed a rubric of what to preserve / share.

"Is your project a data production project or a knowledge production project"

"How hard is it to rerun your workflow?"

"How much will it cost to store and serve the data?"

Rubric gives guidance on how much of a project's outputs should be preserved.

So this is all well and good, but it falls to PIs and funding agencies.

What are the ethical and professional considerations of these trade offs?

What are the incentives in place currently? Sharing is not necessarily seen as a benefit to the author.

==June 8 2023: "Reproducible Data Pipelines in Modern Data Science: what they are, how to use them, and examples you can use!"==


Julie Padilla

Modern scientific workflows face common challenges, including accommodating growing volumes and complexity of data and the need to update analyses as new data becomes available or project needs change. Better practices around reproducible workflows and the use of automated data analysis pipelines can help overcome these challenges and more efficiently translate open data to actionable scientific insights. These data pipelines are transparent, reproducible, and robust to changes in the data or analysis, and therefore promote efficient, open science. In this presentation, participants will learn what makes a reproducible data pipeline, what differentiates it from a workflow, and the key organizational concepts for effective pipeline development.

Recording:

Minutes:

Motivation – what if we find bad data in an input? What if we need to rerun something with new data? Can we reproduce findings from previous work?

Need to be able to "trace" what we did and the way we do it needs to be reliable.

A "workflow" is a sequence of steps going from start to finish of some activity or process.

A "pipeline" is a programmatic implementation of a workflow that requires little to no interaction.

In a pipeline, if one workflow step or input gets changed, we can track what is "downstream" of it.

Note that different steps of the workflow may be owned by different people, so a given step of a pipeline could be contributed by a different programmer – with each person contributing a component of one consistent pipeline.

There is a difference between writing scripts and building a reproducible pipeline. Better to break the transition into stages: script -> organize -> encapsulate into functions -> assemble pipeline.

The focus here is on R's targets package – snakemake is the rough equivalent in Python.

Key concepts for going from script to workflow: functions stored separately from the workflow script; steps clearly organized in the script; steps wrapped as pipeline targets so they can be tracked.

Pipeline software keeps track of whether things have changed and what needs to be rerun. Allows visualization of the workflow inputs, functions, and steps.
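
This is the core trick in miniature – not targets or snakemake themselves, but a hedged pure-Python sketch of the idea: fingerprint each step's inputs and skip the step when nothing upstream has changed. All names here are illustrative:

    import hashlib
    import json
    from pathlib import Path

    STATE = Path(".pipeline_state.json")  # where fingerprints are remembered

    def fingerprint(paths):
        """Hash the bytes of every input file a step depends on."""
        return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

    def run_step(name, inputs, func):
        """Run func() only when one of its inputs has changed since last run."""
        state = json.loads(STATE.read_text()) if STATE.exists() else {}
        current = fingerprint(inputs)
        if state.get(name) == current:
            print(f"{name}: up to date, skipping")
            return
        func()
        state[name] = current
        STATE.write_text(json.dumps(state, indent=2))

    # Hypothetical usage: "model" lists the output of "clean" as an input,
    # so editing raw.csv invalidates both steps on the next run.
    # run_step("clean", ["raw.csv"], clean_data)
    # run_step("model", ["clean.csv"], fit_model)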

How do steps of the pipeline get related to each other? They are named, and the target names get passed to downstream targets.

Chat questions about branching. Dynamic branching lets you run the same target for a list of inputs in a map/reduce pattern.

Pipelines can have outputs that are reports that render pipeline results in a nice form.

Pipeline templates: a pipeline can be adapted from a pre-determined standard template. This helps enforce best practices and gives a quick and easy starting point.

Note that USGS data science has a template for a common pattern.

What's a best practice for tracking container function and reproducibility? Versioned Git / Docker for code and environment. For data, it is context dependent. Generally, try to pull down citable / persistent sources. If sources are not persistent, you can cache inputs for later reuse / reproducibility.

Data change detection / caching is a really tricky problem, but many people are working on it. https://cboettig.github.io/contentid/, https://dvc.org/

https://learning.nceas.ucsb.edu/2021-11-delta/session-3-programmatic-metadata-and-data-access.html#reproducible-data-access


==11 May 2023: "Software Procurement Has Failed Us Completely, But No More!"==


Waldo Jaquith

The way we buy custom software is terrible for everybody involved, and has become a major obstacle to agencies achieving their missions. There are solutions, if we would just use them! By combining the standard practices of user research, Agile software development, open source, modular procurement, and time & materials contracts, we can make procurement once again serve the needs of government.

Slides available here: File:2023-05-Jaquith.pdf

Recording:

Minutes:

Recognizing that software procurement is one of the primary ways that software / IT systems advance, Waldo set out to understand that space as a software developer.

Healthcare.gov

The contract was given to CGI Federal for $93M – it cost ~$1.7B by launch.
Only a low single-digit number of people actually made it through the system at launch.
Senior leaders were given the impression that things were in good shape.
The developers working on the site knew it wasn't going to work out (per the IG report).

  • strategic misrepresentation – things are represented as rosier as they move up the chain of command

On launch, things went very badly, but the recovery was actually quite quick and positive.

Waldo recommends reading the IG report on healthcare.gov. This article:
("The OIG Report Analyzing Healthcare.gov's Launch: What's There And What's Not", Health Affairs Blog, February 24, 2016. https://dx.doi.org/10.1377/hblog20160224.053370) provides a path to the IG report:
(HealthCare.gov - CMS Management of the Federal Marketplace: An OIG Case Study (OEI-06-14-00350), https://oig.hhs.gov/oei/reports/oei-06-14-00350.pdf) and additional perspective.

Rhode Island Unified Health Infrastructure

($364M to Deloitte) A "Big Bang" deployment – they let the people running the old systems go on the day of the new system's launch. They "outsourced" a mission-critical function to a contractor.

We don't tend to hear about relatively smaller projects because they are less likely to fail and garner less attention.

Outsourcing as it started in the ~'90s was one thing when it covered internal agency software. It's different when the systems are public interfaces to stakeholders or are otherwise mission critical.


It's common for software to meet contract requirements but NOT meet end user needs.

Requirements complexity is fractal. There is no complete / comprehensive set of requirements.

… federal contractors interpreting requirements like children trying to resist getting out the door …

There is little to no potential to update or improve requirements due to contract structure.

Demos not memos!

Memorable statements:

  • Outsourced ability to accomplish agency’s mission
  • Load-bearing software systems on which the agency depends to complete its mission.
  • Mission of many agencies is mediated by technology.

But no more! – approach developed by 18F

System of six parts –

1. User-centered design
2. Agile software development
3. Product ownership
4. DevOps
5. Building out of loosely coupled parts
6. Modular contracting

Roles for government and vendors in agile contracting

"You don't know what people need till you talk to them."

Basic premise of agile is good. Focus is on finished software being developed every two weeks.

Constantly delivering a usable product... e.g., a skateboard is more usable than a car part.

Key roles for government staff around operations are too often overlooked.

Product team needs to include an Agency Product Owner. Allows government representation in software development iteration.

Build out of loosely coupled / interchangeable components. Allows you to do smaller things and form big coherent systems that can evolve.

Modular contracts allow big projects that are delivered through many small task orders or contracts. The contract document is kind of a fill in the blank template and doesn't have to be hard.

The Westrum typology of cultures article is relevant: http://dx.doi.org/10.1136/qshc.2003.009522

==13 April 2023: "Evolution of open source geospatial python."==


Tom Kralidis

Free and Open Source Software for Geospatial (i.e. FOSS4G) plays a key role in geospatial systems and services. Python has become the lingua franca for scientific and geospatial software and tooling. This rant and rave will provide an overview of the evolution of FOSS4G and Python, focusing on popular projects in support of Open Standards.

Slides: https://geopython.github.io/presentation

Recording:

Minutes:

MapServer has been around for 23 years!

Why Python for geospatial?

  • Ubiquity
  • Cross-OS compatible
  • Legible – easy to understand what it's doing
  • Strong support ecosystem (PyPI, etc.)
  • Balance of performance and ease of implementation
  • Fast enough, and fast in human time – more intensive workloads can glue to C/C++

The new generation of OGC services – based on JSON, so the API interoperates with client environments / objects at a much more direct level.
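
For instance (a hedged sketch; the endpoint URL and collection name are hypothetical), an OGC API - Features items request returns plain GeoJSON that maps straight onto native objects:

    import requests

    # OGC API - Features: /collections/{id}/items returns a GeoJSON FeatureCollection.
    url = "https://example.com/ogcapi/collections/lakes/items"  # hypothetical server
    resp = requests.get(url, params={"limit": 10, "f": "json"})
    resp.raise_for_status()

    for feature in resp.json()["features"]:
        print(feature.get("id"), feature["properties"])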

The geopython ecosystem has a number of low level components that are used across multiple projects.

pygeoapi is an OGC API reference implementation and an OSGeo project.

pygeoapi implements OGC API - Environmental Data Retrieval (EDR) https://ogcapi.ogc.org/edr/overview.html

pygeoapi has a plugin architecture. https://pygeoapi.io/ https://code.usgs.gov/wma/nhgf/pygeoapi-plugin-cookiecutter
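
As a hedged illustration of that plugin architecture: a custom data provider is (roughly) a class implementing pygeoapi's provider interface and returning GeoJSON-shaped dictionaries. The class and data below are illustrative, not a complete provider:

    # Minimal, illustrative pygeoapi provider sketch (not production code).
    from pygeoapi.provider.base import BaseProvider

    class InMemoryProvider(BaseProvider):
        """Serves one hard-coded feature; a real provider would query a backend."""

        def __init__(self, provider_def):
            super().__init__(provider_def)  # sets self.data, self.id_field, etc.
            self._features = [{
                "type": "Feature",
                "id": "1",
                "geometry": {"type": "Point", "coordinates": [-105.0, 40.0]},
                "properties": {"name": "example"},
            }]

        def query(self, **kwargs):
            # pygeoapi expects a GeoJSON FeatureCollection as a dict.
            return {"type": "FeatureCollection", "features": self._features}

        def get(self, identifier, **kwargs):
            return self._features[0]

    # The provider is then wired into a collection in the pygeoapi YAML config.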

pycsw is an OGC CSW and OGC API - Records implementation. Works with pygeometa for metadata creation and maintenance. https://geopython.github.io/pygeometa/

There's a real trade-off between "the shiny object" and the long-term sustainability of an approach. Geopython has generally erred on the side of "does it work in a virtualenv out of the box".

How does pycsw work with STAC and other catalog APIs? pycsw can convert between various representations of the same basic metadata resource.

"That's a pattern… People can implement things the way they want."

Chat Highlights:

  • You can also write a C program that is slower than Python if you aren't careful =).
  • https://www.ogc.org/standards/ has lots of useful details
  • For anyone interested in geojson API development in Python, I just recently came across this https://github.com/developmentseed/geojson-pydantic
  • OGC API - Environmental Data Retrieval (EDR) https://ogcapi.ogc.org/edr/overview.html
  • Our team has a pygeoapi plugin cookiecutter that we are hopeful others can get some mileage out of. https://code.usgs.gov/wma/nhgf/pygeoapi-plugin-cookiecutter
  • I'm going to post this here and run: https://twitter.com/GdalOrg/status/1613589544737148944
    • 100% agreed. That's unfortunate, but PyPI is not designed to deal with binary wheels of beasts like me which depend of ~ 80 direct or indirect other native libraries. Best solution or least worst solution depending on each one's view is "conda install -c conda-forge gdal"
  • General question here - you mentioned getting away from GDAL in a previous project. What are your thoughts on GDAL's role in geospatial python moving forward, and how will pygeoapi accommodate that?
  • Never, ever works with the wheels!
  • Kitware has some pre-compiled wheels as well: https://github.com/girder/large_image
  • In the pangeo.io project, our go to tools are geopandas for tabular geospatial data, xarray/rioxarray for n-dimensional array data, dask for parallelization, and holoviz for interactive visualization. We use the conda-forge channel pretty much exclusively to build out environments
  • If you work on Windows, good luck getting the Python gdal/geos-based tools installed without Conda
  • data formats and standards are what make it difficult to get away from GDAL -- it just supports so many different backends! Picking those apart and cutting legacy formats or developing more modular tools to deal with each of those things "natively" in python would be required to get away from the large dependency on something like GDAL.
  • Sustainability and maintainability is always good to ask yourself "how easy will it be to replace this dependency when it no longer works?"
  • No one should build gdal alone (unless it is winter and you need a source of heat). Join us at https://github.com/conda-forge/gdal-feedstock


==9 Mar 2023: "Meeting Data Where it Lives: the power of virtual access patterns"==

Mike Johnson (Lynker, NOAA-affiliate) will rant and rave about the VRT and VSI (curl and S3) virtual data access patterns and how he's used them to work with LCMAP and 3DEP data in integrated climate and data analysis workflows.

Recording:

Minutes:

  • VRT stands for "ViRTual"
  • VSI stands for "Virtual System Interface"
  • Framed by FAIR

LCMAP – requires fairly complex URLs to access specific data elements.

3DEP - need to understand tiling scheme to access data across domains.

Note that some datasets come as large packages (zip files) when only one small file is actually desired.

NWM datasets in NetCDF files that change name (with time step) daily as they are archived.


Implications for findability, accessibility, and reuse – note that interoperability is actually pretty good once you have the data.

VRT: – an XML "metadata" wrapper around one or more tif files.

Use case 1: download all of the 3DEP tiles and wrap them in a VRT XML file (see the sketch after this list).

  • VRT has an overall aggregated grid "shape"
  • Includes references to all the individual files.
  • Can access the dataset through the vrt wrapper to work across all the tiles.
  • Creates a seamless collection of subdatasets
  • Major improvement to accessibility.
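
A hedged sketch of this pattern with GDAL's Python bindings; the file names are hypothetical, and (anticipating use case 3 below) the tile paths could just as well be /vsicurl/ URLs:

    from osgeo import gdal

    gdal.UseExceptions()

    # Wrap many tiles in a single VRT "mosaic"; nothing is copied or merged.
    tiles = ["tile_001.tif", "tile_002.tif", "tile_003.tif"]  # hypothetical
    vrt = gdal.BuildVRT("elevation.vrt", tiles)
    vrt = None  # flush the VRT XML to disk

    # The VRT now behaves like one seamless raster.
    ds = gdal.Open("elevation.vrt")
    print(ds.RasterXSize, ds.RasterYSize)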

If you have to download the data, is that "reuse" of the data??

VSI – allows virtualized access to data from remote resources over a few protocols (S3 / HTTP / compressed archives).

Wide variety of GDAL utilities to access VSI files – zip, tar, 7zip

Use case 2: access a tif file remotely without downloading all the data in the file (see the sketch below).

  • Uses vsi to access a single tif file
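
For example (hedged; the URL is hypothetical), GDAL issues HTTP range requests for only the bytes needed by the window you read – most efficient when the TIFF is tiled / cloud-optimized:

    from osgeo import gdal

    gdal.UseExceptions()

    # /vsicurl/ turns an HTTP(S) URL into a file-like GDAL path.
    url = "/vsicurl/https://example.com/data/dem.tif"  # hypothetical
    ds = gdal.Open(url)

    # Read a 256x256 window; only those bytes (plus headers) are fetched.
    window = ds.GetRasterBand(1).ReadAsArray(0, 0, 256, 256)
    print(window.shape)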

Use case 3: Use vsi within a vrt to remotely access contents of remote tif files.

  • Note that the vrt file doesn't actually have to be local itself.
  • If the tiles that the vrt points to update, the vrt will update by default.
  • Can easily access and reuse data without actually copying it around.

Use case 4: OGR using vsi to access a shapefile in a tar.gz file remotely (see the sketch below).

  • Can create a nested url pattern to access contents of the tar.gz remotely.
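
A hedged sketch of that nested pattern (hypothetical URL and layer name); /vsitar/ chains on top of /vsicurl/, so the archive is never downloaded wholesale:

    from osgeo import ogr

    # Chain virtual file systems: read a shapefile inside a remote tar.gz.
    path = "/vsitar//vsicurl/https://example.com/data/bundle.tar.gz/basins.shp"  # hypothetical
    ds = ogr.Open(path)
    layer = ds.GetLayer(0)
    print(layer.GetFeatureCount())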

Use case 5: NWM short-range forecast of streamflow in a NetCDF file (see the sketch below).

  • Prefixing a vsicurl url with "HDF5:" allows access to a netcdf file directly.
  • The access url pattern is SUPER tricky to get right.
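
A hedged sketch of that URL shape; the file and variable names are hypothetical, and (as the note above says) the quoting and ://variable suffix are exactly the fiddly parts:

    from osgeo import gdal

    gdal.UseExceptions()

    # HDF5 driver + vsicurl: open one variable out of a remote NetCDF file.
    url = 'HDF5:"/vsicurl/https://example.com/nwm/shortrange.channel_rt.nc"://streamflow'  # hypothetical
    ds = gdal.Open(url)
    flow = ds.ReadAsArray()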

Use case 5: "flat catalogs"

  • Stores a flat (denormalized) table of data variables with the information required to construct URLs.
  • Can search based on rudimentary metadata within the catalog.
  • Can access and reuse data from any host in the same workflow.
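
A hedged sketch of what "flat" means in practice – one denormalized table, searchable with ordinary dataframe operations; all column names and the example row are hypothetical:

    import pandas as pd

    # One row per variable, with enough columns to rebuild an access URL.
    catalog = pd.DataFrame([
        {"id": "nwm", "variable": "streamflow", "units": "m3/s",
         "host": "https://example.com/nwm", "asset": "shortrange.channel_rt.nc"},
    ])

    # "Findability" is now a filter on rudimentary metadata...
    hits = catalog[catalog["variable"] == "streamflow"]

    # ...and access is string assembly into a GDAL-readable path.
    urls = "/vsicurl/" + hits["host"] + "/" + hits["asset"]
    print(urls.tolist())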

Use case 7: access NWM current and archived data from a variety of cloud data stores.

  • Leveraging the flat catalog content to fix up urls and data access nuances.

Flat catalog improves findability down at the level of individual data variables.

Takeaways / discussion:

Question about the flat catalog:

"Minimal set of shortcuts" to get at this fast access mechanism.

Is the flat catalog manually curated?

More or less – the catalog builds are automated, but some custom logic is required to add additional content.

Would be great to systematize creation of this flat catalog more broadly.

Question: Could some “examples” be posted either in this doc or elsewhere (or links to examples), for a beginner to copy/paste some code and see for themselves, begin to think about how we’d use this? Something super basic please.

GDAL documentation is good but doesn't have many examples.

climateR has a workflow that shows how the catalog was built.


What about authentication issues?

  • S3 is handled at a session level.
  • Earthengine can be handled similarly.

How much word of mouth or human-to-human interaction is required for the catalog?

  • If there is a stable entrypoint (S3 bucket for example) some automation is possible.
  • If entrypoints change, configuration needs to be changed based on human intervention.

==9 Feb 2023: "February 2023 - Rants & Raves"==

The conversation built on the "rants and raves" session from the 2023 January ESIP Meeting, starting with very short presentations and an in-depth discussion on interoperability and the Committee's next steps.

Recording:

Minutes:

  • Mike Mahoney: Make Reproducibility Easy
  • Dave Blodgett: FAIR data and Science Data Gateways
  • Doug Fils: Web architecture and Semantic Web
  • Megan Carter: Opening Doors for Collaboration
  • Yuhan (Douglas) Rao: Where are we for AI-ready data?

I had a couple of major takeaways from the Winter Meeting:

  • We have come a long way in IT interoperability but most of our tools are based on tried and true fundamentals. We should all know more about those fundamentals.
  • There are a TON of unique entry points to things that, at the end of the day, do more or less the same thing. These are opportunities to work together and share tools.
  • The “shiny object” is a great way to build enthusiasm and trigger ideas and we need to better capture that enthusiasm and grow some shared knowledge base.

So with that, I want to suggest three core activities:

  1. We seek out presentations that explore foundational aspects of interoperability. I want to help build an awareness of the basics that we all kind of know but either take for granted, haven’t learned yet, or straight up forgot.
  2. We ask speakers to explore how a given solution fits into multiple domains' information systems, and to discuss the tension among the diverse use cases that an IT solution targeted at interoperability must accommodate. We are especially interested to learn about the expense / risk of adopting dependencies vs. the efficiency that can be gained from adopting pre-built dependencies.
  3. We look for opportunities to take small but meaningful steps to record the core aspects of these sessions in the form of web resources like the ESIP wiki or even Wikipedia. On this front, we will aim to construct a summary wiki page from each meeting, assembled from a working notes document and the presenting author's contributions.