ESIP 2011 Winter Meeting Decisions Workshop

From Earth Science Information Partners (ESIP)

Notes from Session

Insight to Impact

  • Why Evaluate
    • To provide credible information and verify that the initiative is performing as planned
    • Assess impact
    • Discover challenges early to optimize outcome/impact
    • Prioritize resources and activities, make changes, and ensure sustainability
  • Addressing complexity
    • ESIP is a difficult organization to evaluate due to its diverse membership
      • Examine individual objectives instead of just goals
    • Complex systems like ESIP:
      • Connections are essential; simple rules lead to complex responses; individuals have creative opportunity to respond within the rules
      • Requires complex evaluation methods
    • Complex adaptive systems:
      • Input → activity → output → outcome → IMPACT
      • Output is evaluated and fed back into the system as another input
      • Impact addresses not only that the product was used, but that the use had an effect.
    • Traditional approach: past events predict future outcomes
    • Emergence – agents interact in random ways (interpersonal relationships and social networking)
    • Connectivity – systems depend on interconnections and feedback→ dissemination across stakeholders
    • Interdependence – of the environment and other human systems.
      • Butterfly effects: small changes can have large impacts; requires cultural sensitivity to the differences between agencies involved in ESIP
    • Rules – systems are governed by simple conventions that are self-organized.
      • Underlying consistencies and patterns may appear random and lead to different outcomes than anticipated
    • Outcomes are optimized in terms of meeting specific thresholds; predictability is not expected except at a broad level.
  • Where to start? Discussion of 1st Key Evaluation Findings
    • Concerns and Recommendations
      • Stakeholder focused – unmet needs, varying expectations, no clear sense of when or why to engage → improve the communication strategy; clarify purpose, process, and value added; engage a wider audience; establish clear mechanisms for acknowledging contributions
      • GEOSS focused – detrimental effect of its voluntary nature, lack of resources → conduct a gap analysis, explore alternative models, and develop a long-term strategy for support and sustainability (membership fees)
      • Many suggestions are ambiguous and not really actionable
  • Managing Data Complexity/Characterizing Programs
    • Plausibility – correct logic
    • Feasibility – sufficient resources
    • Measurability – credible ways to discover results
    • Meaningfulness – stakeholders can see effects
  • Theory of Change
    • Identifies a causal pathway from implementation of the program to the intended outcomes by specifying what is needed for outcomes to be achieved
    • To build one:
      • Identify long-term goals and assumptions behind them
      • Map backwards to connect the preconditions or requirements necessary to achieve that goal
      • Identify the interventions that your initiative will perform
      • Develop indicators to evaluate outcomes
      • Write a narrative to explain the logic
    • Outcome mapping
      • Causal chain between short-term outcome and long-term goals.
    • Looking for impact
      • Identify intermediate outcomes
      • Use near-real-time assessment
  • Approaches to Evaluation
    • Needs assessment – magnitude of need, possible solutions
    • Evaluability assessment – has there been sufficient implementation
    • Conceptualization-focused evaluation – help define the program, target population, possible outcomes
    • Implementation evaluation
    • Process evaluation
    • Developmental evaluation – focus on innovative engagement
    • Outcome evaluation
    • Impact evaluation
    • Cost-effectiveness and cost-benefit analysis – standardizing outcomes in dollar costs and values
    • Meta-analysis – synthesizes impact estimates across comparable studies to reach an overall judgment on an evaluation question (see the sketch after this list)
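A minimal sketch of how impact estimates might be pooled in a meta-analysis, assuming a simple fixed-effect (inverse-variance) model; the effect sizes and standard errors below are hypothetical placeholders, not workshop data:

```python
# Minimal fixed-effect (inverse-variance) pooling of effect estimates.
# The study effect sizes and standard errors are hypothetical placeholders.
import math

studies = [
    # (effect_estimate, standard_error)
    (0.30, 0.10),
    (0.45, 0.15),
    (0.20, 0.08),
]

# Each study is weighted by the inverse of its variance (1 / se^2).
weights = [1.0 / (se ** 2) for _, se in studies]

# Pooled effect is the weighted average of the study effects.
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled effect under the fixed-effect model.
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled:.3f}, 95% CI ≈ "
      f"[{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```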
  • Gap analysis
    • Existing status
    • Aspirants – condition in comparison to other competing organizations
    • Market – potential to grow given current political, economic, and demographic conditions
    • Program/product – are there products not being produced that could be?
  • Data collection
    • Outcome based monitoring – don’t collect data for the sake of it, monitor to benefit the outcome and achieve goals
    • Goal driven management – needs to be done for a reason, not because it is the rule
    • Go from best-guess decisions to data-based decision making
    • Cooperate across partners; collaboration takes priority over competition or discrimination between different departments or roles
    • Anticipate need instead of reacting
    • Information is disseminated and transparent
  • Measurement Precision
    • Consistency and accuracy – not fixed; they vary due to differences in collection procedures and understanding of the data
    • Measuring the validity of the data pipeline, not the scientific validity of the content.
  • Validity
  • Balancing data and methods
    • Qualitative v quantitative, contextual v less contextual
    • Attitudes and underlying reasons v pure data
    • Anecdotal data can be mined using qualitative software if there are enough stories and statements (see the sketch below)
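A minimal sketch of how a collection of anecdotal statements could be mined for recurring themes, using simple keyword coding rather than a dedicated qualitative-analysis package; the statements and theme keywords below are hypothetical placeholders:

```python
# Minimal keyword-coding sketch for a pile of anecdotal statements.
# The statements and theme keywords are hypothetical placeholders.
import re
from collections import Counter

statements = [
    "The data portal helped us find collaborators quickly.",
    "We could not tell when or why we should contribute data.",
    "Working across clusters made our project more sustainable.",
]

# Hypothetical themes an evaluator might code statements against.
themes = {
    "collaboration": {"collaborators", "collaboration", "partners", "clusters"},
    "communication": {"tell", "why", "when", "communicate"},
    "sustainability": {"sustainable", "sustainability", "funding"},
}

counts = Counter()
for text in statements:
    words = set(re.findall(r"[a-z']+", text.lower()))
    for theme, keywords in themes.items():
        if words & keywords:  # statement mentions at least one theme keyword
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: appears in {n} of {len(statements)} statements")
```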
  • Randomized clinical trials
    • Do not always provide better evidence than observational studies – especially for rare adverse effects
  • Comparative effectiveness research
    • Conducting and synthesizing existing research comparing the benefits and harms of different strategies and interventions to monitor conditions in “real world” settings
  • Strength of Evidence
    • Risk of bias
    • Consistency
    • Directness
    • Precision
      • Dose-response association – differential exposure/duration of participation
      • Confounding factors – present or absent
      • Magnitude of the effect/impact – strong or weak
      • Publication bias – selective publication of studies/ no current studies available
    • Grading strength – high/moderate/low/insufficient: based on the availability of evidence and the extent to which it reflects reality
  • Establishing metrics – SMART approach
    • Specific
    • Measurable
    • Actionable
    • Relevant
    • Timely
      • Analytic tool: SWOT
        • Strengths
        • Weaknesses
        • Opportunities
        • Threats
          • Identify how to harness opportunities and strengths in order to tackle weaknesses and threats
          • Not just a list of factors, but a list of actions