ESIP 2011 Winter Meeting Decisions Workshop

From Earth Science Information Partners (ESIP)

Back to Jan 6, 2011 Agenda | Back to Decisions Workspace

Introduction to Evaluation Workshop

Ann Doucette, Director, The Evaluators' Institute, George Washington University

From Insight to Impact: Managing YOUR Data Through Evaluation

Sponsored by the Decisions Cluster, this workshop covers evaluation basics and uses GEO as a case study for how to improve the value of ESIP member activities through evaluation. No previous experience with project evaluation is required.

Interventions and programs are implemented within complex environments that present challenges in evaluating efficiency and effectiveness and in attributing outcomes and impact to specific actions. A general problem in evaluation efforts — and what often causes them to fall short of their intended objectives — is the failure to fully articulate a theory of change that includes the identification of critical mechanisms that support optimal outcomes; to select measurable objectives that are actionable, meaning that they are linked to practices that an organization can actually do something about; to incorporate diverse stakeholders, including end-users; to craft the evaluation in terms of its role in data-driven decision-making; and, lastly, to effectively communicate the return on investment in terms of not only cost, but human and social capital.

This workshop will provide an interactive opportunity for participants to become more familiar with effective evaluation approaches that include a focus on crafting a theory of change that characterizes the outcome goal(s) and impact as well as identifying the mechanisms of change – moving from activity to results. Matching evaluation levels with the objectives to be achieved (e.g., linking gap analysis with theory of change, etc.) will be addressed, as well as optimizing the actionability of evaluation efforts. The workshop will examine performance measurement strategies that support actionable data. Data-based decision-making, value-based issues, and practice-based evidence related to monitoring and evaluation (M&E) activities (process, outcome, and impact) will be emphasized. A case study approach, focusing on the work of the Group on Earth Observations, will be used as an illustrative example of how members of the Federation of Earth Science Information Partners can better use evaluation tools to achieve outcomes and to optimize the impact of their work.

Notes from Session

Insight to Impact

  • Why Evaluate
    • To provide credible information and verify that the initiative is proceeding as planned
    • Assess impact
    • Discover challenges early to optimize outcome/impact
    • Prioritize resources and activities, make changes, and ensure sustainability
  • Addressing complexity
    • ESIP is a difficult organization to evaluate due to its diverse membership
      • Examine individual objectives instead of just goals
    • Complex systems like ESIP:
      • Connections are essential + simple rules lead to complex responses + individuals have creative opportunity to respond within rules
      • Requires complex evaluation methods
    • Complex adaptive systems:
      • Input → activity → output → outcome → IMPACT
      • Output is evaluated and fed back into the system as another input
      • Impact addresses not only that the product was used, but that the use had an effect.
    • Traditional approach: past events predict future outcomes
    • Emergence – agents interact in random ways (interpersonal relationships and social networking)
    • Connectivity – systems depend on interconnections and feedback → dissemination across stakeholders
    • Interdependence - of environment and other human systems.
      • Butterfly effect: small changes have large impacts; cultural sensitivity to the differences between agencies involved in ESIP
    • Rules – systems are governed by simple conventions that are self-organized.
      • Underlying consistencies and patterns may appear random and lead to different outcomes than anticipated
    • Outcomes are optimized in terms of meeting specific thresholds; predictability is not expected except in broad terms.
  • Where to start? Discussion of 1st Key Evaluation Findings
    • Concerns and Recommendations
      • Stakeholder-focused – unmet needs, varying expectations, no clear sense of when or why to engage → improve communication strategy / clarify purpose, process, value added, and engagement of wider audience / establish clear mechanisms for acknowledging contributions
      • GEOSS-focused – detrimental effect of voluntary nature, lack of resources → conduct gap analysis, explore alternative models, develop a long-term strategy for support and sustainability (membership fees)
      • Many suggestions are ambiguous and not really actionable
  • Managing Data Complexity/Characterizing Programs
    • Plausibility – correct logic
    • Feasibility – sufficient resources
    • Measurability – credible ways to discover results
    • Meaningfulness – stakeholders can see effects
  • Theory of Change
    • Identifies a causal pathway from implementation of the program to the intended outcomes by specifying what is needed for outcomes to be achieved
    • To build one:
      • Identify long-term goals and the assumptions behind them
      • Map backwards to connect the preconditions or requirements necessary to achieve that goal
      • Identify the interventions that your initiative will perform
      • Develop indicators to evaluate outcomes
      • Write a narrative to explain the logic
    • Outcome mapping
      • Causal chain between short-term outcome and long-term goals.
    • Looking for impact
      • Identify intermediate outcomes
      • Use near-real-time assessment
  • Approaches to Evaluation
    • Needs assessment – magnitude of need, possible solutions
    • Evaluability assessment – has there been sufficient implementation
    • Conceptualization-focused evaluation – help define the program, target population, possible outcomes
    • Implementation evaluation
    • Process evaluation
    • Developmental evaluation – focus on innovative engagement
    • Outcome evaluation
    • Impact evaluation
    • Cost-effectiveness and cost-benefit analysis – standardizing outcomes in dollar costs and values
    • Meta-analysis – synthesizes impact across studies of similar magnitude to reach an overall judgment on an evaluation question
  • Gap analysis
    • Existing status
    • Aspirant – condition in comparison to other competing organizations
    • Market – potential to grow given current political, economic, and demographic conditions
    • Program/product – are there products not being produced that could be?
  • Data collection
    • Outcome based monitoring – don’t collect data for the sake of it, monitor to benefit the outcome and achieve goals
    • Goal driven management – needs to be done for a reason, not because it is the rule
    • Go from best-guess decisions to data-based decision making
    • Cooperate across partners; collaboration takes priority over competition or discrimination between different departments or roles
    • Anticipate need instead of reacting
    • Information is disseminated and transparent
  • Measurement Precision
    • Consistency and accuracy – not fixed, variable due to differences in collection procedures and understanding of data
    • Measuring validity of the pipeline of data, not the scientific validity of the content.
  • Validity
  • Balancing data and methods
    • Qualitative v quantitative, contextual v less contextual
    • Attitudes and underlying reasons v pure data
    • Anecdotal data can be mined using qualitative software if there are enough stories and statements
  • Randomized clinical trials
    • Do not always provide better evidence than observational studies – especially for rare adverse effects
  • Comparative effectiveness research
    • Conducting and synthesizing existing research comparing the benefits and harms of different strategies and interventions to monitor conditions in “real world” settings
  • Strength of Evidence
    • Risk of bias
    • Consistency
    • Directness
    • Precision
      • Dose-response association – differential exposure/duration of participation
      • Confounding factors – present or absent
      • Magnitude of the effect/impact – strong or weak
      • Publication bias – selective publication of studies/ no current studies available
    • Grading strength – high/moderate/low/insufficient: based on availability of evidence and the extent to which it reflects reality
  • Establishing metrics – SMART approach
    • Specific
    • Measurable
    • Actionable
    • Relevant
    • Timely
      • Analytic tool: SWOT
        • Strengths
        • Weaknesses
        • Opportunities
        • Threats
          • Identify how to harness opportunities and strengths in order to tackle weaknesses and threats
          • Not just a list of factors, a list of actions
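The logic-model chain (input → activity → output → outcome → impact, with outputs fed back in as new inputs) and the SMART criteria from these notes can be sketched in a few lines of Python. This is only an illustrative sketch; all names (LogicModel, run_activity, smart_gaps) are hypothetical and not part of any ESIP or GEO tool.

```python
# Sketch (hypothetical names) of the logic-model feedback loop and a
# SMART-criteria check, as described in the workshop notes above.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

    def run_activity(self, activity):
        """Apply an activity to every current input and record the outputs."""
        produced = [activity(i) for i in self.inputs]
        self.outputs.extend(produced)
        # Feedback: evaluated outputs re-enter the system as the next inputs.
        self.inputs = produced
        return produced

SMART = ("specific", "measurable", "actionable", "relevant", "timely")

def smart_gaps(indicator: dict) -> list:
    """Return the SMART criteria an indicator fails to satisfy."""
    return [c for c in SMART if not indicator.get(c, False)]

model = LogicModel(inputs=["raw observations"])
model.run_activity(lambda i: f"processed {i}")
gaps = smart_gaps({"specific": True, "measurable": True, "relevant": True})
# gaps lists the criteria still unmet: ["actionable", "timely"]
```

The point of the sketch is the feedback arrow: outputs are not an endpoint but become inputs to the next cycle, which is what distinguishes a complex adaptive system from a one-way pipeline.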