Chapter 7. Part 1. Program Evaluation

What to know

The results of evaluation are often used by stakeholders to improve or increase the capacity of the program or activity. The primary purposes of an evaluation are to assess the processes and outcomes of a specific initiative and to facilitate ongoing program management.


Section Overview

  • Systematic collection of information about activities, characteristics, and outcomes of programs
  • Reduces uncertainties, improves effectiveness, and informs decision-making
  • Includes goals, concerns, and perspectives of program stakeholders
  • Results used to improve or increase program capacity
  • Identifies program priorities and measures success
  • Considers data sources for evaluating acceptability, participation levels, and impact

  • Involves community as a whole and individual community groups as key stakeholders
  • Identifies relevant community perspectives for improvement
  • Considers involvement of appropriate persons or organizations
  • Assesses activities and input from participants
  • Examines development, maturity, and sustainability of engagement

Program evaluation uses the methods and design strategies of traditional research. Research is a systematic investigation designed to develop or contribute to generalizable knowledge (MacDonald et al., 2001).

Research is hypothesis driven; it is often initiated and controlled by an investigator and is concerned with standards of internal and external validity. It is also designed to generate facts, remain value-free, and focus on specific variables. Research establishes a time sequence and controls for potential confounding variables.

Often, the research is widely disseminated. Evaluation, in contrast, may or may not contribute to generalizable knowledge. The primary purposes of an evaluation are to assess the processes and outcomes of a specific initiative and to facilitate ongoing program management. Evaluation of a program usually includes multiple measures that are informed by the contributions and perspectives of diverse stakeholders.

Evaluations are commonly grouped into five types:

  1. Formative evaluation: Provides information to guide program improvement
  2. Process evaluation: Determines whether a program is delivered as intended to the targeted recipients
  3. Summative evaluation: Informs judgments about whether the program worked and requires making explicit the criteria and evidence being used to make "summary" judgments
  4. Outcome evaluation: Focuses on the observable conditions of a specific population, organizational attribute, or social condition that a program is expected to have changed
  5. Impact evaluation: Examines the program's long-term goals

Tips From the Literature

Formative and process evaluations are appropriate during program implementation; summative, outcome, and impact evaluations are appropriate after program completion or once a program has been implemented for a long period of time (Rossi et al., 2004).

The CDC framework for program evaluation outlines six steps:

  1. Engage stakeholders
  2. Describe the program
  3. Design the evaluation
  4. Gather credible evidence
  5. Justify conclusions
  6. Ensure use of the evaluation and share lessons learned

The Joint Committee on Standards for Educational Evaluation groups evaluation standards into four categories:

  1. Propriety standards (legal, ethical, welfare-oriented)
  2. Utility standards (meeting information needs of users)
  3. Feasibility standards (realistic scope and methods)
  4. Accuracy standards (valid methods, transparent reporting)

Together, the CDC framework and the Joint Committee standards provide a general perspective on the characteristics of an effective evaluation. Both identify the need to be pragmatic and serve intended users with the goal of determining the effectiveness of a program.

References

MacDonald G, Starr G, Schooley M, Yee SL, Klimowski K, Turner K. Introduction to program evaluation for comprehensive tobacco control programs. Atlanta (GA): Centers for Disease Control and Prevention; 2001.

Patton MQ. Utilization-focused evaluation (4th ed.). Thousand Oaks (CA): Sage; 2008.

Rossi P, Lipsey M, Freeman H. Evaluation: a systematic approach (7th ed.). Thousand Oaks (CA): Sage; 2004.