Program evaluations are conducted for a variety of reasons. Purposes range from mechanical compliance with a funder’s reporting requirements to a genuine desire by program managers and stakeholders to learn, “Are we making a difference?” and, if so, “What kind of difference are we making?” These different purposes and motivations determine the different types of evaluations. Below, I briefly discuss the main evaluation types.
Formative evaluations are conducted primarily to gather information that can be used to improve or strengthen a program’s implementation. They typically take place in the early to middle period of a program’s implementation. Summative evaluations are conducted near or at the end of a program or program cycle, and are intended to show whether the program has achieved its intended outcomes (i.e., its intended effects on individuals, organizations, or communities) and to indicate the ultimate value, merit, and worth of the program. Summative evaluations seek to determine whether the program should be continued, replicated, or curtailed, whereas formative evaluations are intended to help program designers, managers, and implementers address challenges to the program’s effectiveness.
Process evaluations, like formative evaluations, are conducted during a program’s early and mid-cycle phases of implementation. Typically, process evaluations seek data with which to understand what is actually going on in a program (what the program actually is and does) and whether intended recipients are receiving the services they need. Process evaluations are, as the name implies, about the processes involved in delivering the program.
Impact evaluations, sometimes alternatively called “outcome evaluations,” gather and analyze data to show the ultimate, often broader and longer-lasting, effects of a program. An impact evaluation determines the causal effects of the program, which involves trying to measure whether the program has achieved its intended outcomes. The International Initiative for Impact Evaluation (3ie) defines rigorous impact evaluations as “analyses that measure the net change in outcomes for a particular group of people that can be attributed to a specific program using the best methodology available, feasible and appropriate to the evaluation question that is being investigated and to the specific context.” Impact (and outcome) evaluations are primarily concerned with determining whether the observed effects are the result of the program or of some other, extraneous factor(s). Ultimately, outcome evaluations seek to answer the question, “What effect(s) did the program have on its participants (e.g., changes in knowledge, attitudes, behaviors, skills, practices), and were these effects the result of the program?”
Although the different types of evaluation described above differ in their intended purposes and timing, it is important to keep in mind that every program evaluation should be guided by good evaluation research questions. (See our earlier post, Questions Before Methods.) Program evaluation, like any effective research project, depends upon asking important and insight-producing questions. Ultimately, the different types of evaluations discussed above support the general definition of program evaluation: “a systematic method for collecting, analyzing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency.” To learn more about our evaluation methods, visit our Data collection & Outcome measurement page.
Resources