Program evaluation: process, impact and outcome
This type of evaluation needs to identify the relevant community and establish its perspectives so that the views of engagement leaders and all the important components of the community are used to identify areas for improvement. This approach includes determining whether the appropriate persons or organizations are involved; the activities they are involved in; whether participants feel they have significant input; and how engagement develops, matures, and is sustained.

Research is hypothesis driven, often initiated and controlled by an investigator, concerned with research standards of internal and external validity, and designed to generate facts, remain value-free, and focus on specific variables. Research establishes a time sequence and control for potential confounding variables. Often, the research is widely disseminated.

Evaluation, in contrast, may or may not contribute to generalizable knowledge. The primary purposes of an evaluation are to assess the processes and outcomes of a specific initiative and to facilitate ongoing program management.

Formative evaluation provides information to guide program improvement, whereas process evaluation determines whether a program is delivered as intended to the targeted recipients (Rossi et al.). Summative evaluation informs judgments about whether the program worked (i.e., whether it achieved its intended effects). Outcome evaluation focuses on the observable conditions of a specific population, organizational attribute, or social condition that a program is expected to have changed.

Whereas outcome evaluation tends to focus on the conditions or behaviors that the program was expected to affect most directly and immediately (i.e., its proximal outcomes), impact evaluation examines the program's longer-term, more distal effects.

For example, assessing the strategies used to implement a smoking cessation program and determining the degree to which it reached the target population are process evaluations. Reduction in morbidity and mortality associated with cardiovascular disease may represent an impact goal for a smoking cessation program (Rossi et al.). Several institutions have identified guidelines for an effective evaluation.

For example, in 1999 CDC published a framework to guide public health professionals in developing and implementing a program evaluation (CDC, 1999). There are many different methods for collecting data. A key reason for mixing methods is that doing so helps to overcome the weaknesses inherent in each method when used alone. It also increases the credibility of evaluation findings when information from different data sources converges (i.e., triangulation).

Good data management includes developing effective processes for consistently collecting and recording data, storing data securely, cleaning data, and transferring data. The particular analytic framework and the choice of specific data analysis methods will depend on the purpose of the impact evaluation and the key evaluation questions (KEQs) that are intrinsically linked to it. For answering descriptive KEQs, a range of analysis options is available, which can largely be grouped into two categories: options for quantitative data (numbers) and options for qualitative data (e.g., text).
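To make the two broad categories concrete, here is a minimal sketch in Python (standard library only) using invented data: it produces simple descriptive statistics for a set of quantitative scores and tallies coded themes from qualitative responses. The variable names and values are illustrative assumptions, not drawn from any particular evaluation.

```python
# Illustrative sketch only: hypothetical data, not from any specific evaluation.
from collections import Counter
import statistics

# Quantitative data (numbers): e.g., participants' scores on an outcome measure.
scores = [62, 71, 55, 80, 68, 74, 59, 66]
print("n =", len(scores))
print("mean =", round(statistics.mean(scores), 1))
print("median =", statistics.median(scores))
print("std dev =", round(statistics.stdev(scores), 1))

# Qualitative data (text): e.g., themes coded from open-ended interview responses.
coded_themes = ["access", "cost", "access", "quality", "cost", "access"]
print("theme frequencies:", Counter(coded_themes))
```

In practice, the choice of descriptive statistics and the approach to coding qualitative material would follow from the KEQs, not the other way around.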

For answering causal KEQs, there are essentially three broad approaches to causal attribution analysis: (1) counterfactual approaches; (2) checking the consistency of evidence with a causal relationship; and (3) ruling out alternative explanations (see above). Ideally, a combination of these approaches is used to establish causality. For answering evaluative KEQs, specific evaluative rubrics linked to the evaluative criteria employed (such as the OECD-DAC criteria) should be applied in order to synthesize the evidence and make judgements about the worth of the intervention (see above).
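As an illustration of the counterfactual logic (not a prescription for any particular design), the sketch below computes a simple difference-in-differences estimate on invented numbers: the comparison group's pre-to-post change stands in for what would have happened without the intervention.

```python
# Hedged sketch of a counterfactual (comparison-group) estimate using hypothetical data.
# Difference-in-differences: compare the pre-to-post change in the intervention group
# with the change in a comparison group over the same period.

pre_intervention, post_intervention = 48.0, 62.0   # e.g., mean outcome scores
pre_comparison, post_comparison = 47.0, 52.0

change_intervention = post_intervention - pre_intervention   # 14.0
change_comparison = post_comparison - pre_comparison         # 5.0

# The comparison group's change approximates the counterfactual; the difference
# of the two changes is the estimated impact of the intervention.
estimated_impact = change_intervention - change_comparison   # 9.0
print(f"Estimated impact: {estimated_impact:+.1f}")
```

This estimate is only credible if the comparison group would have followed the same trend as the intervention group in the absence of the program, and a real analysis would also assess statistical uncertainty.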

The evaluation report should be structured in a manner that reflects the purpose and KEQs of the evaluation. In the first instance, evidence to answer the detailed questions linked to the OECD-DAC criteria of relevance, effectiveness, efficiency, impact and sustainability, and considerations of equity, gender equality and human rights should be presented succinctly but with sufficient detail to substantiate the conclusions and recommendations.

Evidence on multiple dimensions should subsequently be synthesized to generate answers to the high-level evaluative questions. The structure of an evaluation report can do a great deal to encourage the succinct reporting of direct answers to evaluative questions, backed up by enough detail about the evaluative reasoning and methodology to allow the reader to follow the logic and clearly see the evidence base. The following recommendations will help to set clear expectations for evaluation reports that are strong on evaluative reasoning:

The executive summary must contain direct and explicitly evaluative answers to the KEQs used to guide the whole evaluation. Explicitly evaluative language must be used when presenting findings, rather than value-neutral language that merely describes them, and examples should be provided. The findings section should be structured using KEQs as subheadings, rather than by types and sources of evidence as is frequently done.

There must be clarity and transparency about the evaluative reasoning used, with the explanations clearly understandable to both non-evaluators and readers without deep content expertise in the subject matter. These explanations should be broad and brief in the main body of the report, with more detail available in annexes. If evaluative rubrics are relatively small in size, these should be included in the main body of the report. If they are large, a brief summary of at least one or two should be included in the main body of the report, with all rubrics included in full in an annex.
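As a minimal sketch of what a small evaluative rubric can look like, the example below uses invented levels, descriptors and thresholds; real rubrics would be developed with stakeholders and linked explicitly to the agreed evaluative criteria.

```python
# Illustrative sketch of a simple evaluative rubric (hypothetical levels and thresholds).
# A rubric makes explicit how synthesized evidence is converted into a judgement.

rubric = {
    "excellent": "Strong positive results on all key outcomes; no major negative effects.",
    "good": "Positive results on most key outcomes; minor shortcomings only.",
    "adequate": "Mixed results; some important outcomes not achieved.",
    "poor": "Little or no evidence of positive results, or serious negative effects.",
}

def judge(proportion_of_outcomes_achieved: float, serious_negative_effects: bool) -> str:
    """Map synthesized evidence onto a rubric level (thresholds assumed for illustration)."""
    if serious_negative_effects:
        return "poor"
    if proportion_of_outcomes_achieved >= 0.9:
        return "excellent"
    if proportion_of_outcomes_achieved >= 0.7:
        return "good"
    if proportion_of_outcomes_achieved >= 0.4:
        return "adequate"
    return "poor"

level = judge(0.75, serious_negative_effects=False)
print(level, "-", rubric[level])
```

Whether such a rubric sits in the main body or an annex, the point is that the reader can see exactly how the evidence was translated into the evaluative judgement.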

Overview Briefs 1, 6 and 10 are available in English, French and Spanish and are supported by whiteboard animation videos in the three languages; Brief 7 (on randomized controlled trials) also includes a video. The webinars were based on the Impact Evaluation Series — a user-friendly package of 13 methodological briefs and four animated videos — and were presented by the briefs' authors. Each page provides links not only to the eight webinars but also to the practical questions and answers that followed each webinar presentation.

Impact Evaluation for Development: Principles for Action – This paper discusses strategies to manage and undertake development evaluation. Rogers, P., Introduction to Impact Evaluation, Impact Evaluation Notes, Washington, DC: InterAction. Perrin, B., Linking Monitoring and Evaluation to Impact Evaluation. Bamberger, M., Introduction to Mixed Methods in Impact Evaluation, Guidance Note.

Bonbright, D., Use of Impact Evaluation Results. The Methods Lab sought to develop, test and institutionalise flexible approaches to impact evaluation. It focused on interventions that are harder to evaluate because of their diversity and complexity, or where traditional impact evaluation approaches may not be feasible or appropriate, with the broader aim of identifying lessons with wider application potential.

The Methods Lab produced several guidance documents, including: Realist impact evaluation: an introduction – This guide explains when a realist impact evaluation may be most appropriate or feasible for evaluating a particular programme or policy, and outlines how to design and conduct an impact evaluation based on a realist approach.

Addressing gender in impact evaluation – This paper is a resource for practitioners and evaluators who want to include a genuine focus on gender impact when commissioning or conducting evaluations. Evaluability assessment for impact evaluation – This guide provides an overview of the utility of evaluability assessment, along with specific guidance and a tool for implementing one before an impact evaluation is undertaken.

When and how to develop an impact-oriented monitoring and evaluation system - Many development programme staff have had the experience of commissioning an impact evaluation towards the end of a project or programme only to find that the monitoring system did not provide adequate data about implementation, context, baselines or interim results. This guidance note has been developed in response to this common problem.


This is more consistent with a complexity perspective, in that a given event can have multiple causes and multiple consequences, and we could focus our analysis on either side of this picture.

Mahoney, J., Political Analysis 14; Goertz, G., Princeton University Press. Thanks, Rick, for this important point. Acknowledging multiple causes and multiple consequences, where appropriate, is important in impact evaluation, and designs and methods need to be able to address these. Is it reasonable to expect different methods to be used to identify the causes of an effect as compared to the effects of a cause?

This is a helpful overview of impact evaluation, which corresponds to a large extent with my organisation's thinking on the subject. One key difference, however, is that we develop our theory of change (or 'chain of impact', as we term it) during project planning. This establishes the relationships between the outputs, intermediate outcomes and longer-term impact in a transparent way at the outset of the programme. This approach also helps us to consult and seek consensus with participants and stakeholders.

Of course there are often unanticipated impacts with this approach, but it seems to increase the likelihood that the desired impacts will be achieved. I am almost a year late in responding to this resource, but I would be interested in the views of others on our approach. As you've noted, this makes it possible to get much more value from the theory of change. Hello Patricia, thank you for your reply; it was very helpful. I have developed a causal links model for the theory of change, and I would like you to review it and offer some suggestions.

Is there any way that I can send you a Microsoft Word file? Sir, I have a question: I have two years of data on agriculture technology adoption (base case and endline), consisting of farmer interviews. Thanks for getting in touch. I found this site extremely helpful in bringing various texts together coherently in my mind. I am writing a practicum for an MSc in project management and evaluation.

Whenever possible, it is desirable to identify a comparison or control group.

This may be a group of persons or an area in the community that does not receive the intervention but has characteristics similar to the group that does. Another possible comparison group is a similar, nearby community. The same pre- and post-intervention measurements should be made of the comparison group as of the group receiving the intervention. Regardless of the evaluation design, certain fundamental procedures should be followed, the most important of which are outlined in the eight steps that follow.

Carefully State the Hypothesized Effects. In the project planning phase, the department should carefully state its expectations of a gang enforcement strategy that is implemented correctly. There may be one anticipated effect or many, examples of which are: The strategy will reduce gang-related crime by 30 percent. The strategy will eliminate drug trafficking by the 86th Street Crew. The strategy will reduce fear of crime among residents in the southwest quadrant. The strategy will result in conviction of two hardcore gang leaders under Federal Triggerlock statutes.
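A small worked check of the first example effect might look like the following sketch; the incident counts are hypothetical, and a real assessment would also account for trends, seasonality and changes in reporting practices.

```python
# Hypothetical check of the stated effect "reduce gang-related crime by 30 percent".
# Counts are invented for illustration only.

baseline_incidents = 420   # gang-related incidents in the 12 months before the program
followup_incidents = 276   # gang-related incidents in the 12 months after implementation
target_reduction = 0.30

observed_reduction = (baseline_incidents - followup_incidents) / baseline_incidents
print(f"Observed reduction: {observed_reduction:.1%} (target: {target_reduction:.0%})")
print("Target met" if observed_reduction >= target_reduction else "Target not met")
```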

In some programs, the effects may be expected to occur in stages: for example, the program will reduce street-level drug sales by gang members, which will reduce fear of crime. In this case, one stated effect is anticipated to result in another. Identify Possible Unintended Effects.

Most programs have potential associated risks that also should be identified early in the planning phase to allow the department an opportunity to address them better. The evaluation can assess whether the unintended effects occurred. Examples of unintended but anticipated effects are: Officers will object to the organizational changes required for implementation. New gang leaders will quickly assume control after leaders targeted by the strategy are incarcerated.

Neighborhood residents will react negatively to increased enforcement efforts. Key personnel will be transferred or will retire. Define Measurement Criteria. The indicators that will be measured (for example, gang-related crime and community support) must be clearly defined to obtain consistent, reliable measurements. What is meant by gang-related crime? Should a gang motive be present before crimes are so classified? Are measures of some crimes more important than others?

How are satisfaction with the police, fear of crime, or community support defined? Terms should be clarified so the department and evaluators understand exactly what is to be measured and how.
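One practical way to make such definitions operational is to encode the agreed rule so that every analyst classifies incidents identically; the sketch below assumes hypothetical field names and a "gang-motive" definition purely for illustration.

```python
# Illustrative sketch: encoding an agreed definition of "gang-related crime" as an
# explicit rule, so that every analyst classifies incidents the same way.
# Field names and the rule itself are hypothetical, not an official standard.

def is_gang_related(incident: dict) -> bool:
    """Classify an incident as gang-related only if a gang member is involved
    AND the recorded motive is gang-related (a 'gang-motive' definition)."""
    return incident.get("gang_member_involved", False) and incident.get("gang_motive", False)

incidents = [
    {"id": 1, "gang_member_involved": True,  "gang_motive": True},
    {"id": 2, "gang_member_involved": True,  "gang_motive": False},  # member involved, no gang motive
    {"id": 3, "gang_member_involved": False, "gang_motive": False},
]
print([i["id"] for i in incidents if is_gang_related(i)])   # -> [1]
```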

Determine Appropriate Time Periods. Data collection can be costly, and information about results is important to people with investments in the program.

For practical reasons, evaluators often must compromise on ideal time periods. This step responds to issues such as how far back in time baseline data should be collected and how long the program should operate to give it a fair opportunity to show results. Generally, the BJA Urban Street Gang Program demonstration sites obtained data on key dependent variables (for example, drug arrests, drive-by shootings, and gang-related homicides) for a 5-year period before project commencement.

This method helped identify trends and aberrations. For example, an unusual problem or a unique special operation could have resulted in an unusually high arrest rate in a given year. The demonstration sites were expected to devote the first 3 months to the needs assessment and planning processes, with program activities occurring for the next 12 to 15 months.
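The sketch below illustrates this kind of baseline screening with invented counts: it flags any baseline year whose total departs sharply from the five-year average so that the year can be investigated (for example, for a special operation) before being treated as the program's "before" condition. The years, counts and flagging threshold are all hypothetical.

```python
# Illustrative sketch (invented counts): flag baseline years whose drug-arrest totals
# deviate sharply from the five-year average.
import statistics

baseline_arrests = {1988: 310, 1989: 325, 1990: 298, 1991: 540, 1992: 335}  # hypothetical

mean = statistics.mean(baseline_arrests.values())
stdev = statistics.stdev(baseline_arrests.values())

for year, count in baseline_arrests.items():
    z = (count - mean) / stdev
    if abs(z) > 1.5:   # threshold chosen only for illustration
        print(f"{year}: {count} arrests looks unusual (z = {z:.1f}); check for special operations")
```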

The impact evaluation should be based on data accumulated for at least 12 months of program activities. The demonstration sites were required to submit their evaluation reports in their 18th month. Although not a hard-and-fast rule, most jurisdictions should be able to make some assessments about program impact after 12 months of activities.

However, the appropriate period may vary considerably, depending on the nature of the problem, the type and complexity of the response, and other factors. Monitor Program Implementation. Systematic program monitoring is required to aid the site in correcting and overcoming problems, to effect management improvements that may reduce future implementation failures, and to interpret program results correctly.

Collect Data Systematically. The data collected on program implementation, hypothesized effects, and unintended effects must be as accurate as possible.

If more than one person collects data, each must follow the same rules and use the same definitions. If data are collected over a long period, the same rules and definitions must be used throughout.
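One way to enforce shared rules and definitions across collectors and over time is a common validation step that every record must pass before entering the dataset; the field names, categories and checks below are hypothetical.

```python
# Illustrative sketch: a shared validation step so every data collector applies the
# same rules and category definitions. Field names and categories are hypothetical.

ALLOWED_CRIME_TYPES = {"assault", "drug_sale", "drive_by_shooting", "homicide", "other"}
REQUIRED_FIELDS = {"incident_id", "date", "crime_type", "gang_motive"}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes the shared rules."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("crime_type") not in ALLOWED_CRIME_TYPES:
        problems.append(f"unknown crime_type: {record.get('crime_type')!r}")
    if not isinstance(record.get("gang_motive"), bool):
        problems.append("gang_motive must be recorded as True/False")
    return problems

record = {"incident_id": 17, "date": "2024-03-05", "crime_type": "drug sale", "gang_motive": True}
print(validate_record(record))   # -> ["unknown crime_type: 'drug sale'"]
```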

Analyze Data. Data analysis should produce a description of the program as it was implemented. If the evaluation design is strong enough, analysis can go beyond describing what happened and provide convincing explanations of why it happened. The analysis should present the evidence that helps determine whether the program had its hypothesized effects and whether it resulted in any unintended effects.
