Program Evaluation Policy and Procedures

An agency should have a policy or framework that applies to all program evaluation work it does: a set of broad statements about the minimum requirements for each evaluation, regardless of the diversity of evaluands and evaluation objectives.

A framework is a set of high-level standards. The framework in this post is based on a participatory approach to evaluation and has eight elements.

Planning an evaluation

A feasibility study should be done to determine whether the anticipated benefits of doing an evaluation will justify the estimated costs. See the posts on Feasibility Study for details.
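To make the feasibility comparison concrete, here is a minimal sketch in Python. The benefit and cost figures, the categories, and the break-even threshold are hypothetical placeholders, not values prescribed by this framework, and many benefits of an evaluation resist monetization.

```python
# Minimal sketch of the feasibility comparison: do anticipated benefits
# justify estimated costs? All figures are hypothetical placeholders.

anticipated_benefits = {
    "improved_program_decisions": 40_000,  # estimated value of better decisions
    "donor_accountability": 15_000,        # estimated value of credible reporting
    "staff_learning": 10_000,              # estimated value of lessons learned
}

estimated_costs = {
    "evaluator_fees": 25_000,
    "staff_time": 12_000,
    "travel_and_data_collection": 8_000,
}

total_benefits = sum(anticipated_benefits.values())
total_costs = sum(estimated_costs.values())
ratio = total_benefits / total_costs

print(f"Benefits: {total_benefits:,}  Costs: {total_costs:,}  Ratio: {ratio:.2f}")
# A ratio above 1.0 suggests the evaluation is worth doing; treat this as
# one input to the feasibility decision, not a verdict on its own.
if ratio >= 1.0:
    print("Feasibility supports proceeding with the evaluation.")
else:
    print("Reconsider the evaluation's scope or timing.")
```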

Assuming that the feasibility study supports doing a program evaluation, planning begins by preparing an evaluation design that includes the purpose of the evaluation, the evaluation objectives, stakeholder groups, primary information needs, methodology, reporting plans for different audiences, the evaluators, the budget, and a timeline for each stage of the evaluation.
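As one way to picture the required design elements, the sketch below models the design document as a simple checklist structure. The field names and sample values are illustrative assumptions, not a prescribed form.

```python
# Hypothetical template for an evaluation design document, mirroring the
# elements listed above. Field names are illustrative, not a prescribed form.
from dataclasses import dataclass

@dataclass
class EvaluationDesign:
    purpose: str
    objectives: list[str]
    stakeholder_groups: list[str]
    primary_information_needs: list[str]
    methodology: str
    reporting_plan: dict[str, str]   # audience -> reporting format
    evaluators: list[str]
    budget: float
    timeline: dict[str, str]         # stage -> target date

    def missing_elements(self) -> list[str]:
        """Return the names of any required design elements left empty."""
        return [name for name, value in vars(self).items() if not value]

design = EvaluationDesign(
    purpose="Assess the water program's first three years",
    objectives=["Document outcomes", "Examine the theory of change"],
    stakeholder_groups=["Management", "Partners", "Program participants"],
    primary_information_needs=["Outcome evidence", "Monitoring quality"],
    methodology="Participatory mixed methods",
    reporting_plan={"Management": "Full report", "Participants": "Community briefing"},
    evaluators=["Lead evaluator", "Local co-evaluator"],
    budget=45_000.0,
    timeline={"Design approval": "Q1", "Fieldwork": "Q2", "Reporting": "Q3"},
)
print("Missing elements:", design.missing_elements() or "none")
```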

When the design document is consistent with this framework, the evaluation design generally serves as the document approved by the appropriate representatives of management, partners, and program participants. The evaluators are accountable for using the design to plan and complete evaluation activities that achieve the evaluation's purpose and objectives.

A detailed work plan evolves as activities are scheduled and completed to achieve the evaluation objectives. Each evaluation plan will be reviewed against this framework. Exceptions to elements in this framework will be explained in the evaluation plan.

1. Values – the characteristics of an evaluation that are valued most.
  1. In addition to typical methods for collecting and analyzing information to achieve evaluation objectives, the evolving plan shall include designated time for reflection and discernment.
  2. Participatory methods shall be used throughout the evaluation exercise. These methods include involving stakeholders in developing evaluation questions and in analyzing and interpreting the collected data.
  3. The goodness of a program shall be defined by the notions of goodness articulated by the different stakeholder groups identified in the evaluation design. Evidence will be collected for each notion included in the approved evaluation design.
2. Utilization of findings.
  1. The evaluation design will describe the primary audience for the evaluation findings and how that audience intends to use them. Other audiences may use at least some of the findings, but with limitations; the evaluation report will describe the limitations of using the findings for other purposes.
  2. The evaluation design will also describe other audiences and the probable means of reporting to them.
3. Theory of Social Change (ToC) within the surrounding context.
  1. Each evaluation will examine the appropriateness of the implicit and explicit theory of change undergirding the program design.
  2. Each evaluation will document the process followed to develop the program design, and the evaluator will comment on the role the ToC played in that process.
  3. Each evaluation will document achievements and how the interactions among project staff, partners, and participants reflect Christian values. If achievements or interactions are unsatisfactory, the evaluation report will include recommendations for investigating theories of change that could guide future programming toward better results.
4. Knowledge of assets in the context that strengthen program results.
  1. The evaluation will examine the assessments that guided the program design to determine whether assets were considered; if so, the evaluation will document how the design used those assets to strengthen the program.
  2. The evaluation will examine how the program monitored assets in the context and how management responded to opportunities to use them.
5. Knowledge of obstacles in the context that could reduce program effectiveness or efficiency.
  1. The evaluation will examine how the program identified assumptions that, if valid, would have major negative consequences, and how those assumptions affected the design of the program.
  2. The context for each program result will be examined to identify obstacles to achieving maximum results.
6. Assumptions about evaluation approach.
  1. When program objectives can be achieved by applying knowledge based on cause-effect relationships, the evaluation shall use appropriate methods to document and analyze the significance of achievements; this is outcome or impact evaluation.
  2. Generally, cause-effect methods are not appropriate for documenting change in spiritual dimensions of reality. To evaluate such change, an evaluation will include rigorous documentation of information collected through spiritual practices and qualitative methods of inquiry.
  3. If both types of evaluation are desired for a program, key stakeholders should agree on whether to conduct two separate evaluations or one mixed-methods evaluation. If the decision is one mixed-methods evaluation, the evaluation team needs to include an experienced cause-effect evaluator and an experienced spiritual-qualitative evaluator who respect each other's expertise.
7. Program implementation monitoring.

Various aspects of program implementation should be monitored at least quarterly. An evaluation design should include an objective to examine monitoring results for the period covered by the evaluation. Typical topics to analyze include the adequacy of the indicators used, the validity and reliability of indicator results, management's use of monitoring information, and the accuracy of reporting.
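As a small illustration of examining monitoring results, this sketch flags indicators with missing quarterly values, one element of checking the completeness and reliability of indicator data. The records, field names, and completeness rule are hypothetical.

```python
# Hypothetical check of quarterly monitoring coverage: flag indicators with
# missing quarters or unreported values. Records are illustrative assumptions.

monitoring_records = [
    {"indicator": "wells_functioning", "quarter": "2024-Q1", "value": 41},
    {"indicator": "wells_functioning", "quarter": "2024-Q2", "value": 43},
    {"indicator": "wells_functioning", "quarter": "2024-Q3", "value": None},  # not reported
    {"indicator": "training_sessions", "quarter": "2024-Q1", "value": 6},
    # 2024-Q2 and 2024-Q3 are missing entirely for training_sessions
]

expected_quarters = {"2024-Q1", "2024-Q2", "2024-Q3"}
indicators = {r["indicator"] for r in monitoring_records}

for indicator in sorted(indicators):
    # Count a quarter as covered only if a non-empty value was reported.
    reported = {r["quarter"] for r in monitoring_records
                if r["indicator"] == indicator and r["value"] is not None}
    gaps = sorted(expected_quarters - reported)
    status = "complete" if not gaps else f"missing {', '.join(gaps)}"
    print(f"{indicator}: {status}")
```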

8. Framework revision.

Every five years this framework will be reviewed by the agency and partners to determine its relevance and usefulness. A participatory process will be used to determine modifications.
