Category Archives: General Evaluation Topics

Program Evaluation Policy and Procedures

An agency should have a policy or framework that applies to all program evaluation work it does. This is a set of broad statements about the minimum requirements for each evaluation, regardless of diversity in the evaluands and evaluation objectives.

A framework is a set of high-level standards. The framework in this post is based on a participatory approach to evaluation. There are eight elements in the framework.

Planning an evaluation

A Feasibility Study should be done to determine if the anticipated benefits of doing an evaluation will justify the estimated costs. See posts on Feasibility Study for details.

Assuming that an evaluation feasibility study supports doing a program evaluation, planning begins by preparing an evaluation design that includes the purpose of the evaluation, evaluation objectives, stakeholder groups, primary needs for information, methodology, reporting to different audiences, evaluators, budget, and a time line for each stage of the evaluation.

Once the design document is consistent with this framework, it is generally the document approved by the appropriate representatives of management, partners, and program participants. The evaluators are accountable for using the design to plan and complete evaluation activities that achieve the evaluation purpose and objectives.

A detailed work plan evolves as activities are scheduled and completed to achieve the evaluation objectives. Each evaluation plan will be reviewed against this framework. Exceptions to elements in this framework will be explained in the evaluation plan.

1. Values – Characteristics of an evaluation that are valued most.
  1. In addition to typical methods for collecting and analyzing information to achieve evaluation objectives, the evolving plan shall include designated time for reflection and discernment.
  2. Participatory methods shall be used throughout all aspects of the evaluation exercise. These methods include involving stakeholders in developing questions and engaging in analyzing and interpreting collected data.
  3. The goodness of a program shall be defined by the notions of goodness held by the different stakeholder groups identified in the evaluation design. Evidence will be collected for each notion included in the approved evaluation design.
2. Utilization of findings.
  1. The evaluation design will describe the primary audience for the evaluation findings and how that audience intends to use them. Other audiences may use at least some of the findings, though such uses may have limitations; the evaluation report will describe the limitations of using the findings for other purposes.
  2. The evaluation design will also describe other audiences and the probable means of reporting to them.
3. Theory of Social Change (ToC) within the surrounding context.
  1. Each evaluation will examine the appropriateness of the implicit and explicit theory of change undergirding the program design.
  2. Each evaluation will document the process followed to develop the program design, and the evaluator will comment on the role of the ToC in that process.
  3. Each evaluation will document achievements and how the interactions between project persons, partners, and participants reflect Christian values. If achievements or interactions are unsatisfactory, the evaluation report will include recommendations for investigating theories of change that may guide future programming toward better results.
4. Knowledge of assets in the context that strengthen program results.
  1. The evaluation will examine the assessments that guided the program design to determine if assets were considered; if so, the evaluation will document how the design included assets to strengthen the program.
  2. The evaluation will examine how the program monitored assets in the context and how management responded to opportunities to use them.
5. Knowledge of obstacles in the context that could reduce program effectiveness or efficiency.
  1. The evaluation will examine how the program design identified assumptions that, if valid, would have major negative consequences, and how those assumptions affected the design of the program.
  2. The context for each program result will be examined to identify obstacles to achieving maximum results.
6. Assumptions about evaluation approach.
  1. When program objectives can be achieved by applying knowledge based on cause-effect relationships, the evaluation shall use appropriate methods to document and analyze the significance of achievements. This is outcome or impact evaluation.
  2. Generally cause-effect methods are not appropriate for documenting change in spiritual dimensions of reality. To evaluate such change an evaluation will include rigorous documentation of information collected through spiritual practices and qualitative methods of inquiry.
  3. If both types of evaluation are desired for a program, key stakeholders should agree on whether there will be two separate evaluations or one mixed methods evaluation. If the decision is to do one mixed methods evaluation, then the evaluation team needs to include an experienced cause-effect evaluator and an experienced spiritual-qualitative evaluator who respect each other’s expertise.
7. Program implementation monitoring.

Various aspects of program implementation should be monitored at least quarterly. An evaluation design should include an objective to examine monitoring results for the period covered by the evaluation. Typical topics to analyze include the adequacy of the indicators used, the validity and reliability of indicator results, management's use of monitoring information, and the accuracy of reporting (a minimal sketch of this kind of screening appears after this framework).

8. Framework revision.

Every five years this framework will be reviewed by the agency and partners to determine its relevance and usefulness. A participatory process will be used to determine modifications.
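
The sketch below makes the monitoring analysis in element 7 concrete: it screens quarterly indicator records for missing reports and implausible values, one small piece of checking the validity of indicator results and the accuracy of reporting. It is a minimal illustration only; the indicator names, figures, and plausibility bounds are hypothetical, not part of the framework.

```python
# Minimal sketch: screen quarterly monitoring records for gaps and
# implausible values. All names and numbers are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class IndicatorRecord:
    indicator: str          # e.g. "households_reached"
    quarter: str            # e.g. "2017-Q3"
    value: Optional[float]  # None marks a missing report

def screen_records(records, bounds):
    """Flag missing or out-of-range values for follow-up.
    bounds maps an indicator name to a (low, high) plausibility range."""
    flags = []
    for r in records:
        if r.value is None:
            flags.append(f"{r.indicator} {r.quarter}: no report filed")
        else:
            low, high = bounds[r.indicator]
            if not low <= r.value <= high:
                flags.append(
                    f"{r.indicator} {r.quarter}: value {r.value} "
                    f"outside plausible range {low}-{high}")
    return flags

records = [
    IndicatorRecord("households_reached", "2017-Q1", 420),
    IndicatorRecord("households_reached", "2017-Q2", None),
    IndicatorRecord("households_reached", "2017-Q3", 9400),  # suspicious spike
]
for flag in screen_records(records, {"households_reached": (0, 1000)}):
    print(flag)
```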

Getting Program Evaluation Right

This post is an outline of the article "Getting evaluation right: a five point plan," by Dr. Jyotsna Puri, Deputy Executive Director and Head of Evaluation of the International Initiative for Impact Evaluation (3ie), October 25, 2012. Retrieved from https://oxfamblogs.org/fp2p/getting-evaluation-right-a-five-point-plan/

The 5-point plan is a prototype for an organization’s evaluation policy that embeds evaluation exercises throughout the program planning, implementation, and follow-up processes.

This is an outline of the five points; go to the article for details. The intended audience is an international non-governmental organization.

Point 1: Have a good theory of change/causal pathway/impact pathway or whatever you want to call it.

Theories of change are good for understanding the program; they work well as schematics and make great communication tools too. Additionally, an evidence-based theory of change can help you decide where you need the most investigation, where a process evaluation is sufficient, where a counterfactual analysis of outcomes is required, and where simple tracking of indicators is useful.

Point 2: Put in place monitoring and information systems. Track process, output, and some outcome indicators across program areas.

Put together a set of detailed standard operating procedures for collecting information on process indicators. Train people in the procedures and periodically verify that they are following them correctly. At least one full-time skilled person should manage data collection and analysis.
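
As one hedged illustration of periodic verification, the sketch below compares two independent entries of the same forms (double-entry verification, a common data-quality check). The form IDs and field names are hypothetical.

```python
# Minimal sketch of double-entry verification: the same paper forms
# are keyed in twice by different people, and mismatching fields are
# flagged for review. Form IDs and fields are hypothetical.

def compare_entries(first_entry, second_entry):
    """Return field-level mismatches between two independent entries
    of the same forms, keyed by form ID."""
    mismatches = {}
    for form_id, first in first_entry.items():
        second = second_entry.get(form_id, {})
        diffs = [field for field in first
                 if first[field] != second.get(field)]
        if diffs:
            mismatches[form_id] = diffs
    return mismatches

entry_a = {"F001": {"age": 34, "crop": "maize"}}
entry_b = {"F001": {"age": 43, "crop": "maize"}}  # transposed digits
print(compare_entries(entry_a, entry_b))  # {'F001': ['age']}
```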

Point 3: Think about measuring attributable change.

[FGC note. Consult with an expert about the costs and benefits of doing this right. Don’t agree to include this in an evaluation design unless the benefit is worth the cost.]

Point 4: Undertake cost and cost-effectiveness studies.

What are the priced and non-priced inputs in the project? Think about whether you want to replicate these projects in other places, or scale them up.

Put together a standardized template with cost categories and measurement methods. (For example, how will you measure the cost of using good seeds for the farmer? It is not just the cost of procurement or transportation, but also the cost of additional manure and the cost of storage for seed and post-harvest produce.)
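
Here is a minimal sketch of what such a standardized template might look like in code, following the seeds example above. The categories and figures are invented.

```python
# Minimal sketch of a standardized cost template. Categories and
# amounts are invented; real templates would also record how each
# category is measured.

COST_CATEGORIES = ["procurement", "transport", "complementary_inputs",
                   "storage", "labor_time"]

def total_cost(line_items):
    """Sum costs per category and overall; reject unknown categories
    so every cost lands in a defined bucket."""
    totals = {c: 0.0 for c in COST_CATEGORIES}
    for category, amount in line_items:
        if category not in totals:
            raise ValueError(f"uncategorized cost: {category}")
        totals[category] += amount
    totals["total"] = sum(totals[c] for c in COST_CATEGORIES)
    return totals

# Cost of "using good seeds" for one farmer, per the example:
seed_costs = [("procurement", 120.0), ("transport", 15.0),
              ("complementary_inputs", 40.0),  # additional manure
              ("storage", 25.0)]               # seed and post-harvest storage
print(total_cost(seed_costs))
```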

Point 5: Focus on implementation research as an important part of your program design.

Systematically document implementation factors, and put together a protocol containing questions relevant to informing all stages of the evaluation. This is where participatory methods, focus groups, observational scrutiny, and process research should come in; they should also inform your theory of change.

Types of Program Results

A program result is the difference between the status of some condition prior to program implementation and the status of that condition after program implementation. There are several types of results. I am using terminology that is common in evaluation literature.

I am doing this exercise because I believe good evaluation involves using language precisely as different viewpoints are explored. For example, saying that a program achieves results is not a precise use of language. Achievement is a consequence of human action. So program implementation, the actions of human actors, achieves results.

Program effects are results directly related to expectations for the program, often in the form of aims, goals, and objectives.

Program impacts are effects that are practically significant and sustained over a relatively long period of time. The evaluation needs to rule out plausible non-program factors that could have produced each effect; the evaluation report should include the conclusions of these efforts.

Side effects are results that are not directly related to program expectations.

Undesirable consequences are results that are harmful in some way.
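
To make these definitions concrete, here is a minimal sketch in code. The condition names and numbers are invented, and deciding whether an effect is an impact, or a result is harmful, still requires evaluative judgment and evidence rather than arithmetic.

```python
# Minimal sketch of the terminology above; conditions and numbers
# are invented.

TARGETED = {"clean_water_access"}  # conditions named in program objectives

def result(status_before, status_after):
    """A result is the change in a condition's status across implementation."""
    return status_after - status_before

def result_type(condition):
    """Effects relate directly to program expectations; side effects
    do not. Judging impact or harm takes evidence, not just arithmetic."""
    return "effect" if condition in TARGETED else "side effect"

print(result(0.42, 0.67), result_type("clean_water_access"))  # ~0.25 effect
print(result(0.10, 0.02), result_type("groundwater_table"))   # ~-0.08 side effect
```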

I welcome comments.

ToR, Evaluation Design, Evaluation Plan

There are different types of plans that guide an evaluation exercise. I define three of them in this post.

Terms of reference (ToR)… Stakeholder expectations to be met by the evaluator, and evaluator expectations to be met by the client. The document is like a contract.

Evaluation design… The purpose and objectives of the evaluation exercise, and the means for achieving them. The primary function of the design is to anticipate possible alternative explanations for program results, and describe data collection and analysis methods to render implausible as many of them as feasible, leaving a stronger case that results can be attributed to program implementation.

Evaluation plan… Day-by-day assignments for each member of the evaluation team throughout the exercise.

Comments welcome.

 

Research Design, Evaluation Design

The purpose of this post is to introduce the concepts of research design and evaluation design to people with little or no research training or experience. There are many types of evaluation designs, but two are key to planning impact evaluation: experimental designs and quasi-experimental designs. The characteristics of each type are described in the linked introduction.

LINK… Research design intro

Click on this link for an outline for an evaluation design document…http://evalfrank.com/2017/10/outline-evaluation-design-document/
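
As an informal illustration of the distinction (a sketch with invented numbers, not taken from the linked materials): an experimental design uses random assignment, so the difference in post-program means estimates the program effect; a common quasi-experimental design, difference-in-differences, instead compares each group's pre-to-post change, which nets out fixed differences between non-randomized groups.

```python
# Illustrative only; all numbers are invented.

def mean(xs):
    return sum(xs) / len(xs)

# Experimental design: with random assignment, the difference in
# post-program means estimates the program effect.
treated = [6.1, 5.8, 6.4, 6.0]
control = [5.2, 5.0, 5.4, 5.1]
print("experimental estimate:", mean(treated) - mean(control))

# Quasi-experimental design (difference-in-differences): without
# randomization, compare each group's pre-to-post change instead.
program_pre, program_post = [4.0, 4.2, 3.9], [6.0, 6.3, 5.9]
comparison_pre, comparison_post = [4.5, 4.4, 4.6], [5.0, 4.9, 5.2]
did = ((mean(program_post) - mean(program_pre))
       - (mean(comparison_post) - mean(comparison_pre)))
print("difference-in-differences estimate:", did)
```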

Obstacles to Critical Thinking

The attached file describes nine obstacles to critical thinking; failure to overcome them can lead to flawed reasoning. Train yourself to spot these obstacles as you do evaluation work, and train yourself in overcoming them. Your work will have a better chance of leading to changes in community development planning and implementation that make the world a better place.

LINK… Obstacles to Critical Thinking in Evaluation

Agency Framework for Evaluation Policy

An agency should have a policy or framework that applies to all evaluation work it does. This is a set of broad statements about the minimum requirements for each evaluation, regardless of diversity in the evaluands and evaluation objectives. A framework is a set of high-level standards.

This post describes seven groups of statements to guide planning a program evaluation. It can serve as the core of an evaluation policy for an agency. Brian Wolters at the Grand Rapids [Michigan] Center for Community Transformation worked with me to develop the statements.

LINK… Agency Eval Framework-Policy

As always, constructive criticism and suggestions are welcome.

Mindful Evaluation

Note August 14, 2017: Errors in the original post have been corrected.

Mindfulness

Giving full attention to thoughts, sensations, and feelings, as well as to what we are doing in the present moment; nonjudgmental, sustained, and focused observation of self; activities to understand at a deeper level what I know and how I know it.

Mindful evaluation is not an evaluation model. It is any evaluation in which the evaluator is deeply aware of the immediate situation and his influence on it throughout the entire evaluation exercise. Mindfulness is consistent with holistic thinking; I encourage transformative evaluators to explore it as an aspect of professional development.

Cullen Puente and Bender (2015) discuss seven steps for increasing mindfulness, which in turn supports sound decision making as an evaluator.

1. Take time to think through your intention for incorporating mindfulness principles into your evaluation practice. Visualize yourself being more mindful in each stage of the evaluation. Set aside a specific regular time to practice various ways of being more mindful; practice, practice, practice.

Many web sites describe varieties of exercises to cultivate different aspects of mindfulness. For example, this site describes exercises for experiencing sensations from your eyes, ears, nose, mouth, etc. at a deeper level, and then reflecting on the feelings and thoughts that follow. Retrieved from http://www.practicingmindfulness.com/16-simple-mindfulness-exercises/.

2. Cultivate your ability to pay attention while disregarding distractors.

3. Become more aware of your emotions; accept them as legitimate aspects of who you are. The more you allow them to be part of your conscious experience and understand how they shape your thinking, the less they will interfere with objective data analysis and interpretation.

Explore your perspectives on cultural, economic, political, social and linguistic matters; these perspectives influence your interpretation of cognitive input with or without your awareness. Heighten your awareness to enrich your evaluation practice and reveal personal preferences that can taint evaluation findings.

4. Cultivate self-reflexivity by asking yourself what you are doing at the moment and why. Think through what constitutes evidence that supports a conclusion. Think through different types of truth, and the many ways we have of distorting our perceptions to fit our preconceived notions about probable evaluation findings. Continually explore your understanding of what is real and not real; what is credible knowledge and what is not.

5. Practice deep listening. Good listeners can elicit more information from others. Also, they may encourage others to reflect on what they are sharing, which may lead to richer information.

6. Stay curious and open. Practice being child-like, playful with your thoughts, as you apply your evaluative skills.

7. Creatively mitigate the influence of preconceived ideas and personal biases.

For a more detailed discussion, read:

Cullen Puente, Anne, and Bender, April. (2015). Mindful Evaluation: Cultivating Our Ability to Be Reflexive and Self-Aware. Journal of MultiDisciplinary Evaluation, 11(25), 51-59. http://www.jmde.com

Related Posts

http://evalfrank.com/2014/11/principles-for-holistically-planning-an-evaluation/

http://evalfrank.com/2014/11/improving-your-evaluation-work-through-reflective-practice/

Evaluation Model Framework for an Organization

An evaluation model framework is a set of guidelines for selecting or modifying an evaluation model. When an organization has such a framework, reasoned decisions can be made about which models are appropriate. Some elements of a model may not be consistent with the framework; those elements should be modified. Or parts of one model and parts of another may be combined to fit the framework better than either model alone.

This 2-page paper illustrates six topics to include in an evaluation framework… Constructing an Evaluation Framework

Two other posts discuss evaluation frameworks and models.

http://evalfrank.com/2013/09/evaluation-models-or-approaches/… A review of the concept of an evaluation model, and a discussion of models with some similarities to transformative evaluation.

http://evalfrank.com/2015/04/evaluation-model-for-te/… A description of the features of transformative evaluation based on an evaluation model template.