Category Archives: General Evaluation Topics

ToR, Evaluation Design, Evaluation Plan

There are different types of plans that guide an evaluation exercise. I define three of them in this post.

Terms of reference (ToR)… Stakeholder expectations to be met by the evaluator, and evaluator expectations to be met by the client. The document is like a contract.

Evaluation design… The purpose and objectives of the evaluation exercise, and the means for achieving them. The primary function of the design is to anticipate plausible alternative explanations for program results, and to describe data collection and analysis methods that make as many of those explanations as feasible implausible, leaving a stronger case that the results can be attributed to program implementation.

Evaluation plan… Day-by-day assignments for each member of the evaluation team for each day of the exercise.

Comments welcome.

 

Research Design, Evaluation Design

The purpose of this post is to introduce the concepts of research design and evaluation design to people with little or no research training or experience. There are many types of evaluation designs, but two are key to planning impact evaluation: experimental designs and quasi-experimental designs. The characteristics of each type are described, and a small numerical sketch of the two designs follows the links below.

LINK… Research design intro

Click on this link for an outline for an evaluation design document… http://evalfrank.com/2017/10/outline-evaluation-design-document/
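To make the contrast concrete, here is a minimal sketch in Python with invented numbers (none of this comes from the linked posts). An experimental design randomly assigns participants, so post-program group means can be compared directly; a quasi-experimental design lacks randomization, so pre-program scores are used to adjust for pre-existing group differences, shown here as a difference-in-differences estimate. For brevity, the same program-group scores appear in both designs.

```python
# Hypothetical sketch (all numbers invented): how an experimental design and a
# quasi-experimental design each estimate a program's impact on outcome scores.

def mean(xs):
    return sum(xs) / len(xs)

# Experimental design: participants were randomly assigned to the program or
# to a control group, so a simple comparison of post-program means estimates
# the program's impact.
program_post = [62, 58, 65, 60, 59]   # outcome scores, program group
control_post = [55, 57, 53, 56, 54]   # outcome scores, randomized control group
experimental_estimate = mean(program_post) - mean(control_post)

# Quasi-experimental design: the comparison group was NOT randomized, so
# pre-program scores are used to adjust for pre-existing differences between
# the groups (a difference-in-differences estimate).
program_pre     = [50, 47, 52, 49, 48]
comparison_pre  = [45, 46, 44, 45, 44]
comparison_post = [48, 50, 47, 49, 47]
quasi_estimate = (mean(program_post) - mean(program_pre)) - \
                 (mean(comparison_post) - mean(comparison_pre))

print(f"Experimental estimate of impact:    {experimental_estimate:.1f}")
print(f"Difference-in-differences estimate: {quasi_estimate:.1f}")
```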

Obstacles to Critical Thinking

The attached file describes nine obstacles to critical thinking; failure to overcome them can lead to flawed reasoning. Train yourself to spot these obstacles as you do evaluation work, and to overcome them. Your work will then have a better chance of leading to changes in community development planning and implementation that make the world a better place.

LINK… Obstacles to Critical Thinking in Evaluation

Agency Framework for Evaluation Policy

An agency should have a policy or framework that applies to all evaluation work it does. This is a set of broad statements about the minimum requirements for each evaluation, regardless of diversity in the evaluands and evaluation objectives. A framework is a set of high-level standards.

This post describes seven groups of statements to guide planning a program evaluation. It can serve as the core of an evaluation policy for an agency. Brian Wolters at the Grand Rapids [Michigan] Center for Community Transformation worked with me to develop the statements.

Agency Eval Framework-Policy

As always, constructive criticism and suggestions are welcome.

Mindful Evaluation

Note August 14, 2017: Errors in the original post have been corrected.

Mindfulness

Giving full attention to thoughts, sensations, and feelings, as well as to what we are doing in the present moment; nonjudgmental, sustained, and focused observation of self; activities to understand at a deeper level what we know and how we know it.

Mindful evaluation is not an evaluation model. It is any evaluation in which the evaluator is deeply aware of the immediate situation, and of their influence on it, throughout the entire evaluation exercise. Mindfulness is consistent with holistic thinking; I encourage transformative evaluators to explore it as an aspect of professional development.

Cullen Puente and Bender (2015) discuss seven steps for increasing mindfulness, which in turn supports sound decision making as an evaluator.

1. Take time to think through your intention for incorporating mindfulness principles into your evaluation practice. Visualize yourself being more mindful in each stage of the evaluation. Set aside a specific regular time to practice various ways of being more mindful; practice, practice, practice.

Many websites describe exercises to cultivate different aspects of mindfulness. For example, this site describes exercises for experiencing sensations from your eyes, ears, nose, mouth, etc. at a deeper level, and then reflecting on the feelings and thoughts that follow: http://www.practicingmindfulness.com/16-simple-mindfulness-exercises/

2. Cultivate your ability to pay attention while disregarding distractions.

3. Become more aware of your emotions; accept them as legitimate aspects of who you are. The more you allow them to be part of your conscious experience and the more you understand how they shape your thinking, the less they will interfere with objective data analysis and interpretation.

Explore your perspectives on cultural, economic, political, social and linguistic matters; these perspectives influence your interpretation of cognitive input with or without your awareness. Heighten your awareness to enrich your evaluation practice and reveal personal preferences that can taint evaluation findings.

4. Cultivate self-reflexivity by asking yourself what you are doing at the moment and why. Think through what constitutes evidence that supports a conclusion. Think through different types of truth, and the many ways we have of distorting our perceptions to fit our preconceived notions about probable evaluation findings. Continually explore your understanding of what is real and not real; what is credible knowledge and what is not.

5. Practice deep listening. Good listeners can elicit more information from others. Also, they may encourage others to reflect on what they are sharing, which may lead to richer information.

6. Stay curious and open. Practice being childlike and playful with your thoughts as you apply your evaluative skills.

7. Creatively mitigate the influence of preconceived ideas and personal biases.

For a more detailed discussion, read:

Cullen Puente, Anne, and Bender, April. (2015). Mindful Evaluation: Cultivating Our Ability to Be Reflexive and Self-Aware. Journal of MultiDisciplinary Evaluation, 11(25), 51-59. http://www.jmde.com

Related Posts

http://evalfrank.com/2014/11/principles-for-holistically-planning-an-evaluation/

http://evalfrank.com/2014/11/improving-your-evaluation-work-through-reflective-practice/

Evaluation Model Framework for an Organization

An evaluation model framework is a set of guidelines for selecting or modifying an evaluation model. When an organization has such a framework, it can make reasoned decisions about which models are appropriate. Some elements of a model may not be consistent with the framework; those elements should be modified. Or parts of one model may be combined with parts of another so that the result fits the framework better than either model alone.

This 2-page paper illustrates six topics to include in an evaluation framework… Constructing an Evaluation Framework

Two other posts discuss evaluation frameworks and models.

http://evalfrank.com/2013/09/evaluation-models-or-approaches/… A review of the concept of an evaluation model, and a discussion of models with some similarities to transformative evaluation.

http://evalfrank.com/2015/04/evaluation-model-for-te/… A description of the features of transformative evaluation based on an evaluation model template.

Report Executive Summary: Bottom Line Format

Most program stakeholders expect evaluation recommendations to describe feasible actions that will improve the program evaluated or other similar programs. That is the bottom line. 

This post gives tips for writing the Executive Summary from a “bottom line” perspective where useful results are presented first, followed by contextual information.

It also gives stakeholders tips on what to expect from an evaluator to make the Executive Summary a useful management tool.

Link to 2.5-page document: Post exec summary

 

Importance of Triangulating Evaluators

Including stakeholders in the evaluation team has value along several dimensions.

  • It can increase the usefulness of evaluations if their views and expertise are considered and integrated whenever appropriate. This requires a skilled evaluation facilitator and stakeholder commitment to substantial participation, particularly in analysis and interpretation activities.
  • Participatory evaluation methods can be used to create consensus and ownership in relation to the development activities.
  • Dialogue with stakeholders can help improve understanding and responsiveness to their needs and priorities.

In evaluation work, “triangulation” is a fancy word for using multiple data collection methods, data sources, perspectives, and evaluators to develop a more in-depth understanding of whatever is being studied or evaluated. Independent corroboration of a result strengthens its utility for decision making and extends our knowledge.

See post on triangulation… Introduction to triangulation

The triangulation dimension receives less attention in the participatory community development evaluation literature than the dimensions listed above. Yet participation by stakeholders can be a critical way of revealing and dealing with bias, and of uncovering complexity in how the evaluated program affects participants and others.

Triangulation is not evaluation magic. Two common assumptions about the value of triangulation need to be examined closely.

  1. Does it eliminate bias?

The first assumption is that bias will be eliminated in a multimethod design. Although different methods can yield different understandings of the object of investigation, it is difficult to conclude that those different understandings somehow neutralize any biases present. Each method may not compensate for the limitations of the others.

  2. Does it reveal true propositions?

The second common assumption is that use of triangulation will lead to convergence upon true propositions. In fact, conflicting findings are a typical outcome of using different methods to collect information, especially when both quantitative and qualitative information are involved. The evaluator must be prepared to wrestle creatively with ambiguity, and to encourage others to do so. Exploration of possible explanations for differences in findings may lead to valuable conclusions that would not otherwise be reached. Patton (Qualitative Evaluation Methods, 1980, pp. 329-332) recommends triangulation during analysis of the information, where different teams of evaluators, or different members of the same evaluation team, use different analysis approaches; exploring differences in their conclusions may lead to additional insights about the object of evaluation, as the sketch below illustrates.
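As a concrete illustration of triangulation during analysis, here is a minimal sketch in Python. The ratings, the two analysis approaches, and the 0.5-point divergence threshold are all invented for illustration; they are not from Patton.

```python
# Hypothetical sketch of analyst triangulation (all numbers invented):
# two analysts summarize the same participant ratings with different
# approaches, and the team then explores any divergence.

import statistics

# ratings of program usefulness (1 = not useful ... 5 = very useful)
ratings = [5, 5, 4, 5, 1, 5, 4, 1, 5, 5]

analyst_a = statistics.mean(ratings)    # Analyst A reports the arithmetic mean
analyst_b = statistics.median(ratings)  # Analyst B reports the median

print(f"Analyst A (mean):   {analyst_a:.1f}")   # 4.0
print(f"Analyst B (median): {analyst_b:.1f}")   # 5.0

# The gap between the two summaries is itself a finding: a small subgroup of
# very dissatisfied participants pulls the mean down, a pattern that a single
# summary statistic would have hidden.
if abs(analyst_a - analyst_b) >= 0.5:
    print("Divergent summaries: examine subgroups before drawing conclusions.")
```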

Triangulation is not magic, but it can lead to better informed conclusions and evaluation advice.

See post on evaluation advice… Evaluation Advice

Evaluation Advice

The information offered by an evaluator to stakeholders to guide or direct their decision making constitutes technical advice. Several terms describe the different types of technical advice included in evaluation reporting; it is helpful to differentiate among conclusions, recommendations, suggestions, issues, concerns, and opinions.

Conclusion: the answer to a specific question included in the evaluation design, or an interpretation of outcomes related to a particular information need. A conclusion must be logically consistent with available evidence, as well as relevant knowledge and experience. Sometimes the evaluation will conclude that the question cannot be answered by the available evidence, knowledge and experience. If this is the case, it should be clearly stated along with a description of what would be required to answer the question.

Recommendation: a statement offered as worthy of acceptance or approval by stakeholders. Based on available evidence, knowledge, and experience, the evaluator is saying that it is reasonable for stakeholders to adopt the action included in the statement. It is essential to keep in mind, however, that as stakeholders consider the recommendation in light of other factors, they may reasonably decide not to adopt it. If stakeholders are involved in interpreting evaluation results before the report is prepared, the report is less likely to contain recommendations that are not adopted.

Suggestion: a statement offered for consideration because it is associated with something of interest in the evaluation. Although the evaluator deems this information desirable or fitting for consideration (otherwise it would not be included as advice), the evaluator believes further study is needed before stating it in the form of a recommendation.

Issue: a statement of something about which reasonable persons can disagree. Sometimes the attempt to answer a specific question during the evaluation uncovers different perspectives about it, each of which could be regarded as an answer to the question. Detailed description of the perspectives, and the values associated with each, is valuable advice, even when no specific suggestion or recommendation for action can be supported by available evidence, knowledge or experience. This description may help stakeholders resolve or reduce any conflict created by the issue.

Concern: a statement of something that is important to someone because it poses a threat, is believed to lead to an undesirable consequence, or calls for empirical verification. Confirmation, disconfirmation, or illumination of a concern is valuable advice for stakeholders.

Opinion: a statement of plausible consequences of studying or not studying a suggestion; adopting or not adopting a recommendation; confronting or ignoring an issue or concern.