Tag Archives: participatory

Importance of Triangulating Evaluators

Including stakeholders in the evaluation team adds value in several ways.

  • It can increase the usefulness of evaluations if their views and expertise are considered and integrated whenever appropriate. This requires a skilled evaluation facilitator and stakeholder commitment to substantial participation, particularly in analysis and interpretation activities.
  • Participatory evaluation methods can be used to create consensus and ownership in relation to the development activities.
  • Dialogue with stakeholders can help improve understanding and responsiveness to their needs and priorities.

In evaluation work “triangulation” is a fancy word for using multiple data-collection methods, data sources, perspectives and evaluators to develop a more in-depth understanding of whatever is being studied or evaluated. Independent corroboration of a result strengthens its utility for decision making as well as extending our knowledge.

See post on triangulation … Introduction to triangulation

The evaluator dimension of triangulation does not receive the same degree of attention in the participatory community development evaluation literature as the other dimensions. Yet participation by stakeholders can be a critical way of revealing and dealing with bias, and of uncovering complexity in how the evaluated program is affecting participants and others.

Triangulation is not evaluation magic. Two common assumptions about the value of triangulation need to be examined closely.

  1. Does it eliminate bias?

The first assumption is that bias will be eliminated in a multimethod design. Although different methods can yield different understandings of the object of investigation, it is difficult to conclude that those different understandings somehow neutralize whatever biases are present. Each method has its own limitations, and one method will not necessarily compensate for the limitations of another.

  2. Does it reveal true propositions?

The second common assumption is that triangulation will lead to convergence on true propositions. Conflicting findings are a typical outcome of using different methods to collect information, especially when both quantitative and qualitative information are involved. The evaluator must be prepared to wrestle with the ambiguity creatively and to encourage others to do so. Exploring possible explanations for differences in findings may lead to valuable conclusions that would otherwise be missed. Patton (Qualitative Evaluation Methods, 1980, pp. 329-332) recommends triangulation during analysis of the information, with different teams of evaluators, or different members of the same evaluation team, using different analysis approaches. Exploring differences in their conclusions may lead to additional insights about the object of evaluation.

Triangulation is not magic, but it can lead to better informed conclusions and evaluation advice.

See post on evaluation advice…  Evaluation Advice

Ten Seed Technique (TST)

The ten seed technique is a participatory monitoring and evaluation tool that documents a group’s perceptions about a wide range of topics and issues. The attached file describes and illustrates the technique. Click the link → Ten Seed Technique
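
For readers who want to tabulate results, here is a minimal sketch in Python, assuming the common form of the technique in which each group distributes ten seeds across a set of categories to show their relative weight; the category names and seed counts below are hypothetical, not taken from the attached file.

```python
# Minimal sketch of tabulating Ten Seed Technique (TST) results.
# Assumes the common form of the technique: each group distributes
# ten seeds across a set of categories to show their relative weight.
# The category names and seed counts below are hypothetical.

allocations = {
    "Group A": {"water supply": 4, "school fees": 3, "health care": 2, "roads": 1},
    "Group B": {"water supply": 2, "school fees": 5, "health care": 2, "roads": 1},
}

def to_percentages(seeds):
    """Convert one group's seed allocation (ten seeds in total) to percentages."""
    total = sum(seeds.values())
    return {category: 100 * count / total for category, count in seeds.items()}

for group, seeds in allocations.items():
    assert sum(seeds.values()) == 10, f"{group} did not allocate exactly ten seeds"
    print(group, to_percentages(seeds))
```

Because every group allocates the same ten seeds, the resulting percentages can be compared or averaged across groups directly.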

Search the site using keyword “ten seed” to see examples of TE indicators based on TST.

Review the post on evaluating TD outcomes to keep in mind the essential characteristics of TE.  Click the link → TE

Evaluation wheel for participatory evaluation

The evaluation wheel is a graphic device for comparing evaluands on specific criteria. (Remember that an evaluand is a fancy word for whatever is being evaluated.) It is a versatile tool that can be used in a variety of ways:

  • Individuals can use it to indicate strengths and weaknesses of their performance on some task over a period of time.
  • As a group exercise it can illuminate differences among group members in the perceived strengths and weaknesses of something. A composite wheel is constructed from the wheels completed by individuals (see the sketch after this list).
  • As a multi-group exercise it enables comparison of how the different groups perceive the strengths and weaknesses of something.
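
As a rough illustration of the composite wheel mentioned above, the sketch below averages individual ratings for each criterion and plots them on a radar (“wheel”) chart with matplotlib. The criteria, the 1–5 scale and the ratings are hypothetical; this is one way a composite wheel might be drawn, not a prescription.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical criteria and 1-5 ratings from three individuals.
criteria = ["Relevance", "Efficiency", "Effectiveness", "Sustainability", "Participation"]
ratings = np.array([
    [4, 3, 5, 2, 4],   # person 1
    [3, 3, 4, 3, 5],   # person 2
    [5, 2, 4, 2, 4],   # person 3
])

# The composite wheel is simply the mean rating per criterion.
composite = ratings.mean(axis=0)

# Close the polygon so the wheel joins back to its starting point.
angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False)
values = np.concatenate([composite, composite[:1]])
angles = np.concatenate([angles, angles[:1]])

ax = plt.subplot(projection="polar")
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria)
ax.set_ylim(0, 5)
ax.set_title("Composite evaluation wheel (hypothetical data)")
plt.show()
```

Overlaying each individual’s polygon on the same axes, rather than plotting only the mean, is an equally valid way to show where group members disagree.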


How participatory was that evaluation?

Some time ago I became concerned that “participatory evaluation” was being used to refer to vastly different evaluation approaches. One report described as a participatory evaluation an exercise that merely surveyed a sample of people, with no engagement by the evaluator with anyone about any aspect of the evaluation process. Apparently simply responding to a questionnaire was considered participation.

Other evaluation reports varied widely in the extent to which the evaluator engaged with others in designing and implementing an evaluation. I prepared a simple tool for rating the extent to which an evaluator engaged others while carrying out the evaluation. The tool is attached below. I welcome comments on the usefulness of the tool and other ways of determining the extent to which an evaluation can meaningfully be called a participatory evaluation.
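
The attached tool is the authoritative version. Purely as a hypothetical illustration of the idea, the sketch below rates evaluator engagement with others at each phase of an evaluation on a 0–3 scale and sums the ratings into a rough participation score; the phases, scale and example scores are invented for the sketch and are not taken from the tool.

```python
# Hypothetical illustration of rating the strength of participation in an
# evaluation. The phases, the 0-3 scale and the example scores are invented
# for this sketch; they are not taken from the attached tool.

SCALE = {
    0: "no engagement with others",
    1: "others informed or consulted",
    2: "others actively involved in the work",
    3: "others share decision making",
}

phases = ["design", "data collection", "analysis", "interpretation", "reporting"]

# Example ratings for one evaluation (one rating per phase).
ratings = {"design": 1, "data collection": 2, "analysis": 0,
           "interpretation": 1, "reporting": 1}

score = sum(ratings[phase] for phase in phases)
maximum = 3 * len(phases)
print(f"Participation strength: {score}/{maximum} "
      f"({100 * score / maximum:.0f}% of the maximum)")
```

A profile of the per-phase ratings is usually more informative than the single total, since it shows where in the process participation was strong or absent.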

Tool Assessment of Strength of Participation in an Evaluation