How-To's

How to build evaluation capacity in a sector or organisation.

How to build sound outcomes/performance management systems.

How to link research and evaluation plans to an organisation’s strategic planning processes.

How to draw an outcomes hierarchy (intervention logic).

How to relate three different types of intervention logic (planned, research evidence/expert opinion and implemented).

How to institutionalise formative evaluation (evaluation aimed at improving programme implementation) into an agency’s processes.

How to monitor community development and community action programmes.

How to evaluate a set of similar centrally funded programmes being implemented by a number of autonomous agencies (or communities).

How to get an overview of evaluation approaches, types/purposes, methods, and designs.



How to build evaluation capacity in a sector or organisation.

There are three key aspects to building evaluation capacity in a sector or organisation: using appropriate evaluation models; building a sector/organisational culture of evaluation through appropriate evaluation training and awareness raising at all levels; and fostering strategic, sector-wide priority setting of evaluation questions. These are discussed in detail in Duignan, P. (2001). Building Social Policy Evaluation Capacity.



How to build sound outcomes/performance management systems.

A set of definitions and principles underlies sound outcomes/performance management systems. Such systems are made up of elements (e.g. outcomes, outputs, indicators) which are often not adequately defined. A sound outcomes system clearly defines the characteristics of the elements being used within it. An effective way of defining such elements is in terms of their measurability, attributability (whether changes in the element need to be attributable to a particular agency) and accountability (whether a particular agency is being held accountable for changes in the element). A set of principles which should underlie sound outcomes systems can also be defined. Outcome system definitions and principles are discussed in detail in Duignan, P. (2004). Principles of Outcomes Hierarchies: Contribution Towards a General Analytical Framework for Outcomes Systems (Outcomes Theory).
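
As an illustrative sketch only (not drawn from the paper, and using hypothetical element names), the three defining characteristics above can be thought of as flags attached to each element of an outcomes system:

    from dataclasses import dataclass

    @dataclass
    class OutcomeElement:
        # One element of an outcomes system, tagged with the three
        # characteristics discussed above (illustrative sketch only).
        name: str
        measurable: bool      # can changes in the element be measured?
        attributable: bool    # must changes be attributable to a particular agency?
        accountable: bool     # is an agency held accountable for changes?

    # An output is typically measurable, attributable and accountable; a
    # high-level outcome may be measurable but neither attributable to one
    # agency nor something any single agency is held accountable for.
    output = OutcomeElement("Number of training workshops delivered", True, True, True)
    final_outcome = OutcomeElement("Reduced alcohol-related harm", True, False, False)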



How to link research and evaluation plans to an organisation’s strategic planning processes.

There are five key steps in linking research and evaluation planning to an organisation’s strategic planning processes: setting out the intervention logic (outcomes hierarchy) which lies beneath the organisation’s strategy; facilitating a quality stakeholder discussion about research and evaluation priorities; developing a knowledge management infrastructure for evaluation questions and results; undertaking research and evaluation capacity building to enable identification of priorities; and allowing for three levels of measurement and evaluation (strategic, not-necessarily-attributable indicators for overall strategic monitoring; clearly attributable performance indicators for accountability purposes; and selective priority evaluation studies to inform progressive development of the intervention logic). These are discussed in detail in Duignan, P. (2004). Linking Research and Evaluation Plans to an Organisation’s Statement of Intent (SOI). This paper looks at the case of a government organisation which has to produce a Statement of Intent as part of its strategic planning process, but the principles outlined are applicable to any type of strategic planning within an organisation.
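
As a minimal sketch (not taken from the paper, and using hypothetical indicator names), the three levels of measurement and evaluation described above could be represented like this:

    from enum import Enum

    class MeasurementLevel(Enum):
        STRATEGIC_INDICATOR = "strategic monitoring (not necessarily attributable)"
        PERFORMANCE_INDICATOR = "attributable, used for accountability"
        EVALUATION_STUDY = "selective priority evaluation study"

    # Hypothetical entries in a research and evaluation plan, one per level.
    plan = [
        ("Population rate of hazardous drinking", MeasurementLevel.STRATEGIC_INDICATOR),
        ("Percentage of funded services audited this year", MeasurementLevel.PERFORMANCE_INDICATOR),
        ("Does the new service model improve client outcomes?", MeasurementLevel.EVALUATION_STUDY),
    ]

    for item, level in plan:
        print(f"{level.value:50} | {item}")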



How to draw an outcomes hierarchy (intervention logic).

An outcomes hierarchy is one type of intervention logic (also called programme theory or results chain) which sets out a cascading hierarchy of outcomes, from high-level final outcomes down to lower-level outcomes. There are various ways in which an outcomes hierarchy can be set out. The OH Diagramming Approach has the following characteristics: outcomes hierarchies are set out as diagrams; final outcomes are placed at the top of the diagram; elements are expressed as outcomes rather than processes; any number of links is allowed between outcomes within a diagram; and the focus of the outcomes hierarchy is identified by differentiating a core section of the diagram from the other, higher-level outcomes to which an organisation, programme or activity contributes. The details of drawing outcomes hierarchies using this approach are set out in Duignan, P. (2004). Intervention Logic: How to Build Outcomes Hierarchy Diagrams Using the OH Diagramming Approach.
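
A minimal sketch of how such a diagram could be held as data (hypothetical outcomes, not taken from the paper): each outcome lists the higher-level outcomes it contributes to, final outcomes sit at the top with nothing above them, and a separate set marks the core section of the diagram.

    # Each outcome maps to the higher-level outcomes it contributes to.
    hierarchy = {
        "Staff trained in brief interventions": ["Brief interventions delivered routinely"],
        "Brief interventions delivered routinely": ["Hazardous drinking reduced"],
        "Hazardous drinking reduced": ["Alcohol-related harm reduced"],
        "Alcohol-related harm reduced": [],  # final outcome at the top of the diagram
    }

    # The core section of the diagram, as distinct from the higher-level
    # outcomes the programme merely contributes to.
    core = {"Staff trained in brief interventions", "Brief interventions delivered routinely"}

    def final_outcomes(h):
        # Outcomes with no higher-level outcome above them.
        return [outcome for outcome, above in h.items() if not above]

    print("Final outcomes:", final_outcomes(hierarchy))
    print("Core section:", core)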



How to relate three different types of intervention logic (planned, research evidence/expert opinion and implemented).

Three potentially different intervention logics can be identified for any organisation, programme or activity: a planned logic; a research-evidence/expert-opinion logic; and an as-implemented logic. One way of viewing the iterative relationship between these three logics is set out in Duignan, P. (2004). Achieving Outcomes Through Evidence Based (Informed) Practice: Iterative Intervention Logic (Programme Model) Development (The I3Cycle).
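
One very simple way to make the comparison concrete (an illustrative sketch with hypothetical outcomes, not the method described in the paper) is to record each logic as a set of links between outcomes and look at where the sets differ:

    # Each logic as a set of (lower outcome, higher outcome) links.
    planned = {("Workshops held", "Practitioner skills improved"),
               ("Practitioner skills improved", "Practice changed")}
    evidence_based = {("Workshops held", "Practitioner skills improved"),
                      ("Follow-up coaching", "Practice changed")}
    implemented = {("Workshops held", "Practitioner skills improved")}

    # Differences between the three logics point to where the programme model
    # might be revised on the next iteration.
    print("Planned but not yet implemented:", planned - implemented)
    print("In the evidence but not in the plan:", evidence_based - planned)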



How to institutionalise formative evaluation (evaluation aimed at improving programme implementation) into an agency’s processes.

The purpose of formative evaluation is to improve the implementation of an organisation, programme or activity. It can be distinguished from two other evaluation purposes: process evaluation, which describes what occurs in the course and context of a programme or activity, and outcome/impact evaluation, which examines the intended and unintended, positive and negative outcomes/impacts of an organisation, programme or activity. An organisation can be encouraged to institutionalise formative evaluation by: increasing decision makers’ awareness of the potential of formative evaluation; gaining acceptance of the concept of formative evaluation by key decision makers; encouraging institutional arrangements and values which support independent formative evaluation; developing appropriate formative evaluation skills in staff at all levels; developing formative evaluation specialists; and setting up one or more pilot projects to evaluate the use of formative evaluation within the organisation. This process is described in more detail in Duignan, P. (2004). The Use of Formative Evaluation by Government Agencies.



How to monitor community development and community action programmes.

Community development and community action programmes present particular problems for evaluation, as the methods being used and the outcomes being sought are often difficult to measure and to attribute to the activity of a particular organisation or programme. Traditionally there has been a focus on reporting only outputs (easily measurable and attributable measures such as the number of meetings held). However, demanding measurement and definitive attribution of higher-level final outcomes is often not technically feasible (for instance because of a large number of other organisations promoting the same objectives) or affordable (the budget needed to undertake such an evaluation can be much larger than the budget assigned to the community programme itself). A framework for reporting on both output-type activity and community-level outcomes is set out in Duignan, P., Casswell, S., Howden-Chapman, P., Moewaka Barnes, H., Allan, B. & Conway, K. (2003). Community Project Indicators Framework (CPIF).
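
A rough sketch of the distinction being drawn (illustrative only, with hypothetical indicators; the published CPIF itself is more detailed) is to record, for each indicator, whether it is an output-type measure or a community-level outcome and whether change in it can reasonably be attributed to the project:

    from dataclasses import dataclass

    @dataclass
    class Indicator:
        description: str
        kind: str            # "output" or "community outcome"
        attributable: bool   # can change plausibly be attributed to this project alone?

    # Hypothetical community action project report covering both kinds of measure.
    report = [
        Indicator("Number of community meetings held", "output", True),
        Indicator("Local policy on alcohol advertising adopted", "community outcome", False),
    ]

    for ind in report:
        print(f"{ind.kind:18} | attributable: {ind.attributable} | {ind.description}")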
 



How to evaluate a set of similar centrally funded programmes being implemented by a number of autonomous agencies (or communities).

In a world in which the implementation of centrally funded programmes and activities is increasingly being devolved to individual agencies working in different settings or communities, evaluation can be particularly difficult. The traditional model of central funding was for a central agency to specify in detail what implementing agencies were required to do when implementing the activity. In contrast, contemporary approaches are sometimes based on a totally devolved model in which the central agency has no say whatsoever in the way implementing agencies implement a programme. This second scenario presents particular problems for evaluation because it results in a diversity of programme objectives and methods, diverse indicator monitoring and difficulties with consistent evaluation. The Collaborative Ongoing Formative Evaluation Workshop Process (COFE) for Implementing Change has been developed to deal with the major problems in this type of distributed programme implementation. Advantages of this process include: a shared sense of ownership by the implementers and the central agency; a mechanism for accountability to the central agency while still providing a fair degree of autonomy for the implementing agencies; opportunities for input of evidence-based practice, peer review and sharing of best practice between implementing agencies; clarity about the specification of common and specific objectives; agreement on indicator specification and, in some cases, collection of indicator information; documentation of the process of the programme; the ability for implementing agencies to negotiate as a group with the central funder; and the ability to get the central funder to take up common problems with other stakeholders. This process is set out in detail in Duignan, P. & Casswell, S. (2002). Collaborative Ongoing Formative Evaluation Workshop Process (COFE) for Implementing Change.
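
As a hypothetical illustration of the "common versus specific objectives" and agreed-indicator elements mentioned above (not the COFE process itself), each implementing agency can be seen as working to the shared objectives plus its own, while collecting a common set of indicators so that results remain comparable:

    # Objectives shared by all implementing agencies versus agency-specific ones.
    common_objectives = ["Reduce youth access to alcohol"]
    agency_objectives = {
        "Agency A": ["Work with rural retailers"],
        "Agency B": ["Work with sports clubs"],
    }

    # Indicators every agency agrees to collect so results can be compared.
    shared_indicators = ["Number of controlled purchase operations supported"]

    for agency, specific in agency_objectives.items():
        print(agency, "->", common_objectives + specific, "| reports on:", shared_indicators)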



How to get an overview of evaluation approaches, types/purposes, methods, and designs.

Evaluation terminology is often confusing because concepts at different conceptual levels are discussed without a clear understanding of how they relate to each other. Four conceptual levels of evaluation terms can be identified: approaches, which set out a general way of looking at or conceptualising evaluation; purposes, which identify what different types of evaluation are intended to achieve; methods, which are the specific research and related methods used in evaluations; and designs, which are the way in which the other evaluation ingredients (approaches, purposes and methods) are put together. These four levels of evaluation terminology are discussed in Duignan, P. (2001). Evaluation Approaches, Purposes, Methods and Designs.
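
A small sketch of the four levels might look like the following (the entries under each level are illustrative examples only, not an authoritative list from the paper):

    # The four conceptual levels, each with a few illustrative entries.
    evaluation_terms = {
        "approaches": ["utilisation-focused evaluation", "empowerment evaluation"],
        "purposes": ["formative", "process", "outcome/impact"],
        "methods": ["surveys", "interviews", "analysis of routine data"],
        "designs": ["a particular combination of approach, purpose and methods"],
    }

    for level, examples in evaluation_terms.items():
        print(f"{level}: {', '.join(examples)}")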










First posted June 2005