Systematic Outcomes Analysis

A complete solution to outcomes, strategy, monitoring, evaluation and contracting

5. Evaluation (high level outcome)

The fifth step in Systematic Outcomes Analysis is looking at which high level outcome evaluation designs are appropriate, feasible and affordable (this uses the Outcome Evaluation Building Block (Evaluation[outcome])). Evaluations differ from indicator monitoring: they are usually 'one-off' processes, while indicator monitoring (covered in the last two sections on the indicator building-blocks) deals with routinely collected information. Systematic Outcomes Analysis divides evaluation into seven areas of focus. This fifth step is concerned only with the first of these (Focus 1: establishing whether a particular intervention has caused an improvement in high level outcomes). This first focus of evaluation attempts to make a claim about high level causality. It should be contrasted with the other types of evaluation dealt with in the next step (non-outcome evaluation). These other types (often called formative or process, as opposed to outcome, evaluation) do not try to make causal claims about high level outcomes. Systematic Outcomes Analysis identifies seven possible outcome evaluation designs, one or more of which can be applied in trying to work out whether or not an intervention has caused high level outcomes to change.

Step 5.1. Identify possible outcome evaluation questions and map them onto your outcomes model

5.1.1 Identify a set of possible outcome evaluation questions. These will be based on the high level outcomes in your outcomes model. For instance, if you have three high level outcomes then you are likely to have three high level outcome evaluation questions such as: 'Did the intervention cause outcome X to improve?'. At this stage don't worry about whether or not it's possible to actually answer the outcome evaluation questions you are identifying. First identify the questions; then, as a second step, work out whether it's feasible to answer them.

5.1.2 Map the evaluation questions onto your outcomes model. It is important to map your evaluation questions back onto your outcomes model. Any one evaluation question can usually be worded in more than one way and this can cause confusion. It's a waste of time to ask exactly the same evaluation question more than once. This can occur without you realizing it is happening because the question has been worded differently each time. This is particularly a problem in large-scale evaluations where different evaluation teams are being commissioned to undertake a number of different evaluation sub-projects. Mapping all high level evaluation questions back onto your outcomes model lets you see when the same evaluation question is being asked with different wordings. You'll know this is happening because you'll be trying to map the different questions back onto the same spot on your outcomes model.
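The duplicate-detection idea in 5.1.2 can be sketched in code: record each question against the outcomes-model node it asks about, and any node that accumulates more than one wording is the same question being asked twice. This is only an illustration; the outcome node names and question wordings below are hypothetical, not part of Systematic Outcomes Analysis itself.

```python
# Sketch: map evaluation questions onto outcomes-model nodes and flag
# duplicates (the same question worded differently). All node names and
# question wordings here are hypothetical examples.

from collections import defaultdict

# Each (wording, node) pair records which outcomes-model node a question targets.
questions = [
    ("Did the intervention cause smoking rates to fall?", "reduced_smoking"),
    ("Has the program improved community health?", "improved_health"),
    ("Is there less tobacco use because of the intervention?", "reduced_smoking"),
]

by_node = defaultdict(list)
for wording, node in questions:
    by_node[node].append(wording)

# A node mapped to more than one wording signals a duplicated question.
duplicates = {node: w for node, w in by_node.items() if len(w) > 1}
for node, wordings in duplicates.items():
    print(f"Node '{node}' is targeted by {len(wordings)} differently worded questions")
```

Grouping by model node rather than by question text is the point of the mapping step: wording varies, but the spot on the outcomes model does not.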

Step 5.2   Assess the appropriateness, feasibility and affordability of the seven Systematic Outcomes Analysis outcome evaluation designs

5.2.1 Work out whether the highest-level evaluation question(s) are within the scope of the type of evaluation you should be attempting. For instance, if a funder is commissioning many programs across the country that use the same method as yours, it may be more efficient for the funder itself to answer the highest level evaluation question - 'Does this method work?'. It usually does not make sense for many programs throughout the country to undertake similar expensive outcome evaluations which all attempt to answer exactly the same evaluation question. You and your funder need to be clear about the overall evaluation scheme (see Step 7): do they want you to prove that the whole roll-out of the program was effective, or are they relying on proving that the approach works when piloted and then simply ensuring that best practice is used in the full roll-out of the program?

5.2.2 Assess whether the remaining high level outcome evaluation question(s) you consider within scope for you to answer (there should usually only be a few of these) can in fact be answered. To do this, look at the appropriateness, feasibility and affordability of each of the seven outcome evaluation designs identified in Systematic Outcomes Analysis. These designs are set out below. For more information on the designs, look in the models section.

5.2.2.1 Design 1: True experiment design

5.2.2.2 Design 2: Regression discontinuity design

5.2.2.3 Design 3: Time series analysis design

5.2.2.4 Design 4: Constructed comparison group design

5.2.2.5 Design 5: Exhaustive causal identification and elimination design

5.2.2.6 Design 6: Expert judgement design

5.2.2.7 Design 7: Key informant judgement design.

The selection of these designs has implications for other steps within Systematic Outcomes Analysis. Only the first four of these designs can provide you with an effect size. An effect size gives you a quantitative measure of how much an intervention changes particular outcomes. If none of these four designs can be used (and often none of them is appropriate, feasible or affordable), you are limited in the types of economic and comparative analysis you can undertake (see Step 8).
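To make the effect size idea concrete, one common standardized effect size is Cohen's d: the difference between treatment and control group means divided by their pooled standard deviation. The sketch below is illustrative only, with made-up outcome measurements; it is not a step prescribed by Systematic Outcomes Analysis.

```python
# Sketch: Cohen's d, a standardized effect size comparing a treatment
# group with a control group. Data values are hypothetical.

from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical outcome scores for the two groups.
treatment = [72, 75, 78, 80, 74]
control = [65, 68, 70, 66, 71]

# A positive d means the treatment group's mean outcome exceeded the control's.
print(round(cohens_d(treatment, control), 2))
```

Designs 5 to 7 (expert and key informant judgement, causal elimination) yield a qualitative causal conclusion but no number like this, which is why they limit later economic and comparative analysis.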

Copyright Paul Duignan 2005-2007 (updated March 2007)