Systematic Outcomes Analysis

A complete solution to strategic planning, monitoring, evaluation and contracting

Seven possible outcome evaluation designs

[Under construction]. Systematic Outcomes Analysis (in the Evaluation (outcome) building-block of the system) identifies an exhaustive set of seven possible outcome evaluation designs. Because the set is claimed to be exhaustive, it can be used to establish exactly what outcome evaluation is, and is not, possible for any intervention. High-level outcome evaluation questions are identified within Systematic Outcomes Analysis and examined to see whether any of the seven possible outcome evaluation designs is appropriate, feasible and affordable.

The seven possible outcome evaluation designs are:

Design 1: True experiment design.

Applying an intervention to one group (the intervention group) and comparing it to a group which has not received the intervention (the control group), where there is no reason to believe that there are any relevant differences between the groups (e.g. because units have been randomly assigned to the intervention and control groups).
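
As an illustration only (not part of Systematic Outcomes Analysis itself), the following is a minimal Python sketch of the comparison this design supports, using simulated data: units are randomly assigned to the two groups and the effect is estimated as the difference in mean outcomes. The group sizes, outcome values and the assumed 5-point effect are all hypothetical.

```python
# Minimal sketch of a true experiment analysis (illustrative only):
# randomly assign units to intervention/control, then compare outcome means.
import random
import statistics

random.seed(1)

units = list(range(200))                     # hypothetical study units
random.shuffle(units)                        # random assignment removes systematic differences
intervention, control = units[:100], units[100:]

# Simulated outcomes: the intervention is assumed to add about 5 points.
outcome = {u: random.gauss(50, 10) + (5 if u in intervention else 0) for u in units}

treated = [outcome[u] for u in intervention]
untreated = [outcome[u] for u in control]

effect = statistics.mean(treated) - statistics.mean(untreated)
print(f"Estimated intervention effect: {effect:.2f}")
```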

Design 2: Regression discontinuity design.

Ordering those to be selected on the basis of their level on an outcome of interest, intervening only in those with the 'worst' levels (the intervention group), and comparing their changes on the outcome with those who did not receive the intervention (the control group).
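
The following is a minimal, illustrative Python sketch of one way such a design can be analysed, assuming simulated data: a line is fitted on each side of the cutoff and the effect is read off as the jump between the two lines at the cutoff. The cutoff value, outcome model and assumed 6-point effect are all hypothetical.

```python
# Minimal regression discontinuity sketch (illustrative, simulated data):
# units below a cutoff on a baseline score receive the intervention;
# the effect is estimated as the jump in the outcome at the cutoff.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.uniform(0, 100, 500)          # assignment variable
cutoff = 40.0
treated = baseline < cutoff                  # only the 'worst' cases are treated

# Simulated outcome: smooth in baseline, plus an assumed +6 point treatment effect.
outcome = 20 + 0.5 * baseline + 6 * treated + rng.normal(0, 3, 500)

# Fit a line on each side of the cutoff and compare predictions at the cutoff.
left = np.polyfit(baseline[treated], outcome[treated], 1)
right = np.polyfit(baseline[~treated], outcome[~treated], 1)
jump = np.polyval(left, cutoff) - np.polyval(right, cutoff)
print(f"Estimated effect at the cutoff: {jump:.2f}")
```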

Design 3: Time series analysis design.

Tracking an outcome of interest over many observations in a situation where the intervention starts at a specific point in time. If the intervention has had an effect, there should be a clear change in the series of observations at the point where the intervention started.
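
A minimal, illustrative Python sketch of one way of analysing such a series follows: the pre-intervention trend is fitted and projected forward, and the change is estimated as the gap between the projection and what was actually observed. The series, intervention start point and assumed 8-point shift are all simulated and hypothetical.

```python
# Minimal interrupted time series sketch (illustrative, simulated data):
# fit the pre-intervention trend, project it forward, and compare the
# projection with what was observed after the intervention began.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(60)
start = 36                                   # month the intervention began
level_shift = 8                              # assumed true effect for the simulation

series = 100 + 0.3 * months + rng.normal(0, 2, 60)
series[start:] += level_shift

pre_fit = np.polyfit(months[:start], series[:start], 1)
projected = np.polyval(pre_fit, months[start:])
estimated_shift = np.mean(series[start:] - projected)
print(f"Estimated change at intervention start: {estimated_shift:.2f}")
```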

Design 4: Constructed comparison group design.

Identifying a 'group' which is similar in as many respects as possible to the group receiving the intervention. This includes either identifying another actual group, or constructing a nominal control 'group' representing what those receiving the intervention would have been like had they not received it (e.g. through propensity score matching).
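
A minimal, illustrative Python sketch of one simple way of constructing such a comparison follows, using simulated data: each treated unit is matched to the untreated unit with the closest baseline value, and outcomes are compared across the matched pairs. Full propensity score matching would typically model the probability of treatment from several characteristics; here a single hypothetical baseline measure, and an assumed 5-point effect, stand in for that.

```python
# Minimal constructed comparison group sketch (illustrative, simulated data):
# for each treated unit, find the most similar untreated unit on a baseline
# characteristic, then compare outcomes across the matched pairs.
import numpy as np

rng = np.random.default_rng(0)
n = 400
baseline = rng.normal(50, 10, n)
treated = rng.random(n) < 1 / (1 + np.exp(-(baseline - 50) / 10))  # treatment depends on baseline
outcome = 0.8 * baseline + 5 * treated + rng.normal(0, 3, n)       # assumed +5 effect

treated_idx = np.where(treated)[0]
control_idx = np.where(~treated)[0]

diffs = []
for i in treated_idx:
    # nearest-neighbour match on the baseline characteristic
    j = control_idx[np.argmin(np.abs(baseline[control_idx] - baseline[i]))]
    diffs.append(outcome[i] - outcome[j])

print(f"Estimated effect from matched comparison: {np.mean(diffs):.2f}")
```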

Design 5: Exhaustive causal identification and elimination design.

Systematically and exhaustively looking for all the possibilities which could have caused a change in outcomes and eliminating these alternative explanations in favor of the intervention. This needs to go well beyond merely developing an explanation of why the intervention could have worked; all of the alternative explanations which can be identified must be dismissed. Sometimes called a 'forensic' method.

Design 6: Expert judgement design.

Asking experts to judge whether they think that the intervention caused the outcomes, using whatever way they want to make this judgement. (This design, along with some of the other designs in some instances, is rejected by some stakeholders as not a valid way of determining whether an intervention actually caused high-level outcomes. It is included here because other stakeholders accept it as actually doing this.)

Design 7: Key informant judgement design.

Asking key informants (a selection of those who are likely to know what has happened) to judge whether they think that the intervention caused the outcomes, using whatever way they want to make this judgement. (The same caveat applies as for the expert judgement design: it is rejected by some stakeholders as not a valid way of determining whether an intervention actually caused high-level outcomes, but is included here because other stakeholders accept it as actually doing this.)

Of these designs, the first four can be used to estimate effect sizes. An effect size is a quantitative measure of the amount by which an intervention has changed an outcome. Estimated effect sizes are essential for carrying out some of the elements in other building blocks; the 'Prerequisites building blocks 5-8' diagram sets out these prerequisites.
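
As an illustration, one common standardised effect size is Cohen's d: the difference between the intervention and control group means divided by the pooled standard deviation. The following minimal Python sketch computes it from hypothetical scores; the numbers are made up, and the equal-group-size simplification of the pooled standard deviation is assumed.

```python
# Minimal sketch of a standardised effect size (Cohen's d), illustrative only:
# the difference between group means divided by the pooled standard deviation.
import math
import statistics

intervention_scores = [62, 58, 65, 70, 61, 67, 59, 64]   # hypothetical data
control_scores = [55, 52, 60, 57, 54, 58, 53, 56]

mean_diff = statistics.mean(intervention_scores) - statistics.mean(control_scores)
pooled_sd = math.sqrt(
    (statistics.variance(intervention_scores) + statistics.variance(control_scores)) / 2
)
print(f"Cohen's d: {mean_diff / pooled_sd:.2f}")
```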

[Note: This list of designs is still provisional within Systematic Outcomes Analysis. The first five were derived from Michael Scriven's work on identifying causal evaluation designs. The last two have been added because they are accepted by some stakeholders, in some real-world circumstances, as providing evidence that an intervention has caused high-level outcomes. Different disciplines use different terms for these types of designs and the names of the designs may be changed. For instance, general regression analyses, as often undertaken in economic analysis, are currently included under the 'constructed comparison group design'. Comments on whether this is actually an exhaustive list of designs would be appreciated (send to paul (at) parkerduignan.com).]

Copyright Paul Duignan 2005-2007 (updated March 2007)