A complete solution to strategic planning, monitoring, evaluation and contracting
[UNDER CONSTRUCTION] Systematic Outcomes Analysis works with the set of basic building blocks which have been identified in outcomes theory - the OIE* Basic Model. These are set out in the diagram on the right.
These elements are:
1. An outcomes model - O. Setting out how you think your program works - all of the important steps needed to achieve high-level outcomes. Once built according to the set of standards used in Systematic Outcomes Analysis, these models can be used for strategic planning, business planning and more. The standards are here.
2. Indicators - I[nn-att]. Not-necessarily attributable indicators showing general outcomes progress. These do not need to be attributable to (proved to have been caused by) any one particular player.
3. Attributable indicators - I[att]. Indicators which are able to be attributed to particular players. Outputs (the goods and services produced by a player) are all attributable indicators.
4. High-Level Outcome evaluation - E[outcome]. Ways of proving that a particular player caused high-level outcomes. Systematic Outcomes Analysis identifies the seven outcome evaluation designs which can do this.
5. Non High-Level Outcome evaluation - E[n-outcome]. Other types of evaluation which do not claim to measure high-level outcomes, but which are used to improve the outcomes model and examine its context (called formative and process evaluation).
* This was earlier known as the OIIWA model. The model is currently being updated so its title may change.
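As a quick reference, the five element types and their Systematic Outcomes Analysis labels can be captured in a small data structure. This is only an illustrative sketch - the enum and its member names are ours, not part of Systematic Outcomes Analysis:

    # Illustrative only: the five OIE Basic Model element types and labels.
    from enum import Enum

    class OIEElement(Enum):
        OUTCOMES_MODEL = "O"                   # 1. outcomes model
        INDICATOR_NOT_NEC_ATT = "I[nn-att]"    # 2. not-necessarily attributable indicator
        INDICATOR_ATTRIBUTABLE = "I[att]"      # 3. attributable indicator
        EVAL_OUTCOME = "E[outcome]"            # 4. high-level outcome evaluation
        EVAL_NON_OUTCOME = "E[n-outcome]"      # 5. non high-level outcome evaluation

    for element in OIEElement:
        print(element.name, "->", element.value)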
[UNDER CONSTRUCTION] Systematic Outcomes Analysis expands out from the OIE Basic Model building blocks of outcomes theory by adding these additional elements:
Economic and comparative evaluation - E[economic]. Cost, cost-effectiveness and cost-benefit analysis. Systematic Outcomes Analysis identifies nine types of economic and comparative evaluation.
Overall monitoring and evaluation scheme - Overall M & E Scheme. The overall monitoring and evaluation scheme, covering what is being done for both piloting and full roll-out monitoring and evaluation.
Doers - D. The players who are directly acting to change outcomes within an outcomes model (also known as intervention organizations or providers).
Funders - F. The funding and control organizations which contract Doers to intervene in an outcomes model.
Contracting arrangements - C. The types of contracting arrangements which can be entered into by a Funder and a Doer. Systematic Outcomes Analysis identifies three different types of contracting that can be used.
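To make the relationships between these elements concrete, here is a minimal sketch of Doers, Funders and the contract linking them. The class and field names are invented for the example; only the three contract types themselves come from Systematic Outcomes Analysis (they are described in detail further down this page):

    # Illustrative only: Funders contract Doers under one of three arrangements.
    from dataclasses import dataclass
    from enum import Enum

    class ContractType(Enum):
        OUTPUTS_ONLY = 1               # accountable for outputs only
        OUTPUTS_PLUS_MANAGING = 2      # outputs plus 'managing for outcomes'
        NOT_FULLY_CONTROLLABLE = 3     # accountable for not fully controllable outcomes

    @dataclass
    class Doer:
        name: str                      # intervening organization (provider)

    @dataclass
    class Funder:
        name: str                      # funding and control organization

    @dataclass
    class Contract:
        funder: Funder
        doer: Doer
        contract_type: ContractType

    deal = Contract(Funder("Ministry"), Doer("Provider A"), ContractType.OUTPUTS_ONLY)
    print(deal.contract_type.name)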
[UNDER CONSTRUCTION] Part of the power of Systematic Outcomes Analysis is that it lets you see the relationships between the different building blocks in the system. This lets you integrate outcomes models, strategy, indicators and the various types of evaluation together with contracting. In particular, it tells you which elements you need to have done in earlier building blocks if you want to do specific elements in later building blocks. This helps you avoid a situation where you cannot do a later building block element because you did not do one of its prerequisites earlier when you had the chance. The first prerequisites model on the left (Prerequisites of elements in Systematic Outcomes Analysis building blocks 1-9) looks at the relationships between all of the building blocks. In the diagram, a solid line means that you must have done the earlier element in order to do the later element (the one with the inward arrow). A dotted line means that it is optional which of the elements you do, but that you must have done some of them. The reader new to Systematic Outcomes Analysis wanting to understand this diagram better should look at the models section of this web site.
[UNDER CONSTRUCTION] On the left is the second prerequisites diagram in Systematic Outcomes Analysis (Prerequisites of elements in Systematic Outcomes Analysis building blocks 5-9). It looks in more detail at the relationships between building blocks 5-8. As in the first diagram, a solid line means that you must have done the earlier element in order to do the later element (the one with the inward arrow), and a dotted line means that it is optional which of the elements you do, but that you must have done some of them. The reader new to Systematic Outcomes Analysis wanting to understand this diagram better should look through the models section of this web site.
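The solid-line/dotted-line logic of the prerequisites diagrams can be expressed as a simple check. In this minimal sketch the particular dependencies listed are invented for illustration; the real dependencies are the ones shown in the diagrams:

    # Solid line = hard prerequisite; dotted line = at least one of a group.
    REQUIRED = {
        "E[outcome]": ["outcomes model"],           # must all have been done
    }
    AT_LEAST_ONE = {
        "E[outcome]": [["I[nn-att]", "I[att]"]],    # any one of these will do
    }

    def can_do(element, done):
        done = set(done)
        hard_ok = all(p in done for p in REQUIRED.get(element, []))
        soft_ok = all(any(m in done for m in group)
                      for group in AT_LEAST_ONE.get(element, []))
        return hard_ok and soft_ok

    print(can_do("E[outcome]", {"outcomes model", "I[att]"}))  # True
    print(can_do("E[outcome]", {"I[att]"}))                    # False - no outcomes model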
There is confusion in outcomes and performance management systems as to the types of outcomes which you are allowed to include in any system. This is reflected in criticisms such as: "no, you haven't given us the type of outcomes we want, the ones you've specified are all too low-level, they're just outputs"; or, alternatively, "no, the outcomes you've specified are all too high-level, how will you be able to prove that it was you who made them change?" A range of different terms are also used for outcomes, and sometimes used in different ways: e.g. final outcomes, impacts, intermediate outcomes, strategic outcomes, priorities, key drivers, outputs, activities etc.
Systematic Outcomes Analysis cuts through the potential confusion caused by contradictory demands about the level your outcomes should be at, and by the many terms used in outcomes and performance management systems, by drawing on the outcomes theory finding that outcomes can have five major features. These features are set out below:
Influenceable - able to be influenced by a player
Controllable - only influenced by one particular player
Measurable - able to be measured
Attributable - able to be attributed to one particular player (i.e. proved that only one particular player changed it)
Accountable - something that a particular player will be rewarded or punished for
Using these features of outcomes enables us to be very clear about the type of outcome we are talking about when doing Systematic Outcomes Analysis. In particular, it lets us distinguish between not-necessarily attributable outcomes (or, more correctly, their indicators - called I[nn-att] in Systematic Outcomes Analysis) and attributable indicators (called I[att] in Systematic Outcomes Analysis). Using this approach we can draw outcomes models which include all steps and outcomes at all levels. This type of model is very powerful for strategic planning and other purposes and is what Doers should focus on for strategic purposes. We can then go back to our model and identify which outcomes have attributable indicators for accountability purposes. This avoids conflicting demands about the level of outcomes which are allowed within an outcomes model.
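The distinction can be illustrated with a small sketch which classifies an indicator from the features above. The example outcomes and the representation are invented; the classification rule (attributable gives I[att], otherwise I[nn-att]) is the one described in the text:

    # Illustrative only: classifying indicators from the five outcome features.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        name: str
        influenceable: bool
        controllable: bool
        measurable: bool
        attributable: bool
        accountable: bool

    def indicator_type(o: Outcome):
        if not o.measurable:
            return None                    # no indicator can be constructed
        return "I[att]" if o.attributable else "I[nn-att]"

    output = Outcome("training courses run", True, True, True, True, True)
    high_level = Outcome("community wellbeing", True, False, True, False, False)
    print(indicator_type(output))      # I[att]
    print(indicator_type(high_level))  # I[nn-att]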
[UNDER CONSTRUCTION] Systematic Outcomes Analysis (in the Evaluation (outcome) building block of the system) identifies an exhaustive set of seven possible outcome evaluation designs. Because this set is claimed to be exhaustive, it can be used to establish exactly what outcome evaluation is, and is not, possible for any intervention. High-level outcome evaluation questions are identified in Systematic Outcomes Analysis and examined to see whether any of the seven possible outcome evaluation designs is appropriate, feasible and affordable.
The seven possible outcome evaluation designs are:
Design 1: True experiment design.
Applying an intervention to a group (intervention group) and comparing it to a group (control group) which has not received the intervention, where there is no reason to believe there are any relevant differences between the groups (e.g. through random assignment to the intervention and control groups).
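A minimal simulation of the logic of this design (illustrative only, with invented numbers): because assignment is random, the two groups are comparable, so the difference in group means estimates the intervention's effect:

    import numpy as np

    rng = np.random.default_rng(1)
    baseline = rng.normal(50, 10, 200)     # baseline outcome levels
    assign = rng.permutation(200) < 100    # random assignment, 100 per group
    outcome = baseline.copy()
    outcome[assign] += 5                   # assume a true effect of +5

    estimate = outcome[assign].mean() - outcome[~assign].mean()
    print(round(float(estimate), 1))       # close to 5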
Design 2: Regression discontinuity design.
Ordering those to be selected on the basis of their level on an outcome of interest, intervening only in those who have the 'worst' levels (intervention group) and comparing their changes on the outcome with those who did not receive the intervention (control group).
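A minimal sketch of this design's logic (illustrative only, with invented numbers): fit a line on each side of the selection cutoff and read the jump at the cutoff as the effect estimate:

    import numpy as np

    rng = np.random.default_rng(1)
    baseline = rng.uniform(0, 100, 400)
    cutoff = 40.0
    treated = baseline < cutoff                       # only the 'worst' cases intervened in
    outcome = 0.5 * baseline + rng.normal(0, 3, 400)
    outcome[treated] += 8                             # assume a true effect of +8

    slope_t, int_t = np.polyfit(baseline[treated], outcome[treated], 1)
    slope_c, int_c = np.polyfit(baseline[~treated], outcome[~treated], 1)
    jump = (slope_t * cutoff + int_t) - (slope_c * cutoff + int_c)
    print(round(float(jump), 1))                      # close to 8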
Design 3: Time series analysis design.
Tracking an outcome of interest over many observations in a situation where the intervention starts at a specific point in time. There should be a clear change in the series of observations at the time when the intervention started.
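A minimal sketch of this design's logic (illustrative only, with invented numbers): fit the pre-intervention trend, project it forward, and compare the projection with what was actually observed once the intervention started:

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(48)                      # e.g. 48 monthly observations
    start = 30                             # intervention begins at month 30
    series = 100 + 0.5 * t + rng.normal(0, 1.5, 48)
    series[start:] -= 10                   # assume a true downward shift of 10

    slope, intercept = np.polyfit(t[:start], series[:start], 1)
    projected = slope * t[start:] + intercept
    shift = np.mean(series[start:] - projected)
    print(round(float(shift), 1))          # close to -10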
Design 4: Constructed comparison group design.
Identifying a 'group' which is similar in as many regards as possible to the group receiving the intervention. This includes both identifying other actual groups and constructing a nominal control 'group' representing what those receiving the intervention would have been like if they had not received it (e.g. propensity matching).
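A minimal sketch of the matching idea (illustrative only, with invented numbers): each intervention case is matched to the most similar non-intervention case on a background covariate and the matched differences are averaged. Real propensity matching would match on a modelled propensity score rather than a single raw covariate:

    import numpy as np

    rng = np.random.default_rng(1)
    cov_t = rng.normal(60, 8, 50)                    # intervention group covariate
    cov_c = rng.normal(50, 10, 300)                  # pool of potential comparisons
    out_t = 0.8 * cov_t + 6 + rng.normal(0, 2, 50)   # assume a true effect of +6
    out_c = 0.8 * cov_c + rng.normal(0, 2, 300)

    matches = [int(np.argmin(np.abs(cov_c - x))) for x in cov_t]
    effect = np.mean(out_t - out_c[matches])
    print(round(float(effect), 1))                   # roughly 6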
Design 5: Exhaustive causal identification and elimination design.
Systematically and exhaustively looking for all the possibilities which could have caused a change in outcomes and eliminating these alternative explanations in favor of the intervention. This needs to go well beyond just developing an explanation of why the intervention could have worked; all of the alternative explanations which can be identified must also be dismissed. Sometimes called a 'forensic' method.
Design 6: Expert judgement design.
Asking experts to judge whether they think that the intervention caused the outcomes, using whatever way they want to make this judgement. (This design, along with some of the other designs in some instances, is rejected by some stakeholders as not a valid way of determining whether an intervention actually caused high-level outcomes. It is included here because it is accepted by some other stakeholders as actually doing this.)
Design 7: Key informant judgement design.
Asking key informants (a selection of those who are likely to know what has happened) to judge whether they think that the intervention caused the outcomes, using whatever way they want to make this judgement. (This design, along with some of the other designs in some instances, is rejected by some stakeholders as not a valid way of determining whether an intervention actually caused high-level outcomes. It is included here because it is accepted by some other stakeholders as actually doing this.)
Of these designs, the first four can be used to estimate effect sizes. Effect sizes are a quantitative measurement of the amount by which an intervention has changed an outcome. Estimated effect sizes are essential for carrying out some of the elements in other building blocks; the Prerequisites building blocks 5-8 diagram sets out these prerequisites.
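One widely used effect size is Cohen's d: the difference in group means scaled by the pooled standard deviation. A minimal sketch with invented numbers:

    import numpy as np

    def cohens_d(intervention, control):
        i, c = np.asarray(intervention, float), np.asarray(control, float)
        pooled_var = ((len(i) - 1) * i.var(ddof=1) + (len(c) - 1) * c.var(ddof=1)) \
                     / (len(i) + len(c) - 2)
        return (i.mean() - c.mean()) / np.sqrt(pooled_var)

    print(round(cohens_d([12, 14, 15, 13], [10, 11, 9, 12]), 2))  # 2.32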
[Note: This list of designs is still provisional within Systematic Outcomes Analysis. The first five were derived from Michael Scriven's work identifying causal evaluation designs. The last two have been added because they are accepted by some stakeholders in some real-world circumstances as providing evidence that an intervention has caused high-level outcomes. Different disciplines use different terms for these types of designs and the names of the designs may be changed. For instance, general regression analyses as often undertaken in economic analysis are currently included under the 'constructed comparison group design'. Comments on whether this is actually an exhaustive list of designs would be appreciated (send to paul (at) parkerduignan.com).]
[UNDER CONSTRUCTION] Systematic Outcomes Analysis identifies seven possible areas of evaluation focus. The first of these is the focus of the 5th building block - Evaluation[outcome]. The other six are the focus of the 6th building block - Evaluation[non-outcome].
The seven possible areas of focus for evaluation are:
Focus 1: Establishing whether a particular intervention has caused an improvement in high-level outcomes (outcome/impact evaluation)
Focus 2: Establishing whether a particular intervention has caused an improvement in mid-level outcomes (process evaluation)
[UNDER CONSTRUCTION] In the 7th building block Evaluation[economic & comparative] two types of intervention comparison are used.
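While this building block is still under construction, the basic arithmetic of a cost-effectiveness comparison (one of the types of economic and comparative evaluation mentioned earlier) can be illustrated with invented figures:

    # Illustrative only: cost per unit of outcome for two interventions.
    interventions = {
        "Program A": {"cost": 200_000, "outcome_gain": 40},  # e.g. cases improved
        "Program B": {"cost": 150_000, "outcome_gain": 25},
    }
    for name, d in interventions.items():
        print(name, d["cost"] / d["outcome_gain"])
    # Program A: 5000.0, Program B: 6000.0 - A is more cost-effective here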
[UNDER CONSTRUCTION] The overall monitoring and evaluation scheme covers how monitoring and evaluation is carried out both for any piloting which is being done and for the full roll-out. In particular, if there is a pilot phase, the relationship between the monitoring and evaluation approach for the pilot and for the full roll-out needs to be decided on.
[UNDER CONSTRUCTION] Systematic Outcomes Analysis identifies three possible contracting approaches which can be negotiated between Funders and Doers. The distinctions made here are based on the Systematic Outcomes Analysis understanding of the different features of outcomes described here. The three possible types of contracting arrangements are:
Arrangement 1: Accountable for outputs only.
In this arrangement Doers are accountable only for producing specified outputs and nothing else.
Arrangement 2: Accountable for outputs AND for 'managing for outcomes'.
In this arrangement Doers are accountable for producing specified outputs AND also for 'managing for outcomes'. Managing for outcomes means that they also need to be thinking about whether their outputs are the best way of achieving high-level outcomes, including considering the other factors which may influence high-level outcomes and negate the effectiveness of their outputs. In this arrangement Doers are NOT held accountable for the achievement of high-level outcomes. Exactly how managing for outcomes is defined is an interesting question, as Funders/Control Organizations need to somehow work out whether or not Doers are actually 'managing for outcomes'. If Doers do this in diverse ways it is very difficult for Funders to know whether they are doing it properly without immersing themselves in the details of how Doers are doing it. It is rather like Funders/Control Organizations attempting to work out whether Doers are being financially responsible if there were no standardized accounting systems and conventions. From the point of view of Systematic Outcomes Analysis, a solution to this problem is for Funders to require that Doers undertake a Systematic Outcomes Analysis of their funded projects and have these peer reviewed/audited, just as would happen in the accounting area.
Arrangement 3: Accountable for not fully controllable outcomes.
In this arrangement Doers are held to account for not fully controllable outcomes, which sounds somewhat paradoxical. However, this does occur in the private sector. It is most suited to situations where it is not appropriate, feasible or affordable to work out what can be attributed to particular Doers, and where the Funder/Control Organization receives and directly benefits from the achievement of high-level outcomes. In these cases the Funder/Control Organization is willing to share its increase in wealth with those who probably (or possibly), but ultimately unprovably, influenced its good fortune. Within this arrangement, Doers end up 'insuring' Funders/Control Organizations against the times when high-level outcomes are not achieved and Funders/Control Organizations do not receive any increase in wealth. In those cases the Doer does not receive the bonus etc. which they receive when high-level outcomes are achieved. Therefore, Doers usually demand a premium from Funders/Control Organizations to manage their risk against those times when things go badly through no fault of the Doer. This sort of arrangement is in place in regard to the salaries of top executives within the private sector. In the public sector it is less likely to occur because those making decisions within Funder/Control Organizations do not usually personally benefit from achieving high-level outcomes, many Doers need to collaborate to achieve many public sector outcomes, and politicians, taxpayers and the media are normally resistant to large sums of money being paid out to people in circumstances where they may not in fact have 'earned it'.
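The premium logic in Arrangement 3 can be illustrated with a small expected-value sketch. All figures are invented; the point is that a bonus paid only when not fully controllable outcomes are achieved must be offset by a higher expected payment overall:

    outputs_only_fee = 100_000   # what an outputs-only contract would pay regardless
    bonus = 60_000               # paid only when high-level outcomes are achieved
    p_success = 0.6              # chance of achievement; not fully Doer-controlled
    risk_premium = 5_000         # extra expected pay demanded for bearing the risk

    # Base fee set so expected payment exceeds the safe contract by the premium:
    base_fee = outputs_only_fee + risk_premium - p_success * bonus
    expected_payment = base_fee + p_success * bonus
    print(base_fee)              # 69000.0, guaranteed even in bad years
    print(expected_payment)      # 105000.0 expected, vs 100000 risk-free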
Copyright Paul Duignan 2005-2007 (updated March 2007)