Systematic Outcomes Analysis

A complete solution to strategic planning, monitoring, evaluation and contracting

Models


Basic building blocks of outcomes systems (OIE Basic Model)

[Figure: OIE Basic Model]

[UNDER CONSTRUCTION] Systematic Outcomes Analysis works with the set of basic building blocks which have been identified in outcomes theory - the OIE* Basic Model. These are set out in the diagram on the right.

These elements are:

1. An outcomes model - O. Setting out how you think your program is working - all of the important steps needed to achieve high-level outcomes. Once built according to the set of standards used in Systematic Outcomes Analysis these models can be used for strategic planning, business planning and more. The standards are here.

2. Indicators - I[nn-att]. Not-necessarily attributable indicators showing general outcomes progress. These do not need to be attributable to (able to be proved that they are caused by) any one particular player.

3. Attributable indicators - I[att]. Indicators which are able to be attributed to particular players. Outputs (the goods and services produced by a player) are all attributable indicators.

4. High-Level Outcome evaluation - E[outcome]. Ways of proving that a particular player caused high-level outcomes. Systematic Outcomes Analysis identifies the seven outcome evaluation designs which can do this.

5. Non High-Level Outcome evaluation - E[n-outcome]. Other types of evaluation which do not claim to measure high-level outcomes, but which are used to improve the outcomes model and examine its context (called formative and process evaluation).

* This was earlier known as the OIIWA model. The model is currently being updated so its title may change.
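
To make these five elements concrete, here is a minimal sketch (in Python) of how the OIE Basic Model elements might be represented. The class and field names are illustrative assumptions only and are not part of the model itself:

from dataclasses import dataclass, field
from enum import Enum

# Illustrative sketch only: the OIE Basic Model elements as simple types.
# Names and fields are assumptions for illustration, not defined by the model.

class IndicatorKind(Enum):
    NN_ATT = "I[nn-att]"   # not-necessarily attributable indicator
    ATT = "I[att]"         # attributable indicator (e.g. an output)

class EvaluationKind(Enum):
    OUTCOME = "E[outcome]"        # aims to prove a player caused high-level outcomes
    NON_OUTCOME = "E[n-outcome]"  # formative and process evaluation

@dataclass
class Indicator:
    name: str
    kind: IndicatorKind

@dataclass
class OutcomesModel:
    """O - the steps thought to be needed to achieve high-level outcomes."""
    steps: list[str] = field(default_factory=list)
    indicators: list[Indicator] = field(default_factory=list)
    planned_evaluations: list[EvaluationKind] = field(default_factory=list)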

Extended building blocks of outcomes systems (OIE Extended Model)

[Figure: OIE Extended Model]


[UNDER CONSTRUCTION] Systematic Outcomes Analysis expands its focus out from the OIE Basic Model building blocks of outcomes theory by adding these additional elements:

Economic and comparative evaluation - E[economic]. Cost, cost effectiveness and cost benefit analysis. Systematic Outcomes Analysis identifies twelve types of economic analysis (in three groups) together with two types of intervention comparison.

Overall monitoring and evaluation scheme - Overall M & E Scheme. The overall monitoring and evaluation scheme, including what is being done in regard to both piloting and full roll-out monitoring and evaluation.

Doers - D. The players who are directly acting to change outcomes within an outcomes model (also known as intervention organizations or providers).

Funders - F. The funding and control organizations which contract Doers to intervene in an outcomes model.

Contracting arrangements - C. The types of contracting arrangements which can be entered into by a Funder and a Doer. Systematic Outcomes Analysis identifies three different types of contracting that can be used.

Prerequisites between elements in building blocks 1-9

[Figure: Prerequisites building blocks 1-9]

[UNDER CONSTRUCTION] Part of the power of Systematic Outcomes Analysis is that it lets you see the relationships between the different building blocks in the system. This lets you integrate outcomes models, strategy, indicators and the various types of evaluation together with contracting. In particular, it tells you which elements in earlier building blocks you need to have completed if you want to undertake specific elements in later building blocks. This helps you avoid a situation where you are not able to do a later building block element because you did not do one of its prerequisites earlier when you had the chance. The first prerequisites model on the left (Prerequisites of elements in Systematic Outcomes Analysis building blocks 1-9) looks at the relationships between all of the building blocks. In the diagram, a solid line means that you must have done the earlier element in order to do the later element (the one with the inward arrow). A dotted line means that it is optional which elements you do, but that you must have done some of them. Readers new to Systematic Outcomes Analysis who want to understand this diagram better should look at the models section of this web site.
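
As a minimal sketch of this prerequisite logic, the following Python fragment treats solid lines as mandatory prerequisites and dotted lines as 'at least one of' groups. The element names and the particular links shown are placeholders, since the diagram itself is not reproduced here:

# Hypothetical sketch of the prerequisite logic: solid lines are mandatory
# prerequisites; dotted lines mean at least one of a group must be done.
# The elements and links below are placeholders, not the actual diagram contents.

REQUIRED = {
    "E[outcome]": ["Outcomes model"],          # solid line: must be done first
}
AT_LEAST_ONE_OF = {
    "E[economic]": [["I[att]", "E[outcome]"]], # dotted lines: some of these needed
}

def can_do(element: str, done: set[str]) -> bool:
    """Check whether an element's prerequisites have been completed."""
    if any(req not in done for req in REQUIRED.get(element, [])):
        return False
    return all(any(opt in done for opt in group)
               for group in AT_LEAST_ONE_OF.get(element, []))

print(can_do("E[outcome]", {"Outcomes model"}))   # True
print(can_do("E[economic]", {"Outcomes model"}))  # False - no optional prerequisite done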

Prerequisites between elements in building blocks 5-8

[Figure: Prerequisites building blocks 5-8]

[UNDER CONSTRUCTION] On the left is the second prerequisites diagram in Systematic Outcomes Analysis (Prerequisites of elements in Systematic Outcomes Analysis building blocks 5-8), which looks in more detail at the relationships between building blocks 5-8. As in the first diagram, a solid line means that you must have done the earlier element in order to do the later element (the one with the inward arrow), and a dotted line means that it is optional which elements you do, but that you must have done some of them. Readers new to Systematic Outcomes Analysis who want to understand this diagram better should look through the models section of this web site.

Features of outcomes

There is confusion in outcomes and performance management systems as to the types of outcomes which you are allowed to include in any system. This is reflected in criticisms such as: "no, you haven't given us the type of outcomes we want, the ones you've specified are all too low-level, they're just outputs"; or, alternatively, "no, the outcomes you've specified are all too high-level, how will you be able to prove that it was you who made them change?" A range of different terms are also used for outcomes, and sometimes used in different ways: e.g. final outcomes, impacts, intermediate outcomes, strategic outcomes, priorities, key drivers, outputs, activities etc.

Systematic Outcomes Analysis cuts through the potential confusion caused by contradictory demands about the level your outcomes should be at, and by the many terms used in outcomes and performance management systems, by drawing on the outcomes theory finding that outcomes can have five major features. These features are set out below:

Influencible - able to be influenced by a player

Controllable - only influenced by one particular player

Measurable - able to be measured

Attributable - able to be attributed to one particular player (i.e. proved that only one particular player changed it)

Accountable - something that a particular player will be rewarded or punished for

Using these features of outcomes enables us to be very clear about the type of outcome we are talking about when doing Systematic Outcomes Analysis. In particular, it lets us distinguish between not-necessarily attributable outcomes (or more correctly their indicators), called I[nn-att] in Systematic Outcomes Analysis, and attributable indicators, called I[att] in Systematic Outcomes Analysis. Using this approach we are able to draw outcomes models which include all steps and outcomes at all levels. This type of model is very powerful for strategic planning and other purposes and is what Doers should focus on for strategic purposes. We can then go back to our model and identify which outcomes have attributable indicators for accountability purposes. This avoids conflicting demands about the level of outcomes which are allowed within an outcomes model.
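
As an illustration of how the five features can be used to separate I[att] from I[nn-att] indicators, here is a minimal Python sketch; the field and function names are assumptions for illustration only:

from dataclasses import dataclass

# Illustrative sketch only: the five features of an outcome, with
# attributability used to separate I[att] from I[nn-att] indicators.
# Field names are assumptions for illustration, not part of the framework.

@dataclass
class OutcomeFeatures:
    influencible: bool   # able to be influenced by a player
    controllable: bool   # only influenced by one particular player
    measurable: bool     # able to be measured
    attributable: bool   # provable that one particular player changed it
    accountable: bool    # a player is rewarded or punished for it

def indicator_class(f: OutcomeFeatures) -> str:
    """Classify an indicator as attributable or not-necessarily attributable."""
    if not f.measurable:
        return "not usable as an indicator"
    return "I[att]" if f.attributable else "I[nn-att]"

# e.g. an output: measurable and attributable to the Doer that produced it
print(indicator_class(OutcomeFeatures(True, True, True, True, True)))     # I[att]
# e.g. a high-level societal outcome: measurable but not attributable
print(indicator_class(OutcomeFeatures(True, False, True, False, False)))  # I[nn-att]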

Seven possible outcome evaluation designs

[Under construction]. Systematic Outcomes Analysis (in the Evaluation (outcome) building-block of the system) identifies an exhaustive set of seven possible outcome evaluation designs. The fact that it is claimed that this is an exhaustive set of outcome evaluation designs enables it to be used to establish exactly what outcome evaluation is, and is not, possible for any intervention. High-level outcome evaluation questions are identified in Systematic Outcomes Analysis and are examined to see if any of the seven possible outcome evaluation designs are appropriate, feasible and affordable.

The seven possible outcome evaluation designs are:

Design 1: True experiment design.

Applying an intervention to a group (intervention group) and comparing it to a group (control group) which has not received the intervention where there is no reason to believe that there are any relevant differences between the groups (e.g. through randomly assigning to intervention and control group).

Design 2: Regression discontinuity design.

Ordering those to be selected on the basis of their level on an outcome of interest, only intervening in those who have the 'worse' levels (intervention group) and comparing their changes on the outcome with those who did not receive the intervention (control group).

Design 3: Time series analysis design.

Tracking an outcome of interest over many observations in a situation where the intervention starts at a specific point in time. There should be a clear change in the series of observations at the time when the intervention started.

Design 4: Constructed comparison group design.

Identifying a 'group' which is similar in as many regards as possible to the group receiving the intervention. This includes either identifying another actual group, or constructing a nominal control 'group' of what those receiving the intervention would have been like if they had not received the intervention (e.g. propensity matching).

Design 5: Exhaustive causal identification and elimination design.

Systematically and exhaustively looking for all the possibilities which could have caused a change in outcomes and eliminating these alternative explanations in favor of the intervention. This needs to go well beyond just developing an explanation as to why the intervention could have worked; it must also dismiss all alternative explanations which can be identified. Sometimes called a 'forensic' method.

Design 6: Expert judgement design.

Asking experts to judge whether they think that the intervention caused the outcomes using whatever way they want to make this judgement. (This design, along with some of the other designs in some instances, is rejected by some stakeholders as not a valid way of determining whether an intervention actually caused high level outcomes. It is included here because it is accepted by some other stakeholders as actually doing this.)

Design 7: Key informant judgement design.

Asking key informants (a selection of those who are likely to know what has happened) to judge whether they think that the intervention caused the outcomes using whatever way they want to make this judgement. (This design, along with some of the other designs in some instances, is rejected by some stakeholders as not a valid way of determining whether an intervention actually caused high level outcomes. It is included here because it is accepted by some other stakeholders as actually doing this.)

Of these designs the first four can be used to estimate effect sizes. Effect sizes are a quantitative measurement of the amount an intervention has changed an outcome. Estimated effect sizes are essential for carrying out some of the elements in other building blocks. The Prerequisites building blocks 5-8 diagram sets out the prerequisites here.
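
As an illustration of what an effect size estimate might look like from a Design 1 style comparison, here is a minimal Python sketch using the standardized mean difference (Cohen's d). This is one common convention; Systematic Outcomes Analysis does not prescribe a particular effect-size formula, and the data below are hypothetical:

import statistics

# Minimal sketch: one common effect-size convention (standardized mean
# difference, Cohen's d) for a Design 1 style intervention/control comparison.

def cohens_d(intervention: list[float], control: list[float]) -> float:
    """Standardized difference between intervention and control group means."""
    n1, n2 = len(intervention), len(control)
    m1, m2 = statistics.mean(intervention), statistics.mean(control)
    s1, s2 = statistics.stdev(intervention), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical outcome scores for intervention and control participants
print(round(cohens_d([12, 14, 15, 13, 16], [10, 11, 12, 10, 13]), 2))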

[Note: This list of designs is still provisional within Systematic Outcomes Analysis. The first five were derived from the work of Michael Scriven identifying causal evaluation designs. The last two have been added because they are accepted by some stakeholders in some real-world circumstances as providing evidence that an intervention has caused high-level outcomes. Different disciplines use different terms for these types of designs and the names of the designs may be changed. For instance, general regression analyses as often undertaken in economic analysis are currently included under the 'constructed comparison group design'. Comment on whether this is actually an exhaustive list of designs would be appreciated (send to paul (at) parkerduignan.com).]

Seven possible evaluation areas of focus

[UNDER CONSTRUCTION] Systematic Outcomes Analysis identifies seven possible areas of evaluation focus. The first of these is the focus of the Evaluation[outcome] building block. The other six are the focus of the Evaluation[non-outcome] building block.

The seven possible areas of focus for evaluation are:

Focus 1: Establishing whether a particular intervention has caused an improvement in high-level outcomes (outcome/impact evaluation)

Focus 2: Establishing whether a particular intervention has caused an improvement in mid-level outcomes (process evaluation)

Focus 3: Describing an outcomes model that is actually being implemented in a specific instance (including estimating the cost of undertaking an intervention) (process evaluation)

Focus 4: Comparing an outcomes model being implemented with proposed outcomes model(s) (process evaluation)

Focus 5: Aligning an outcomes model being implemented with a proposed outcomes model or its enhancement (formative evaluation; best practice application)

Focus 6: Describing the different understandings, interpretations or meanings stakeholders have of an outcomes model and its implementation

Focus 7: Describing the effect of the context on the implementation of an outcomes model.

Three groups of possible economic evaluation analyses

[Under construction] Systematic Outcomes Analysis identifies an exhaustive set of twelve possible types of economic analysis (grouped into three groups of four analyses each) which are used in the Economic and Comparative Evaluation building-block of the system. The fact that it is claimed that this is an exhaustive set of economic evaluation designs enables the list to be used to establish exactly what economic evaluation is, and is not, possible for any intervention or set of interventions. Moving through the three overall groups of analyses, if a later analysis can be done, then by definition the corresponding earlier analyses can also be done. So if you can do 3.2 you can also do 2.2, 2.1 and all of the analyses 1.1-1.4. The analyses are grouped into three sets: those you can do when you do not have actual effect-size estimates for attributable outcomes above the intervention; those you can do when you have estimates for mid-level outcomes; and those you can do if you have estimates for high-level attributable outcomes. In summary, you can only do the first grouping if you have estimated the cost of the intervention in the 5th building block (Checklist Step 6.1.1.2); for the second grouping you also need to have estimated mid-level outcome effect sizes in the 5th building block (by using one of the outcome evaluation designs 1-4 in Checklist Step 6.1.1.1); for the third grouping you need to have estimated high-level outcome effect sizes in the 4th building block (by using one of the outcome evaluation designs 1-4 in Checklist Step 5.2.2).

In addition, another important prerequisite of any type of cost benefit analysis (1.3, 1.4, 2.3, 2.4, 3.3, 3.4 below) is that a comprehensive outcomes model has been developed. The robustness of a cost benefit analysis depends on it providing a comprehensive measurement of the costs and benefits associated with an intervention. It is easy to distort the results of a cost benefit analysis in any direction you wish simply by leaving out either the costs or the benefits of important outcomes. In Systematic Outcomes Analysis all cost benefit analyses should be mapped back onto the outcomes model. This lets the reader of such an analysis quickly overview what is, and what is not, included in the analysis and how this relates to the underlying outcomes model. It is not easy to get an understanding of what is going on in a cost benefit analysis without this sort of approach.

The prerequisites which exist between building blocks 5-8 are set out in the Prerequisites building blocks 5-8 diagram here.
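
The group-level prerequisites described above can be summarized in a small sketch. The flags below are simplifications of the checklist steps referred to and are illustrative only:

# Illustrative sketch of the group-level prerequisite rule described above.
# Which groups of economic analyses are open to you depends on what has
# already been estimated; the flags simplify the checklist steps referred to.

def feasible_groups(cost_estimated: bool,
                    mid_level_effect_sizes: bool,
                    high_level_effect_sizes: bool) -> list[int]:
    groups = []
    if cost_estimated:
        groups.append(1)                 # analyses 1.1-1.4
        if mid_level_effect_sizes:
            groups.append(2)             # analyses 2.1-2.4
        if high_level_effect_sizes:
            groups.append(3)             # analyses 3.1-3.4
    return groups

# e.g. cost and mid-level effect sizes estimated, no high-level effect sizes yet
print(feasible_groups(True, True, False))   # [1, 2]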

The twelve analyses within the three groups of economic evaluation are:

1: No attributable outcomes above intervention

Analysis 1.1 Cost of intervention analysis, single intervention.
Cost of intervention analysis just looks at the cost of an intervention, not its effectiveness (how much it costs to change an outcome by a certain amount) or its benefits (the result of subtracting the dollar cost of the program from the benefits of the program estimated in dollar terms). This analysis allows you to say what the estimated cost of the intervention is (e.g. $1,000 per participant).

Analysis 1.2 Cost of intervention analysis, multi-intervention comparison. Same as 1.1 but a multi-intervention comparison. This analysis allows you to compare the costs of different interventions (e.g. Program 1 - $1,000 per participant; Program 2 - $1,500 per participant; or, to put it another way, Program 2 costing one and a half times as much as Program 1 per participant).

Analysis 1.3 Cost benefit analysis, set of arbitrary high level effect size estimates, single intervention. Even where you do not have any attributable outcomes above the intervention, but you do have an estimate of the cost of the intervention, you can use some arbitrary (hypothetical) effect sizes and, if you can estimate these in dollar terms, do a hypothetical cost benefit analysis (e.g. for a hypothetical effect size of 5%, 10% or 20%). It is essential that this type of hypothetical analysis is clearly distinguished from Analysis 3.3, which is based on estimates from actual measurement of effect sizes. This analysis allows you to estimate the overall benefit (or loss) of running the intervention if any of these effect sizes were achieved (e.g. there would be a loss of $500 per participant for a 5% effect size, a gain of $100 for a 10% effect size and a gain of $600 per participant for a 20% effect size).

Analysis 1.4 Cost benefit analysis, set of arbitrary high level effect size estimates, multi-intervention comparison. Same as 1.3 but a multi-intervention comparison. This analysis allows you to compare the overall loss or gain from more than one program for various hypothetical effect sizes (e.g. for a 5% effect size, Program 1 would have an estimated loss of $500 per participant whereas Program 2 would have a gain of $200, and so on). You could even vary the arbitrary effect sizes if there was some reason to believe that there would be differences (e.g. a general population program is likely to have a lower effect size than an intensive one-to-one program), although this may not say anything about the overall loss or gain when comparing two such programs. It is essential that this type of hypothetical analysis is clearly distinguished from Analysis 3.4, which is based on estimates from actual measurement of effect sizes.
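
As a worked illustration of Analysis 1.3, the following sketch combines an estimated intervention cost with arbitrary effect sizes. The cost echoes the $1,000 per participant example above, but the assumed dollar benefit per percentage point of effect is an illustrative assumption, so the outputs will not exactly reproduce the example figures in the text (which do not follow a single rate):

# Sketch of Analysis 1.3: a hypothetical cost benefit calculation using
# arbitrary effect sizes. The cost per participant and the assumed dollar
# benefit per percentage point of effect are illustrative assumptions only.

COST_PER_PARTICIPANT = 1_000          # estimated cost of the intervention
BENEFIT_PER_PERCENT_POINT = 110       # assumed dollar benefit per 1% of effect size

for effect_size in (5, 10, 20):       # arbitrary (hypothetical) effect sizes
    benefit = effect_size * BENEFIT_PER_PERCENT_POINT
    net = benefit - COST_PER_PARTICIPANT
    label = "gain" if net >= 0 else "loss"
    print(f"{effect_size}% effect size: {label} of ${abs(net)} per participant")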

2: Attributable mid-level outcomes

Analysis 2.1 Cost effectiveness analysis, attributable mid-level outcomes, single intervention. In this analysis, estimates are available of the attributable effect of the intervention on mid-level outcomes. When combined with the estimated cost of the intervention this allows you to work out the cost of achieving a certain level of effect on mid-level outcomes (e.g. a 6% increase in X cost approximately $1,000 per participant).

Analysis 2.2 Cost effectiveness analysis, attributable mid-level outcomes, multi-intervention comparison. Same as 2.1 but a multi-intervention comparison. This analysis lets you work out the cost of achieving a certain level of effect on mid-level outcomes for a number of interventions (e.g. a 6% increase in X cost approximately $1,000 per participant for Program 1 whereas it cost $1,500 for Program 2). It is likely that the measured effect sizes of different interventions will vary, so you may need to adjust estimates to a common base. This may or may not reflect what would happen in regard to the actual programs in reality.

Analysis 2.3 Cost benefit analysis, attributable mid-level outcomes, single intervention.
Analysis 2.4 Cost benefit analysis, attributable mid-level outcomes, multi-intervention comparison.
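
A small sketch of Analyses 2.1 and 2.2: cost effectiveness expressed as cost per unit of attributable mid-level effect, which also provides a common base for comparing interventions. The figures are hypothetical, loosely based on the examples above:

# Illustrative sketch of cost effectiveness (Analyses 2.1/2.2): cost per
# percentage point of attributable mid-level effect, used as a common base
# for comparing interventions. All figures are hypothetical.

programs = {
    "Program 1": {"cost_per_participant": 1_000, "effect_size_pct": 6.0},
    "Program 2": {"cost_per_participant": 1_500, "effect_size_pct": 6.0},
}

for name, p in programs.items():
    cost_per_point = p["cost_per_participant"] / p["effect_size_pct"]
    print(f"{name}: about ${cost_per_point:.0f} per percentage point increase in X")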

3: Attributable high-level outcomes

Analysis 3.1 Cost effectiveness analysis, attributable high-level outcomes, single intervention. Same as 2.1 except you can work out the cost of achieving a high-level outcome effect size of a certain amount.

Analysis 3.2 Cost effectiveness analysis, attributable high-level outcomes, multi-intervention comparison. Same as 2.2 except you can work out the cost of achieving a high-level outcome effect size of a certain amount and compare this across more than one intervention.

Analysis 3.3 Cost benefit analysis, attributable high-level outcomes, single intervention.

Analysis 3.4 Cost benefit analysis, attributable high-level outcomes, multi-intervention comparison.
   
[Note: This list of analyses is still provisional within Systematic Outcomes Analysis. For instance, there could theoretically be a 'cost benefit analysis, set of arbitrary mid-level effect size estimates, single intervention or multi-intervention comparison'; however, it is not clear why anyone would do this rather than 1.3 or 1.4, which set arbitrary high-level effect sizes. Comment on whether this is actually an exhaustive list of analyses would be appreciated (send to paul (at) parkerduignan.com).]

Two intervention comparisons

[UNDER CONSTRUCTION] In the Evaluation[economic & comparative] building block, two types of intervention comparison are used.

Comparison 1: Quantitative multi-intervention effect size meta-analysis
This type of comparison relies on effect sizes being available for high level outcomes from all of the interventions which are being compared. If they are not available then this type of analysis cannot be done. The Prerequisites building blocks 5-8 diagram here sets out the prerequisites for elements within the Systematic Outcomes Analysis building blocks. If Comparison 1 is being used by decision makers as the major factor in selecting between different interventions, this only makes sense in one instance: where the ease of undertaking E[outcome] studies which produce effect sizes (Designs 1-4 here) is not very different for the different interventions. If the ease does differ substantially, then decision makers will be biasing their decisions in the direction of the interventions for which E[outcome] Designs 1-4 are easiest to undertake, rather than making rational decisions on the basis of a critical assessment of the strengths and weaknesses of the evidence available. Comparison 1 decision making is likely to enhance decision making in situations such as the comparison of pharmaceuticals, but is less suited to situations where decision makers are attempting to compare very different interventions (e.g. individual interventions versus community programs).
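
As an illustration of Comparison 1, here is a minimal sketch of pooling effect sizes across interventions using inverse-variance (fixed-effect) weighting. This is one common meta-analysis convention rather than something prescribed by Systematic Outcomes Analysis, and the study figures are hypothetical:

# Sketch of Comparison 1: pooling high-level outcome effect sizes from several
# interventions. Inverse-variance weighting is one common meta-analysis
# convention; the effect sizes and variances below are hypothetical.

def pooled_effect(effects_and_variances: list[tuple[float, float]]) -> float:
    """Fixed-effect weighted mean of (effect size, variance) pairs."""
    weights = [1.0 / var for _, var in effects_and_variances]
    weighted_sum = sum(w * es for (es, _), w in zip(effects_and_variances, weights))
    return weighted_sum / sum(weights)

# Hypothetical effect sizes (standardized mean differences) and their variances
studies = [(0.40, 0.02), (0.25, 0.05), (0.55, 0.10)]
print(round(pooled_effect(studies), 2))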

Comparison 2: Mixed qualitative/quantitative multi-intervention comparison analysis
This comparison covers a diverse range of different types of comparison between interventions. This includes general reviews of what works and what does not work, summaries of qualitative studies etc. It is not restricted to just qualitative findings because there is no reason why quantitative results should not also be considered in such reviews.

Overall monitoring & evaluation scheme

[UNDER CONSTRUCTION] The overall monitoring and evaluation scheme is the way in which monitoring and evaluation is carried out for any piloting which is being done and for the full roll-out. In particular, if there is a pilot phase, the relationship between the monitoring and evaluation approach for the pilot and for the full roll-out needs to be decided on.

The overall monitoring and evaluation scheme used in a particular case can use various combinations of the Systematic Outcomes Analysis building-blocks. Obviously, if parts of earlier building blocks are not appropriate, feasible or affordable then this will limit the possibilities for the overall monitoring and evaluation strategy. The prerequisites for these schemes are set out in the Prerequisites building blocks 5-8 diagram here. It is important that all stakeholders understand which overall monitoring and evaluation scheme is being used. Several typical overall monitoring and evaluation schemes are:

Scheme 1: Full roll-out outcome evaluation plus some additional evaluation

Full roll-out Evaluation[outcome] plus some or all of Outcomes model, Indicators[nn-att], Indicators[att], Evaluation[n-outcome], Evaluation[economic & comparative].

Scheme 2: Pilot outcome evaluation plus additional evaluation AND only additional evaluation on full roll-out

Pilot Evaluation[outcome] plus some or all of Outcomes model, Indicators[nn-att], Indicators[att], Evaluation[n-outcome], Evaluation[economic & comparative] AND full roll-out limited to some or all of Outcomes model, Indicators[nn-att], Indicators[att], Evaluation[n-outcome], Evaluation[economic & comparative].
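
The two typical schemes above can also be written down as simple configurations of which building blocks are applied at each phase. This sketch is illustrative only and uses the abbreviations from this page; in practice each phase would use some or all of the listed blocks:

# Illustrative only: two typical overall M&E schemes expressed as simple
# configurations of which building blocks are applied at each phase.
# Building-block names follow the abbreviations used on this page.

SCHEME_1 = {
    "full_roll_out": ["Outcomes model", "I[nn-att]", "I[att]",
                      "E[outcome]", "E[n-outcome]", "E[economic & comparative]"],
}

SCHEME_2 = {
    "pilot":         ["Outcomes model", "I[nn-att]", "I[att]",
                      "E[outcome]", "E[n-outcome]", "E[economic & comparative]"],
    "full_roll_out": ["Outcomes model", "I[nn-att]", "I[att]",
                      "E[n-outcome]", "E[economic & comparative]"],  # no E[outcome]
}

for name, scheme in {"Scheme 1": SCHEME_1, "Scheme 2": SCHEME_2}.items():
    for phase, blocks in scheme.items():
        print(f"{name} / {phase}: {', '.join(blocks)}")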


Three possible contracting arrangements

[Under construction] Systematic Outcomes Analysis identifies three possible contracting approaches which can be negotiated between Funders and Doers. The distinctions made here are based on Systematic Outcomes Analysis's understanding of the different features of outcomes here. The three possible types of contracting arrangements are:

Arrangement 1: Accountable for outputs only.

In this arrangement Doers are accountable only for producing specified outputs and nothing else.

Arrangement 2: Accountable for outputs AND for 'managing for outcomes'

In this arrangement Doers are accountable for producing specified outputs AND also for 'managing for outcomes'. Managing for outcomes means that they also need to be thinking about whether their outputs are the best way that high level outcomes can be achieved, and this includes considering the other factors which may influence high level outcomes and negate the effectiveness of their outputs. In this arrangement Doers are NOT held accountable for the achievement of high level outcomes. Exactly how managing for outcomes is defined is an interesting question, as Funders/Control Organizations need to somehow work out whether or not Doers are actually 'managing for outcomes'. If Doers do this in diverse ways it is very difficult for Funders to know whether they are doing it properly without immersing themselves in the details of the way Doers are doing it. It is rather like Funders/Control Organizations attempting to work out whether Doers are being financially responsible if there were no standardized accounting systems and accounting conventions. From the point of view of Systematic Outcomes Analysis, a solution to this problem is for Funders to require that Doers undertake a Systematic Outcomes Analysis of their funded projects and have these peer reviewed/audited, just as would happen in the accounting area.

Arrangement 3: Accountable for not fully controllable outcomes

In this arrangement Doers are held to account for not fully controllable outcomes, which sounds somewhat paradoxical. However, this does occur in the private sector. It is most suited to those situations where it is not appropriate, feasible or affordable to work out what can be attributed to particular Doers and where the Funder/Control Organization receives and directly benefits from the achievement of high level outcomes. In these cases the Funder/Control Organization is willing to share its increase in wealth with those who probably (or possibly), but ultimately unprovably, influenced its good fortune. Within this arrangement, Doers end up 'insuring' Funders/Control Organizations against the times when high level outcomes are not achieved and Funders/Control Organizations do not receive any increase in wealth. In those cases the Doer does not receive the bonus etc. which they would have received if high level outcomes had been achieved. Therefore, Doers usually demand a premium from Funders/Control Organizations to manage their risk against those times when things go badly through no fault of the Doer. This sort of arrangement is in place in regard to the salaries of top executives within the private sector. In the public sector it is less likely to occur because those making decisions within Funder/Control Organizations do not usually personally benefit from achieving high level outcomes, many Doers need to collaborate to achieve many public sector outcomes, and politicians, taxpayers and the media are normally resistant to large sums of money being paid out to people in circumstances where there is the possibility that they may not in fact have 'earned' it.


Copyright Paul Duignan 2005-2007 (updated March 2007)