Systematic Outcomes Analysis

A complete solution to outcomes, strategy, monitoring, evaluation and contracting

The Model

All essential aspects of the Systematic Outcomes Analysis model are set out below. This page complements the How To List page here, which gives you step by step instructions for actually doing Systematic Outcomes Analysis. If you are working with printed versions you should print out this page as well as the How To List page as you will need to refer to the model when using the How To page.

You can either scroll down through the sections on this page (a good way to start if you have not seen this material before) or you can click on the index on the left hand side to separately open just one of the sections on this page.

Immediately below is a description of the building blocks that are common to all outcomes and performance management systems. First the basic building blocks are set out, and then a set of extended building blocks. Then the prerequisites between building blocks are detailed - these show what has to be done earlier if you are going to be able to do certain aspects of later building blocks. Following this there is a description of the features of outcomes - a way of thinking about outcomes which clearly distinguishes between outcomes with different characteristics. These concepts provide a clear conceptual basis for each part of the Systematic Outcomes Analysis process.

There then follows a list of  the seven possible outcome evaluation designs; seven possible evaluation areas of focus; two types of intervention comparison; ten possible economic evaluation analyses; two overall monitoring and evaluation schemes; three possible contracting arrangements; and a summary of issues in the management of monitoring and evaluation.

The model is comprehensive, not complex. The building blocks provide a foundation for understanding a number of interrelated issues: outcomes, strategic planning, indicator monitoring, evaluation, intervention comparison, economic analysis, overall monitoring and evaluation schemes, and contracting arrangements.

If you work within organizations, you will at some stage in your career have to deal with most, if not all, of these. Any investment you make now in learning about Systematic Outcomes Analysis will provide a clear framework for understanding these interrelated issues. This will prevent the confusion and rework that stems from dealing with each of these aspects of organizational life in a piecemeal fashion.  

[V1.1.2]

Basic building blocks of outcomes systems

Basic building blocks

[Diagram: OIE Basic Model - basic building blocks of outcomes systems]

Systematic Outcomes Analysis works with the set of basic building blocks which have been identified in outcomes theory - the OIE* Basic Model. These are set out in the diagram on the right.

These elements are:

1. An outcomes model - O. Setting out how you think your program is working - all of the important steps needed to achieve high-level outcomes. Once built according to the set of standards used in Systematic Outcomes Analysis these models can be used for strategic planning, business planning and more.

2. Indicators - I[nn-att]. Not-necessarily-attributable indicators showing general progress on outcomes. These do not need to be attributable to (able to be proved to have been caused by) any one particular player.

3. Attributable indicators - I[att]. Indicators which are able to be attributed to particular players (that is, you can prove that they have been caused by one particular player). The measurements of outputs (the goods and services produced by a player) are attributable indicators.

4. High Level Outcome evaluation - E[outcome]. Ways of proving that a particular player caused high level outcomes. Systematic Outcomes Analysis identifies the seven outcome evaluation designs which can do this.

5. Non High Level Outcome evaluation - E[n-outcome]. Other types of evaluation which do not claim to measure high level outcomes, but which are used to improve the outcomes model and examine its context (called formative and process evaluation).

* This was earlier known as the OIIWA model. The model is currently being updated so its title may change.
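
A minimal sketch (not part of the published model) of how these five building blocks might be held together in code; all field names and example entries below are invented for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class OutcomesSystem:
    """Illustrative container for the OIE basic building blocks (names are hypothetical)."""
    outcomes_model: list                 # O: the steps thought to lead to high-level outcomes
    indicators_not_attributable: list    # I[nn-att]: general progress, not tied to one player
    indicators_attributable: list        # I[att]: provably caused by one particular player
    outcome_evaluations: list = field(default_factory=list)      # E[outcome]
    non_outcome_evaluations: list = field(default_factory=list)  # E[n-outcome]

# A toy road-safety example
system = OutcomesSystem(
    outcomes_model=["Increased seat-belt use", "Reduced road deaths"],
    indicators_not_attributable=["National road death rate"],
    indicators_attributable=["Number of enforcement campaigns delivered"],
)
print(system.outcomes_model)
```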

[V1.1.2]


Extended building blocks of outcomes systems

Extended building blocks

[Diagram: Extended building blocks of outcomes systems]


Systematic Outcomes Analysis expands on the OIE Basic Model building blocks of outcomes theory by adding these additional elements:

Economic and comparative evaluation - E[economic]. Cost, cost-effectiveness and cost-benefit analysis. Systematic Outcomes Analysis identifies ten types of economic and two types of comparative evaluation.

Overall monitoring and evaluation scheme - Overall M & E Scheme. The overall monitoring and evaluation scheme, including what is being done in regard to both piloting and full roll-out monitoring and evaluation.

Doers - D. The players who are directly acting to change outcomes within an outcomes model (also known as intervention organizations or providers).

Funders - F. The funding and control organizations which contract Doers to intervene in an outcomes model.

Contracting arrangements - C. The types of contracting arrangements which can be entered into by a Funder and a Doer. Systematic Outcomes Analysis identifies three different types of contracting that can be used.

[V1.1.2]

Prerequisites between elements in building blocks 1-9

Prerequisites building blocks 1-9

[Diagram: Prerequisites of elements in Systematic Outcomes Analysis building blocks 1-9]

Part of the power of Systematic Outcomes Analysis is that it lets you see the relationships between the different building blocks in the system. This lets you integrate outcomes models, strategy, indicators and the various types of evaluation together with contracting. In particular, it tells you which elements you need to have done in earlier building blocks if you want to do specific elements in later building blocks. This helps you avoid a situation where you cannot do a later building block element because you did not do one of its prerequisites at an earlier stage when you had the chance. The first prerequisites model is on the left (Prerequisites of elements in Systematic Outcomes Analysis building blocks 1-9). It looks at the relationships between all of the building blocks. Within this diagram, a solid line means that you must have done the earlier element in order to do the later element (the one with the inward arrow). A dotted line means that it is optional which of the earlier elements you do, but that you must have done some of them. The reader new to Systematic Outcomes Analysis wanting to understand this diagram better should look at the model section of this web site.
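
As a rough sketch of this prerequisite logic (the specific prerequisite entries below are invented examples of the solid-line and dotted-line rules, not a transcription of the diagram):

```python
# Hypothetical prerequisite map illustrating the solid-line ("must have done all")
# and dotted-line ("must have done at least one") rules. Entries are invented.
MUST_HAVE_ALL = {
    "cost_effectiveness_analysis": {"intervention_cost_estimate", "effect_size_estimate"},
}
MUST_HAVE_ANY = {
    "outcome_contracting": {"attributable_indicators", "outcome_evaluation"},
}

def can_do(element, done):
    """Return True if the element's prerequisites are satisfied by the set of done elements."""
    if not MUST_HAVE_ALL.get(element, set()) <= set(done):
        return False
    any_of = MUST_HAVE_ANY.get(element, set())
    return not any_of or bool(any_of & set(done))

print(can_do("cost_effectiveness_analysis", {"intervention_cost_estimate"}))  # False
print(can_do("cost_effectiveness_analysis",
             {"intervention_cost_estimate", "effect_size_estimate"}))         # True
```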

[V1.1.2]

Prerequisites between elements in building blocks 5-8

Prerequisites building blocks 5-8

[Diagram: Prerequisites of elements in Systematic Outcomes Analysis building blocks 5-8]

On the left is the second prerequisites diagram in Systematic Outcomes Analysis (Prerequisites of elements in Systematic Outcomes Analysis building blocks 5-8). It looks in more detail at the relationships between building blocks 5-8. As in the first prerequisites diagram, a solid line means that you must have done the earlier element in order to do the later element (the one with the inward arrow), and a dotted line means that it is optional which of the earlier elements you do, but that you must have done some of them. The reader new to Systematic Outcomes Analysis wanting to understand this diagram better should look through the model section of this web site.

[V1.1.2]

Features of outcomes

There is confusion in outcomes and performance management systems as to the types of outcomes which you are allowed to include in any system. This is reflected in criticisms such as: "no, you haven't given us the type of outcomes we want, the ones you've specified are all too low-level, they're just outputs"; or, alternatively, "no, the outcomes you've specified are all too high-level, how will you be able to prove that it was you who made them change?" In discussions about outcomes systems, a range of different terms are also used for outcomes, and sometimes used in different ways: e.g. final outcomes, impacts, intermediate outcomes, strategic outcomes, priorities, key drivers, outputs, activities etc.

Systematic Outcomes Analysis cuts through the potential confusion caused by contradictory demands about the level your outcomes should be at and the many terms used in outcomes and performance management systems. It does this by using the outcomes theory principle that outcomes can have five major features. These features are:

Influenceable - able to be influenced by a player

Controllable - only influenced by one particular player

Measurable - able to be measured

Attributable - able to be attributed to one particular player (i.e. proved that only one particular player changed it)

Accountable - something that a particular player will be rewarded or punished for.

Using these features of outcomes enables us to be very clear about the type of outcome we are talking about when doing Systematic Outcomes Analysis. In particular, it lets us clearly specify which types of outcomes we will allow into outcomes models. The standards for drawing outcomes models used in Systematic Outcomes Analysis allow any influenceable outcome to be included in an outcomes model. This is in contrast to a number of types of outcomes models which try to keep outcomes to just attributable outcomes (ones which it can be proved one particular player caused). The more general type of outcomes model drawn in Systematic Outcomes Analysis is much more useful for strategic planning, monitoring, evaluation and contracting purposes. In Systematic Outcomes Analysis, attributable indicators and accountable outcomes are mapped onto the model at a later stage after the more general model is built.
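
A minimal sketch, with invented outcomes and feature values, of how the five features might be recorded against outcomes and used to apply the rule that any influenceable outcome may enter the model:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Illustrative record of the five features of an outcome (values here are invented)."""
    name: str
    influenceable: bool = False
    controllable: bool = False
    measurable: bool = False
    attributable: bool = False
    accountable: bool = False

candidates = [
    Outcome("Reduced road deaths", influenceable=True, measurable=True),
    Outcome("Enforcement campaigns delivered", influenceable=True, controllable=True,
            measurable=True, attributable=True, accountable=True),
    Outcome("Sunspot activity"),  # not influenceable by any player, so excluded
]

# Rule: any influenceable outcome may be included in the outcomes model
model = [o.name for o in candidates if o.influenceable]
print(model)  # ['Reduced road deaths', 'Enforcement campaigns delivered']
```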

[V1.1.2] 

Outcome model standards

A set of standards has been developed for drawing comprehensive and technically sound outcomes models which can be used as the basis of any Systematic Outcomes Analysis. These standards are set out in full here and summarized below:

1. Use outcomes not activities. You can change an activity (doing) into an outcome (done) by just changing the wording (for instance changing: Increasing citizen participation to Increased citizen participation).

2. Outcomes models can include any of the 'cascading set of causes in the real world'. The steps that are put into models do not have to be limited to measurable, attributable or accountable outcomes. Attributable outcomes are those for which changes can clearly be attributed to an individual player. For a brief discussion of the different features of outcomes see here. There is usually a lot of resistance to including non-measurable and non-attributable outcomes in outcomes models. This is because stakeholders want to manage their risk of being held to account for achieving all of the outcomes they put into such models. This is a genuine risk, but it is managed in Systematic Outcomes Analysis by dealing with measurement, attribution and accountability in a separate stage after building the outcomes model. An outcomes model that is limited to measurable, attributable or accountable outcomes is usually useless for strategic planning (it limits you to trying to do the measurable and/or attributable rather than the important). It is also of limited value for monitoring and evaluation planning as it only lets you visualize what you already know rather than what you do not yet know (which is usually what you are trying to explore in monitoring and evaluation planning).

3. Don't force your outcomes model into particular horizontal 'layers' within the model. Often outcomes models are divided up into a number of layers such as inputs, outputs, intermediate outcomes and final outcomes. However, whether or not something is an output is simply a result of its measurability and attributability (see the section on the features of outcomes). Therefore, in some models outputs may reach further up one side of a model than another. Forcing artificial horizontal layers onto an outcomes model distorts it and makes it harder for stakeholders to 'read' the logical flow of causality in the model. The concept of outputs is useful for accountability purposes, and outputs can be identified at whatever level of the model they sit, at a later stage after the outcomes model has been drawn, without demanding horizontal banding into outputs, intermediate outcomes etc.

4. Don't 'siloize'. Siloizing is when you draw an outcomes model in a way that artificially forces lower level outcomes to only contribute to separate high level outcomes. In the real world, good lower level outcomes can contribute to multiple high level outcomes. Any outcome can potentially contribute to any other outcome in a model; the way you draw the model should allow for this.

5. Use 'singular' not 'composite' outcomes. Composite outcomes contain both a cause and an effect (e.g. increase seat-belt use through tougher laws). Outcomes like this should be stated as two separate outcomes. The use of words like through or by in an outcome shows that you are looking at a composite, rather than a singular, outcome (a simple illustration of this check appears after this list). Composite outcomes permanently lock an outcome to a particular strategy (in this case, increased seat-belt use to the strategy of tougher laws); dividing these into separate outcomes gives more analytical power to your outcomes model because it allows you to consider the possibility that other strategies could lead to the outcome which is being sought.

6. Keep outcomes short. Outcomes models with wordy outcomes are hard to read. Include separate descriptive notes with each of your outcomes if you need more detail on them.

7. Put outcomes into a hierarchical order. Use the simple rule that outcome A sits above outcome B where, if you could magically make A happen, you would not bother trying to make B happen.

8. Each level in an outcomes model should include all the relevant steps needed to achieve the outcome(s) above it.

9. Keep measurements/indicators separate from the outcomes they are attempting to measure. Measurement should not be allowed to dominate an outcomes model. Within Systematic Outcomes Analysis measurement is introduced at a later stage, after the outcomes model has been built. In the relatively small number of cases where a measurement also acts as an intervention in its own right (e.g. some audit procedures), it can be included as an outcome within a model.

10. Put a 'value' in front of your outcome (e.g. suitable, sufficient, adequate). You do not need to define this at the time you build your outcomes model. If it is not clear exactly what it amounts to, it can become the subject of an evaluation project at a later stage.

11. Develop as many outcome 'slices' as you need (but no more). In an outcomes model you are trying to communicate to yourselves and to other stakeholders the nature of the world in which you are trying to intervene. Slices can be seen as a series of cuts through the world of outcomes in your area of interest. For instance you might have slices at the national, locality, organizational and individual level. The trick is to get the smallest number of slices needed to effectively communicate the relevant outcomes in the model.

12. Do not assume that you need a single high-level outcome at the top of an organization's outcomes model. Outcomes models should be about the external world, not just about your organization. Often organizations are delegated to undertake interventions in a number of areas that are best modeled separately. This is a better approach than artificially trying to force outcomes relating to different areas under a single integrated high level outcome.

13. Include both current high-priority and lower priority outcomes. Your outcomes model should be as accurate a model as you can draw of the 'cascading set of causes in the real world'; therefore, it should not just be about the current priorities you can afford to work on if they are a sub-set of the wider outcomes picture. Within Systematic Outcomes Analysis you map a (typically more limited) set of priorities onto your more comprehensive outcomes model after you have built the model. This allows you in the future to think strategically about alternative options and to change your priorities. If your outcomes model only includes your current priorities it gives you no steer as to how your current priorities map onto the real world. In a public sector context this also allows outcomes models to support public officials providing 'free and frank advice' about how the world is - i.e. the 'cascading set of causes in the real world'. It is then up to elected government officials to decide what their priorities will be, and these can be mapped onto the underlying outcomes model. This approach means that outcomes models do not have to change every time there is a change in particular elected officials or the government as a whole. If elected officials' priorities change, this is reflected by mapping their priorities onto the more comprehensive outcomes model and by the public officials then moving to carry out these priorities.
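
As one small illustration of applying these standards (standard 5 in particular), a rough, assumption-laden check for composite outcomes might look like this; the trigger-word list is a heuristic invented for the example, and a human reviewer still makes the final call:

```python
# Heuristic flag for 'composite' outcomes (standard 5): wording such as
# "through" or "by" often signals that a cause and an effect have been fused.
COMPOSITE_MARKERS = (" through ", " by ", " via ")

def looks_composite(outcome):
    text = f" {outcome.lower()} "
    return any(marker in text for marker in COMPOSITE_MARKERS)

print(looks_composite("Increased seat-belt use through tougher laws"))  # True
print(looks_composite("Increased seat-belt use"))                       # False
# The composite outcome above would be split into two singular outcomes:
# "Increased seat-belt use" and "Tougher seat-belt laws".
```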

[V1.1]

[This summary is drawn from Duignan, Paul (2007). Visualising outcomes in social policy: constructing quality outcomes sets for maximising impact. Social Policy, Research and Evaluation Conference, Wellington New Zealand, 5 April 2007.]

Seven possible outcome evaluation designs

Systematic Outcomes Analysis (in the Evaluation (outcome) building-block of the system) uses an exhaustive set of seven possible outcome evaluation designs. This list can be used to establish exactly what outcome evaluation is, and is not, possible for any intervention. High level outcome evaluation questions are identified in Systematic Outcomes Analysis and are examined to see if any of the seven possible outcome evaluation designs are appropriate, feasible and affordable.

The seven possible outcome evaluation designs are:

Design 1: True experiment design.

Applying an intervention to a group (intervention group) and comparing it to a group (control group) which has not received the intervention, where there is no reason to believe that there are any relevant differences between the groups (e.g. through random assignment to intervention and control groups).

Design 2: Regression discontinuity design.

Applying an intervention only to a group (intervention group) who are the 'worst off' in terms of their initial levels on some outcome of interest. The results for this group are then compared to the results for a wider untreated group (control group) and they should have improved relative to the control group. 

Design 3: Time series analysis design.

Tracking an outcome of interest over many observations in a situation where the intervention starts at a clearly specified point in time. If a clear change in the series of observations is observed at the time when the intervention starts, this is regarded as evidence that the intervention had an effect. 

Design 4: Constructed comparison group design.

Identifying a 'group' which is similar in as many regards as possible to the group receiving the intervention. This can include either identifying other actual groups, or constructing a nominal control 'group' of what would have happened to those receiving the intervention if they had not, in fact, received it (e.g. propensity matching).

Design 5: Exhaustive causal identification and elimination design.

Systematically and exhaustively looking for all the possibilities which could have caused a change in outcomes and eliminating these alternative explanations in favor of the intervention as the best explanation for what happened. This approach needs to go well beyond just developing an explanation as to why the intervention could have worked; it must also dismiss all alternative explanations which can be identified. Sometimes called a 'forensic' evaluation method.

Design 6: Expert judgement design.

Asking experts to judge whether they think that the intervention caused the outcomes by using whatever way they believe is appropriate to make this judgement. (This design, along with some of the other outcome designs in some instances, is rejected by some stakeholders as not being a valid way of determining whether an intervention actually caused high level outcomes. It is included in the list of outcome evaluation designs here because it is accepted by other stakeholders as actually doing this.)

Design 7: Key informant judgement design.

Asking key informants (a selection of those who are likely to know what has happened)  to judge whether they think that the intervention caused the outcomes and allowing them to do this in whatever way they believe is appropriate to make this judgement. (This design, along with some of the other designs in some instances, is rejected by some stakeholders as not a valid way of determining whether an intervention actually caused high level outcomes. It is included in the list of evaluation designs here because it is accepted by other stakeholders as actually doing this.)

Of these designs, the first four can be used to estimate effect sizes. Effect sizes are a quantitative measurement of the amount an intervention has changed an outcome. Estimated effect sizes are essential for carrying out some of the elements in other building blocks. The Prerequisites building blocks 5-8 diagram sets out the prerequisites here.
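
As a hedged sketch of where such an effect size might come from under Design 1, using invented data and a simple difference-in-means estimator (one of many possible estimators, not a prescribed method):

```python
import random

random.seed(1)

# Invented post-intervention scores after random assignment (Design 1).
control = [random.gauss(50, 10) for _ in range(200)]
treated = [random.gauss(55, 10) for _ in range(200)]

def mean(xs):
    return sum(xs) / len(xs)

def pooled_sd(a, b):
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return ((var(a) * (len(a) - 1) + var(b) * (len(b) - 1)) / (len(a) + len(b) - 2)) ** 0.5

raw_effect = mean(treated) - mean(control)               # difference in means
standardized = raw_effect / pooled_sd(treated, control)  # Cohen's d style effect size
print(f"raw effect: {raw_effect:.1f}; standardized effect size: {standardized:.2f}")
```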

[Note: This list of designs is still provisional within Systematic Outcomes Analysis. The first five were derived from the work of Michael Scriven identifying causal evaluation designs. The last two have been added because they are accepted by some stakeholders in some real-world circumstances as providing evidence that an intervention has caused high level outcomes. Different disciplines use different terms for these types of designs and therefore the names of these designs within Systematic Outcomes Analysis may be changed in the future. For instance, general regression analyses, as often undertaken in economic analysis, are currently included under the 'constructed comparison group design'; this may or may not be a good idea, and it may be that they should have their own design type in this list. Comment on whether this is actually an exhaustive list of designs would be appreciated (send to paul (at) parkerduignan.com).]

[V1.1.2]

Seven possible evaluation areas of focus

Systematic Outcomes Analysis identifies seven possible areas of evaluation focus.  The first of these is the focus of the 5th building block Evaluation[outcome]. The other six are the focus of the 6th building block - Evaluation[non-outcome].

The seven possible areas of focus for evaluation are:

Focus 1: Establishing whether a particular intervention has caused an improvement in high level outcomes (outcome/impact evaluation)

Focus 2: Establishing whether a particular intervention has caused an improvement in mid level outcomes (process evaluation)

Focus 3: Describing an outcomes model that is actually being implemented in a specific instance (including estimating the cost of undertaking an intervention) (process evaluation)

Focus 4: Comparing an outcomes model being implemented with proposed outcomes model(s) (process evaluation)

Focus 5: Aligning an outcomes model being implemented with a proposed outcomes model or its enhancement (formative evaluation; best practice application)

Focus 6: Describing the different understandings, interpretations or meanings stakeholders have of an outcomes model and its implementation

Focus 7: Describing the effect of the context on the implementation of an outcomes model.
[V1.1.2]

Two types of intervention comparisons

Systematic Outcomes Analysis, in the 7th building block Evaluation[economic & comparative], uses two types of intervention comparison.

Comparison 1: Quantitative multi-intervention effect size meta-analysis
This type of comparison relies on effect sizes being available for high level outcomes from all of the interventions which are being compared. If they are not available then this type of analysis cannot be done. The Prerequisites building blocks 5-8 diagram here sets out the prerequisites for elements within the Systematic Outcomes Analysis building blocks. If Comparison 1 is being used by decision makers as the major factor in selecting between different interventions, this only makes sense in one case: where the ease of undertaking E[outcome] studies which produce effect sizes (Designs 1-4 here) does not differ in any major way for the different interventions being compared. If there are major differences in the ease with which this type of outcome design can be undertaken, then decision makers will be biasing their decisions in the direction of interventions for which it is easy to undertake E[outcome] Designs 1-4. Comparison 1 is likely to enhance decision making in situations such as where the effectiveness of different types of pharmaceuticals is being compared, but it is less well suited to situations where decision makers are attempting to compare very different interventions with differing ease of outcome evaluation (e.g. individual interventions versus community programs).
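
A minimal sketch of the kind of quantitative comparison Comparison 1 depends on, assuming each intervention has an effect size and standard error available from Designs 1-4 (all figures invented):

```python
# Comparing high-level-outcome effect sizes across interventions (invented numbers).
# Each entry: (intervention, effect size, standard error of the effect size).
results = [
    ("Intervention A", 0.30, 0.10),
    ("Intervention B", 0.45, 0.14),
    ("Intervention C", 0.20, 0.12),
]

for name, effect, se in sorted(results, key=lambda r: r[1], reverse=True):
    low, high = effect - 1.96 * se, effect + 1.96 * se  # rough 95% interval
    print(f"{name}: effect size {effect:.2f} (95% CI {low:.2f} to {high:.2f})")
```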

Comparison 2: Mixed qualitative/quantitative multi-intervention comparison analysis
This comparison covers a diverse range of different types of comparisons between interventions. This includes general reviews of what works and what does not work, summaries of qualitative studies etc. It is not restricted to just qualitative findings because there is no reason why quantitative results should not also be considered in such reviews.
[V1.1.2]

Ten possible economic evaluation analyses

Systematic Outcomes Analysis uses an exhaustive set of ten possible types of economic analysis (grouped into three groups of analyses) in the Economic and Comparative Evaluation 7th building-block of the system. This list is used to establish exactly what economic evaluation is, and is not, possible for any intervention or set of interventions. Moving through the three overall groups of analyses, if a later analysis can be done, then by definition one of the corresponding earlier analyses can also be done. So if you can do Analysis 3.2 you can also do 2.2, 2.1 and all of the Analyses 1.1-1.4. The analyses are grouped into three sets: those you can do when you do not have actual effect-size estimates for attributable outcomes above the intervention; those you can do when you have estimates for mid-level outcomes; and those you can do if you have estimates for high-level attributable outcomes. In summary, you can only do the first grouping if you have estimated the cost of the intervention in the 5th building block (Checklist Step 6.1.1.2); for the second grouping you also need to have estimated mid-level outcome effect sizes in the 5th building block (by using one of the outcome evaluation designs 1-4 in Checklist Step 6.1.1.1); for the third grouping you need to have estimated high-level outcome effect sizes in the 4th building block (by using one of the outcome evaluation designs 1-4 in Checklist Step 5.2.2).
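
A rough sketch of the grouping logic just described, with invented flag names standing in for the checklist steps:

```python
def feasible_analysis_groups(have_cost, have_mid_level_effect_size, have_high_level_effect_size):
    """Map the estimates available (hypothetical flags) to the three analysis groups below."""
    groups = []
    if have_cost:
        groups.append("Group 1: cost and hypothetical cost-benefit analyses (1.1-1.4)")
        if have_mid_level_effect_size:
            groups.append("Group 2: mid-level cost-effectiveness analyses (2.1-2.2)")
        if have_high_level_effect_size:
            groups.append("Group 3: high-level cost-effectiveness and cost-benefit analyses (3.1-3.4)")
    return groups

print(feasible_analysis_groups(have_cost=True,
                               have_mid_level_effect_size=True,
                               have_high_level_effect_size=False))
```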

In addition, another important prerequisite of any type of cost benefit analysis (Analyses 1.3,1.4,2.3,2.4,3.3,3.4 below) is that a comprehensive outcomes model has been developed. The robustness of a cost benefit analysis depends on it providing a comprehensive measurement of all of the relevant costs and benefits associated with an intervention. It is easy to distort the results of a cost benefit analysis in any direction you wish by simply leaving out either the costs or the benefits of one or more important outcomes. In Systematic Outcomes Analysis all cost benefit analyses should be mapped back onto an outcomes model. This lets the reader of such an analysis quickly overview what is, and what is not, included in the analysis and how this relates to the underlying outcomes model. It is not easy to assess the comprehensiveness of a cost benefit analysis without using this type of approach.

The set of prerequisites which exist between building blocks 5-8 are set out in the Prerequisites building blocks 5-8 diagram here.

The ten economic evaluation analyses grouped into three groups are:

1: No attributable outcomes above intervention

Analysis 1.1 Cost of intervention analysis, single intervention.
Cost of intervention analysis just looks at the cost of an intervention, not its effectiveness (how much it costs to change an outcome by a certain amount) or its benefits (the result of subtracting the dollar cost of the program from the benefits of the program estimated in dollar terms). This analysis allows you to say what the estimated cost of the intervention is (e.g. $1,000 per participant).

Analysis 1.2 Cost of intervention analysis, multi-intervention comparison. Same as 1.1 but a multi-intervention comparison. This analysis allows you to compare the costs of different interventions (e.g. Program 1 - $1,000 per participant; Program 2 - $1,500 per participant; or, to put it another way, Program 2 costing 1.5 times as much as Program 1 per participant).

Analysis 1.3 Cost benefit analysis, set of arbitrary high level effect size estimates, single intervention. Even where you cannot establish any attributable outcomes above the intervention, but you do have an estimate of the cost of the intervention, you can use some arbitrary (hypothetical) effect sizes. These can be used, if they can be estimated in dollar terms, to do a hypothetical cost benefit analysis (e.g. for a hypothetical effect size of 5%, 10% or 20%); a worked illustration of this arithmetic appears after this list. It is essential that this type of hypothetical analysis is clearly distinguished from Analysis 3.3, which is based on estimates from actual measurement of effect sizes. This analysis allows you to estimate the overall benefit (or loss) of running the intervention if any of the hypothetical effect sizes were achieved (e.g. there would be a loss of $500 per participant for a 5% effect size, a gain of $100 for a 10% effect size and a gain of $600 per participant for a 20% effect size).

Analysis 1.4 Cost benefit analysis, set of arbitrary high level effect size estimates, multi-intervention comparison. Same as 1.3 but a multi-intervention comparison. This analysis allows you to compare the overall loss or gain from more than one program for various hypothetical effect sizes (e.g. for a 5% effect size, Program 1 would have an estimated loss of $500 per participant whereas Program 2 would have a gain of $200, and so on). You could even, in theory, vary the arbitrary effect sizes if there was some reason to believe that there would be differences (e.g. a general population program is likely to have a lower effect size than an intensive one-to-one program), although this may not say anything about the overall loss or gain when comparing two such programs. It is essential that this type of hypothetical analysis is clearly distinguished from Analysis 3.4, which is based on estimates from actual measurement of effect sizes.

2: Attributable mid-level outcomes

Analysis 2.1 Cost effectiveness analysis, attributable mid-level outcomes, single intervention. In this analysis, estimates are available of the attributable effect of the intervention on mid-level outcomes. When combined with the estimated cost of the intervention this allows you to work out the cost of achieving a certain level of effect on mid-level outcomes (e.g. a 6% increase in X costs approximately $1,000 per participant).

Analysis 2.2 Cost effectiveness analysis, attributable mid-level outcomes, multi-intervention comparison. Same as 2.1 but a multi-intervention comparison. This analysis lets you work out the cost of achieving a certain level of effect on mid-level outcomes for a number of interventions (e.g. a 6% increase in X costs approximately $1,000 per participant for Program 1 whereas it costs $1,500 for Program 2). It is likely that the measured effect sizes of different interventions will vary; therefore, you may need to adjust estimates to a common base. This may or may not reflect what would happen in regard to the actual programs in reality.

3: Attributable high-level outcomes

Analysis 3.1 Cost effectiveness analysis, attributable high level outcomes, single intervention. Same as 2.1 except you can work out the cost of achieving a high level outcome effect size of a certain amount.

Analysis 3.2 Cost effectiveness analysis, attributable high level outcomes, multi-intervention comparison. Same as 2.2 except you can work out the cost of achieving a high level outcome effect size of a certain amount and compare this across more than one intervention.

Analysis 3.3 Cost benefit analysis, attributable high level outcomes, single intervention. In this analysis, figures are available for the cost of the intervention and its attributable effect on high level outcomes, and the costs and benefits of all outcomes can be reasonably accurately determined in dollar terms. If this information is not available this type of analysis cannot be done. This analysis lets you work out the overall loss or gain from running the program (e.g. the program costs $1,000 per participant and other negative impacts of the program are estimated at $1,000, while the benefits of the program are estimated at $2,500 per participant; therefore there is an overall benefit of $500 per participant).

Analysis 3.4 Cost benefit analysis, attributable high level outcomes, multi-intervention comparison. Same as 3.3 but a multi-intervention comparison. This analysis lets you work out the overall cost or benefit for a number of programs compared (e.g. Program 1 has an overall benefit of $500 per participant whereas Program 2 has an overall benefit of only $200 per participant).
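
As a hedged illustration of the arithmetic behind Analyses 1.3 and 3.3 (all figures, including the dollar value per percentage point of effect, are invented assumptions that a real analysis would have to justify):

```python
# Hypothetical cost-benefit arithmetic in the style of Analyses 1.3 / 3.3. All numbers invented.
cost_per_participant = 1000            # estimated cost of the intervention per participant
dollar_benefit_per_effect_point = 100  # assumed dollar benefit per 1% improvement in the outcome

def net_benefit(effect_size_percent):
    """Benefit minus cost per participant for a given effect size."""
    return effect_size_percent * dollar_benefit_per_effect_point - cost_per_participant

for effect in (5, 10, 20):  # hypothetical effect sizes, as in Analysis 1.3
    print(f"{effect}% effect size: net {net_benefit(effect):+.0f} per participant")
# With a measured (attributable) high level effect size, the same arithmetic is
# an Analysis 3.3 rather than a hypothetical Analysis 1.3.
```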
   
[Note: This list of analyses is still provisional within Systematic Outcomes Analysis. For instance, there could theoretically be a 'cost benefit analysis, set of arbitrary mid-level effect size estimates, single intervention or multi-intervention comparison'; however, it is not clear why anyone would do this rather than 1.3 or 1.4, which set arbitrary high level effect sizes. Comment on whether this is actually an exhaustive list would be appreciated (send to paul (at) parkerduignan.com).]

[V1.1]

Two overall monitoring & evaluation schemes

Systematic Outcomes Analysis identifies a number of possible overall monitoring and evaluation schemes. These identify what type of monitoring and evaluation is being done for any piloting phase and what is being done for the full roll-out of the intervention. In particular, if there is a pilot phase, the relationship between the monitoring and evaluation approach for this and for the full roll-out needs to be determined.

The overall monitoring and evaluation scheme used in a particular case can use various combinations of the Systematic Outcomes Analysis building-blocks. Obviously, if parts of earlier building blocks are not appropriate, feasible or affordable then this will limit the possibilities for the overall monitoring and evaluation scheme which can be used. The prerequisites for these schemes are set out in the Prerequisites building blocks 5-8 diagram here. It is important that all stakeholders understand which overall monitoring and evaluation scheme is being used. Two typical overall monitoring and evaluation schemes are:

Scheme 1: Full roll-out outcome evaluation plus some additional evaluation

Full roll-out Evaluation[outcome] plus some or all of Outcomes model, Indicators[nn-att], Indicators[att], Evaluation[n-outcome], Evaluation[economic & comparative].

Scheme 2: Pilot outcome evaluation plus additional evaluation AND only additional evaluation on full roll-out

Pilot-phase Evaluation[outcome] plus some or all of Outcomes model, Indicators[nn-att], Indicators[att], Evaluation[n-outcome], Evaluation[economic & comparative] AND full roll-out limited to some or all of Outcomes model, Indicators[nn-att], Indicators[att], Evaluation[n-outcome], Evaluation[economic & comparative].
[V1.1.2]

Three possible contracting arrangements

Systematic Outcomes Analysis uses three possible contracting approaches which can be negotiated between Funders and Doers. The distinctions made here are based on the different features of outcomes used in Systematic Outcomes Analysis, described here. The three possible types of contracting arrangements are:

Arrangement 1: Accountable for outputs only.

In this arrangement, Doers are accountable only for producing specified outputs and nothing else.

Arrangement 2: Accountable for outputs AND for 'managing for outcomes'

In this arrangement, Doers are accountable for producing specific outputs AND also for 'managing for outcomes'. Managing for outcomes means that they also need to be thinking about whether their outputs are the best way that high level outcomes can be achieved, including considering the other factors which may influence high level outcomes and negate the effectiveness of their outputs. In this arrangement Doers are NOT held accountable for the achievement of high level outcomes. Exactly how managing for outcomes is defined is an interesting question, as Funders/Control Organizations need to somehow work out whether or not Doers are actually 'managing for outcomes'. If Doers do this in diverse ways it is very difficult for Funders to know whether they are doing it properly without immersing themselves in the details of the way it is being undertaken by particular Doers. It is rather like Funders/Control Organizations attempting to work out whether Doers are being financially responsible in a situation where there are no standardized accounting systems or accounting conventions. From the point of view of Systematic Outcomes Analysis, a solution to this problem is for Funders to require that Doers undertake a Systematic Outcomes Analysis of their funded projects and have these peer reviewed/audited, in an analogous way to what happens in the accounting area.

Arrangement 3: Accountable for not fully controllable outcomes

In this arrangement, Doers are held to account for not fully controllable outcomes, which sounds somewhat paradoxical. However, exactly this does occur in the private sector. It is most suited to those situations where it is not appropriate, feasible or affordable to work out what can be attributed to particular Doers and where the Funder/Control Organization receives and directly benefits from the achievement of high level outcomes. In these cases the Funder/Control Organization is willing to share its increase in wealth with those who probably (or possibly), but ultimately unprovably, influenced its good fortune.

Within this arrangement, Doers end up 'insuring' Funders/Control Organizations against the times when high level outcomes are not achieved and Funders/Control Organizations do not receive any increase in wealth. In those cases the Doer does not receive the bonus etc. which they would have received if high level outcomes had been achieved. Therefore, Doers usually demand a premium from Funders/Control Organizations to manage their risk against those times when things go badly through no fault of their own. This sort of arrangement is in place for the salaries of top executives within the private sector. In the public sector it is less likely to occur because those making decisions within Funder/Control Organizations do not usually personally benefit from achieving high level outcomes; many Doers need to collaborate to achieve many public sector outcomes; and politicians, taxpayers and the media are normally resistant to large sums of money being paid out to people in circumstances where there is the possibility that they may not have, in fact, 'earnt it'.

[V1.1.2]

Management of monitoring & evaluation - issues

Systematic Outcomes Analysis identifies a set of issues in the management of monitoring and evaluation as follows:

Issue 1: Consultation with stakeholders regarding monitoring and evaluation planning, implementation and review. The consultation processes with internal and external stakeholders regarding monitoring and evaluation planning, implementation and review need to be detailed in any monitoring and evaluation planning.

Issue 2: Evaluation management structure. The structure for governance, management and undertaking of evaluation activities needs to be clearly determined in monitoring and evaluation planning. In small evaluations this will all be managed within existing governance and management structures. In larger evaluations, specific governance structures may be set up (e.g. steering groups), technical input may be sought (e.g. Technical Advisory Groups), structures to manage internal evaluation staff may be established, and systems for contracting with external evaluators put in place.

Issue 3: Internal versus external evaluators. There are pros and cons with using internal versus external evaluators and these need to be considered in evaluation planning. These are as follows:

•   Integration with strategic decision-making. Likely to be more with internal rather than external evaluators.

•  Independent judgement. Likely to be less with internal rather than external evaluators.

•  Cost. Likely to be less with internal rather than external evaluators.

•  Range of evaluation skills. Likely to be less with internal rather than external evaluators.

•  Institutional knowledge retention. Likely to be more with internal rather than external evaluators.

•  Getting the evaluation done. Internal staff, distracted by other organizational priorities, are likely to give the evaluation less priority than external evaluators would.

•  Drift in the evaluation questions being answered. Likely to be less 'evaluation question drift' (where the evaluation does not end up answering the questions stakeholders thought it was going to answer) with internal rather than external evaluators. One exception to this is if political factors within the organization change the direction of the evaluation; in this case, internal evaluators are more likely to be influenced than external evaluators.

Issue 4: Knowledge management. Evaluations create large amounts of information which needs to be managed both at the time the evaluation is being run and so that the findings and lessons learnt from the evaluation can continue to inform strategic decision making in the future. This requires careful attention to knowledge management systems within and across organizations.

Issue 5: Risk management. There are a number of risks which need to be managed in monitoring and evaluation. The general risks faced by any monitoring and evaluation are set out below with the way in which these are managed when using Systematic Outcomes Analysis.

•  Not asking and answering the right monitoring and evaluation questions. Systematic Outcomes Analysis, by linking evaluation questions back onto the outcomes model and by going through the set of seven areas of evaluation focus, increases confidence that all important evaluation questions have been identified. Having your Systematic Outcomes Analysis peer reviewed increases the chances that you are considering the right questions.

•  Lack of stakeholder confidence in the independence of monitoring and evaluation. The more transparent an evaluation is the more stakeholders are able to decide if they have confidence in it. Systematic Outcomes Analysis provides a fully transparent view of an evaluation. Appropriate consultation with stakeholders (1 above); the right evaluation management structure (2 above); and  the right selection of internal versus external evaluators (3 above) all decrease the risk of a lack of stakeholder confidence.

•  Getting evaluators with the right skills to undertake the evaluation. This depends on the actual availability of evaluators with the right skills to undertake the evaluation plus having the funding available to employ or contract them. Systematic Outcomes Analysis will help you clarify the exact evaluation questions you want answered (e.g. high level outcome evaluation questions versus other evaluation questions) and this is likely to help you when assessing whether potential evaluators have the skills needed to answer particular questions. Involving an additional independent expert evaluator on selection panels and advisory groups can mean that more informed decisions are made about whether evaluators being employed or contracted have the right skills.

•  Drift in the evaluation questions being answered away from those stakeholders think are being answered. Systematic Outcomes Analysis makes sure that all stakeholders are clear about exactly which evaluation questions are being answered. Equally importantly (and often neglected in evaluation planning) it also identifies which evaluation questions are not being answered and the reasons why. Visually mapping all evaluation questions back onto the outcome model reduces much stakeholder and evaluator confusion about what is and what is not being answered.

•  Lack of ongoing control of externally contracted evaluations due to contracting organization staff turnover. For evaluations which take a number of years this can be a major problem. Systematic Outcomes Analysis reduces this risk by providing a transparent evaluation plan which should be progressively updated throughout the evaluation so that any new staff can quickly understand the details of what the evaluation is trying to do and how it is going about it.

•  Lack of integration of monitoring and evaluation. These two aspects are often not well integrated. Systematic Outcomes Analysis fully integrates them.

•  Disconnect between evaluation findings and ongoing strategic planning. Systematic Outcomes Analysis can be used to fully integrate evaluation with strategic planning if the same outcomes model is used for both evaluation planning and strategic prioritization.

Issue 6: Evaluation costing. The potential cost of answering evaluation questions needs to be identified in monitoring and evaluation plans if decision makers are to make rational monitoring and evaluation resource allocation decisions. In addition, evaluation cost estimates are needed for budgeting once evaluation projects commence.

[V1.1]


Copyright Paul Duignan 2005-2007 (updated March 2007)