Systematic Outcomes Analysis

A complete solution to outcomes, strategy, monitoring, evaluation and contracting

How To List: step by step instructions for doing Systematic Outcomes Analysis

This How To List provides step by step instructions for doing Systematic Outcomes Analysis. It is based on the building-blocks in the OIE Extended Diagram.

While there is a general sequence for doing Systematic Outcomes Analysis as set out here, you may undertake stages in a different order if this suits the project or organization you are working with. If you are printing out the How To List so you can work through it, you should also print out The Model. This is because at different stages in the How To List you are referred to more detailed material which is included in the model.

This page sets out the whole of the How To List.

1. Outcomes model

Step 1.1  Getting agreement to do Systematic Outcomes Analysis

The first step in Systematic Outcomes Analysis is getting agreement from stakeholders to start building an outcomes model (creating the Outcomes Model Building Block). This is a model of the cascading set of causes in the real world for the area in which you want to intervene. Ideally you should try to get as much buy-in as possible from the highest level of stakeholders or management you can. You can use the introductory presentation on Systematic Outcomes Analysis to show what the process is all about. Even if you cannot get enough buy-in for the whole process, sometimes you can get agreement to start by just building an outcomes model. Once the benefits of using the model are seen, you can then move on to developing the other building blocks of Systematic Outcomes Analysis. Motivations for using the approach can differ: sometimes people want to use the system only for strategic planning; sometimes for monitoring or evaluation planning; sometimes for thinking about the possibilities for economic evaluation; and sometimes for working out what should be done regarding outcomes-focused contracting. It does not really matter what the initial motivation is for doing Systematic Outcomes Analysis; once some of the building blocks are in place, it becomes clear to stakeholders that they can use the approach to meet a range of their core organizational needs. Regardless of why you are doing Systematic Outcomes Analysis, the first step is always building a robust outcomes model. You do this in the following way:

1.1.1 Hold a meeting of the wider group of internal stakeholders who are interested in an outcomes model being built and explain the Systematic Outcomes Analysis approach to them (you can use the introductory presentation).

1.1.2 Get agreement from the wider group to set up a smaller working group to actually build the outcomes model. This should include the highest level of stakeholder or manager you can get; someone who knows how to draw outcomes models (i.e. who has studied the outcomes model standards); and two to four content specialists. Building models with groups larger than this is often difficult. Others can be called into the small working group as their specific expertise is needed. Typically, developing the model will take three to four meetings held once a week. These meetings should allow sufficient time for participants to get focused on the task at hand. They should usually last less than two hours, and never longer than half a day, because the work requires close concentration. Additional people can sit in on the meetings if they are interested in finding out how the process works.

Step 1.2  Drawing a comprehensive outcomes model in the smaller working group

The way in which the model is developed is important. If artificial constraints are put on the type of outcome that is allowed to go into the model (e.g. only measurable and attributable outcomes), then it is likely that the model will be useless for later stages of the Systematic Outcomes Analysis process. To make sure that the model is built in the most useful way possible, it should conform to the Systematic Outcomes Analysis outcomes model standards.
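Systematic Outcomes Analysis does not prescribe any particular software, but it can help to picture an outcomes model as a directed graph: outcomes are nodes, causal links are edges, and slices are labels grouping conceptually related outcomes. The short Python sketch below illustrates one possible representation; the outcome names, slices and structure are invented for illustration only.

```python
# A minimal sketch of an outcomes model as a directed graph.
# All outcome names and slices here are invented examples.

from dataclasses import dataclass, field

@dataclass
class Outcome:
    name: str
    slice_name: str                    # conceptual grouping, e.g. "national level"
    causes: list = field(default_factory=list)  # lower-level outcomes feeding in

# A tiny cascading model: lower-level outcomes cause higher-level ones.
training = Outcome("Staff trained in new practice", "organizational level")
practice = Outcome("New practice used consistently", "organizational level",
                   causes=[training])
health = Outcome("Improved population health", "national level",
                 causes=[practice])

def print_cascade(outcome, depth=0):
    """Walk down the causal cascade from a high-level outcome."""
    print("  " * depth + f"{outcome.name}  [{outcome.slice_name}]")
    for cause in outcome.causes:
        print_cascade(cause, depth + 1)

print_cascade(health)
```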

Step 1.3 Checking your model with stakeholders

Once the smaller working group has developed a draft of the outcomes model, it should be checked with the wider group of internal stakeholders and then with groups of external stakeholders.

1.3.1 Checking back with the initial wider group of internal stakeholders should take place relatively early in the development of the model to make sure that the smaller working group is on track (say after the smaller group has had three meetings). There should be a brief introduction making it clear that in Systematic Outcomes Analysis the outcomes model should be of the real world and can contain outcomes which are not currently measurable or attributable. It should also be made clear that the model is only a draft; however, it should be presented in a tidy format so as to avoid members of the wider group making too many minor changes. A tidy format makes it feel that the model should not be amended unless there is good reason to do so. The emphasis at this stage is not only on the outcomes and links within the model but also on whether the smaller working group has got it right in terms of the way it has divided outcomes up into conceptual groups (or slices - see the outcomes model standards). This stage of developing the outcomes model may need more than one iteration, with the small working group going away, working on the model further, and bringing it back to the wider group several times.

1.3.2 Checking with external stakeholders. Once the outcomes model is at a stage where your organization is happy for external stakeholders to see it as draft material, take it to whatever groups of external stakeholders are relevant to the project. Remember to always brief them on the fact that in Systematic Outcomes Analysis it is fine for an outcomes model to contain outcomes which are not currently measured or attributable, as the purpose is to draw a model of the real world. The model is not just about what you can currently measure or prove that your organization has influenced. External stakeholders are usually able to relate to such comprehensive models much better than to models which focus only on the measurable and attributable outcomes of an individual organization.

Step 1.4   Check your model against existing empirical research

The outcomes model you have drawn so far, and which you have checked with your stakeholders, is your (and your stakeholders') claim about how the world works regarding the interventions you are planning or currently undertaking. It may or may not be an accurate representation of how the world actually works in practice. It is in this step that you check your model against existing empirical research. What we mean by empirical research at this stage in Systematic Outcomes Analysis is simply research which relies on observations made of the world, rather than just on reasoned justification. We first check the model against empirical research and then, secondly, against reasoned justification. How extensive this process of checking is depends on the circumstances.

1.4.1 Check your model (the outcomes and the links between them) against existing empirical research either yourself, if you have the skills to do this, or by getting someone else to do it (e.g. experts in the field, university researchers etc.).

Step 1.5   Develop reasoned justification for parts of your model where there is no existing empirical research

There is unlikely to be existing empirical research for every link between outcomes in your model. You therefore need to develop a reasoned argument for those links for which there is no existing empirical research. It is a mistake to only allow links in models which can be supported by empirical research. This is because different outcomes models, and different parts of a particular outcomes model, vary in the ease with which they can be validated using empirical research. If you inappropriately limit yourself to strictly empirically based models you can end up only doing that which can be empirically validated rather than doing what seems to be the most strategic thing to do in a particular situation.

1.5.1 Develop reasoned justification for any parts of your model which are likely to be challenged by stakeholders. This justification only needs to be as extensive as is appropriate. For instance, there may be many instances in your model where the link between outcomes is obvious to stakeholders. You do not need to set out  justifications for all of these links.   

Step 1.6   Turn issues needing further work into outcomes model development projects

There are likely to be aspects of your outcomes model that need further work. For instance, developing more detailed outcomes in a particular area of the model; developing a further slice (a slice is simply a grouping of conceptually related outcomes e.g. national level, locality level, organizational level etc) in an area; or doing a literature review to look at the empirical evidence beneath one part of your model. All this work can be turned into outcomes model development projects.

1.6.1 Look at the state of development of your outcomes model, identify what further work needs to be done, and turn this work into outcomes model development projects. These will have to be prioritized later, together with the other projects which emerge from other parts of your Systematic Outcomes Analysis.

2. Strategy

The second step in Systematic Outcomes Analysis is developing strategy (the Strategy Building Block). In this step, you use the outcomes model you developed in the first step to decide on priorities and actions you and others will take. Depending on why you are doing Systematic Outcomes Analysis, you may not need to do this step at an early stage or at all (e.g. if you are just using it for monitoring and evaluation planning). 

Step 2.1: Use your outcomes model for strategic planning

2.1.1 Your outcomes model can be used to develop a vision and mission statement if a separate written statement of this is needed. Once you have built an outcomes model, the highest level of the model can be used in place of your vision and mission statement. However, if for any reason you need a separate vision and mission statement, it can be prepared from the list of high-level outcomes within your outcomes model. From a technical point of view this is a better way of working than developing a formal vision or mission statement before you build your outcomes model. Vision and mission statements often have artificial constraints on them, such as being a 'short pithy statement one sentence long'. The compromises involved in drafting vision and mission statements to meet these constraints often make them less than useful for fully defining the top level of an outcomes model. Of course, if you already have a vision or mission statement it can be useful in helping you work out what your model's high level outcomes should be.

2.1.2 Use your outcomes model to establish priorities for your next planning period. An outcomes model should provide a map of all of the steps which need to be undertaken to achieve the high level outcomes you are aiming for. It should be a comprehensive map of the real world outside your program or organization. Such a map gives you the best basis for thinking strategically about what it is you are trying to do and your priorities for doing it. Due to limited resources, you will usually have to make some decisions about how you are going to prioritize your activity. Working with a comprehensive outcomes model helps you make sure that your prioritization decisions are the most appropriate ones. If you have built an appropriate model in the initial step of the Systematic Outcomes Analysis process, all you need to do at this stage is examine the outcomes model and decide which lower level steps are your highest priorities for immediate action. This requires that you have drilled down your outcomes model to a suitably low level.

2.1.3 Agree with involved players (staff, organizational units, collaborators etc.) who is going to do what by mapping responsibilities onto your outcomes model. This allows you to assign responsibility for actions and also to identify whether there are important steps which have not been assigned to any player. You may also find that certain steps are being focused on by too many players. Again, this requires that you have drilled down your outcomes model to a suitably low level.
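As a rough illustration of how mapping responsibilities onto the model makes gaps visible, the sketch below records which players have been assigned to each outcome and flags outcomes with no player, or with many players. The outcome and player names are hypothetical.

```python
# Sketch: map players onto outcomes, then flag gaps and pile-ups.
# All outcome and player names are hypothetical.

players_by_outcome = {
    "Staff trained in new practice": ["Training team"],
    "New practice used consistently": ["Operations", "Training team", "QA unit"],
    "Improved population health": [],    # no player assigned yet
}

for outcome, players in players_by_outcome.items():
    if not players:
        print(f"Unassigned step: {outcome}")
    elif len(players) > 2:
        print(f"Crowded step ({len(players)} players): {outcome}")
```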

2.1.4 Drill down even further to lower levels of your outcomes model for more specific business planning if you wish. You can use your outcomes model as the start for drilling down to more and more specific levels beneath it. At these levels you can set out responsibilities of individual staff, teams or other collaborators. 

3. Indicators (not-necessarily attributable)

The third step in Systematic Outcomes Analysis is to identify indicators which can be used to measure some, or all, of your outcomes (the Not-Necessarily Attributable Indicators Building Block - Indicators[nn-att]). When working on the first step (the Outcomes Model) you did not need to worry about measurement. Now is the time when you can start thinking about it. In Systematic Outcomes Analysis, outcomes (or more correctly, their indicators) can be either attributable or not-necessarily attributable. An attributable indicator is a routine measurement which, by the mere fact of its measurement, establishes that the outcome has been caused by a particular player. Outputs (e.g. the number of books published or the number of meetings arranged by a player) are good examples of attributable indicators. Not-necessarily attributable indicators (the type dealt with in this step) are ones which, even though they are able to be measured, do not definitively show which player caused them to change. For instance, improved health outcomes for a population may be influenced by a number of parties and other external factors. Not-necessarily attributable indicators are sometimes called environmental, state, shared or common indicators.

Remember when you are identifying these indicators that they are not-necessarily attributable, not definitely not attributable. In other words, it is fine if some of them are attributable to particular players while others are not. If you are working with the relatively high levels of an outcomes model, most of the indicators are unlikely to be attributable. However, the lower you go down an outcomes model, the more likely it is that the outcomes are attributable to particular players. For further information about attribution and the features of outcomes see here.

Step 3.1   Identify any not-necessarily attributable indicators and map them onto the outcomes model

3.1.1  Identify any not-necessarily attributable indicators. It is likely that in developing your outcomes model you have already thought of some measures for some of the outcomes in the model. Now think further about the outcomes in your model and the ways in which they could be routinely measured. At this stage, note that others may be measuring, or wanting to measure, some or even all of these not-necessarily attributable indicators. Since they are not just attributable to you, a range of other players are likely to be interested in measuring them.

3.1.2 Map not-necessarily attributable indicators back onto your outcomes model. An un-mapped list of indicators is very difficult to interpret because it does not show which outcomes in the underlying outcomes model are being measured and which are not. There is a danger when using an un-mapped indicator list that your strategic direction ends up being driven by what you can measure rather than by what is the most sensible thing for you to do in your particular circumstances. Remember that you are unlikely to have indicators for every outcome in your model. If this is so, then that is just the reality of the area in which you are working.
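One way to see why an un-mapped indicator list is hard to interpret: if indicators are instead held keyed by the outcome they measure, outcomes with no indicators remain visible as empty entries rather than disappearing from view. A minimal sketch, with invented outcome and indicator names:

```python
# Sketch: map not-necessarily attributable indicators onto outcomes,
# then list the outcomes that currently have no indicator at all.
# Outcome and indicator names are invented.

indicators_by_outcome = {
    "Staff trained in new practice": ["% of staff completing training"],
    "New practice used consistently": ["audit score on practice checklist"],
    "Improved population health": [],    # real outcome, not yet measured
}

unmeasured = [outcome for outcome, indicators in indicators_by_outcome.items()
              if not indicators]
print("Outcomes without indicators:", unmeasured)
```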

Step 3.2   Decide on further indicator measurement

3.2.1 Decide on which indicators to continue measuring and which to not measure. Having mapped your indicators back onto your outcomes model you now have a clear idea of which outcomes are, and which are not, being measured. You can now start to make strategic decisions about which indicators to continue measuring and which you may not need to measure in the future. For instance, there may be many indicators for outcomes in one part of your outcomes model because it is relatively easy to measure the outcomes on that side. However, in another part of your outcomes model, there may be few indicators. You may decide to move some of the resources currently being used to measure indicators on the easy-to-measure side to the hard-to-measure side. When making decisions about not-necessarily attributable indicators, you need to also take into account your priorities for attributable indicator measurement which are discussed in the next step. 

3.2.2 Identify any 'joint indicator' projects you can undertake with other stakeholders. Since you are looking at not-necessarily attributable indicators in this building block, you may find that there are other stakeholders who also have an interest in measuring some of the same indicators you want to measure. You may be able to work with these other stakeholders on joint projects to develop new indicators or to improve existing ones. When discussing this with other stakeholders, having your indicators all mapped back onto the underlying outcomes model will help clarify the discussion for them and for you.

Step 3.3   Identify any issues and turn them into indicator projects

3.3.1 Identify what needs to be done regarding developing not-necessarily attributable indicators and turn this work into indicator projects. These projects will need to be prioritized alongside the other possible projects arising out of your Systematic Outcomes Analysis.

4. Indicators (attributable)

The fourth step in Systematic Outcomes Analysis is identifying indicators which are attributable to a particular player (the Attributable Indicators Building Block - Indicators[att]). These are routinely collected measures of an outcome where their mere measurement allows one to reasonably attribute any change in the indicator to the actions of a particular player. You may have already identified a number of these when you were identifying your not-necessarily attributable indicators in the previous building block. Remember, the previous building block dealt with not-necessarily attributable indicators, not definitely not attributable indicators. It may be that you do not need to do this fourth building block at an early stage in your Systematic Outcomes Analysis. This is because in many instances the issue of attribution only arises when you are thinking about accountability. If you are happy simply to have identified your indicators, you can skip this building block for now and come back to it in the future, for instance when you have to work out who is accountable for what, such as when you get around to contracting, or delegating, parts of your outcomes model to others for them to implement.

Step 4.1   Identify any attributable indicators and map them onto the outcomes model

4.1.1 Identify any attributable indicators. It is likely that in identifying the not-necessarily attributable indicators in the previous section you will have already identified a number of indicators (or perhaps all of them that you are interested in) which can be attributed to particular players (remember, the indicators in the previous section were not-necessarily attributable indicators rather than definitely not attributable indicators). This fourth building block consists of simply identifying who particular indicators can be attributed to. The further down you have drilled into an outcomes model, the more likely you are to have attributable indicators appearing. As stated above, assigning attributable indicators to players is only something you have to do if you are thinking about performance management, accountability and contracting. In some cases you might not need to bother assigning attributable indicators and can just work with the set of not-necessarily attributable indicators identified in the previous section.

4.1.2 Map attributable indicators back onto the outcomes model. If you have identified new attributable indicators (rather than just deciding that some of the not-necessarily attributable indicators you identified in the previous section are able to be attributed to particular players and identifying them as such) you should now map the new indicators you have found onto your outcomes model. The benefits of mapping indicators onto your outcomes model discussed in the previous step are the same for attributable as for not-necessarily attributable indicators.

Step 4.2   Decide on further indicator measurement

4.2.1 Decide which indicators to continue measuring and which to stop measuring. Having mapped your attributable indicators back onto your outcomes model you now have a clear idea of which outcomes are, and which are not, being measured. You can now decide which attributable indicators you are going to continue measuring and which you are not going to measure in the future. For instance, you may find that you can stop measuring some lower level indicators because there is a higher level attributable indicator above them which, when measured, will pick up the lower level indicator measurements.

Step 4.3   Think about the evaluation implications of how high up the outcomes model your attributable indicators reach

4.3.1 Examine how high up the outcomes model attributable indicators reach and consider the implications for evaluation. If your attributable indicators reach to the top of your outcomes model (not normally the case in the real-world projects where people use Systematic Outcomes Analysis) then, from a theoretical point of view, you do not normally need to do outcome evaluation (the fifth building-block in Systematic Outcomes Analysis). This is because the mere measurement of attributable indicators implies that they have been caused by a particular player. There is therefore no need to use outcome evaluation to establish that a particular player/intervention caused high-level outcomes. This situation should not be confused with the common mistake in performance management systems where the mere measurement of indicators, even where they are not attributable, is taken to show that a program is improving high level outcomes. It only applies to attributable indicators. Avoiding this mistake is exactly why the distinction is made in Systematic Outcomes Analysis between attributable and not-necessarily attributable indicators.
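To make this check concrete: if each outcome in the model is tagged with its level (higher numbers nearer the top) and with whether its indicator is attributable, the highest level that attributable indicators reach can be read off directly. A sketch under those assumptions, with invented data:

```python
# Sketch: how high up the model do attributable indicators reach?
# Levels and attribution flags below are invented.

outcomes = [
    {"name": "Outputs delivered", "level": 1, "attributable": True},
    {"name": "New practice adopted", "level": 2, "attributable": True},
    {"name": "Improved population health", "level": 3, "attributable": False},
]

attributable_levels = [o["level"] for o in outcomes if o["attributable"]]
top_reached = max(attributable_levels, default=None)
model_top = max(o["level"] for o in outcomes)

print("Attributable indicators reach level:", top_reached, "of", model_top)
# If top_reached falls short of model_top, outcome evaluation (Step 5)
# is still needed to establish high-level causality.
```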

Step 4.4   Identify any issues and turn them into indicator projects

4.4.1 Identify what needs to be done regarding attributable indicators and turn them into indicator projects. These projects will need to be prioritized alongside the other projects coming out of your Systematic Outcomes Analysis.

5. Evaluation (high level outcome)

The fifth step in Systematic Outcomes Analysis is looking at what high level outcome evaluation designs are appropriate, feasible and affordable (this uses the Outcome Evaluation Building Block (Evaluation[outcome])). Evaluations are different from indicator monitoring because they are usually more 'one-off' processes while indicator monitoring (covered in the last two sections on the indicator building-blocks) is more about routinely collected information. Systematic Outcomes Analysis divides evaluation up into seven areas of focus. This fifth step in Systematic Outcomes Analysis is concerned only with the first area of focus (Focus 1: Establishing whether a particular intervention has caused an improvement in high level outcomes). This first focus of evaluation attempts to make a claim about high level causality.  This should be contrasted with other types of evaluation which are dealt with in the next step (non-outcome evaluation). These other types of evaluation (often called formative or process, as opposed to outcome, evaluation) do not try to make causal claims about high level outcomes. Systematic Outcomes Analysis identifies seven possible outcome evaluation designs, one or more of which can be applied in trying to work out whether or not an intervention has caused high level outcomes to change. 

Step 5.1. Identify possible outcome evaluation questions and map them onto your outcomes model

5.1.1 Identify a set of possible outcome evaluation questions. These will be based on the high level outcomes in your outcomes model. For instance, if you have three high level outcomes then you are likely to have three high level outcome evaluation questions such as: 'Did the intervention cause outcome X to improve?'. At this stage don't worry about whether or not it's possible to actually answer the outcome evaluation questions you are identifying. The first task is to identify the questions; then, secondly, you will work out whether or not it is feasible to answer them.

5.1.2 Map the evaluation questions onto your outcomes model. It is important to map your evaluation questions back onto your outcomes model. Any one evaluation question can usually be worded in more than one way and this can cause confusion. It's a waste of time to ask exactly the same evaluation question more than once. This can occur without you realizing it is happening because the question has been worded differently each time. This is particularly a problem in large-scale evaluations where different evaluation teams are being commissioned to undertake a number of different evaluation sub-projects. Mapping all high level evaluation questions back onto your outcomes model lets you see when the same evaluation question is being asked with different wordings. You'll know this is happening because you'll be trying to map the different questions back onto the same spot on your outcomes model.
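The duplicate-question problem can be caught mechanically once every question is mapped to an outcome: two differently worded questions pointing at the same spot on the model are the same question. A minimal sketch, with invented wordings:

```python
# Sketch: spot differently worded evaluation questions that map onto
# the same outcome in the model. All wordings here are invented.

from collections import defaultdict

question_to_outcome = {
    "Did the intervention cause population health to improve?":
        "Improved population health",
    "Has community health risen as a result of the program?":
        "Improved population health",
    "Did the intervention increase training coverage?":
        "Staff trained in new practice",
}

questions_by_outcome = defaultdict(list)
for question, outcome in question_to_outcome.items():
    questions_by_outcome[outcome].append(question)

for outcome, questions in questions_by_outcome.items():
    if len(questions) > 1:
        print(f"Duplicate questions mapped onto '{outcome}':")
        for q in questions:
            print("  -", q)
```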

Step 5.2   Assess the appropriateness, feasibility and affordability of the seven Systematic Outcomes Analysis outcome evaluation designs

5.2.1 Work out whether the highest-level evaluation question(s) are within the scope of the type of evaluation you should be attempting. For instance, if a funder is commissioning many programs using the same method as you're using across the country, it may be more efficient for the funder to answer the highest level evaluation question itself - 'Does this method work?'. It usually does not make sense for many programs throughout the country to undertake similar expensive outcome evaluations which attempt to answer exactly the same evaluation question. You, and your funder, need to be clear about the overall evaluation scheme (see Step 8): do they want you to prove that the whole roll-out of the program was effective, or are they using an overall evaluation scheme which relies on proving that an approach works when piloted and then just making sure that best practice is being used in the full roll-out of the program?

5.2.2 Assess whether the remaining high level outcome evaluation question(s) which you consider within scope for you to answer (there should usually only be a few of these) can actually be answered. To do this, look at the appropriateness, feasibility and affordability of each of the seven outcome evaluation designs identified in Systematic Outcomes Analysis. These designs are set out below. For more information on the designs look in the models section here.

5.2.2.1 Design 1: True experiment design

5.2.2.2 Design 2: Regression discontinuity design

5.2.2.3 Design 3: Time series analysis design

5.2.2.4 Design 4: Constructed comparison group design

5.2.2.5 Design 5: Exhaustive causal identification and elimination design

5.2.2.6 Design 6: Expert judgement design

5.2.2.7 Design 7: Key informant judgement design.

The selection of these designs has implications for other steps within Systematic Outcomes Analysis. Only the first four of these designs are able to provide you with an effect size. An effect size gives you a quantitative measure of how much an intervention changes particular outcomes. If none of these four designs can be used (and often none of them is appropriate, feasible or affordable), you're limited in the types of economic and comparative analysis you can undertake (see Step 7).

6. Evaluation (non high level outcome)

The sixth step in Systematic Outcomes Analysis is considering your non high level outcome evaluation questions (this uses the Non High Level Outcome Evaluation Building Block (Evaluation[non-outcome])). These are evaluation questions which, in contrast to high level outcome evaluation, do not attempt to make a claim about whether a player/intervention is causing high level outcomes to change. They may, however, be making a claim about changes to mid-level outcomes. Names for aspects of this type of evaluation include: developmental, implementation, formative, process or descriptive evaluation.

Step 6.1   The focus of non high level outcome evaluation

6.1.1 Decide on what the focus of non high level evaluation activity will be and identify evaluation questions accordingly. Systematic Outcomes Analysis identifies seven possible areas of evaluation focus. The first area of focus - high level outcomes - was dealt with in the previous step. The remaining six areas of evaluation focus are dealt with in this step and are:

6.1.1.1 Focus 2: Establishing whether a particular intervention has caused an improvement in mid-level outcomes (process evaluation). Doing this requires using one or more of the seven possible outcome evaluation designs identified in Systematic Outcomes Analysis here. In this case, they're being applied to mid-level rather than high-level outcomes (they were applied to high level outcomes in the previous step (Step 5)). 

6.1.1.2 Focus 3: Describing an outcomes model that is actually being implemented in a specific instance (including estimating the cost of undertaking an intervention) (process evaluation).

6.1.1.3 Focus 4: Comparing an outcomes model being implemented with proposed outcomes model(s) (process evaluation).

6.1.1.4 Focus 5: Aligning an outcomes model being implemented with a proposed outcomes model or its enhancement (formative evaluation; best practice application).

6.1.1.5 Focus 6: Describing the different understandings, interpretations or meanings stakeholders have about an outcomes model and its implementation.

6.1.1.6 Focus 7: Describing the effect of the wider social, cultural, political, economic or other context on the implementation of an outcomes model.

7. Economic & comparative evaluation

[Draft entry currently being revised.] The seventh step in Systematic Outcomes Analysis is economic and comparative evaluation (this uses the Economic and Comparative Evaluation Building-Block). It focuses on comparing the effectiveness of different interventions and, in the case of economic evaluation, assigning positive and negative monetary value to aspects of interventions and outcomes.

Step 7.1   Comparing the effectiveness of different interventions on similar outcomes

7.1.1 The effectiveness of different interventions can be compared to find out which intervention is better at achieving similar outcomes. This activity, which is an important part of evidence-based practice, relies on there being a good measure of the effect size of the intervention. The effect size is the amount of change in an outcome which can be attributed to a particular intervention. For a quantitative effect size to be available for this type of analysis, one of four outcome evaluation designs (Designs 1-4) needs to have been used to answer the high level outcome evaluation question (see Step 5: High Level Outcome Evaluation). Stakeholders may have differing views as to which outcome evaluation designs they'll accept as actually providing sufficiently robust proof of an effect on high level outcomes.

If this type of effectiveness comparison is used as the major factor in deciding between different interventions, it should only be used where the ease of undertaking outcome evaluation designs 1-4 (see here for these designs) is similar for each of the interventions. Otherwise the decision maker can end up just doing the easily evaluable rather than the potentially effective. It also needs to be realized that even in those cases where one can fairly compare a number of interventions in terms of improving outcomes, the type of comparative analysis dealt with here only shows which one is the most effective; it is silent on how much different interventions cost to implement. A very effective intervention that most of the population cannot afford may be less desirable than a somewhat less effective intervention which costs a lot less.

Comparison 1: Quantitative multi-intervention effect size meta-analysis

Comparison 2: Mixed qualitative/quantitative multi-intervention comparison analysis

Step 7.2   Identify what economic evaluation analyses are possible

7.2.1 Decide which of the ten economic evaluation analyses used in Systematic Outcomes Analysis can be done in the case of your program. Whether you'll be able to do one or more of these depends on what you've discovered is appropriate, feasible and affordable in the fifth step (High Level Outcome Evaluation). The economic analyses below are grouped into three sets: those you can do when you do not have actual effect-size estimates for attributable outcomes above the intervention; those you can do when you have estimates for mid-level outcomes; and those you can do if you have estimates for high level attributable outcomes. In summary:

- you can only do the first group of analyses if you have estimated the cost of the intervention in the 6th Step (6.1.1.2 Focus 3: Describing an outcomes model that is actually being implemented in a specific instance);

- you can only do the second group of analyses if you have measured the mid-level outcome effect sizes in the 6th Step (6.1.1.1 Focus 2: Establishing whether a particular intervention has caused an improvement in mid-level outcomes); 

- you can only do the third group of analyses if you have measured the high level outcome effect size in the 5th Step (by using one of Designs 1-4 in Step 5.2.2).

More information on what these types of economic evaluation analyses consist of can be found in the model here. The ten possible types of economic evaluation analysis within their three groups are:

1. No attributable outcomes above intervention

      Analysis 1.1 Cost of intervention analysis, single intervention.
      Analysis 1.2 Cost of intervention analysis, multi-intervention comparison.
      Analysis 1.3 Cost benefit analysis, set of arbitrary high-level effect size estimates, single intervention.
      Analysis 1.4 Cost benefit analysis, set of arbitrary high-level effect size estimates, multi-intervention comparison.

2. Attributable mid-level outcomes

      Analysis 2.1 Cost effectiveness analysis, single intervention.
      Analysis 2.2 Cost effectiveness analysis, attributable mid-level outcomes, multi-intervention comparison.

3. Attributable high-level outcomes

      Analysis 3.1 Cost effectiveness analysis, single intervention.
      Analysis 3.2 Cost effectiveness analysis, multi-intervention comparison.
      Analysis 3.3 Cost benefit analysis, single intervention.
      Analysis 3.4 Cost benefit analysis, multi-intervention comparison.
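As a rough numerical illustration of the arithmetic behind the analyses above (all figures invented): cost effectiveness analysis divides the cost of an intervention by its attributable effect size, while cost benefit analysis additionally monetizes the effect so that benefits and costs can be compared directly.

```python
# Sketch of the arithmetic behind cost effectiveness and cost benefit
# analysis for a multi-intervention comparison. All numbers are invented.

interventions = {
    # name: (cost in $, effect size in outcome units, $ value per unit)
    "Intervention A": (100_000, 40, 3_000),
    "Intervention B": (150_000, 75, 3_000),
}

for name, (cost, effect, value_per_unit) in interventions.items():
    ce_ratio = cost / effect                       # $ per unit of outcome gained
    net_benefit = effect * value_per_unit - cost   # monetized benefit minus cost
    print(f"{name}: ${ce_ratio:,.0f} per unit of outcome; "
          f"net benefit ${net_benefit:,.0f}")
```

On these invented numbers the dearer intervention turns out to be the more cost-effective one, which is exactly the kind of trade-off the comparison in Step 7.1 warns about.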

8. Overall monitoring & evaluation scheme

The eighth step in Systematic Outcomes Analysis is deciding on the overall monitoring and evaluation scheme which is being used in a particular instance (this uses the Overall Monitoring and Evaluation Scheme Building-Block (Overall Monitoring & Eval)). This step is concerned with clarifying stakeholder expectations regarding the overall type of evaluation that's going to be used on an intervention.

Step 8.1   Identify piloting and/or full roll-out monitoring and evaluation scheme

8.1.1 The overall monitoring and evaluation scheme needs to be specified. In particular, if there's a pilot phase, the relationship between the monitoring and evaluation approach for this and the full roll-out monitoring and evaluation approach needs to be decided on.

The overall monitoring and evaluation scheme used in particular cases can use various combinations of the Systematic Outcomes Analysis building-blocks. Obviously, if parts of earlier building blocks are not appropriate, feasible or affordable then this will limit the possibilities which are available for the overall monitoring and evaluation scheme (this current step). It's important that all stakeholders understand which overall monitoring and evaluation strategy is being used. Two often used overall monitoring and evaluation schemes are:

8.1.1.1 Scheme 1: Full roll-out high-level outcome evaluation plus some non high-level outcome evaluation

Full roll-out Evaluation[outcome] plus some or all of Outcomes model, Indicators[nn-att], Indicators[att], Evaluation[n-outcome], Evaluation[economic & comparative].

8.1.1.2 Scheme 2: Pilot high level outcome evaluation plus some non high level outcome evaluation; full roll-out with ONLY non high level outcome evaluation

Pilot Evaluation[outcome] plus some or all of Outcomes model, Indicators[nn-att], Indicators[att], Evaluation[n-outcome], Evaluation[economic & comparative]; AND full roll-out limited to some or all of Outcomes model, Indicators[nn-att], Indicators[att], Evaluation[n-outcome], Evaluation[economic & comparative].
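Since an overall scheme is essentially a combination of building blocks applied at the pilot and full roll-out phases, it can be written down as simple sets, which makes it easy to see what each phase drops. A sketch using the shorthand block names from the schemes above:

```python
# Sketch: overall M&E schemes as combinations of building blocks,
# using the shorthand block names from the schemes above.

ALL_BLOCKS = {
    "Outcomes model", "Indicators[nn-att]", "Indicators[att]",
    "Evaluation[outcome]", "Evaluation[n-outcome]",
    "Evaluation[economic & comparative]",
}

scheme_2 = {
    "pilot": ALL_BLOCKS,
    "full roll-out": ALL_BLOCKS - {"Evaluation[outcome]"},
}

for phase, blocks in scheme_2.items():
    dropped = ALL_BLOCKS - blocks
    print(f"{phase}: drops {sorted(dropped) or 'nothing'}")
```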

9.  Contracting arrangements

The ninth step in Systematic Outcomes Analysis (this uses the Contracting Arrangements Building-Block (Contracting)) is concerned with the types of contracting arrangements which can be set up between a funder and a doer (a player who is charged with implementing a sub-set or all of the interventions within an outcomes model).

Step 9.1  Negotiating which contracting approach is appropriate

Systematic Outcomes Analysis identifies three possible contracting approaches which can be used. Which one of these is going to be used in a particular case needs to be negotiated between the Funder and the Doer. More information is available on these in the relevant model section here. The three possible approaches are as follows:

9.1.1 Arrangement 1: Contracting for outputs only

9.1.2 Arrangement 2: Contracting for outputs AND 'managing for outcomes'

9.1.3 Arrangement 3: Contracting for not fully controllable outcomes

Copyright Paul Duignan 2005-2007 (updated March 2007)