Outcomes theory knowledge base (Org)

This knowledge base provides a systematic treatment of outcomes theory as applied to managing the performance of organizations, programs, policies and collaborations [Org]. This site is for those interested in theory. If you want a practical implementation of this theory that can be used to design and implement working outcomes, evaluation, monitoring and performance management systems, you should use Systematic Outcomes Analysis based on the Outcomes Is It Working Analysis (OIIWA) approach from the www.oiiwa.org site. If using any ideas or material from this knowledge base, please cite this reference as: Duignan, P. (2005-insert current year) Insert name of page in Outcomes Theory Knowledge Base (Organizational) [Available at www.outcomestheory.org]. Any comments on any aspect of this knowledge base are appreciated; please send them to paul (at) parkerduignan.com.

Whole-intervention outcome attribution designs feasibility, timeliness and affordability (Org) [P6]

What are whole-intervention outcome [1] attribution designs?

Whole-intervention high-level outcome attribution designs are evaluation designs which allow robust conclusions to be drawn about the influence of a particular intervention on higher-level outcomes within an outcomes hierarchy. Which particular evaluation designs are regarded as providing robust conclusions is determined by a specific community of users in a particular outcomes system. [See Seven whole-intervention high-level outcome attribution designs (Org) [P8] for details of the seven such designs currently recognized within outcomes theory.]

Why does outcomes theory draw a distinction between this type of outcome evaluation design and other lower-level and contextual evaluation and monitoring activity (often called formative or process evaluation [2])? Being able to make robust statements about the effectiveness of particular interventions within an outcomes system is very powerful. The purpose of working with outcomes systems in an organizational context is to influence outcomes. Robust causal information about which interventions cause which changes to which high-level outcomes greatly aids strategic decision-making about the best interventions to implement. The fact that many stakeholders find this useful is reflected in the existence of a number of initiatives, usually developed under the headings of evidence-based, results-based or evidence-informed practice, to distill outcome effectiveness by selectively summarizing (through what is called meta-analysis) those outcome evaluation studies which meet a particular set of robustness criteria determined by a particular community of users.

Outcomes theory includes this distinction between outcome attribution evaluations and other types of evaluation (often known as formative or process evaluation) because of the usefulness of being clear about when this type of robust causal information is available within an outcomes system and when it is not. Outcomes theory therefore distinguishes between three different outcomes system building blocks [3]: whole-intervention outcome attribution evaluation designs (W in outcomes theory's OIIWA schema), which are discussed further on this page; evaluation directed at answering additional types of evaluation questions which do not provide information about high-level intervention attribution (A in the OIIWA schema); and monitoring, which in some outcomes systems provides routinely attributable indicators of changes in, usually lower-level, outcomes (I[att] in the OIIWA schema).

A whole-intervention high-level outcome attribution evaluation design is therefore formally defined as - an evaluation design which is regarded by an outcomes system's community of users as providing robust proof of the effect of a specific intervention on high-level outcomes.

Feasibility of such designs

Whole-intervention high-level outcome attribution evaluation designs differ in their feasibility along the range: not feasible, low feasibility, moderate feasibility, and high feasibility.

The distinction made in outcomes theory clearly identifying when robust whole-intervention outcome attribution evaluation findings are and are not available within an outcomes system should not be construed as fostering unrealistic expectations regarding the feasibility of empirical outcome attribution evaluation in general. Unfortunately, some in the evidence-based movement hold the implicit or explicit assumption that the only source of significantly useful information about outcomes systems comes from whole-intervention outcome attribution designs - often restricted to just a narrow set of experimental evaluation designs. When this belief is combined with the implicit or explicit assumption that such outcome attribution designs are equally feasible for all, or most, types of intervention, it leads to the serious error of trying to base intervention strategy selection almost solely on the results of whole-intervention outcome attribution evaluation designs.

The same error occurs in economics and public policy when cost-benefit analyses are biased towards easy-to-measure costs and benefits and neglect to include difficult-to-measure (but often more important) costs or benefits.

In reality, different types of intervention differ markedly in regard to the feasibility and affordability of whole-intervention outcome attribution designs. Not recognizing this when applying an evidence-based practice or cost-benefit analysis approach leads to strategy selection based on ease of evaluation (e.g. a bias towards individual-level interventions, which are easier to evaluate) rather than an evidential and analytical approach to selecting an optimal intervention strategy based on an assessment of what is currently known, what is not known, and what it is feasible and affordable to know.

A whole-intervention outcome attribution evaluation design's feasibility is therefore formally defined as - the feasibility, along the range from not feasible to high feasibility, of successfully planning, implementing and obtaining timely and useful results from a whole-intervention outcome attribution evaluation design within a particular outcomes system.

Timeliness of results from such designs

In some instances, whole-intervention high-level outcome attribution designs are highly feasible but the timeframe in which their results will be available to decision-makers is outside the intervention-selection decision window. This window is the period of time in which a decision-maker has to make a decision regarding the use of one or more interventions in order to change crucial outcomes within an outcomes system.

A whole-intervention outcome attribution evaluation design's timeliness is therefore formally defined as - whether or not it is possible to obtain useful results from a particular whole-intervention outcome attribution evaluation design within the intervention-selection decision window of a particular decision-maker.

Affordability of such designs

Whole-intervention high-level outcome attribution evaluation designs differ in regard to cost. Given the resources available within any outcomes system, they can range along the range: not affordable, low affordability, medium affordability, and high affordability.

A whole-intervention outcome attribution evaluation design's affordability is therefore formally defined as - the affordability, along the range from not affordable to high affordability, of successfully planning, implementing and obtaining timely and useful results from a whole-intervention outcome attribution evaluation design within a particular outcomes system.
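The three screening attributes defined above - feasibility, timeliness relative to the intervention-selection decision window, and affordability - can be sketched as a simple data structure with a screening rule. This is purely an illustrative sketch, not part of outcomes theory itself; the class, field and function names (AttributionDesign, is_candidate, months_to_results) are the author of this sketch's assumptions, and the ordinal Level scale simply mirrors the not/low/moderate/high ranges used in the definitions.

```python
from dataclasses import dataclass
from enum import IntEnum


class Level(IntEnum):
    """Ordinal scale mirroring the ranges used for feasibility and affordability."""
    NOT = 0       # not feasible / not affordable
    LOW = 1
    MODERATE = 2
    HIGH = 3


@dataclass
class AttributionDesign:
    """One whole-intervention outcome attribution evaluation design (illustrative)."""
    name: str
    feasibility: Level
    affordability: Level
    months_to_results: int  # time until useful results would be available


def is_candidate(design: AttributionDesign, decision_window_months: int) -> bool:
    """A design is worth considering only if it is at least minimally feasible
    and affordable AND its results arrive inside the decision-maker's
    intervention-selection decision window."""
    return (
        design.feasibility > Level.NOT
        and design.affordability > Level.NOT
        and design.months_to_results <= decision_window_months
    )


# Example: a highly feasible, affordable design whose results arrive in
# 36 months is still screened out when the decision window is 12 months -
# timeliness alone can rule a design out.
trial = AttributionDesign("hypothetical comparison-group design",
                          Level.HIGH, Level.MODERATE, months_to_results=36)
print(is_candidate(trial, decision_window_months=12))
```

The point of the sketch is that the three attributes are conjunctive: failing any one of them removes a design from consideration, which is why the theory treats them as separate formal definitions rather than a single quality score.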

Notes:

[1] Sometimes the term impact is used rather than outcome for this type of evaluation design. Such usage attempts to use the words outcome and impact to differentiate between moderately high-level and high-level outcomes within an outcomes hierarchy.

[2] Formative evaluation can be defined as any evaluation intended to optimize the planning and implementation of a program, and process evaluation as any evaluation intended to describe the course or context of a program.

[3] In outcomes theory, the OIIWA schema stands for the five building blocks which underpin all outcomes systems [see Five OIIWA outcomes system building blocks [P5]].

V1-0.

Copyright Dr Paul Duignan 2005 www.outcomestheory.org