Evaluation and Planning, Ann Arbor, Michigan, United States
Abstract Information: What gets evaluated? A common answer is an activity linked to a delivery process, cast within an organizational setting, all for the purpose of effecting change in the recipients of that activity. Whatever else “evaluation” means, it includes inquiry into relationships among process, organization, and outcome, all set within a surrounding environment. Whatever the particulars of methodology or evaluation theory, an evaluation rests on an intellectual edifice composed of model, methodology, and data interpretation, all of which are shaped by beliefs about prediction, explanation, causality, and pattern.
This intellectual edifice has proved highly successful in generating persuasive knowledge about how programs operate and what they accomplish. Still, evaluation could be more persuasive if it drew deeply from Complexity Science. If we evaluate complex systems, our work should be informed by an understanding of the behavior of such systems. The purpose of this workshop is to introduce evaluators to complex system constructs and theory, and to show how that knowledge can inform our work. Beyond the specifics, three themes will be emphasized: 1) complexity can be dealt with using common, familiar methodologies; 2) both “predictability” and “unpredictability” characterize complex behavior; and 3) it is not always necessary or desirable to invoke complexity.
Relevance Statement: Evaluation needs Complexity Science because complex behavior characterizes many of the programs we evaluate.
The workshop’s immediate objective is to provide evaluators with the capability to apply constructs from Complexity Science in their evaluation practice. Another objective is to nudge Evaluation toward the condition of an “adjacent discipline” to Complexity Science. The workshop goes beyond how complexity is currently treated in our field, where engagement with complexity remains too distant from practical decisions. To illustrate, consider an example involving statistics. One might make two statements:
• I will apply statistical thinking.
• I will use logistic regression.
The first statement has meaning because it reflects “statistical thinking” – an epistemology that contains a belief about true score and error, sampling theory, attaching probability estimates to observations, an emphasis on group characteristics over individual characteristics, and so on. But to work with data, I would need to make a statement such as: “I will analyze an accident prevention program. I will use logistic regression because that tool is a good way to detect change in the odds of an accident occurring.” To be able to make a statement like that, I would have to know what logistic regression is, and I would have to appreciate how it fits into “statistical thinking”. As with statistics, so it is with complexity. We in the Evaluation community tend to invoke complexity in a way that is analogous to the first statement. We recognize that the world behaves in complex ways, and that to do good evaluation we must appreciate complex behavior. But we are not very good at applying the specifics. Consider two examples of how understanding constructs from Complexity Science may affect decisions about evaluation.
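The gap between the two statements can be made concrete. The following sketch (not part of the workshop materials; the accident-prevention data and parameters are invented for illustration) shows what the second statement commits you to: an actual logistic regression, here fit with plain gradient descent using only numpy.

```python
import numpy as np

# Hypothetical accident-prevention data: month index x, and y = 1 if an
# accident occurred that month. Risk is simulated to decline over time.
rng = np.random.default_rng(0)
months = np.arange(60, dtype=float)
p_true = 1.0 / (1.0 + np.exp(0.15 * (months - 30)))   # declining accident risk
y = (rng.random(60) < p_true).astype(float)

xs = (months - months.mean()) / months.std()          # standardize predictor
X = np.column_stack([np.ones_like(xs), xs])           # intercept + slope
w = np.zeros(2)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w))                  # predicted probability
    w -= 0.1 * X.T @ (p - y) / len(y)                 # gradient descent step

print(f"fitted slope: {w[1]:.2f}")  # negative slope: accident odds falling
```

Making the second kind of statement requires this level of specificity: choosing a predictor, a link function, and an estimation procedure, and knowing why each choice fits the question.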
EXAMPLE #1: Evaluators commonly work with models that specify linked chains (or networks) of intermediate outcomes. This tactic carries the implicit assumption that it is possible to identify intermediate outcomes, and that the path among those outcomes can be predicted. Complexity, however, suggests another way to conceptualize the path from action to outcome. Invoking “sensitive dependence” and “attractor shapes” leads to a theory of action in which it is impossible to specify a causal path in advance, but in which there may be a metaphorical “tube” (of uneven cross section) running through the system, such that any path falling within that space will lead to success.
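“Sensitive dependence” can be demonstrated with a standard textbook example (the logistic map, not a model from the workshop itself): two trajectories that begin a hair apart diverge completely, which is why specifying the exact causal path in advance is futile even when the governing rule is simple and fully known.

```python
# Logistic map x -> r * x * (1 - x), chaotic at r = 4.0.
def logistic_map(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.2)
b = logistic_map(0.2 + 1e-9)            # starting point shifted by one billionth
gap = [abs(p - q) for p, q in zip(a, b)]
print(f"gap at step 5:  {gap[5]:.2e}")  # still tiny
print(f"gap at step 40: {gap[40]:.2e}") # trajectories have fully decoupled
```

The point-by-point path is unpredictable, yet both trajectories remain confined to the interval [0, 1] — the attractor — which is the sense in which a “tube” through the system can still be specified.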
EXAMPLE #2: Imagine an evaluation of an innovation adoption effort. One theory-based approach would base data collection and analysis on the extensively demonstrated “S” curve. But a complexity lens would suggest inquiry based on networks among adopters and scale-free growth driven by preferential attachment.
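A toy version of preferential attachment (a Barabási–Albert-style sketch, invented here for illustration rather than drawn from the workshop) shows why the network lens yields different data-collection questions than the “S” curve: each new adopter links to an existing adopter with probability proportional to that adopter's current connections, producing a few dominant hubs rather than a uniform spread.

```python
import random

random.seed(42)
degree = [1, 1]                  # start with two linked adopters
edges = [(0, 1)]
targets = [0, 1]                 # node i appears in this list degree[i] times

for new in range(2, 500):
    old = random.choice(targets) # degree-proportional choice of who to join
    edges.append((new, old))
    degree.append(1)
    degree[old] += 1
    targets.extend([new, old])

degree.sort()
print(f"median degree: {degree[len(degree) // 2]}")
print(f"max degree:    {degree[-1]}")  # a few hubs dominate adoption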
Neither of these examples obviates traditional evaluation. But they do show how Complexity Science provides otherwise unavailable insight.
Learning Objectives:
Upon completion, participants will understand how reasoning in terms of “emergence”, “sensitive dependence”, and “attractors” can affect beliefs about program design, methodology, and data interpretation.
Upon completion, participants will be able to make decisions about trade-offs between applying constructs from Complexity Science and relying on traditional evaluation approaches.
Upon completion, participants will know how constructs from Complexity Science can be applied to good advantage at different stages in the evaluation lifecycle.