Evaluation Managers and Supervisors
Jerome Gallagher, n/a
Sr. Evaluation Advisor
US Department of State, United States
Melissa Chiappetta, MS
Senior Education Advisor
USAID, United States
Mateusz Pucilowski
Vice President, Evaluation, Research, and Analytics
Social Impact
Arlington, Virginia, United States
Adam Trowbridge, MIDP
Senior Evaluation Advisor
U.S. Department of State & USAID, United States
Location: Room 201
Abstract Information: Stories are powerful. They entrance and engage us, provide meaning to our perceptions and experiences, and help us build bonds with others. But the power of stories also makes them dangerous, especially for evaluators. The very qualities that make for good stories and storytelling can lead to errors in evaluation practice and mislead evaluation users. Good stories have a clear beginning and a tidy ending. Good stories are coherent, pulling together elements of a narrative into a singular whole. Good stories have motivated characters who act with intention and affect others. Evaluations and the programs they address, though, don’t often behave like stories. Social programs rarely have a simple and tidy narrative arc. Evidence is often messy and rarely fits into a singular coherent whole. Perspectives of program participants may not aggregate into meaningful patterns. While unreliable narrators and uncertainty may be found in avant-garde stories, they are ever-present in evaluation. And in evaluation, there is rarely a smoking gun. In this session, a panel of evaluators and evaluation managers from the Department of State, USAID, and an independent evaluation firm will discuss the dangerous side of stories. Drawing from real examples of evaluations of foreign assistance programs, the panelists will discuss how elements of good storytelling can lead evaluators astray and how evaluation managers can spot overly narrativized evaluations that substitute story for systematic inquiry.
Relevance Statement: Storytelling is a powerful and essential part of both human existence and the practice of evaluation. But is storytelling always good for evaluation? The conference website notes that “Evaluation 2023 will reflect on how storytelling contributes to and shapes the narrative of evaluations, and dive deeper into storytelling’s usage, benefits, and impacts on our practices.” The implicit assumption appears to be that stories and storytelling are invariably a positive influence on evaluation practice.
While acknowledging both the power of stories and the importance of stories to evaluation practice, this panel will argue that the power of stories comes with attendant risks for evaluation practice. The very qualities that make for good stories and storytelling can lead to errors in evaluation practice and mislead evaluation users.
In "The Black Swan," author Nassim Taleb introduces the “narrative fallacy,” a notion which “addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them…Where this propensity can go wrong is when it increases our impression of understanding.” In short, humans can’t help but make stories in order to make sense of the world, but just because we created a good story doesn’t mean we’ve increased our understanding of the world. While a good story may be true, it is not truth that makes a good story. As Daniel Kahneman notes in "Thinking, Fast and Slow," “The explanatory stories that people find compelling are simple; are concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that happened rather than on the countless events that failed to happen.”
This stands in direct contrast to evaluation practice. Evaluation seeks to describe, compare, and explain through systematic inquiry and the reduction of biases in order to increase our understanding of the evaluand. The result of systematic inquiry might make for a good story, but that is not the goal. Good evaluation recognizes the role of randomness, embraces complexity when it finds it, and deals in abstractions that often hinder a good narrative.
Stories present a danger to evaluators at multiple points in the evaluation process. In analyzing evaluation data, the temptation of slipping into the narrative fallacy, the human urge to fit evidence into a simple and coherent story, requires constant vigilance. During the presentation of evaluation evidence, the demands of communicating nuanced, messy, and contradictory findings to an audience can easily lead to an over-reliance on simple narrative.
In this session, evaluators and evaluation managers from the Department of State, USAID, and an independent evaluation firm will explore and explicate the dangerous side of stories. The panelists will draw from examples of evaluations of foreign assistance programs to illustrate specific ways in which storytelling can lead evaluators astray. The panelists will also discuss how evaluation managers can spot overly narrativized evaluations that substitute story for systematic inquiry.
Presenter: Adam Trowbridge, MIDP – U.S. Department of State & USAID
Presenter: Melissa Chiappetta, MS – USAID
Presenter: Mateusz Pucilowski – Social Impact
Presenter: Jerome Gallagher, n/a – US Department of State