Abstract Information: Ernest House declared that evaluations must not only be true, they must also be just. A common refrain in impact evaluation reports is that bias stemming from participation must be minimized to ensure internal validity. How, then, do impact evaluators negotiate the apparent tension in their practice between the desire to credibly demonstrate impact and the desire to attend to authentic stories grounded in the lived realities of the people affected? Are those aims really in conflict? This paper presents findings from a qualitative research study, informed by narrative inquiry, that examines these questions. The findings draw on a combination of prompts that evoked detailed stories of impact evaluation practice, together with reflective questions based on conceptions of causation and justice from the evaluation literature. The co-creation of these impact evaluation narratives between a seasoned impact evaluator and a PhD student and one-time Randomista is a unique way of conducting research on evaluation. It may be relevant to audience members interested in learning a new method for conducting research on evaluation, to metaevaluators and evaluation managers developing new criteria for assessing the validity of evaluation findings, and to impact evaluators looking for stories of how their colleagues negotiate similar challenges.
Relevance Statement: Storytelling can be an important research tool. This paper presents the use of the constructivist method of narrative inquiry to build credible knowledge about salient philosophical concepts. The findings of this research can be applied to metaevaluation and can also be used to enhance evaluation praxis. The research builds on recent evaluation literature, especially work on causal mechanisms (Schmitt, 2020) and on values in evaluation and social science research (Schwandt & Gates, 2021). It contributes to the body of knowledge in evaluation through its novel approach (narrative inquiry within a case-based design) and its use of a theory not previously employed in evaluation research (epistemic injustice). It also connects new ideas to the well-known evaluation theorists Ernest House and Donald Campbell, breathing new energy into their ideas about evaluating with validity. It is relevant to audience members interested in learning a new method for conducting research on evaluation, to metaevaluators and evaluation managers developing new criteria for assessing the validity of evaluation findings, and to impact evaluators looking for stories of how their colleagues negotiate similar challenges.

References:
House, E. R. (1980). Evaluating with validity. Sage Publications.
Schmitt, J. (2020). Causal mechanisms in program evaluation. New Directions for Evaluation, 167.
Schwandt, T. A., & Gates, E. F. (2021). Evaluating and valuing in social research. The Guilford Press.