Research on Evaluation
Alysson Akiko Oakley, PhD (she/her/hers)
Vice President
Pact, United States
Kate Krueger
Advisor
Pact, United States
Giovanni Dazzo, PhD (he/him/his)
Assistant Professor
University of Georgia, United States
Bret Barrowman, PhD (he/him/his)
Senior Technical Manager for Evaluation and Research
IRI, United States
Location: Room 204
Abstract Information: Evaluators, working in a multidisciplinary sector, have always wrestled with the different epistemologies that shape our collective understanding of rigor, a term that often functions as a stand-in for quality or legitimacy. This challenge has taken on greater nuance as more evaluators and commissioners of evaluations have embraced a wider variety of evaluation paradigms, for example, those that center justice and empowerment as core values. These paradigms continue to challenge what the sector should consider effective, legitimate, or rigorous evaluation practice. Such theoretical disagreements are perhaps less discussed among practitioners than among formal researchers, but they are no less relevant: disagreements over "rigor" have led to siloed thinking and misaligned expectations around evaluation standards. While evaluators may not, and perhaps should not, agree on a single definition of rigor, we would benefit from a shared understanding of how to conceptualize it when making program design, monitoring, and evaluation decisions. This panel approaches the challenge by understanding rigor as a process embodied in a series of trade-offs. Panelists will present an overview of perspectives on rigor and introduce a situational typology that helps structure practical decisions. Building from this typology, panelists will present contrasting perspectives for selecting approaches that seek to improve evaluative rigor in the service of producing measurement and evaluation that is valuable to program beneficiaries. Attendees will emerge with a clearer understanding of how to approach decision-making amid the range of debates happening today, and will be better positioned to situate and justify their own methodological practice.
Relevance Statement: Debates about rigor connect to the AEA 2023 Conference theme, "The Power of Story," in more ways than might meet the eye. Conversations about rigor set the tone for what kind of stories an evaluation can tell, whose stories count, and how those stories are constructed. Furthermore, the idea of "rigor" as a point of irreconcilable tension has become a narrative of its own, one that hinders sectoral learning. Formal debates about rigor have historically been the purview of evaluation scholars and researchers. However, those who perhaps struggle the most with rigor, and who must make real-life decisions that affect the lives of people influenced by programs undergoing evaluation, are the evaluation practitioners working amid program funder constraints. The majority of program evaluation today is undertaken by such practitioners. This panel seeks to surface some of the thinking from this space to contribute to the more scholarly literature and debate around rigor. The panel will be of value to evaluators who seek to build a shared vocabulary for discussing evaluation rigor in an interdisciplinary, practitioner-focused context that is firmly rooted in the practical realities of program evaluation. Panelists represent differing evaluative paradigms as well as a variety of practical institutional experiences, as commissioners of evaluations, internal evaluators, external researchers, and professors of evaluation. The panel will build on previous debates and discussions by providing a terminological overview that accounts for new developments in the sector. It will offer both actionable recommendations aimed at practitioners and contextualizing framing aimed at the evaluation community more broadly. In addition, the panel will show how evaluators from different paradigms can contest and negotiate rigor while ultimately working toward the same goals.
Within the constraints of often-rigid program parameters, panelists will argue that disagreements about rigor are frequently (though not always) a manifestation of deeper structural constraints on effective program design and measurement, to which evaluators with similar values may still respond differently. Panelists hail from different methodological paradigms and have disagreed in practice over methodological choices. The audience will be able to observe how different paradigms speak to one another, and where they diverge, representing a spectrum of critical theory, critical realist, and positivist-informed paradigms manifested in day-to-day evaluation decision-making. The panel will draw from rich thinking and discussion about rigor and utilization that has already occurred, including: Abdul Latif Jameel Poverty Action Lab (2021), Randomized Evaluation Research Resources; Brady, Henry E., and David Collier (2010), Rethinking Social Inquiry: Diverse Tools, Shared Standards, 2nd ed.; King, Gary, Robert O. Keohane, and Sidney Verba (2021), Designing Social Inquiry: Scientific Inference in Qualitative Research, new ed.; Kirkhart, Karen E. (2010), "Eyes on the Prize: Multicultural Validity and Evaluation Theory," American Journal of Evaluation 31(3), 400–413; Mertens, Donna M. (2008), Transformative Research and Evaluation, Guilford Press; Patton, Michael Quinn (2011), Essentials of Utilization-Focused Evaluation.
Presenter: Alysson Akiko Oakley, PhD (she/her/hers) – Pact
Presenter: Kate Krueger – Pact
Presenter: Giovanni P. Dazzo, PhD (he/him/his) – University of Georgia
Presenter: Bret Barrowman, PhD (he/him/his) – IRI