Research on Evaluation
Debra Rog, PhD
Vice President
Westat
ALEXANDRIA, Virginia, United States
Christina Christie, PhD
Wasserman Dean and Professor
UCLA School of Education and Information Studies, United States
Thomas Archibald, PhD (he/him/his)
Associate Professor
Virginia Tech, United States
Melvin Mark, PhD
Professor Emeritus of Psychology
Penn State University, United States
Tarek Azzam, PhD
Professor
University of California, Santa Barbara, United States
Leslie Fierro, PhD, MPH (she/her/hers)
Sydney Duder Professor in Program Evaluation
McGill University, Canada
John Lavelle, PhD
Assistant Professor
University of Minnesota, United States
Location: Room 204
Abstract Information: Since the early 2000s, research on evaluation (RoE) has experienced something of a rebirth. Research has been conducted on the methods and tools we use, the fidelity with which theory is implemented, training in evaluation, and more. Although many aspects of evaluation have been the focus of this research, the largest category has concentrated on evaluation practice. And yet, despite surveys of evaluators finding a near-unanimous appetite for RoE, concerns remain that what we are learning is not always changing the narrative of evaluation or shaping practice. Discussions of methods continue to have prescriptive or normative tones. Our ‘stories’ about evaluation designs and methods need to be ‘retold’ by research. Evaluation theorists and practitioners alike note the importance of shaping our methods with empirically derived guidance. As many have noted, how we talk about and practice our methods should be held to the same standards we use in evaluating programs and policies.

The proposed Think Tank brings together some of the leading researchers on evaluation to discuss how we can retell our stories and use empirical study to continue to shape the narrative and, in turn, the practice of our methods. Facilitated by a long-time practitioner of evaluation, the goal of the Think Tank is to delve into how we can infuse what we have learned through RoE into the stories that get told about our methods, see that those stories help to shape and guide practice, and, in turn, have practice continue to inform our methods. The session will be guided by Azzam and Jacobsen’s 2015 agenda for the future of RoE, addressing such topics as:

- How can we produce RoE that is usable and relevant to practicing evaluators?
- How can we use RoE to close the gaps between prescriptive evaluation theories and practice?
- How can evaluators be incentivized and supported to engage in systematic data collection and evaluation in their everyday practice that can inform RoE?
- How can we foster the development of methods and tools that help us learn from evaluation in day-to-day practice so that we are informing the narrative of evaluation in real time?
- How do we effectively disseminate RoE and communicate the story on an ongoing basis, especially to individuals new to evaluation who might self-describe as “accidental evaluators”?
- How do we encourage theorists and practitioners alike to tell the full account of what we are learning about specific methods: the good, the bad, and the ugly, not just the good?

Each of the leading researchers will provide a brief orientation to how they view the need for a tighter connection between empiricism and evaluation practice, and then each will guide the discussion of a small group. The groups will then reconvene and share emergent ideas and strategies for improving the evidence base of the narrative of our work.
Relevance Statement: Research on evaluation (RoE) has become an accepted part of our field, as exemplified by the establishment of the RoE TIG in 2008. RoE is particularly regarded as a tool for examining evaluation practice. For example, over half of the RoE studies conducted between 2005 and 2014 focused on evaluation practice (Coryn et al., 2017). Moreover, over 80% of surveyed AEA members reported that findings from RoE have influenced their thinking about evaluation and their practice (Coryn et al., 2016). Despite these encouraging findings, calls for strengthening the ties between evaluation theory and practice continue (e.g., Christie, 2011). The evaluation field lags behind other fields, such as health care, in having a strong and consistent feedback loop between practice and research (Azzam & Jacobsen, 2015). Even when research is conducted on the translation of theory into practice (e.g., Miller and Campbell’s 2006 study of empowerment evaluation), it is not clear that the findings have influenced either practice or the narrative of the theory.

This Think Tank offers an opportunity to explore how we can strengthen the feedback loop between RoE and practice, improve the quality and scope of RoE, and ensure that the narratives of our methods are empirically grounded. With leading RoE scholars serving as small-group leaders, the Think Tank will engage the audience in discussing how RoE can be conducted in a manner that is useful and relevant to practicing evaluators, how it can most effectively be communicated to influence practice, and, in turn, how practicing evaluators can help support and strengthen the quality of RoE. Understanding why evaluators deviate from prescribed models may help to revise those models and their stories. Understanding whether and when our theories can be implemented with integrity, and which methods are most effective in which contexts and under which conditions, are important aims for our field.
This Think Tank can provide a powerful mechanism for fostering that understanding.

References:
Azzam, T., & Jacobsen, M. (2015). Reflection on the future of research on evaluation. New Directions for Evaluation, 2015(148), 103–116.
Christie, C. A. (2011). Advancing empirical scholarship to further develop evaluation theory and practice. Canadian Journal of Program Evaluation, 26, 1–18.
Coryn, C., Ozeki, S., Wilson, L., Greenman, G., Schröter, D., Hobson, K., Azzam, T., & Vo, A. (2016). Does research on evaluation matter? Findings from a survey of American Evaluation Association members and prominent evaluation theorists and scholars. American Journal of Evaluation, 37(2), 159–173.
Coryn, C., Wilson, L., Westine, C., Hobson, K., Ozeki, S., Fiekowsky, E., Greenman, G., & Schröter, D. (2017). A decade of research on evaluation: A systematic review of research on evaluation published between 2005 and 2014. American Journal of Evaluation, 38(3), 315–459.
Miller, R. L., & Campbell, R. (2006). Taking stock of empowerment evaluation: An empirical review. American Journal of Evaluation, 27, 296–319.
Szanyi, M., Azzam, T., & Galen, M. (2013). Research on evaluation: A needs assessment. Canadian Journal of Program Evaluation, 27, 39–64.