Government Evaluation
Amia Downes, DrPH (she/her/hers)
Health Scientist
Centers for Disease Control and Prevention
Tucker, Georgia, United States
Location: White River Ballroom I
Abstract Information: While pressure on federal agencies to demonstrate societal impact continues to increase, both the definition of impact and the way in which it is measured are complex.1 The Office of Management and Budget states that impact evaluation assesses the causal relationship between a program, policy, or organization’s activities and expected outcomes.2 Yet the American Evaluation Association’s Guiding Principles state that systematic inquiries should be contextually and methodologically relevant. This principle supports the notion that evaluators should consider the context of their agency. For example, how should impact be defined and measured for agencies whose mission is achieved in whole or in part through scientific research, regulatory functions, or direct or indirect services to the public? As a case study, we will share how research agencies in the U.S. and abroad are defining and measuring research impact using a variety of frameworks. Government organizations in other countries have been refining their definitions of and approaches to measuring research impact for some time, for example by examining contribution and counterfactuals and by using metrics such as citations beyond academia and case studies.1,3-7 The aim of this roundtable session is to discuss how federal evaluators may move the field forward in thinking more broadly about how impact is defined and measured across the U.S. government, based on the context of an agency’s mission. This open forum may produce more relevant and useful information for measuring and demonstrating impact in federal agencies whose missions involve research, regulatory functions, or direct or indirect services to the public. 1) World Health Organization. Evaluation Practice Handbook. Geneva (Switzerland): WHO Press; 2013. 2) Office of Mgmt. & Budget, Exec. Office of the President, OMB M-21-27, Evidence-Based Policymaking: Learning Agendas and Annual Evaluation Plans (2021). 
Available from: https://www.whitehouse.gov/wp-content/uploads/2021/06/M-21-27.pdf 3) www.arc.gov.au [Internet]. Canberra: Australian Research Council; c2022 [cited 2022 Feb 26]. Available from: https://www.arc.gov.au/about-arc/strategies/research-impact-principles-and-framework 4) Penfield T, Baker MJ, Scoble R, Wykes MC. Assessment, evaluations, and definitions of research impact: A review. Research Evaluation. 2014; 23(1): 21-32. 5) Donovan C. State of the art in assessing research impact: introduction to a special issue. Research Evaluation. 2011; 20(3): 175–9. 6) Donovan C, Hanney S. The ‘Payback Framework’ explained. Research Evaluation. 2011; 20(3): 181–183. 7) www.ref.ac.uk [Internet]. UK Research and Innovation; c2022 [cited 2022 Feb 26]. Available from: https://www.ref.ac.uk
Relevance Statement: In the U.S., we as an evaluation community tend to think of impact evaluations as seeking to demonstrate a causal relationship between a program and observed outcomes through randomized controlled trials or quasi-experimental studies.1 However, the World Health Organization states that “the measurement of impact is a complex issue that requires specific methodological tools to assess attribution, contribution and the counterfactual.”2(p21) In fact, the Australian Research Council defines research impact as “the contribution that research makes to the economy, society and environment, beyond the contribution to academic research,” to be assessed by qualitative information supplemented with quantitative data, where available.3 For example, narratives, surveys, testimonials, citations (beyond academia), and documentation are appropriate for demonstrating research impact.4-5 One of the American Evaluation Association’s guiding principles around systematic inquiry states that inquiries should be contextually and methodologically relevant. Why define and measure the impact of federal research agencies in the same way as that of agencies whose missions are achieved through a regulatory or service function? The contexts in which agencies operate are likely to be quite different. For instance, governments in multiple countries use contextually and methodologically appropriate frameworks to assess research impact, such as the Payback Framework, the Research Excellence Framework, and contribution analysis.5-8 A discussion of how we as federal evaluators can broaden our thinking about what it means for our programs to have impact, based on the context in which they operate, and increase our knowledge of other approaches to demonstrating impact while considering their limitations would be useful in moving the field forward. Demonstrating attribution is challenging and measuring impact is complex; as a U.S. federal evaluation community, we need to begin the conversation to broaden our thinking and acceptance of practices, as other countries have done. 1) Office of Mgmt. & Budget, Exec. Office of the President, OMB M-21-27, Evidence-Based Policymaking: Learning Agendas and Annual Evaluation Plans (2021). Available from: https://www.whitehouse.gov/wp-content/uploads/2021/06/M-21-27.pdf 2) World Health Organization. Evaluation Practice Handbook. Geneva (Switzerland): WHO Press; 2013. 3) www.arc.gov.au [Internet]. Canberra: Australian Research Council; c2022 [cited 2022 Feb 26]. Available from: https://www.arc.gov.au/about-arc/strategies/research-impact-principles-and-framework 4) Penfield T, Baker MJ, Scoble R, Wykes MC. Assessment, evaluations, and definitions of research impact: A review. Research Evaluation. 2014; 23(1): 21-32. 5) Donovan C. State of the art in assessing research impact: introduction to a special issue. Research Evaluation. 2011; 20(3): 175–9. 6) Donovan C, Hanney S. The ‘Payback Framework’ explained. Research Evaluation. 2011; 20(3): 181–183. 7) www.ref.ac.uk [Internet]. UK Research and Innovation; c2022 [cited 2022 Feb 26]. Available from: https://www.ref.ac.uk 8) Downes A, Novicki E, Howard J. Using the contribution analysis approach to evaluate scientific impact: A case study of the National Institute for Occupational Safety and Health. American Journal of Evaluation. 2019; 40(2): 177-189.