Evaluation Managers and Supervisors
Alfred Rizzo, M.A. (he/him/his)
Senior Program Officer
Millennium Challenge Corporation
Washington, District of Columbia, United States
Cecilia Papariello, MSc (she/her/hers)
MEL Advisor & IRB co-chair
Encompass, LLC, United States
Location: Room 203
Abstract Information: To share respondents’ stories ethically, evaluators must consider the three foundational principles of human research ethics: respect for persons, beneficence, and justice. By applying these principles, evaluators can ensure that they identify and sample the range of respondents who can tell the full story of an evaluation rather than merely a convenience sample (justice), for whom the benefits of sharing will outweigh the risks (beneficence), and who choose to tell their story (respect for persons). Institutional Review Boards (IRBs) play a critical role in safeguarding respondents so that they can tell their stories safely, speaking to evaluators who have made plans to obtain their consent, protect their data, and share findings ethically. However, two opposing tensions can interfere with IRBs’ ability to play this role. On the one hand, the bureaucracy inherent in an IRB review can slow down or complicate evaluation timelines. On the other, efforts to cut through this bureaucracy, particularly for relatively low-risk social scientific evaluations, can lead to cursory reviews that put respondents at risk. This 90-minute think-tank session aims to bring together independent evaluators, research ethicists, and donors to co-create strategies for how all stakeholders can work together to ensure ethical, efficient evaluations that tell respondents’ stories safely and transparently. Discussion questions will include:
What are the challenges to upholding ethical principles in practice during evaluations?
What roles do the different players in evaluations take in response to these challenges?
How have different organizations resolved the tensions between efficiency and ethics, particularly for evaluations deemed exempt from IRB review?
What are the best practices for sharing data responsibly so that anonymity is maintained without losing the benefits of transparency?
What practical steps can IRBs, evaluators, and funders take to ensure ethical storytelling through evaluation?
Relevance Statement: When protecting human subjects in USG-funded research, evaluators and IRBs must typically comply with Federal Regulation 45 CFR 46, “Protection of Human Subjects,” often referred to as the ‘Common Rule.’ Prior to 2018, both social scientific research and evaluations involving human subjects could face an onerous IRB review process under the Common Rule. While IRB review was and remains critical for both medical and social scientific research and evaluation, many social scientists and evaluators at the time felt that IRBs had developed cumbersome regulations ill-suited to low-risk, non-medical studies. In response, a 2018 revision to the Common Rule reduced the administrative burden on low-risk studies, particularly by expanding the exemption categories that undergo only limited IRB review. Ideally, this revision would have eased the tension between efficient evaluation processes and ethical practices while increasing consistency; in practice, however, IRBs’ attempts to operationalize the regulations have led to differing interpretations. As a result, some evaluators, researchers, and donors have grown concerned that the pendulum has swung too far the other way, with evaluations summarily undergoing a “limited” review that is more accurately described as cursory, potentially allowing unsafe or risky research practices to proceed. While it is unlikely that many evaluators actively engage in “IRB shopping” (seeking out the IRBs most likely to approve evaluations quickly, with limited bureaucratic hurdles) in order to avoid ethical best practices, the net effect of streamlined IRB regulations may be increased risk for respondents. The 2018 Common Rule revision, however, is not the only challenge facing modern IRBs.
For example, some evaluation funders set timelines that pressure evaluators to move quickly and create tensions with IRBs, even when those IRBs are operating with the increased efficiency and speed of review that followed the 2018 Common Rule revision. Furthermore, many funders’ recent, increased commitment to sharing transparent data that allows others to reproduce the results of evaluations raises a host of other ethical questions. For example, is qualitative data always identifiable and, if so, how can it be shared confidentially so that participants’ stories are shared only in the ways they agreed to? Similarly, are outliers in cleaned quantitative data sets potentially identifiable? And if outliers are removed, is the analysis still reproducible? This think tank aims to address these challenges and questions by bringing together a range of evaluators, ethicists, and funders to share how their experiences with post-2018 IRB review, short evaluation timelines, and data transparency have affected evaluation practice, and to discuss how best to safeguard respondents. The session will seek opportunities for these groups to co-create solutions that preserve the efficiency of the post-2018 rule and the goals of transparent, reproducible evaluations, while maintaining respondents’ right to control their own stories.