Design and Analysis of Experiments
Heather MacArthur, PhD (she/her/hers)
Senior Associate
Blueprint
Kentville, Nova Scotia, Canada
Arden Cheadle, MA
Associate
Blueprint, United States
Abstract Information: In this panel, Blueprint researchers reflect on the process of implementing three RCTs and how different methodological choices affect the stories that our data allow us to tell. In contrast to the traditional story of RCTs that researchers often start with when designing a study – one of tight experimental control, standardized procedures, and the revelation of absolute truth – we find that the complex influence and perspectives of researchers, program delivery partners, and participants frequently emerge throughout the course of the research to shift the narrative and reciprocally influence our methodological choices. First, feedback from front-line practitioners and participants (both direct and indirect) can reveal how the complications and ethical quandaries associated with implementing an RCT are experienced firsthand. As the study unfolds, dialogue with diverse stakeholders can highlight their unique perspectives on the research and illuminate better ways to communicate and implement an RCT, thus feeding back into study design. Second, because human/participant behaviour is messy and often resists the narrative of control that RCTs seek to impose, researchers may be faced with difficult decisions about how to handle that messiness. In making such decisions, we learn that choices made at any stage of the research will have consequences for the data, while still resulting in useful and important insights about the program being evaluated. Like all forms of research, but in contrast to the traditional story of RCTs that frames findings as universal, RCT results should be viewed as a product of the context in which they were generated, and transparency in reporting should enable readers to judge the appropriateness and effects of those decisions for themselves.
In a twist on the traditional panel structure, panelists in this presentation will not present discrete papers, but will instead reflect on the theme of control in Blueprint’s RCTs from the perspective of different stakeholders in the research. Drawing on our own experiences, panelists will recreate what the process of implementing an RCT feels like by shifting between the perspectives of researchers, participants, and program delivery partners throughout the session, highlighting what the stories of each group bring to the design and emerging narrative of an RCT.
Relevance Statement: At Blueprint, we work with policymakers and service providers to drive social change by using credible and reliable methods of evaluation to scale social innovation in our policy ecosystem. Considered the “gold standard” of evaluation because they allow researchers to infer causality, Randomized Controlled Trials (RCTs) are routinely used to evaluate the efficacy of medical treatments but are less common in the evaluation of social programs. With the word “control” embedded in the name of the method itself, RCTs are assumed to consist of a standardized and broadly replicable set of techniques that, once implemented in a controlled environment (as they would be in a laboratory), will reveal the truth about the subject of study. While careful design and the maintenance of experimental validity are critical to the success of an RCT, Blueprint’s work has highlighted the importance of considering how RCTs are experienced by people on the ground, and how decisions made to accommodate practitioner and participant needs affect the universality of the data generated by the study. Reflecting on our experience conducting RCTs of programs that help under-served populations find meaningful employment, Blueprint panelists will seek to demystify the process of undertaking an RCT by highlighting how the decisions we make as researchers shape the stories that our data allow us to tell, and what might be missing from those stories. We also highlight how the story of an RCT often changes as roadblocks and new perspectives are encountered throughout the research, and how evaluating social programs using RCTs presents complications beyond those experienced in a medical or laboratory setting.
That is, even if care is taken to include all relevant stakeholders in the design of an RCT, the complex perspectives and influence of participants, front-line staff, and program managers frequently emerge throughout the course of a study to resist the narrative of “control” that RCTs evoke, demanding adaptation, flexibility, and nimbleness. In highlighting illustrative stories from Blueprint’s RCT projects, we will share important lessons and best practices that we hope will strengthen both our own and the audience’s evaluation toolkit and understanding of the process of delivering an RCT. We also aim to elicit reflection on the RCT method itself by challenging and demystifying the notion that RCTs are immune to context and researcher influence. We will advance knowledge in the evaluation field by contemplating what it means to implement an RCT in the evaluation of a social program, where, unlike in a laboratory, exercising complete “control” may not be achievable. Specifically, we ask: How can we deliver a rigorous RCT that generates valuable insights on the impact of a social program, while balancing the need for experimental control with the need to maintain flexibility in conducting an evaluation with diverse stakeholders? We believe that these reflections will be of relevance and importance to evaluators who are currently implementing RCT projects, as well as to others who may be interested in exploring this method of evaluation.
Presenter: Heather J. MacArthur, PhD (she/her/hers) – Blueprint
Presenter: Arden Cheadle, MA – Blueprint