Collaborative, Participatory & Empowerment Evaluation
Shannon Griswold, PhD
Senior Learning Advisor
USAID, United States
Christina Synowiec, MSc
Senior Director
Results for Development, United States
Eric Djimeu, PhD
Associate Director
Results for Development, United States
Katie Bowman, MIA
Associate Director
Results for Development, United States
Location: Grand Ballroom 8
Abstract Information: Few activities build experimentation and feedback into implementation in ways that inform decision-making and lead to learning and adaptation within the activity cycle. First, program managers typically implement as initially planned, or change program rollout based on perceptions rather than evidence. Second, they may lack the skills needed to design rigorous learning activities. Third, while traditional M&E - including RCTs - provides valuable information on whether a program worked, the results typically arrive too late to act on. And fourth, resource constraints lead many programs to rely on ad hoc decision-making processes for program design, and they may associate rigorous M&E with external accountability rather than internal learning.
What if there were an approach to generating rapid feedback that supports program refinement and systematic learning on an ongoing basis? The goal is not to replace a program’s need for impact evaluations to answer the question “did the intervention work?”, but instead to work with the program to better answer the questions “what’s working or not working now?” and “how do we use that information immediately to make things work better?”
The Rapid Feedback Monitoring, Evaluation, Research, and Learning (RF MERL) approach uses a range of M&E tools to support continuous improvement for development programs. Join us to hear about three engagements in diverse contexts where we deployed a range of rapid methods to support program decision-making and adaptation in: 1) Mali, 2) Cambodia, and 3) Uganda.
Relevance Statement: Few activities build experimentation and feedback into implementation in ways that inform decision-making and lead to learning and adaptation within the activity cycle. Even activities with a strong evidence-based design are often implemented on assumptions about their efficacy that would benefit from rigorous testing. Furthermore, implementers may not use monitoring systems to gather evidence about what is and is not working well in order to improve ongoing implementation. And while independent impact evaluations may provide important evidence, that evidence often comes too late to improve the assessed activities before the end of the activity cycle.
A culture of continuous learning may be needed for a host of reasons, including changing consumer expectations, unexpected implementation constraints, or faulty assumptions made at the project design stage. The conventional approach of assessing an intervention’s effectiveness by collecting data at its end provides limited opportunity to respond to day-to-day developments in the field. The long interval between the end of the intervention and the completion of the evaluation means the evidence generated is of limited use for course correction and for providing timely feedback to implementers. Moreover, the lack of a systematic process for linking ongoing implementation lessons to modifications in project design - the feedback loop - often precludes adaptive implementation.
Recent thinking, aided by developments in theory, methods, and practice, has posed a bold challenge to this orthodoxy, calling for an approach that promotes interaction among project designers, implementers, researchers, and decision-makers to encourage adaptation through learning. This newer approach, often described with terms such as “responsive feedback” or “feedback loops,” calls for timely assessments that give implementers actionable feedback to course correct and achieve intended outcomes. The Rapid Feedback Monitoring, Evaluation, Research, and Learning (RF MERL) mechanism within USAID provides a concrete series of case studies on how to collect, analyze, and disseminate evidence and data more rapidly and use it for decision-making.
Presenter: Eric Djimeu, PhD – Results for Development
Presenter: Christina Synowiec, MSc – Results for Development
Presenter: Shannon Griswold, PhD – USAID
Presenter: Katie Bowman, MIA – Results for Development