Mixed Methods Evaluation
Bridget Lavin, Ph.D.
Independent Evaluator
Unaffiliated, Texas, United States
Ryan Kopper
Senior Research, Learning, and Analytics Specialist
World Vision US, Massachusetts, United States
Holta Trandafili, MA (she/her/hers)
Sr. Research, Learning, and Analytics Manager
World Vision US, Maryland, United States
Location: Grand Ballroom 10
Abstract Information: The Success Case Method (SCM) is an underutilized approach with much to offer mixed methods evaluations aiming to capture impact stories. While stories used in evaluations are more often than not linked to qualitative methods, SCM sits in the middle of the methods spectrum. The SCM approach provides a practical path that links quantitative results to qualitative insights. It first presents an opportunity for evaluators and program stakeholders to create a shared understanding of the impact model. Then it uses quantitative survey data to identify success and nonsuccess cases based on the established impact model. Next, through random sampling of success and nonsuccess cases, it uses in-depth interviews or group discussions to document their stories. Lastly, it unpacks for whom a program or intervention works and under what circumstances. The rich case studies from the two extremes – those succeeding most and those succeeding least in a program – offer insights into the program's merit and the underlying mechanisms of success that can be used to explain the results in more nuanced ways and are useful for any reprogramming. This skills-building workshop will combine illustrations from at least six real-world evaluations, including a handful of small-n studies and those with larger samples from multisectoral development programs, with step-by-step instructions on using the approach through an example. Topics to be covered include: how to define success (including the trials and tribulations of measuring partial success); how to design a useful and "right-sized" SCM quantitative survey; and how the qualitative portion of SCM can be adapted to measure outcomes at individual, household, or group levels and at different points during the program cycle to answer a range of questions about what works, for whom, and why. Handouts with more resources on SCM and tips on the pros and cons of the method will be shared at the end.
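The case-selection logic at the heart of SCM – dichotomize respondents against a pre-defined success criterion, then draw a random sample from each extreme for in-depth interviews – can be sketched in a few lines. This is a minimal illustration only; the variable names, scores, and threshold below are assumptions for the example, not terminology or values prescribed by the method:

```python
import random

# Hypothetical survey records: each respondent has an id and an outcome score.
survey = [
    {"id": i, "score": score}
    for i, score in enumerate([12, 45, 67, 88, 23, 91, 55, 14, 76, 39])
]

# Pre-defined success criterion (illustrative threshold, agreed with
# stakeholders before data collection in a real SCM study).
SUCCESS_THRESHOLD = 60

# Step 1: dichotomize the quantitative data into success / nonsuccess groups.
success = [r for r in survey if r["score"] >= SUCCESS_THRESHOLD]
nonsuccess = [r for r in survey if r["score"] < SUCCESS_THRESHOLD]

# Step 2: randomly sample cases from each group for in-depth interviews.
random.seed(42)  # fixed seed so the sketch is reproducible
interviewees = random.sample(success, k=2) + random.sample(nonsuccess, k=2)
```

The qualitative work (interviews, group discussions) then documents the stories of the sampled cases; the pre-registered criterion and the random draw are what give those stories their methodological footing.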
Relevance Statement: In many evaluations, bringing qualitative and quantitative data together hinges on sequencing the two kinds of methods to triangulate findings. Despite best intentions, rigorous evaluations (including impact evaluations) often treat quantitative and qualitative data, and the methods that collect them, separately, from study design through reporting. With SCM, the data from the quantitative methods are tightly connected to the qualitative data. The rich stories generated from SCM are packaged in case studies that easily illustrate program effectiveness while offering assurance on the rigor of how they were collected (i.e., based on pre-defined success criteria on which the quantitative data are dichotomized into success and nonsuccess groups, after which a random sample is drawn from each group). In the literature, SCM originated with organization-wide initiatives (e.g., training and coaching programs) run by Human Resources in private companies (Brinkerhoff, 2003) and their need for efficient ways to understand these programs' impact in order to improve them, scale up what worked, and create value for the companies. Over the years, the method has been adapted to social program evaluation because of its practicality and the tight connection it creates between qualitative and quantitative data. BetterEvaluation has a collection of examples where SCM has helped programs. This workshop, led by evaluators with expertise at different ends of the qualitative-quantitative method spectrum, will provide practical examples of how SCM has been used in end-line and ex-post evaluations in Uganda, Kenya, Sri Lanka, and Honduras. The workshop will offer SCM case studies that illustrate success and nonsuccess cases in simple and elaborate theories of change.
Together with participants, we will walk through how these case studies were derived, how they informed the interpretation of findings about program impact, and how they ultimately told complex stories succinctly. The workshop will equip attendees with one more evaluation approach that can help them bridge the divide between methods.
References:
Brinkerhoff, R. O. (2005). The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training. Advances in Developing Human Resources, 7. 10.1177/1523422304272172.
Chianca, T. K., & Risley, J. S. (2005). Applying the Success Case Method as Part of the Institutional Evaluation of a Nonprofit Organization. Paper presented at the meeting of the American Evaluation Association/Canadian Evaluation Society, Toronto, Ontario, Canada.
Coryn, C. L. S., Schröter, D. C., Miron, G., Kana'iaupuni, S. K., Tibbets, K., Watkins-Victorino, L. M., et al. (2007). A Study of Successful Schools for Hawaiians: Identifying that Which Matters. Kalamazoo: Western Michigan University, The Evaluation Center.
Coryn, C. L. S., Schröter, D. C., & Hanssen, C. E. (2009). Adding a Time-Series Design Element to the Success Case Method to Improve Methodological Rigor: An Application for Nonprofit Program Evaluation. American Journal of Evaluation, 30(1), 80-92.
White, H., & Phillips, D. (2012). Addressing Attribution of Cause and Effect in Small n Impact Evaluations: Towards an Integrated Framework. International Initiative for Impact Evaluation, Working Paper 15.
MacFarlan, A., & McGuinness, L. (n.d.). Success case method. BetterEvaluation. Retrieved March 22, 2023, from https://www.betterevaluation.org/methods-approaches/approaches/success-case-method