Health Professions Education Evaluation and Research
Jessica Sperling, PhD
Director, Evaluation and Strategic Planning, CTSI
Duke University
Durham, North Carolina, United States
Ashley Grantham, PhD
Medical Education Specialist
Duke School of Medicine, United States
Rasheed Gbadegesin, MD
Wilburt C. Davison Distinguished Professor, Professor of Pediatrics, Associate Dean for Physician-Scientist Development
Duke School of Medicine, United States
Erin Haseley, MA
Evaluation Research Analyst
Duke University, Social Science Research Institute, United States
Noelle Wyman Roth, MEM
Assistant Director, Applied Research, Evaluation, and Engagement
Duke University, Social Science Research Institute, North Carolina, United States
Doreet Preiss, PhD
Senior Research Associate
Duke University, Social Science Research Institute, United States
Location: Room 102
Abstract Information: When one initiative hosts many individual programs, critical questions arise in aligning evaluation across those programs. Namely, we ask: to what degree should one align evaluation processes and measures across programs, particularly when the programs are complementary in their overarching objectives but not directly parallel in their processes, participants, or even short-term outcomes? This Think Tank session explores this topic through the story of an evaluation initiative focused on physician-scientist development programs. During 2022-2023, the Duke University Social Science Research Institute partnered with the Duke Office of Physician Scientist Development, based in an academic medical center, to develop a mixed-methods evaluation addressing its suite of programming. Physician-scientist efforts support learning that intentionally links clinical care and biomedical research, rather than allowing each to exist in a silo, to advance improvements in healthcare delivery and health outcomes. Specific Office of Physician Scientist Development programs include student training programs, faculty career mentorship, specialized funding and scholarship programs, and workshop series. Factors considered in evaluating these programs include conceptual implications, such as defining opportunities for learning and tradeoffs in empirical design, as well as operational implications, such as the cost and time involved in evaluating multiple programs. After introducing the context, our experience, and the tradeoffs involved, we will use small group discussion to reflect on distinct approaches to, and the benefits and drawbacks of, this multi-program approach. We will also have groups share ways in which this reflects, or differs from, issues and considerations in a collective impact approach.
Relevance Statement: This session will be relevant to specific domain areas, to the AEA theme, and to the field of evaluation overall. We address each below. For the Health Professions Education, Evaluation & Research TIG, this session directly addresses ways to advance health professions education; it is also relevant to the Health Evaluation, Translational Research Evaluation, and STEM Education & Training TIGs. A clinical understanding, combined with the scientific skills to conduct state-of-the-art research, is understood as essential for disease research as well as for the bench-to-bedside translation that is necessary for true health impact (Harding et al., 2017). However, Harding et al. (2017) note that “barriers to the successful pursuit of MD-only research careers persist at institutional and national levels and must be addressed in order to develop a critical mass of physician-scientists in the biomedical workforce”; evaluation is central to understanding and defining effective strategies for overcoming these barriers. Moreover, because our project focuses on multiple types of clinician-researcher development programming, it provides a lens on different education and support opportunities and the relative merits of each. In addition, this proposal is heavily influenced by the AEA Annual Meeting theme “The Power of Story.” First, our experience and session will highlight listening to and understanding the story of individual programs as essential to evaluation design. In our case, understanding the program partners’ recounting of these varied programs, how and why they were developed, and how they were intended to align, was critical to determining if and how they could sensibly be addressed within one overarching evaluation structure. For this work, listening to program partners provided an essential basis for direction.
A holistic understanding of the initiative’s story was further developed through our sequential mixed-methods approach: beginning with survey data collection helped us gain a broad understanding of program experience and outcome achievement (i.e., outlining the story) and highlighted potential areas for further exploration to add depth and nuance through qualitative data collection (i.e., fleshing out select narratives). Our work is relevant to the field of evaluation because it provides a useful lens for looking across complementary programs while retaining the opportunity to understand individual programs in depth through qualitative data collection and triangulation. The ability to understand both program complementarity and the unique attributes of a given program may enhance evaluation utilization, and could be particularly useful to those evaluating Collective Impact efforts. This is also relevant as we consider the costs and scope of evaluation as being in conversation with, and needing to be balanced against, evaluation utilization. Addressing many programs under one aligned evaluation design can make scope and cost more feasible, but such a design must be able to appropriately inform action for each program, as well as for the initiative as a whole, to ensure effective evaluation utilization.

Reference: Harding, C. V., Akabas, M. H., & Andersen, O. S. (2017). History and outcomes of fifty years of physician-scientist training in medical scientist training programs. Academic Medicine, 92(10), 1390.