Translational Research Evaluation
John Stevenson, PhD (he/him/his)
Professor Emeritus
University of Rhode Island, United States
Clara Pelfrey, PhD (she/her/hers)
Evaluation Director, CTSC
Case Western Reserve University
Shaker Heights, Ohio, United States
Sue Giancola, PhD (she/her/hers)
Senior Associate Director, CRESP
University of Delaware
Newark, Delaware, United States
Ingrid Philibert, PhD
Director, Tracking and Evaluation, Great Plains IDeA CTR
University of Nebraska Medical Center, United States
Reagan Curtis, PhD (he/him/his)
Professor, School of Education
West Virginia University
Morgantown, West Virginia, United States
Nikki Lewis, MS
Program Manager
West Virginia Clinical Translational Research Institute, United States
Evana Nusrat Dooty, MS
Graduate Student Researcher
West Virginia University, United States
Mohammad Muntashir Raquib, MA
Graduate Student Researcher
West Virginia University, United States
Sarah Mason, PhD
Director, Center for Research Evaluation
The University of Mississippi, United States
S. Hope Gilbert, PhD
Manager
Deloitte Consulting, L.L.C.
Olive Branch, Mississippi, United States
Location: Room 301
Abstract Information: The National Institute of General Medical Sciences (NIGMS) funds 13 statewide or multi-state Clinical and Translational Research (CTR) networks. These networks are intended to build research capacity and enhance research infrastructure in eligible states while focusing on the health needs of medically underserved populations within each network. Each CTR program is required to have a Tracking and Evaluation Core, and CTR evaluators meet quarterly through the National CTR Evaluators’ Group. These quarterly meetings have been an invaluable opportunity for evaluators to share resources and ideas, network with fellow evaluators, and collaborate on projects and presentations. Until now, however, systematically collected information about the evaluation practices and challenges across the Tracking and Evaluation Cores of all CTRs has not been available. This panel frames evaluation challenges for CTR sites by presenting results from a CTR-wide evaluators’ survey, highlighting the challenges CTR evaluators have faced over time and the ways they have worked to overcome them. In 2022, CTR evaluators from Delaware, Rhode Island, and Nebraska partnered to conduct a national CTR survey. Using the 2021 National Survey of CTSA Evaluators as a model, the survey examined evaluation staffing, stakeholders, practices, and challenges. Evaluators from the West Virginia and Mississippi CTRs add their stories of how they have addressed evaluation challenges related to capacity building and the use of evaluation findings. Our discussant, a renowned clinical and translational science evaluator, will engage session participants in an exchange intended to generate useful information about evaluation challenges and strategies to address them effectively.
Relevance Statement: The National Institute of General Medical Sciences (NIGMS) Clinical and Translational Research (CTR) program began in 2014. Since that time, 13 projects have been funded; in FY2023, awards ranged from $3.0M to $5.7M per project per year. Evaluators from these CTRs meet quarterly as part of the National CTR Evaluators’ Group and actively share, network, and collaborate. This forum generates productive conversations about the challenges faced by CTR evaluators and ways of coping with them. As reflected in the proposed panel, it has also stimulated more systematic examinations of general and unique challenges, with three presentations adding a stronger empirical basis for future improvements. In the first presentation, evaluators from the DE-CTR ACCEL, Rhode Island ADVANCE-CTR, and Great Plains IDeA-CTR report on the results of a survey modeled after the survey administered by the National CTSA Evaluators’ Group in 2018 and 2021. Like the CTSA survey, the CTR survey was an independent, peer-led initiative designed to systematically understand evaluation practices, resource allocation, stakeholder involvement, and evaluation use, with all 13 CTR sites as the sample. The study protocol and survey instrument were submitted to the University of Delaware IRB in October 2022 and received an Exempt determination in November 2022. The lead evaluator at each site was asked to complete the survey. The survey had five sections: background information and personnel, evaluation users, meta-evaluation practices, evaluation challenges, and recommendations. The focus of this session will be on telling the story of challenges and how CTR evaluators have overcome them, including a focus on the developmental nature of many of these challenges.
In the second presentation, evaluators from the West Virginia IDeA-CTR tell the story of a long-term evaluation built on an established logic emphasizing evaluation capacity building, and of how a change of aims in a renewal and personnel turnover called for re-imagining the evaluation logic while retaining the core commitment to capacity building. In the third presentation, evaluators from the Mississippi CTR share how they have adapted an innovative risk and protective factor framework to address evaluation challenges related to evaluation use. This panel is intended to be interactive. All three papers highlight challenges and responses to them (surfaced most comprehensively in the National Survey of CTR Evaluators), which will be used to engage the audience regarding their own experiences with similar challenges and the ways they have managed them. The discussant, a renowned clinical and translational science evaluator, will bring her own expertise and experience to this engagement process. This panel presentation is relevant because, as evaluators, it is imperative that we share not only our positive stories but also our challenges and even our failures. It is important that we build a supportive community, learn from and support one another, and widely share approaches that might be beneficial as we confront common (and even not so common) challenges. The panel is intended to be useful to evaluators in the CTR context, as well as to evaluators of other large, complex, multi-institutional systems-change initiatives.
Presenter: Sarah Mason, PhD – The University of Mississippi
Author: S. Hope Gilbert, PhD – Deloitte Consulting, L.L.C.
Presenter: John Stevenson, PhD (he/him/his) – University of Rhode Island
Presenter: Clara Pelfrey, PhD (she/her/hers) – Case Western Reserve University
Presenter: Ingrid Philibert, PhD – University of Nebraska Medical Center
Presenter: Nikki Lewis, MS – West Virginia Clinical Translational Research Institute
Presenter: Evana Nusrat Dooty, MS – West Virginia University
Presenter: Mohammad Muntashir Raquib, MA – West Virginia University
Presenter: S. Hope Gilbert, PhD – Deloitte Consulting, L.L.C.
Presenter: Reagan Curtis, PhD (he/him/his) – West Virginia University
Presenter: Sue Giancola, PhD (she/her/hers) – University of Delaware