Cluster, Multi-site and Multi-level Evaluation
Natalie Wilson, MPA (she/her/hers)
Assistant Director of Research Support
West Virginia University Health Affairs Institute
Athens, Ohio, United States
Edis Osmanovic, MHA (he/him/his)
Research Specialist
West Virginia University Health Affairs Institute, United States
Jayne Harris Esposito, MPH
Program Manager
West Virginia University Health Affairs Institute, United States
Thomas Bias, PhD, MA
Associate Professor Department of Health Policy Management and Leadership
West Virginia University Health Affairs Institute, United States
Location: White River Ballroom E
Abstract Information: The West Virginia Department of Health and Human Resources (DHHR) outlined four American Rescue Plan Act-funded initiatives for the West Virginia University (WVU) Health Affairs Institute (Health Affairs) to implement with DHHR's collaboration and sponsorship, including one initiative that would evaluate the other three. On the surface, the initiatives had very different aims; it was the evaluation team's job to tell the story of how they all worked toward a common goal of strengthening Home and Community-Based Services for Medicaid beneficiaries. Large funding streams are likely to be used for differing initiatives more often, given the precedent set during the COVID-19 pandemic of allowing governments and other entities flexibility in determining how funding would be most impactful. The role of evaluators could become more complicated under this funding structure, and this evaluation team has lessons learned to share from a multi-site, multi-project evaluation. This work was supported under contract with the West Virginia Department of Health and Human Resources.
Relevance Statement: The West Virginia Department of Health and Human Resources (DHHR) outlined four American Rescue Plan Act-funded initiatives for the West Virginia University (WVU) Health Affairs Institute (Health Affairs) to implement with DHHR's collaboration and sponsorship, including one initiative that would evaluate the other three. Three initiative teams worked on the following projects: 1) development of curricula for person-centered, trauma-informed care training and for training on conducting assessments required in the West Virginia Statewide Transition Plan, both targeting Home and Community-Based Services direct service professionals; 2) training for law enforcement officers on safe interactions with individuals with autism spectrum disorder or other mental or behavioral health disorders; and 3) a public education and outreach effort to educate the community, particularly Medicaid waiver-eligible individuals, about Home and Community-Based Services. All initiatives involved remote work, and two facilitated training courses held in person and remotely across West Virginia. While the initiatives took vastly different forms, they all worked toward the singular aim of the funding source: strengthening Home and Community-Based Services for Medicaid beneficiaries. The AEA Competency Domains of Context, Planning and Management, and Interpersonal are all relevant to this topic. Evaluating projects that differ from one another requires understanding the context of multiple program purposes, stakeholders, and perspectives. Planning and Management are related to Context and are important given that multiple timelines and deliverables must be tracked. Depending on how many evaluators are available, there is a balance to strike in ensuring an understanding of each program.
Health Affairs achieved this by embedding evaluators into the teams they were evaluating, with transparency to sponsors and team members that this would be the case. Evaluators may need to consider how this could work with smaller evaluation teams. This topic is relevant for multi-site or multi-project evaluators as funders begin to allow more freedom in the types of projects permitted under a single funding stream. Questions that could be considered in this session: How can evaluators tell the story of how projects with seemingly different goals all work toward the same funder goal in a way that engages the funder while remaining fair and relevant to each project? How can evaluators maintain an understanding of multiple projects at once with a small team? What frameworks are most fitting for multi-site or multi-intervention evaluations? Does participating in project work as an evaluator compromise objectivity, and if so, how can that be avoided?