Principal Consultant, Schultz Patel Evaluation, United States
Abstract Information: The goal of this explanatory sequential mixed methods study was to assess whether there were observable trends, associations, or group differences in evaluation methodology by setting and content area in published evaluations from the past ten years (quantitative), to illuminate how evaluation practitioners selected these methodologies (qualitative), and to assess how emergent findings from each phase fit together or helped contextualize each other. In this study, methodology was operationalized as research tradition and method was operationalized as research design. For phase one (quantitative), a systematized ten-year review of five peer-reviewed evaluation journals was conducted, with each article coded by journal, research tradition, research design, first author setting, evaluation content area, and publication year. These results were first reported descriptively and then considered for inferential modeling. For phase two (qualitative), interviews informed by the findings that emerged in the quantitative phase were conducted with a purposive sample of 15 practitioners to gain insight into how practitioners make methodological choices. In phase three (integration), findings were integrated to contextualize emergent learnings from each phase. Evidence of statistically significant associations between research tradition, design, first author setting, and content area was discovered. No statistically significant associations were observed between either research tradition and publication year or research design and publication year. There was also evidence that evaluations conducted in the quantitative research tradition, as well as experimental designs, were overrepresented in the evaluation literature within the timeframe reviewed.
Finally, this study’s procedures generated a hypothesized grounded theory of how evaluators select methods that provided an explanation for phase one findings; this theory should be tested by future researchers.
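The abstract does not name the inferential test behind the reported tradition-by-setting associations; a chi-square test of independence is one standard choice for cross-tabulated categorical codes like these. The sketch below illustrates the idea with invented counts (all numbers are hypothetical, not the study's data), computing the statistic by hand so no third-party library is needed.

```python
# Hypothetical contingency table of coded articles.
# Rows: research tradition (quantitative, qualitative, mixed methods)
# Columns: first author setting (academic, non-academic) -- counts invented.
table = [
    [40, 10],
    [15, 20],
    [12, 18],
]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total.
chi2 = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(len(table))
    for j in range(len(table[0]))
)
dof = (len(table) - 1) * (len(table[0]) - 1)

# For dof = 2, the critical value at alpha = 0.05 is 5.991; a statistic
# above it would indicate a significant tradition-by-setting association.
print(f"chi2 = {chi2:.2f}, dof = {dof}")
```

In practice one would also inspect the cell-level residuals to see which tradition-setting combinations drive the association, rather than relying on the omnibus statistic alone.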
Relevance Statement: This proposed presentation has important implications for the field. The study it relays was conducted to interrogate whether there were observable trends in the use of methodologies and methods, and to explore how these were selected by evaluators. Findings are relevant to the field in several ways, including description of methods trends, description of these trends across first author settings and content areas, generation of a theory of practitioner rationale when selecting methods, and illumination of whether certain methods seemed to be unduly privileged by the field. This study systematically collected evidence of trends in evaluation methodologies and methods over a ten-year period (2010-2020), as well as which were most commonly used; there was extremely limited research on this topic present in the literature at the time of study conception. This was problematic because it suggested that practitioners, theorists, and those conducting research on evaluation (RoE) were unaware of which methods were truly being used most in practice, particularly outside of their own anecdotal knowledge. It is difficult for a field to move forward when it lacks this type of foundational knowledge. Next, this study examined whether these trends were stable across first author settings and evaluation content areas, as these variables were thought to serve as predictors of methodology and methods; similarly, there were no known investigations of this nature present in the literature at the time of study conception. This gap was also problematic, since if bias was lurking in various settings and content areas, it would need to be named and addressed. Further, this study took an in-depth approach to exploring practitioner rationales for how and why they selected methods; there were no other known studies of this nature present in the literature at the time of study conception.
Documenting both the “what” of which methods were being used in the field and the “why” is a crucial step in improving the field, as it would be difficult to improve the practitioner thought process without understanding that process in the first place. This author’s theorized mechanism of change was that once this thought process was documented, practitioners would become more aware of their own habits and biases, and then gradually learn to select methods better suited to the evaluation questions at hand. This research was also expected to be useful for theorists, as knowledge about practitioner decisions was expected to provide fodder for future theory development. Additionally, this study was expected to illuminate the types of evidence unduly privileged by the field. If certain methodologies and methods appeared to be used more frequently than others, assuming a broad range of content areas and evaluation questions, this could suggest that practitioners were over-relying on select methodologies and methods. These insights were expected to be useful not only to practitioners, but also to evaluation educators, evaluation clients, and professional evaluation associations. Understanding these insights was expected to help these stakeholders in the field of evaluation advocate for more systematic, equitable, and pragmatic selection of methodologies and methods.