Mixed Methods Evaluation
Bianca Montrosse-Moorhead, PhD
Associate Professor
University of Connecticut, Connecticut, United States
Amanda Sutter, MA, MSW, MA (she/her/hers)
Doctoral Student
University of Connecticut
Vernon, Connecticut, United States
Location: Room 309/310
Abstract Information: This skill-building workshop is back by popular demand! Have you ever wondered what strategies you can use to improve your surveys? Or wanted to create or adapt an instrument that better captures difficult-to-measure concepts? Or wanted to make sure that your survey instrument can accurately shed light on the story you are trying to understand? This session will teach you an easy-to-implement mixed-method feedback process for building better surveys. The session will begin with an overview of the process, which includes both a review and an interview component. Throughout the session, presenters will focus on the value and use of the process, walk through examples from real-life evaluation studies, and share resources for further learning. Attendees will also have the opportunity to practice using this process through hands-on activities. Attendees will leave the session with templates they can adapt for their own use and guidance to help them feel prepared to take action. Throughout the session, presenters will call attention to ways in which equity is and can be centered in the mixed-method feedback process.
Relevance Statement: Surveys are a common tool in an evaluator’s toolbox. While surveys can be incredibly useful, they can also be harmful if the evaluative claims generated from the data are not valid, reliable, and contextually and culturally aligned. This is particularly important for difficult-to-measure concepts (e.g., values, attitudes, beliefs), for concepts where no surveys exist, or where an existing survey must be adapted. Evaluators regularly face these conditions (Fredricks et al., 2011). Despite the ubiquity of surveys, evaluators are not well trained in survey methods. LaVelle (2018) notes that only 22.5% of graduate programs with an evaluation emphasis offer a survey design course. There are even fewer opportunities to gain this expertise outside of graduate programs. Among university-level certificate programs, a major source of professional development, only 9.5% offer training in survey methods. This means that evaluators, by and large, are not being exposed to the vast knowledge of measurement scholars on scale validation, reliability, and equity in survey design and implementation (DeVellis, 2016; Leach Sankofa, 2021). Moreover, established measurement best practices are rooted in a post-positivist approach (Kyriazos & Stalikas, 2018). In recent years, alternatives have surfaced that center social justice in survey design and implementation, such as QuantCrit (Castillo & Gillborn, 2022), culturally responsive assessment (Montenegro & Jankowski, 2017), and the transformativist approach (Leach Sankofa, 2021). The field’s continued call to increase equity and social justice in evaluation necessitates the introduction of these new approaches. Evaluators who are untrained or poorly trained risk measuring something inaccurately (cf. Braman & Azzam, 2023) and inequitably, and then telling the wrong story from this bad data. The consequences are harmful, especially for under-resourced communities. This 90-minute session addresses these issues head on.
It builds upon a similar session offered at AEA 2022, which drew over 100 participants with many more trying to get into the room, suggesting evaluators want this skill. This session will focus on two mixed-methods procedures grounded in transformative measurement practices. The first, a content validation procedure, asks experts, including those who are experts within the communities where the evaluation is taking place, to provide mixed-method feedback. The result is a description of whether the survey defines concepts appropriately and in culturally affirming ways. The second, a qualitative interview procedure, can serve multiple purposes. For example, the interview procedure can serve a foundational purpose: learning from stakeholders how they broadly think about a concept and how they define it, which can inform writing or adapting questions in contextually and culturally appropriate ways. The interview procedure can also be used to understand how each survey question is interpreted and whether harmful biases or assumptions are embedded in it. This is an opportunity to ensure the story told by the survey instrument is the one intended. So that participants walk away with improved knowledge and skills, the session includes lecture, real-world examples, group activities, and discussion. To maximize use, templates and general guidance will be shared so that participants feel prepared to apply this process under diverse circumstances.