Doctoral Student, University of Connecticut, Vernon, Connecticut, United States
Abstract Information: What do evaluators think about when they are asked questions about evaluation practice? What narratives are triggered by what words? This research study used cognitive interviews to understand the thought processes and beliefs of evaluators as they reflect on their practice. Cognitive interviews are a powerful instrument design strategy that helps reveal the narratives and meanings behind survey responses. Building on last year’s field pilot of a new evaluation practice instrument, this research on evaluation study involved cognitive interviews with 20 evaluators recruited through purposive nonprobability sampling to gather diverse perspectives on the instrument. The semi-structured interview process followed the Willis Method, offering participants a straightforward way to explore their responses in depth. This paper shares findings from the interviews and explains how the data will be used to improve the instrument, particularly to ensure that the phrasing of questions, and how respondents interpret them, aligns with the intended construct definitions and narrative of the instrument. The paper will be of interest to a variety of audiences. Practitioners and commissioners will have opportunities to reflect on their own surveys and consider how cognitive interviews may be a useful tool for evaluation studies. Other researchers will learn about cognitive interviewing as a validation strategy and about an instrument that may be useful in their own scholarly work.
Relevance Statement: Validation of instruments is an ongoing and multifaceted process. Cognitive interviews are one important measurement development strategy for testing new instruments in order to establish validity evidence related to content and response processes (American Educational Research Association, 2011; Bandalos, 2018). Cognitive interviews with individuals from the target population can help to identify issues in comprehension, recall, decision or response processes, and any structural problems that were missed (Willis, 1999). This is a common practice within measurement and assessment, but it is not clear how common it is within evaluation. Despite how commonplace surveys are in evaluation, evaluators are not well trained in survey methods (LaVelle, 2018; Robinson & Firth Leonard, 2018). This means that evaluators, by and large, are not being exposed to the vast knowledge generated by measurement scholars on scale validation, reliability, and equity in survey design and implementation (DeVellis, 2016; Leach Sankofa, 2021). The goals of the session are therefore two-fold: (1) present a research on evaluation study in which evaluators participated in cognitive interviews and (2) offer a case example of cognitive interviewing methods that can be integrated within evaluation practice. The session will extend a research agenda that began with a 2022 pilot study of a new instrument on evaluation practice beliefs regarding knowledge and participation. The cognitive interviewing process followed best practices and adhered closely to the Willis Method, using a semi-structured interview format to make participation as easy as possible (Willis, 1999). Twenty evaluators were recruited through purposive, nonprobability sampling to gather a diverse array of perspectives based on experience level, role in evaluation, field of practice, and more.
The session will share findings from the interviews and how the data will be used to improve the design and development of the instrument, particularly to ensure that the phrasing of questions and subsequent interpretations align with the intended construct definitions and narrative of the instrument. Moreover, the session will offer participants opportunities to consider how cognitive interviewing methods apply to their own evaluation practice: how the methods can reveal the ways respondents understand survey questions and ultimately improve the validity of their instruments. For example, practitioners and commissioners can reflect on their own surveys and how cognitive interviews may be a useful tool for evaluation studies. Other researchers will also benefit from these opportunities and will learn about a potential instrument that may be useful in their own scholarly work. In short, the paper will be of interest to a variety of audiences.