Extension Education Evaluation
Brenna Butler, PhD (she/her/hers)
Evaluation Specialist
Penn State Extension, United States
Michael Hamel, PhD (he/him/his)
Evaluation Director
Penn State Cooperative Extension
University Park, Pennsylvania, United States
Matthew Spindler, PhD
Evaluation Specialist
Penn State Extension, United States
Malinda Suprise, MA
Evaluation Specialist
Penn State Extension, United States
Location: Room 203
Abstract Information: In large organizations such as Cooperative Extension systems, the impacts of one-on-one interactions with clientele often become “hidden stories,” missed because the organization lacks effective data-tracking measures. This session will describe the methodology used to create a survey tool that captures the outputs and outcomes of educator-clientele interactions at Penn State Extension, focusing on supporting data utility at multiple levels (i.e., educator, supervisor, and leadership council). The session will also describe how the online survey built a foundation for organizational data analytics through consistent, structured data input and storage processes. Those processes will facilitate future data analysis for complex decision-making throughout the organization’s evaluative cycles of activities and programs. The importance of stakeholder involvement in the development of this survey, as a form of boundary-making, will be discussed in relation to maximizing the utility of data collection throughout the organization. Data collection is inherently a form of boundary-making that determines which elements of a situation should be included in the informational picture constructed of a context and which should be excluded (Schwandt, 2018). Boundary-making guided this instrument development process by giving stakeholders a central role in defining what information should be collected and curated (Archibald, 2020). Participants should leave this demonstration with the knowledge and tools to employ a similar methodology at their own organizations to track individual interactions using a structure that allows for data aggregation at the organizational level.
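To make the idea of consistent, structured data input concrete, the following is a minimal sketch (in Python with pandas; this is not the session's actual instrument, and all field names and values are hypothetical) of how a flat, consistently coded interaction record lets the same data roll up at the educator, team, and unit levels:

    from dataclasses import dataclass, asdict
    import pandas as pd

    # Hypothetical record structure for one educator-clientele interaction.
    # Field names are illustrative, not Penn State Extension's actual schema.
    @dataclass
    class InteractionRecord:
        educator_id: str
        team: str               # program team within a unit
        unit: str               # content-area unit
        interaction_type: str   # e.g., "phone", "site visit", "email"
        contact_hours: float
        outcome_reported: bool  # did the client report an outcome?

    records = [
        InteractionRecord("e01", "dairy", "animal_systems", "site visit", 1.5, True),
        InteractionRecord("e02", "dairy", "animal_systems", "phone", 0.5, False),
        InteractionRecord("e03", "water", "natural_resources", "email", 0.25, True),
    ]

    df = pd.DataFrame([asdict(r) for r in records])

    # Because every record carries the same structured fields, the same
    # data aggregate cleanly at each level of the organization.
    for level in ["educator_id", "team", "unit"]:
        print(df.groupby(level)[["contact_hours", "outcome_reported"]].sum())

The design point is that each interaction is entered once, in one consistent structure, so no separate data collection is needed for educator-, team-, or unit-level reporting.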
Relevance Statement: Within many Cooperative Extension systems, organizational-level data have historically been undocumented or recorded in unsystematic ways that do not allow for data aggregation (Diaz et al., 2019; Lamm et al., 2013). Collecting evaluative data that supports activity-, program-, and unit-level decision-making is key to tracking organization-wide impacts to report to stakeholders and to inform future decisions. This proposal therefore walks participants through a step-by-step methodology for collecting and aggregating impact data at an organizational level in a contextually relevant way, while still capturing data at the individual and team levels. Through this demonstration, participants will learn about the presenters’ approach to developing an organization-wide evaluation instrument, an online survey, for Penn State Extension. The survey was created by conducting key informant interviews across various sub-groups to learn about team members’ unique experiences during one-on-one interactions with clientele. Themes that emerged from the interview data at the team and unit levels informed the development of question content. The key informants held positions and had experience that gave them knowledge of the nature, magnitude, and distribution of the problem, thus providing useful information about the characteristics of the context (Rossi et al., 2019). This project required knowing what makes Extension engagement opportunities and one-to-one helping episodes more accessible and valuable for clients, along with how those experiences are actualized and turned into future behaviors. Clarifying discussions with stakeholders about the data collection process must include asking the right questions about how they use their knowledge, skills, and preferred ways of working through engagement and helping episodes, to ensure the constructed client data survey has a robust level of contextual validity (Hatry et al., 2015). The survey framework was kept holistic to capture organization-level efforts while also allowing for the collection of data unique to teams and individuals. This mixed-methods approach, in which stakeholder interviews informed survey development, helped capture differences in impacts within the organization while still measuring commonalities at the organizational level. The methodology behind this data instrument addresses a gap common to many organizations: the absence of systematic collection of output and outcome data at the organizational level (Lamm et al., 2013). The survey also helps capture the “hidden conversations” that often go unrecorded within large organizations such as Extension. Additionally, this proposal encourages evaluative thinking within organizations that are designing data collection tools by emphasizing best practices in evaluation instrument development, such as stakeholder involvement as a form of boundary-making (Schwandt, 2018). This proposal adds to the body of knowledge on instrument development best practices for tracking evaluation impact data in a large organization with several levels at which data can be aggregated (i.e., individual, team, unit, and organization-wide).
The context of the data instrument explored here is especially relevant to Extension systems, which often organize programming into several units around specific content areas, heightening the need to aggregate data at several levels to measure various impacts. A self-contained sketch of such a multi-level rollup follows.
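Under the same assumptions as the earlier sketch (hypothetical column names and values, not the actual Penn State Extension data model), this illustrates how unit-level subtotals and an organization-wide total can be produced from one set of structured records:

    import pandas as pd

    # Hypothetical team-level interaction data; names are illustrative only.
    df = pd.DataFrame({
        "unit": ["animal_systems", "animal_systems", "natural_resources"],
        "team": ["dairy", "equine", "water"],
        "contact_hours": [12.5, 4.0, 7.25],
    })

    # Team- and unit-level detail is preserved, while margins=True adds
    # an organization-wide total row from the same underlying records.
    summary = df.pivot_table(
        index=["unit", "team"],
        values="contact_hours",
        aggfunc="sum",
        margins=True,
        margins_name="organization",
    )
    print(summary)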