Organizational Learning & Evaluation Capacity Building
Kate Clavijo, Ed.D.
Associate Director of Program Evaluation
Military Family Advisory Network
San Luis Obispo, California, United States
Gabby L’Esperance, Ph.D.
Director of Insights
Military Family Advisory Network, Oklahoma, United States
Emily Warren, Ph.D.
Program Officer
National Academies of Sciences, Engineering, and Medicine, United States
Piper Grandjean Targos, M.A.
President
EdgeEval, United States
Location: Room 104
Abstract Information: Logic models are among the most widely used tools in program evaluation. They can help address complexity, context, competing agendas, and ambiguity in evaluation. Developing logic models in complex environments poses unique challenges, such as non-linear causality, the interdependence of multiple actors and factors, and the uncertainty and unpredictability of outcomes. Facilitating logic model discussions requires an evaluator to come prepared with a variety of tools and experience. This demonstration will describe tools and approaches for facilitating successful conversations with diverse stakeholders to optimize the logic model development process. Presenters will also discuss best practices, drawn from existing research and experience, for creating logic model components. Participants will learn techniques to initiate discussions about the logic model process with stakeholders, helpful questions to guide conversations about outcomes, and how to facilitate logic modeling sessions with diverse stakeholders. Participants will walk away with new inspiration and knowledge to help them apply logic models in their work.
By attending this session, participants will:
· Learn when to use a theory of change and a logic model in a variety of complex environments.
· Learn how to facilitate conversations about program outcomes with diverse stakeholders who may have competing agendas.
· Explore ways in which listening to diverse stories can shape the mapping process and create a richer understanding of the program.
· Discuss how and when to revisit logic models to ensure all relevant voices, including those often marginalized, are heard and represented.
Presenters will also share examples of how they navigated and adapted the logic model process with diverse stakeholder groups that had competing priorities and perspectives.
Relevance Statement: Logic models have been used by evaluators for over 20 years to describe pathways for change and programs' effectiveness. As evaluators continue to work with their clients in complex and evolving situations, it is important to use reflexive and adaptive techniques to represent programs and to form the basis for evaluating them. Mark (2019) emphasized the importance of representing contingency in program evaluation, that is, the recognition that programs operate within complex and dynamic environments and that program outcomes are contingent on a variety of factors, including program design, implementation, and context. There is a need to engage stakeholders in the evaluation process, including program and policy beneficiaries as well as government officials, community leaders, and other organizations involved in the program or policy. Evaluators should be aware of and responsive to the complexities of the environments in which they are working and use a variety of evaluation approaches and methods to account for these complexities. Developing logic models in complex environments highlights the need to adopt a systems thinking approach, use participatory methods, and incorporate complexity-aware evaluation methods. Systems thinking recognizes that complex systems are composed of interdependent parts that interact with each other in non-linear and dynamic ways, and that outcomes are emergent properties of the system as a whole. The use of participatory methods in logic model development involves engaging stakeholders from different levels and sectors in the design, implementation, and evaluation of the intervention. This approach acknowledges that complex environments involve multiple perspectives, interests, and power dynamics, and that the success of the intervention depends on the ownership, buy-in, and adaptation of the logic model by the stakeholders (Cornwall & Gaventa, 2001).
Finally, developing effective logic models in complex environments requires incorporating complexity-aware evaluation methods, such as qualitative data analysis, social network analysis, and scenario planning. These methods allow for a deeper understanding of the context-specific factors that influence the intervention and its outcomes, and they enable evaluators to assess the resilience, adaptability, and unintended consequences of the logic model (Patton, 2018). AEA allows evaluators to learn from each other when the stakes are low and the opportunity to be candid and straightforward is high. Numerous online resources explain the steps in creating a logic model, but few offer opportunities to practice and discuss with other evaluators. The value of this demonstration session will be in the real-life examples of how to initiate the logic modeling process, especially in complex environments with diverse stakeholders. The presenters will share candid examples of how they folded diverse voices and conflicting perspectives into the development of logic models. The session will conclude with a summary of techniques to engage multiple stakeholders and create an adaptable model of the program that is useful both in describing the program and in designing an evaluation fitted to a complex environment.