Research on Evaluation
Rebecca Polivy, MA (she/her/hers)
Independent Evaluation Consultant & PhD student
Claremont Graduate University; Claremont Evaluation Center, United States
Stewart Donaldson, Ph.D.
Distinguished University Professor
Claremont Graduate University
Claremont, California, United States
Location: Room 101
Abstract Information: Evaluation scholars and practitioners have argued that having more people who can think evaluatively is “essential” for successful evaluation practice, organizational learning and improvement, and even healthy democracies. However, empirical evidence backing these claims is scant, largely because there is no agreed-upon definition of evaluative thinking and no widely accepted way to measure it. In this think tank, we will present findings from a current study that aims to clarify and define evaluative thinking (ET) in order to develop a scale to measure the construct and, subsequently, build a body of empirical evidence about the contributions of evaluative thinking to the evaluation field and beyond. We will take participants along the journey of how thinking on evaluative thinking has developed over time, as the field of evaluation and its approaches rapidly evolve and as we enter a post-pandemic, politically divided, (mis)information era. Following this background presentation, participants will break into small groups to reflect on the study's findings and offer their perspectives on questions often raised in the research, such as: Is there a real difference between evaluative thinking and other ways of thinking, such as critical thinking? What do “higher levels” of evaluative thinking look like, and what impact, if any, might they have on evaluation processes, organizational performance, and society at large? Are there other outcomes that higher levels of evaluative thinking might predict? Is it worth investing in building many people's capacity to think evaluatively, or is it better to invest in individual, trained evaluators?
Together, think tank participants will explore the role of evaluative thinking in our evolving field and society, and discuss how to advance the understanding and application of, and research on, evaluative thinking today and into the future.
Relevance Statement: While the idea that evaluation, or the process of coming to a value judgment, requires a unique way of thinking and reasoning has underlain even the earliest writings in the field, the construct of “evaluative thinking” (ET) has become more prevalent in the evaluation literature in the past decade (Patton, 2018; Vo, 2013). The concept is so much on the rise that it has been deemed not only “essential” for successful evaluation practice but also imperative for the times in which we are living (Patton, 2018; Vo & Archibald, 2018), times marked by widespread misinformation consumption and threats to democracy (Patton, 2018). However, the relationship between increased capacity to think evaluatively and its potential outcomes has not been sufficiently demonstrated empirically. Without empirical research exploring these relationships, evaluation scholars, practitioners, and capacity builders, as well as educators and other professionals, are left to wonder what, if any, is the real value of building individuals' capacity to think evaluatively. There is no agreed-upon definition of evaluative thinking (ET) and no widely accepted way to measure it, and thus no way to fully explore its value. Essentially, ET is a type of critical thinking that takes place in evaluation contexts and emphasizes the role of evidence and valuing (Vo, 2013; Buckley, 2015; Vo, Schreiber, & Martin, 2018). To better understand the impact and value of thinking evaluatively, the construct of evaluative thinking must be measurable. Currently, two scales exist to measure evaluative thinking. The Evaluative Thinking Assessment Tool, created in 2005 by the Bruner Foundation, measures evaluative thinking at the organizational level but has been neither validated nor peer reviewed. The Evaluative Thinking Index (ETI), created by Buckley and Archibald (2011), measures evaluative thinking at the individual level.
The ETI has been peer reviewed and is deemed valid in probing two established factors of evaluative thinking: 1) believing in and practicing evaluation, and 2) posing thoughtful questions and seeking alternatives (McIntosh, 2021). However, other factors hypothesized by Buckley and Archibald (2011) were not sufficiently probed by the items on their scale. Additionally, “valuing” was not included in the ETI's working definition of ET, even though it has been deemed an important element of ET (McIntosh, 2021). Neither scale has been used widely enough to build a body of empirical evidence about the contributions evaluative thinking can make to the field of evaluation, organizational behavior, or society at large. The study we will discuss in this think tank builds on the work of the Evaluative Thinking Index to develop a reliable and valid scale for measuring evaluative thinking. The scale-building process has uncovered possible new dimensions of evaluative thinking, which are interesting to juxtapose with recent historical events. A think tank forum with other evaluation scholars and practitioners, in which we present these findings and invite other community members into the conversation, will broaden and deepen the understanding of, and further the research on, evaluative thinking.