For those of us classically trained in statistics, “evaluation” is something of a mystery. We are accustomed to a thought process that proceeds through five steps.
- Define the variable we want to explain – the dependent variable (DV)
- Define the variable that we believe explains variation in the dependent variable, referred to variously as the independent variable (IV) or possibly as a factor.
- Our goal is to assess how much of the variation in the DV is explained by the IV.
- We set up an experiment or field study and collect data.
- Then we compute the variation in the DV explained by the IV and check that it is higher than might have been observed by chance alone, i.e., we test our hypothesis and state the significance, the likelihood that what we are seeing is, in reality... well, nothing.
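The five steps above can be sketched in miniature. This is a hypothetical illustration, not anything from the sources: the data are invented (tutoring hours as the IV, exam scores as the DV), R-squared stands in for "variation explained", and a simple permutation test stands in for the chance-alone check in the final step.

```python
import random

# Hypothetical toy data: hours of tutoring (IV) and exam score (DV).
iv = [1, 2, 3, 4, 5, 6, 7, 8]
dv = [52, 55, 61, 58, 66, 71, 70, 78]

def r_squared(x, y):
    """Proportion of variation in y explained by a linear fit on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

observed = r_squared(iv, dv)

# Step 5: could this much explained variation arise by chance alone?
# A permutation test shuffles the DV to destroy any real IV-DV link,
# then asks how often a shuffled dataset does as well as the real one.
random.seed(0)
trials = 2000
count = 0
shuffled = dv[:]
for _ in range(trials):
    random.shuffle(shuffled)
    if r_squared(iv, shuffled) >= observed:
        count += 1
p_value = count / trials

print(f"R^2 = {observed:.3f}, permutation p-value = {p_value:.4f}")
```

Everything the sketch produces is a number about the DV; it has nothing to say about whether anyone wanted the tutoring, or what it cost, which is exactly the gap the rest of this piece explores.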
The thinking and goals of evaluation stand in contrast to those of hypothesis testing.
- Above all, we are not moving towards a significance test. We are not in the business of testing what is real and likely to be reproduced on different occasions. Put this out of your mind. We are in the business of asking who cares and why they care! At the end of the process we must provide an answer that is not ‘objective’. We must be able to say whether something is good or not.
- When we collect our ‘data’, we assume that something happened. Even nothing happening (e.g., lunch did not arrive) might be a relevant happening. We also assume that those in authority, often us, set out to make something happen. We are thus asking whether our efforts were worthwhile and, above all, wanted.
- Those of us trained in statistical methods might then want to quantify what happened. We may seek numbers, but the DV is not our focus. For example, in asking whether this child passed their examinations, the examination mark is descriptive information. In statistical work, we are fundamentally asking what percentage of children passed their examinations and the probability that any one child in the future will pass. Evaluation work asks whether students took an examination anyone actually cared about and what it cost us to get there. Were children required to work day and night? Once they had completed the course, were the skills of any use? How did this examination preparation fit into the rest of their lives? We challenge the DV itself. We are not asking whether the goals were met; we are asking whether the needs were met. What did we learn about our values? What have we learned about our priorities and how the programme fits into the wider scheme of things?
- Instead of looking at people as the IV, we look at them as the end point: are they satisfied?
- And instead of looking at DV as something to be explained, we learn about values as what underpins satisfaction. I can imagine someone then defining satisfaction as the DV to be explained. My advice is ‘don’t’. We aren’t bringing matters to a close. We are bringing voices into a conversation and enriching what we might think of as important in the world.
Davidson, E. J. (2015). Question-driven methods or method-driven questions? How we limit what we learn by limiting what we ask. *Journal of MultiDisciplinary Evaluation*, 11(24).
McKegg, K., & King, S. (2014). *What is evaluation?* ANZEA.