I have just finished the course Evaluation for Educational Technologists (Edtech 505). It was a long journey, with lots of readings and activities, and a beautiful final project in which I had the opportunity to involve around 200 people in the evaluation of an online course, Philosophy and Professional Ethics, that I developed last semester.
The course helped me deepen my understanding of the differences between assessment and evaluation. This distinction is not intuitive for me, because in Portuguese we use the same word for both concepts: avaliação.
During this course I learned several concepts and practices of evaluation that will be useful in my career as an author, professor, distance education professional, and educational technologist.
Owston (2008) offers a nice discussion of evaluation, presenting the interesting Kirkpatrick model.
Boulmetis and Dutwin (2005) claim that efficiency measures the relationship between costs and results, effectiveness measures the relationship between goals and results, and impact measures how a course has changed behavior over an extended period of time. These are subtle conceptual differences that have a profound effect on the practice of evaluation. From now on, when I think about evaluation, I will always come back to these three concepts.
Boulmetis and Dutwin (2005) also propose that evaluation should be embedded in a program from its very beginning; that is to say, we should start designing the evaluation when we start designing the course, not only after the course has already been designed or is even being taught. Since we often fail to evaluate a course even once it is ready, this is knowledge every author should keep in mind when starting to design a course. It somewhat inverts the way we think about project management.
Following this idea, I would always want to use phases of evaluation such as expert feedback, one-to-one, small-group, and field evaluations. This course was so rich in exploring these (and other) phases that I will want to return to the readings when designing evaluations in the future.
The attention we should devote to the elaboration of a rubric is also something I will carry from this course for the rest of my career. The experiences of elaborating rubrics were rich learning moments, not least because some questions I proposed turned out, once people started answering them, to be inadequately designed! Designing adequate rubrics is a huge challenge. I also learned (through practice, not only readings) that a well-designed rubric made available online for answering is a powerful instrument, and for that purpose Google Docs forms are a wonderful resource.
Madaus and Kellaghan (2000) is a very interesting text that explores metaphors of education: high-tech assembly lines versus travel (Kliebard, H. M., "Metaphorical roots of curriculum design"). In the travel metaphor, the curriculum is a route over which students travel; the professor is the guide; each traveler will be affected differently by the journey, since its effects depend at least as much on the predilections and interests of the traveler as on the contours of the route; no effort is made to anticipate the exact nature of the effect on the traveler, but a great effort is made to plot the route so that the journey will be as rich, as fascinating, and as memorable as possible; and this variability is marvelous and desirable. This is nectar not only for designing evaluation, but mainly for my journey against classic instructional design and banking-model distance education, and I will certainly return to it many times.
Next semester: Instructional Message Design!
Boulmetis, J., & Dutwin, P. (2005). The ABCs of evaluation: Timeless techniques for program and project managers (2nd ed.). San Francisco, CA: Jossey-Bass.
Madaus, G. F., & Kellaghan, T. (2000). Models, metaphors, and definitions in evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 19-32). Boston, MA: Kluwer Academic Publishers.
Owston, R. (2008). Models and methods for evaluation. In J. M. Spector, M. D. Merrill, J. J. G. van Merrienboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 605-617). Mahwah, NJ: Lawrence Erlbaum Associates.
An interesting difference between the terms; I had not noticed it before. It is of the same caliber as "precision" versus "accuracy". No other examples come to mind right now. But it is interesting to realize that, in order to better characterize what we perceive of reality, we need to refine our vocabulary. This makes me wonder.