Monday 23 February 2009

Dynamic Evaluation

Is evaluation research or not? This is a complex question, and the answer is that it can be, but need not be. Let us inquire into what might turn an evaluation into research.

The simplest form of evaluation is the feedback questionnaire, given out at the end of staff-development courses as a client-satisfaction form. These can give an indication of quality, and ideas for future improvement, if they are filled in seriously. In reality some are and some aren't; some give comments, others only ticks in multiple-choice boxes. If one perceptive person recognises a quality session as such, whilst ten press-ganged attendees offer low scores, the overall score will be misleadingly low. Such questionnaires are therefore fundamentally unreliable: they tell us more about the attendee than about the session. To be sure of favourable feedback, a session needs to ensure that people are not taken out of their comfort zone; but this also means that they will not move on. I remember race-awareness training in the 1980s, which took people so far out of their comfort zones that anger inhibited rational learning. A session needs to push people out of the comfort zone, but not too far - staged knowledge construction rather than revolution.

An evaluation might try to find out whether pupils or students have progressed during an intervention. A pre- and post-intervention questionnaire on the topic under consideration, establishing baseline and summative data, can chart that progress. Knowledge and skills are simpler to handle than softer aspects such as emotions, behaviour, attitudes and values. Broadening the questioning to take in different stakeholder perspectives (e.g. pupils, teachers, parents, therapists) produces more broadly based feedback data, and spreading that data over time strengthens it further. Studies like this are routinely published as research. However, their findings may not generalise well, since many factors affect the success or failure of a programme, and which processes are effective and which are not may be very hard to discover.

A researcher might look elsewhere for ideas about such processes and seek confirmation in the data. These ideas are contained within the notion of 'theory'. For example, the processes of learning might draw on the work of Lev Vygotsky, on how adult teaching and mentoring help to boost and structure knowledge; the word 'scaffolding' is now commonly used. Social processes might draw on the work of Victor Turner, on how social performance or ritual smooths the way for social progress. The processes within individual well-being might use Abraham Maslow's work. New lines of theorising rarely come from a single piece of work but are associated with the life-work of particular individuals. Typically, a piece of research will gather its unique data and theorise about it using appropriate models from others.

Dynamic evaluation takes us one stage further. Often evaluation simply records what has happened and seeks to grade its effectiveness, using instruments which range from the crude to the over-complex. A crude feedback questionnaire may get many returns but contain no useful information; a 20-page questionnaire may gather very useful information but get few returns.
I call this a static evaluation. After the event there are recommendations on how to do it better next time. Over the time of a long project there might be annual recommendations to allow for some formative development, if the project structure allows it. The relationship between the project and the evaluation is mechanistic - recommendations are ticked off during the following year, when the next phase of the evaluation makes further recommendations.

Dynamic evaluation seeks to improve and develop processes day by day. The project needs targets flexible enough to allow for growth. That growth is produced not by the recommendations of the evaluator but by the quality discussions that the evaluator facilitates. The project team, by responding to these, find more effective ways of achieving their goals. The evaluation instrument is likely to be the policy discussion group; if questionnaires are used, it will be to feed into this discussion. The facilitator will seek to open up perspectives in the group, and to look for inhibitions and negative practices to counter. The group will be encouraged to focus on the real objective of the programme (e.g. to help pupils progress in specified ways) rather than on numbers and mechanistic box-ticking. The evaluation should therefore get to the heart of the issue and seek ways in which the team can, from an early stage, establish positive strategies to meet their real objectives.

Because of the nature of the relationship, the evaluator is intimately involved with all aspects of the work in hand, attending and actively participating in every significant meeting. This cannot be done by skimming over the surface of the project. There is value for the project in the quality of internal debate and strategy-setting; and there is a benefit for knowledge generally - that is, by turning this relationship into research - if the change processes are made explicit and investigated more theoretically.
