Our recent Ofsted inspection happened to coincide with the speech Dr Rebecca Allen gave on 7 November, titled ‘Making teaching a job worth doing (again)’ [https://beckyallen.files.wordpress.com/2010/07/2017-11-becky-allen-on-workload.pdf]. If you’ve read the speech you’ll appreciate the irony of this.
Whilst I can’t comment (yet) on the outcomes of the inspection, I am able to talk about my experience of the process, which I intend to do. Dr Allen’s speech has informed my reflection, as has Daisy Christodoulou’s marvellous book ‘Making good progress’, which I am currently reading. I am also indebted to my colleague Steve Shaw, whose thinking on everything pedagogic is wise; if I mention flowers then it is him you have to thank.
My recent experiences have caused me to think long and hard about how we challenge practice in our schools. Ofsted’s current preoccupation with looking at books as a proxy for whether students are making progress has intrigued me, and I am divided on the matter if I’m honest.
On the one hand, it feels like Ofsted are clutching at straws to find a reliable source of evidence for judging teaching and learning. Now that graded lesson observations have been shown to be invalid and unreliable, and it has been accepted that you can’t ‘see’ progress within a twenty-minute observation, Ofsted’s attention has turned to assessing ‘progress over time’ by looking at the only consistently written artefact available: books.
What exactly do they hope to find? Firstly, it would appear, the inspectors will look at the ‘quality of work’ being produced. A student’s work might be looked at ‘over time’ (i.e. front of book and back of book) to see improvement. This appears to be a highly dubious endeavour and seems to result in drawing conclusions about whether students are ‘acting on feedback’, particularly in regard to whether they punctuate correctly.
A student’s work might also be compared to another ‘similar’ student (which appears to mean someone who got the same KS2 score a number of years before). Popular is the practice of looking at similar students who are or aren’t on free school meals to see whether they are achieving similar standards. Again, pretty dubious practice, particularly given the sample size.
Finally, some attention is given to ‘feedback’ (which means marking in this context) and whether this leads to improvement. There is some caution by inspectors about this as they have been told only to comment on whether this feedback adheres to the school’s marking policy. However, the fact that they can’t comment on what they think about marking in the report doesn’t seem to stop them from drawing conclusions which colour their overall judgement (in our case, they didn’t actually ask for the marking policy anyway… for all they knew it could have said ‘don’t mark books’).
Any evidence seen that comments by the teacher lead to students doing something different (‘better’) is seen as proof that ‘feedback is effective’. Worse, cursory marking, rather than just being seen as encouraging or a waste of teachers’ time, is ‘evidence’ that feedback is ineffective. This scant evidence is scaled up to a conclusion about the department or even the whole school.
As no observation of the teaching is taking place (even when they are in the classroom, the inspectors are looking at books or talking to students, not paying any heed to what the teacher is doing), no judgement is made about whether there is effective verbal feedback, which would probably be more effective, and certainly more efficient, than hours spent writing detailed comments in books.
However, I do have sympathy for Ofsted as they search for some reliable way of judging whether teaching is effective and whether students are making progress. They can’t even rely on data any longer as inspectors are told to treat the school’s progress data with caution (rightly; who is to know what dodgy assessment methodology it is based on?).
I questioned @harfordsean (Ofsted’s National Director) via Twitter regarding how confident he was about books being a reliable proxy for learning. His reply was to say that they are currently researching just that question. Well, let’s hope the research says it is reliable, for the sake of the schools whose judgement depends on this assumption in the meantime.
What should be our response?
Given the scrutiny described above, and how much there is riding on it, it is not surprising that schools adopt practices which ensure that Ofsted see what they are looking for. Dr Allen explains the effects of this coercive force well in the paper mentioned above by employing DiMaggio and Powell’s work on ‘institutional isomorphism’. I won’t repeat the argument here (read the excellent paper), but will summarise it as essentially saying that schools prioritise ‘looking good’ over ‘being good’.
I am interested in this response and contend that it detracts from a more appropriate response. The attempt to look good results, I would argue, in action which I will call superficial challenge. The attempt to actually be better involves a process which I will call deep challenge.
How might a school respond to the criticism that written feedback is not leading to actions by the students to improve their work? Usually, schools will focus attention on the type of written feedback given and what the students do with it. Anyone who works in schools will recognise this response, which takes forms such as triple-impact marking and D.I.R.T. and usually involves different colour pens.
This challenge to professional practice says ‘don’t do this; do this instead’. It is superficial but, if adopted, will result in generating the type of behaviours desired.
Here comes the flower
What superficial challenge fails to do is to question why the teacher is adopting the practices in question. By failing to address this question, the intervention addresses the symptoms but not the underlying condition.
At this point I need to draw on Daisy Christodoulou’s book, referenced above, and employ Steve’s flower analogy. Given we are both reading the book I think the author’s name must have subconsciously inspired the choice of biological inspiration. For this reason, I will adopt the daisy as my flower analogy of choice.
When we look at a daisy our eyes are drawn to the flower and its beautiful petals; it is designed to catch our attention. However, the flower sits on a stalk and the stalk grows from the roots below, which are hidden from view. If the petals are wilting, the problem probably isn’t with the petals but somewhere further down, probably in the roots; a lack of water or nutrients perhaps?
The visible aspects of teaching, like feedback, also rely on an entire support system. If feedback is poor then the problem probably isn’t just with the feedback; it is a sign of a more deep-rooted problem.
Daisy (as in the author, not the flower) outlines some fundamental flaws in our common understanding of learning which might explain our ‘poor feedback’ symptoms.
One such flaw is a poorly designed assessment. Without a reliable and accurate assessment methodology, feedback is bound to be poor; the teacher simply lacks the data and insight to identify what the next steps in the student’s learning should be. In my experience, there are significant weaknesses in assessment methodologies in schools, which have been made worse by externally imposed assessment systems and national strategies that have de-skilled the workforce.
Sitting below the ‘stem’ of assessment is the ‘root’ which is the model of progression. This too is often not well thought through. The model of progression is a clear conception of the ‘current state’ and ‘goal state’ desired at the end of a period of learning, and the route by which the student will get there. These terms were coined by Wiliam in his book ‘Embedded formative assessment’.
Without a model of progression, an effective assessment methodology cannot be designed, without which useful feedback will not be forthcoming. Wiliam says it better:
“To be effective as a recipe for future action, the future action must be designed so as to progress learning. In other words, the feedback must embody a model of progression…”
So poor feedback may be a symptom of something after all. Perhaps this is a proxy we should pay attention to?
Rather than this being a proxy for learning over time, however, the best we can conclude from poor feedback is that there may be a more deep-rooted problem (although it could indicate a lazy teacher). What it doesn’t tell us is what that problem may be; that would need much more exploration than an Ofsted inspector has time for.
What we do know is that superficial challenge will not be effective in resolving the issue; it treats the symptom, not the underlying condition.
Deep challenge would involve digging down to the roots and uncovering whether the model of progression, assessment expertise, subject knowledge or something else is at fault. It would not be a quick fix, or solvable by a one-size-fits-all policy. It would take time, trust, expertise and sensitivity to explore the roots of professional practice to find out why the flower is wilting.
None of this seems very compatible with the high-stakes, quick turn-around accountability culture of the English education system.
My conclusion is that I prefer to nourish the roots of professional learning rather than adopting superficial methods to make us ‘look good’. I just need to find some quality manure.