Well, w’dya know?

It wasn’t so long ago that pupils making rapid progress in lessons was a big thing. Fueled by Ofsted’s 2012 framework, teachers were encouraged to prove this progress during lesson observations and school leaders were encouraged to go looking for it.

In 2015, ‘rapid’ was replaced by ‘sustained’ as a criterion for outstanding teaching. By this point, flight paths were de rigueur and all sorts of data trickery was employed by schools to demonstrate that students were on track.

By 2019, Ofsted’s curriculum enlightenment was in full swing, and its National Director, Sean Harford, sought to redefine progress, saying ‘By progress, we mean pupils knowing more and remembering more’. There were many reasons for this ideological swing, but fundamentally it was due to the simple observation that for something to be learned, it must be secure in long-term memory. Once this is accepted, progress within a lesson, or even over the course of weeks, is shown to be an illusion. All that is being observed, we must conclude, is a transient appearance of understanding (at best) or mimicry (at worst).

But what were pupils knowing and remembering more of? The answer: the curriculum. It was a short step therefore to proclaim that, with a Eureka-like gasp, the curriculum IS the progression model!

Of one thing we can be sure: we haven’t solved the problem of progress yet. The simplicity of the conception of progress as ‘They didn’t know this before, and now they do’ is attractive but, like all attempts to simplify a complex thing, flawed. The attacks are coming from many sides. There are those who question whether the curriculum, often a description of the declarative and procedural knowledge that pupils are expected to acquire, is reductive, and who therefore claim that our conception of progress is narrow and insufficient. There are those who critique the storage model of memory upon which this conception of progress rests, who suggest we have dismissed ‘performance’ too readily and that perhaps the ability to do something with the knowledge acquired is all we should really value and seek to measure. Then there are those who question our ability to make valid inferences about learning gains given the time and resource constraints within which schools operate.

If we step back from these arguments for a moment, we might pause to question what drives our obsession with progress in learning and whether it has been a healthy pursuit. Clearly, we want to know whether our education system is having an impact: it would be remiss of us to make no attempt to establish how much children learn as a result of going to school for 13 years or so. It is also reasonable to ask whether individual schools, teams within schools, and even individual teachers, are delivering the learning gains that we should expect. But recent history demonstrates that attempts to answer these very reasonable questions have been rather clumsy, somewhat damaging, and disputed. Has it all been worth it?

What is the alternative? Well, before progress became the education system’s drug of choice, we had attainment. Attainment – a snapshot of what a pupil knows and can do at a point in time – is often regarded as inferior to progress. Progress provides a narrative; it allows for different starting points; it conveys momentum; it suggests a possible future. By comparison, attainment appears informationally poor.

However, attainment has certain advantages over progress. Firstly, we broadly agree on what attainment is. Secondly, measuring attainment is a far, far less demanding task than measuring progress. Thirdly, attainment is what most people with a vested interest in a pupil’s learning are particularly interested in, including parents, teachers, and the pupil themselves. Fourthly, attainment doesn’t tempt us to make dodgy predictions about the future trajectory of a child’s learning but instead equips us with information upon which we can go about building that learning. Lastly, attainment is more respectful of the architecture of the curriculum. Let’s unpack these a little.

Imagine that you are tasked with assessing the attainment of pupils in your subject to establish how much they know and can do in relation to the objectives of your curriculum. You have been given a choice as to how to do this. Option 1 is to set a robust test paper. Option 2 is to collate the data from multiple assessments which have taken place over the last year. Which do you choose?

Your answer may in part depend on what subject you teach. If you teach a subject with a hierarchical knowledge structure, where students ‘get better’ at things over time, you may lean towards assessing what they know and can do right now. What they knew and could do six months ago is irrelevant, as it has been superseded by what they know and can do now. This includes how well a pupil can serve in tennis, the complexity of grammatical construction in a foreign language, their ability to hold a note in singing, the quality of their portrait drawing, the difficulty of simultaneous equations they can solve, or their reading speed. However, if the knowledge in your subject builds cumulatively, you may opt instead to take what Professor Rob Coe termed ‘multiple, inadequate glances’ over time to build a picture of the pupil’s knowledge. This approach may be better in history where pupils have studied three separate periods of history over the year, in assessing pupils’ grasp of two taught texts in English, or when you are interested in the accumulation of French vocabulary. That is not to say that a terminal assessment is not useful – we may want to assess how well pupils have made links between topics or whether they have retained knowledge, for example – but it is possible to aggregate multiple assessments, and this provides the opportunity to build a greater weight of (albeit imperfect) evidence.

Given this fundamental difference in what it means to get better in a subject, or even in different aspects of the same subject, attempts to measure attainment must be respectful of disciplinary distinctiveness.

Knowledge architecture presents a particular challenge for measuring progress in flat, cumulative subjects. If knowledge is hierarchical, we can look at the progress made between one assessment of attainment and another. For example, we could put two portrait drawings made by a student one year apart side by side and make judgements about the progression in skill. However, where knowledge accumulates in a flattish structure, we must have some idea of the starting point of the pupil in order to judge progress. In most cases, it is not sufficient to assume no prior knowledge. For example, if we teach pupils about the geography of the British Isles in Year 7, the prior knowledge of pupils will likely vary considerably, from what they have learnt in primary school, to where their families go on holiday, to whether they enjoy browsing an atlas on their parents’ bookshelf. Progress measures in such cases require a before and an after assessment, which is much more informationally demanding.

One might question, given the opportunity cost of taking time out to measure starting points in subjects with flat knowledge structures, whether it might be better to settle for just measuring attainment at the end of a period of learning rather than attempt to infer the distance travelled. After all, it is the current attainment which is of interest to the pupil, the teacher, and their parent.

This is not the only sense in which progress measures are more demanding than attainment measures. To understand what makes progress so complex, we must consider the comparative nature of attainment and progress.

Both attainment and progress measures lead us towards making first order and second order judgements about pupils. First order judgements are ‘positive’ (in the sense of describing what is going on, not making a moral judgement about it). For both attainment and progress, we make a first order judgement about what the pupil has learned in relation to the curriculum. The curriculum provides our units of measurement, e.g., they know that the Battle of Hastings was fought in 1066. Progress also makes a first order judgement about how much knowledge has been accumulated since a previous point in time, which adds the complexity described above.

Second order judgements are ‘normative’ (in that they make a moral judgement about whether the learning is ‘good enough’). Both attainment and progress enable a second order judgement of how learning compares to a pupil’s peers. For attainment, this judgement is effectively the same as the first order judgement of whether the curriculum has been learned, as the expectation is that all pupils will learn what is taught. However, for progress the comparison to peers is not the same as the comparison with the curriculum, as each pupil will have a different starting point. We therefore start to ask ‘Have pupils made equivalent progress?’: an ultimately nonsensical question. As these peer comparisons are problematic, we then invent an ‘expectation’ that is quite separate from the expectation that pupils will learn the curriculum. This supplementary expectation is personal to the pupil: have they made the progress we expect of them? We ask this because we have learnt that some pupils progress ‘more quickly’ than others, and this leads us to expect that these rates of progress are somehow a reflection of the capabilities of the pupil, and therefore predictive of future progress.

The level of complexity and the opportunities to tie ourselves up in knots as soon as we start talking about progress are obvious. It is no wonder that we have gone down various rabbit holes since measuring progress, rather than attainment, has been our goal. In order to make sense of this at a managerial level, we have invented codes, targets, abstract pathways, and crude assessment systems which have done considerable harm.

And yet, the concept of progress is incredibly important. A ‘sense of progress’ is necessary for pupils to feel motivated. For this they need clear goals and useful feedback as to whether they are moving towards these. However, a sense of progress is located within the pupil, not within a spreadsheet. Pupils know when they ‘get’ something they didn’t conceive of before, when they remember something they first encountered months before, and when they can do something that they previously imagined was beyond their ability. Our assessment system should feed this sense of progress.

We should also value the ‘personal best’ attitude that comes with progress measures. Attainment measures are harsh on those who find learning more difficult as they will almost always rank poorly against their peers. However, we should also avoid falling into the trap of expecting less of pupils merely because they have achieved less in the past. These are tricky waters to navigate.

It would be a retrograde step to turn our back on progress, but we do need to proceed with care. National curriculum levels were corrupted by false conceptions of progress; huge workload problems resulted from well-meaning initiatives such as APP; schools have been placed in special measures because progress measures were taken as a proxy for school quality rather than as an indication of cohort characteristics; many hours of management time have been wasted creating and maintaining great big spreadsheets; careers have been ended by conditional formatting rules that have turned too many cells red; pupils have been labelled as slow learners; headteachers have lost sleep over their league table position. Our definition of progress, and the industry that feeds off it, may be about to shift again. Let’s learn the lessons of the past and treat the concept of learning progression with due caution.