Since the introduction of school league tables in 1992, the exam results ‘achieved’ by a school have grown tremendously in significance. They are now one of the main preoccupations, perhaps the main preoccupation, of most secondary schools. The objective of ‘improving results’ is embedded in the psyche of school leaders, politicians and parents: it is the defining criterion for success.
I have a document from 1987, the year I took my O levels/CSEs, which was passed on to me by my Dad, who was a governor at my school at the time. It is a report on the exam results that year. There is an overall percentage of students getting 5 or more O levels (the gateway to A levels), and a rough breakdown of the grades awarded in each subject. The analysis (if you can call it that) makes a simple comparison to the previous year, and some vague comment about how ‘pleased’ the school are with the results. There is no value-added (not a concept at that time), no comparison to other schools (not even local ones), and no concern as to whether the ‘school’s results’ were going up or down. This was an era when everyone went to their local school, there were no Ofsted grades, and if you didn’t do well then your parents assumed you weren’t that bright, or were plain lazy. I scraped my five O level ‘equivalent’ qualifications (actually 4 O levels and 1 CSE grade 1 in Drama, which was reckoned to ‘count’ as a grade C O level). However, it was made clear that I wasn’t welcome in the Sixth Form as I did not possess the academic pedigree they were looking for. ‘Bums on seats’ was not a target prior to per-pupil funding. So, I shuffled off to college with my clutch of semi-valuable qualifications.
I might be over-stating the lack of school scrutiny and responsibility placed on the shoulders of school leaders at that time, but this is how the document, combined with my own experience of schooling, presents that period of history. I suspect it is not far from the truth.
Contrast this with today’s grade-obsessed system. If you work in a secondary school, I won’t have to describe this for you, but it is worth reflecting on just how ingrained the narrative about ‘improving results’ has become. If you want evidence of this, look to our inability to comprehend examinations not going ahead this year: no published results, no comparisons, no league tables. Rather than the jubilation of being freed from these shackles, schools at every point on the spectrum of examination success are filled with anxiety and concern. For the high-performing schools, the realisation that their superiority and dominance will not be evident to the world. For the ‘struggling schools’, a frustration that their turn-around efforts won’t be evidenced and recognised. For the vast majority – those middle-ground performers who one day aspire to be among the greats, if only they could find the secret recipe – a sense of being left floating in mediocrity, just when they thought they might get their breakthrough year. Like the addict denied their hit, the more central to our identity exam results have become, the more we will perversely crave their return.
Some commentators see this period as a great opportunity: an epiphany moment when we will realise that the structures and strictures we have created must be destroyed. But who would really want to return to a 1987 level of visibility and ambition? Progress is never made by turning one’s back on the knowledge society has accrued, but can be made by a more critical engagement with this knowledge: a wise assessment of how the tools we have created might be used to build a better world.
I gave up trying to improve my school’s results a long time ago (Shhh! Don’t tell my governors). Rather than reckless abandon, this is a calculated decision. I am not in the throes of existential questioning caused by a pandemic, but the perspective afforded by current events does help clarify my thoughts somewhat, and might make them more palatable to those who see the questioning of the orthodoxy as heretical.
Let me see if I can articulate this with brevity.
Every agent should focus on the thing they have most influence over
My first argument is about the locus of control. Who has the power to make the most difference to academic achievement?
Another way of thinking about this is to ask why students get the results they do, and how much of this is within the power of the school?
Or another way is to ask what accounts for the difference in results between schools (see here)?
Whichever way you come at this question, the answer is the same: ‘school effectiveness’ is relatively unimportant in the results students achieve. Now, I want to be clear about what I am, and am not, saying here before anyone gets upset or starts arguing against something I am not claiming to be the case.
What we know is that the most significant factors in the variance between the results students get at KS4 are family background and differences between individual students (for example, prior attainment and specific learning difficulties). Of course, schools may try to mitigate these factors, but overall and over time they are only marginally successful in doing so. There are plenty of examples of individual students who break the glass ceiling with the help of brilliant teachers, and there are schools who buck the trend in terms of raising attainment for disadvantaged students, and we should learn all we can about how this has been achieved, but the fact remains that the macro picture of schools’ counteracting these effects is not encouraging. Schools are not the answer to eliminating disadvantage at a system level, no matter what politicians would like us to believe.
We also know that, after accounting for the differences caused by intake, the variation in school outcomes is caused more by variation between subjects within the school, and between the grades achieved in different subjects by individual students, than by a ‘school effect’. Now, this argument is nuanced and prone to be misunderstood, so let me be clear. Schools do have some influence over these factors, and would be far better off working at reducing in-school variation than working on general ‘school effectiveness’. If schools raised achievement in the poorest performing subjects to that of the average performing subjects in their school, and if they increased results for students in their weakest subjects to equal their average performance, then the school’s overall results would improve.
But neither of these goals is without difficulty. Consider first the variance between subjects. We might (and leaders often do) jump to the conclusion that this variance is due to the quality of teaching. However, it may also be due to the ‘difficulty’ of the subject (thereby being a function of curriculum choices), the choice of exam board, or (for optional subjects) the nature of the students who take the subject. Schools may try to raise results by influencing these factors, but is the goal of raising results a morally justifiable reason for changing the curriculum for students?
And even if the variability between subjects is due to the quality of teaching, to what extent is this a function of recruitment difficulties, the quality of training, or a deficit in CPD and guidance experienced by teachers throughout their career to date? A system in which every school is mitigating the negative effects of a failing labour market and under-investment in teacher training and career development is a grossly inefficient system.
Trying to tackle the variability of results for individual students between their different subjects is perhaps even harder, as it is likely to be a function of interest and aptitude, innate characteristics which the school has little influence over.
If you want a more quantified picture of this argument, see the FFT article linked above. If we go by these figures (and there are other sources which paint a similar picture), we can estimate that around 50% of differences between schools’ results at KS4 are accounted for by their intake profile, about 20% by variances in results at the level of the individual student, just shy of 20% by inter-subject variance, and about 12% due to ‘school level effects’.
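These rough shares can be sanity-checked with a few lines of arithmetic. The figures below are the approximate ones quoted above (‘just shy of 20%’ is taken as 19%), so this is purely illustrative, not a re-analysis of the underlying data:

```python
# Approximate shares of the variance in schools' KS4 results,
# as roughly quoted in the text (illustrative figures only).
variance_shares = {
    "intake profile": 0.50,          # family background, prior attainment
    "pupil-level variation": 0.20,   # differences between a student's subjects
    "subject-level variation": 0.19, # 'just shy of 20%' between subjects
    "school-level effects": 0.12,    # the genuine 'school effect'
}

total = sum(variance_shares.values())
school_share = variance_shares["school-level effects"]

print(f"Total accounted for: {total:.0%}")  # roughly 100% (101% after rounding)
print(f"School-level effects: {school_share:.0%}")
```

On these figures, the categories account for roughly the whole picture, and barely an eighth of the variation between schools’ results is a genuine ‘school effect’ – which is the quantitative core of the argument that follows.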
Looking at the task of raising results objectively, what ‘level’ in the system has the best chance of positively impacting exam outcomes (setting aside the fact that our current system is norm-referenced, so won’t even show national effects)? The most powerful policies are likely to be:
- Reducing social inequality generally (by far the highest impact way to raise achievement nationally)
- Improving teacher recruitment and training
- Investing in a coherent and well-resourced professional development framework for teachers
- Maintaining a high quality and fit-for-purpose qualifications framework.
Relatively small gains will be made if we expect individual schools to be the main drivers of improvement in results. Worse than that, a system in which schools see the raising of exam results as their primary purpose will mean that other things that the school can and should focus on will be down-played and neglected. There is plentiful evidence that this happens, and that it is very damaging for young people: a reduction in the number of school trips and extra-curricular opportunities, an increase in revision programmes and interventions which crowd out other activities, a deterioration in relationships, and a neglect of learning for learning’s sake. And yet, schools have an ability to impact positively on students’ lived experience of school, the quality of their peer relationships, their personal development, and the memories they form. These are concrete things – meaningful things – which it is not within the gift of those higher up the system to significantly influence. If schools don’t provide this for society’s children, who will?
If every level of the system focuses on the things it has the best chance of improving, we would see an overall improvement in educational outcomes due to a more efficient allocation of resources. There are some factors which affect exam results which are within the school’s control, but the majority are not. The school is not the most powerful agent in the system, and over-egging school level effects is counterproductive.
Exam results only really matter at thresholds and the extremes
A counter-argument to the above is to make the case that exam results really matter to students, and schools should therefore do all they can to maximise these. I would agree this is true, but in two cases: at thresholds and at the extremes.
For most schools, and for most students, there are only marginal gains to be had in improving results, and the consequences of not doing so are insignificant.
Most schools (about two-thirds) are clustered around the average for progress (between -0.5 and +0.5 Progress 8). As explained above, the primary determinant of a school’s progress score is its intake, and most secondary schools have a fairly balanced intake (due to having a comprehensive system across most of the country). These mid-range schools tend to move around the rankings and see small changes in P8 from year to year. It is rare to see a sustained increase in P8 year-on-year, and eventually schools gravitate back to their long-term average. Given the amount of time and effort schools put into ‘improving results’, they tend not to achieve much, and not for very long. Perhaps they would be better focusing on a metric they have some hope of changing?
Even if these school-level differences were entirely due to the ‘quality of education’ provided by the school (and they aren’t), how concerned should we be? Even at the extremes, students in the worst-performing school would only achieve a grade less than those in the best-performing schools. For a student achieving either a set of grade 6s or 7s, the long-term effects are negligible. It is desirable that they achieve the higher grades, but at what cost? Now remember that this difference is mostly not due to school effectiveness, but to pupil-level effects, so if this child went to another school with a higher P8 score, they would not magically get better results. We would expect their results to be impacted to only a small extent, perhaps a grade higher in a couple of subjects, if at all.
For most students, we should not be concerned about which school they are in or what this school’s P8 score is. We should focus on the child, not the school. We should also not be overly concerned about marginal gains, unless (and here is my first exception) they are close to a threshold e.g. getting a grade they need to study something they want to study post-16.
Now this is all well and good for the schools occupying the middle ground, but what about those at the lower end of the P8 rankings? There are two reasons a school may reside there: either the school is very weak or it serves a very disadvantaged community. Really weak schools should of course receive considerable attention; however, these are few and far between, and it is not the case that a ‘results focus’ is what they need in order to improve. Schools serving disadvantaged communities will also need additional resources and support, but such schools arguably have even less influence over ‘results’ than those with more balanced intakes. The low P8 score is a clustering effect: the result of a concentration of students from disadvantaged backgrounds. If you took all the disadvantaged students from a number of ‘average’ performing schools and placed them all together in one school, this school would have a poor ‘performance’. Furthermore, you would probably exacerbate the problem by having these students all in one place, rather than spread out across schools. Again, we see that the ‘causes’ of lower exam results are at the level of society and individuals. It is only within the gift of the school to influence this to a small extent, and school-level ‘performance data’ disguises this fact.
Please do not mistake this argument for complacency about improving educational outcomes for disadvantaged students: there is a moral imperative to reduce inequality of opportunity. My point is that promoting a narrative about closing the gaps in results between schools is wrong-footed. Exam results do matter (and this is my second exception) to students experiencing social disadvantage, whether they are in a ‘poor performing’ or ‘high performing’ school, but it is far from certain as to whether the power to change this is within the school’s gift.
The warping effect of targeting exam results
I’ve written elsewhere about the effects that an explicit focus on ‘improving results’ has on a school, so I will not re-hash the argument. Put simply, there is the ‘crowding out’ effect described earlier (where more achievable goals are foregone) and a distorting effect (where the daily business of the school starts to lean towards the pursuit of an abstract measure of success). Exam results are an approximation of academic success, which itself is only one (albeit important) purpose of schools. When the measure becomes the goal, all sorts of unfortunate things happen.
Ironically, focusing on meaningful things that the school has a reasonable influence over – the daily experience of school, behaviour, the quality of resources, school culture, making stuff interesting – will have a greater impact on exam results than a focus on exam results. Forget exam results: stop talking about them; don’t set targets; worry about the things within your influence. You’ll feel better for it.
Stop trying to improve your school’s exam results. In doing so, you might actually improve your school’s exam results. And if that doesn’t work, nothing within your control will.