In a humble cafe, at the corner of a street, sat a young man and his young lady.
In front of the young man sat a pile of papers containing years of carefully collected data. The young man’s task was to determine the best restaurant in the country. He had set about this task diligently by visiting every establishment in the land to sample its food. With him he took the tools of his trade: test tubes in which he broke down the food into its nutritional parts; a clipboard on which he recorded every aspect of the experience with a score from 1 to 10; and a thermometer to take the temperature at which the food was served.
And at last, having visited every restaurant he could find and having assembled the most comprehensive database of restaurant metrics ever compiled, he could determine which of these thousands of restaurants offered the very finest dining experience.
And yet, the young man was in despair. How should he begin to make sense of the data he had collected?
The young lady, seeing his distress, placed the pile of papers to one side and took his hands in hers. She distracted him with memories of the times they had spent together and her hopes for the days they had ahead of them. Soon they were lost in conversation. When their food arrived they ate with relish, hardly noticing that the chips were a little undercooked and that the salad had seen better days. Time passed imperceptibly, so lost were they in each other’s company.
As the last customers left and the cafe owner brought an unsolicited bill over to their table, the young lady gazed into the young man’s eyes and told him that she had had the most wonderful night.
‘But,’ the young man replied, ‘the food was mediocre, it was so chilly we kept our coats on the whole time, and the cafe owner is… well, impatient to say the least. Overall, this place would be lucky to get more than 4 out of 10.’
And then, in a devastating and liberating moment, the young man realised his error. As they left the cafe, he deposited his papers in a nearby bin for they did not tell him anything he needed to know.
This story reminds us that the quality of something is often more than the sum of its parts. When we isolate the components of human experience and measure them against a scale, we lose something. Accuracy and objectivity may be gained at the expense of meaning.
The difficulty is, of course, that the ‘real world’ is not entirely formal, in the sense that we cannot model it with precise mathematical relationships. The best we can hope for is engineering approximations.
Martin John Sheppard
The story also warns us against romanticising. When the young man throws away his carefully collected data he is being foolhardy. The data may tell him nothing about falling in love, but it may stop someone wasting their money on a terrible meal.
In schools, we have an uneasy relationship with measurement. On the one hand, measurement is problematic: the things we value in education often do not lend themselves to measurement, and the things we can measure distort our values (known as the McNamara Fallacy, explained brilliantly here). We want to know if children are learning, but when we attempt to measure this we affect what and how they are taught. We want to know if children are being taught well, but can only do this by inventing proxy measures.
On the other hand, without measurement we leave education to chance, not knowing which practices may offer a better education. We can hope that every student falls in love with our subject, but it is hopelessly romantic to believe this is all we should aim to achieve.
In walking this line, it is hardly surprising that we make mistakes.
In recent years, measurement has become a managerial tool under the banner of ‘quality assurance’. Jonathan Mountstevens writes about this here. He suggests that quality assurance, in the way it is practised in English schools, is an evaluative exercise aimed at determining how good a school is in aspects of its work. This process came to the fore as a result of an earlier version of Ofsted’s inspection framework which required headteachers to be able to present a ‘self-evaluation’ of educational standards. The tools of the trade – graded lesson observations, work scrutiny, and lots and lots of data – remain in common use in many schools.
Quality ‘measurement’ is pervasive and persistent in schools, despite Ofsted distancing itself from such practices more recently. Mountstevens makes a case, as others have, for the damage which can be inflicted by crass efforts to measure standards, and argues that these practices persist because they provide the illusion of certainty and a veneer of rigour. The alternative is to admit what we don’t know and to set out in a spirit of enquiry rather than in an attempt to control. Nothing in our system rewards this, so it is not surprising that it rarely happens.
But I would like to make the case that measurement is not the bad guy; we are just using it for the wrong purpose.
We have become very dependent on a particular form of measurement which is akin to ‘weighing’. When we weigh something we determine how heavy it is. Our reference point for this measurement is either to judge whether the item has become heavier or lighter, or to compare its weight to something else we believe should weigh a similar amount. So it is when we measure ‘standards’. Our intention is to discover whether things have got better or worse than before, or how they compare to the standard achieved elsewhere (or to some arbitrary benchmark of standards). Our intention is to determine how good something is.
What happens when we change the question from ‘how good is it?’ to simply ‘how is it?’
Setting out to learn about the nature of something, rather than with the sole intent of judging its standard, changes our relationship with the phenomenon in question. It is still an enquiry into quality, but one that asks ‘what is going on?’ rather than ‘how good is this?’ It becomes about understanding rather than judging. It assumes the object in question is complex, not simple: that it cannot be measured along one axis or quantified in a word.
When a standard is attributed (to a person, a lesson, an achievement), simple reasons often follow. Behaviour is not good in this lesson because the teacher is not controlling the class. Students are not making progress because teaching is substandard. Simple causal explanations are attractive, but as Mountstevens observes, we would be wiser to look ‘upstream’ to understand the complex reasons for what we observe. Setting out to judge is not a mindset that encourages such enquiry.
A more useful measurement metaphor may be mapping, not weighing. When we map territory we are interested in gaining a true and accurate picture. We set out with a range of tools, each designed to chart what we observe in particular ways. We know that there is not just one useful map we could draw; mapping comes in various forms, each revealing a different aspect of the environment. Road maps show us how to drive from one place to another, weather maps tell us what conditions we are likely to experience, and topographical maps reveal the physical features we will pass. There is no definitive map, only different ways of viewing a complex reality.
Maps are tools. They help us achieve our goals. Unlike the weighing scales that give us absolute answers, mapping encourages us to explore.
But there is a more important reason why school leaders should spend less time judging standards and more time enquiring into how things are: they invariably have a loose grip on the reality of what happens in their schools.
[Teacher Tapp chart: headteachers’, senior/middle leaders’ and teachers’ views of their school’s behaviour policy]
Perhaps the biggest risk for headteachers is not being ignorant about standards, but being out of touch with the reality of day to day life in their schools.
The data above from Teacher Tapp shows just how out of touch leaders can be, in this case in relation to the views of teachers on the ground. We have to be careful with this data because the headteachers and senior/middle leaders surveyed will not all be in the same schools as the teachers who responded, so it does not reveal specific in-school gaps in perception. However, the overall impression is that headteachers have a far rosier view of the firmness, clarity, fairness, usability and effectiveness of their behaviour management policies than teachers do!
These results are no surprise to me at all. The system is configured to create, among those responsible for improving it, the illusion that things are better than they are. For a full explanation of why, see our analysis of the ‘imagined school’ problem here, or this excellent post on why the truth doesn’t rise to the top! For our purposes in this post, I want to focus on three factors which particularly promote this illusion:
Observer effects
When a headteacher walks into a classroom, students and teachers invariably change their behaviour. This makes it very difficult to get first-hand experience of what classrooms across the school are typically like. The problem is made worse by formal lesson observations, which encourage performance, particularly when there is a graded judgement at the end. To overcome this, schools examine lots of metrics (such as the number of discredits or detentions given, or how often students are removed from lessons). These metrics provide a measure of what behaviour is like, but not a feel for it. And the data is distorted by teachers’ inconsistency in applying policy, perhaps because they know that leaders are examining the data and making judgements.
High-stakes accountability
Feeling judged is one thing, but when our career progression and even our pay depend on it, that judgement has all the more influence over our behaviour. At its worst, the pressure of accountability can promote masking behaviours from those not wishing to draw attention to their ‘mistakes’ or weaknesses. Schools which direct a great deal of attention towards measuring standards and adopt unyielding accountability approaches breed systemic incentives to create illusions of success. But it is not just punitive accountability systems which promote this behaviour. There is a natural human tendency to seek approval, so organisations must actively encourage members to show vulnerability by admitting error, and must reward this behaviour. Achieving this requires high levels of psychological safety, which will not happen by accident.
An adaptive response to being overwhelmed
You may think school leaders would want an accurate picture of what is going on in their schools. But being a school leader can be overwhelming. The scale of the challenge, and the limits on control over hundreds of independently minded and unpredictable individuals, present an undesirable reality, one which many would rather not confront. It is an adaptive response to be, at least to some extent, in denial about quite how difficult the task ahead of you is. To invite this reality in requires significant mental strength. If we want school leaders to get to know their schools warts and all, we must create psychological safety for them too!
How do we put school leaders in touch with the reality of their schools? The answer is we need measurement, but in the sense of mapping, not weighing.
We don’t need perfect formulas, but imperfect heuristics to get the information we need.
The Valuable Dev blog
What maps might be useful in putting school leaders in touch with the reality of their schools?
When leaders attempt to measure standards they are attracted by the visible and known. When employing measurement to create accurate and multi-layered maps of a school we should be more interested in the things that remain unknown and rarely seen. Mapping should expose the hidden places and blind spots.
For example, we might check whether our school is a safe place by observing how students conduct themselves when they move around the school – do we hear abusive or derogatory language, is there pushing and shoving in corridors, and how many bullying incidents are reported? From this data we can draw conclusions about the standards of safe conduct. However, what about the unseen places where undesirable behaviours are most likely to happen – at the far corner of the school field at lunch, in the student toilets, or at the back of the classroom? Students know these places. They can provide valuable mapping data which will enable us not to judge safety, but to make the school safer.
In the wake of the Everyone’s Invited website coming to the attention of schools, and the revealing research carried out by Ofsted into sexual harassment and abuse in schools, we asked students about where they felt least safe in our school. The results opened our eyes to the places and times where students felt most vulnerable: the narrow corridor where others were uncomfortably close, the stairwell which was not ‘blocked in’, or on the way to school when van drivers would make inappropriate remarks. Mapping the territory was much more important than gathering evidence to prove how safe our school was.
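By way of illustration, here is a minimal sketch of how student-reported mapping data like this might be tallied. The file name and column heading are invented for the example, not taken from our survey:

```python
import csv
from collections import Counter

# Tally where students report feeling least safe.
# 'safety_survey.csv' and its 'least_safe_place' column are
# hypothetical; substitute whatever your survey tool exports.
counts = Counter()
with open("safety_survey.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["least_safe_place"].strip().lower()] += 1

# Print the most frequently named places.
for place, n in counts.most_common(10):
    print(f"{place}: {n} students")
```

The output that matters is the list of places itself, pointing to where action is needed, rather than a single safety score.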
In what other ways might we map the reality of our schools? Perhaps we could understand work pressures better by analysing staff absence patterns or missed deadlines. In a school in which I used to work, where student behaviour could be difficult, senior leaders developed a ‘hotspots’ map of the more challenging classes so that they could drop in and offer support. One of my colleagues at that school had his own way of mapping the early signs of burnout. He would habitually walk the school each Friday before leaving work to see which staff were left mopping up from a hectic week and check in with them about how they were. None of these ‘measures’ are perfect in isolation, but together they may start to reveal the hidden truths about what it is like to work in a school. In a variety of ways, school leaders can develop a mental map of work pressure which helps them improve the design and timing of school improvement efforts.
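A ‘hotspots’ map could be sketched in the same spirit. Assuming a simple behaviour incident log (the file and field names here are invented), grouping incidents by class and period suggests where leaders might usefully drop in:

```python
import csv
from collections import Counter

# Count incidents by (class, period) to find pressure points where
# senior leaders might offer support. 'incident_log.csv' and its
# 'class_group' / 'period' fields are hypothetical.
hotspots = Counter()
with open("incident_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        hotspots[(row["class_group"], row["period"])] += 1

# A rough map of where support is needed, not a judgement of any teacher.
for (class_group, period), n in hotspots.most_common(5):
    print(f"{class_group}, period {period}: {n} incidents")
```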
Getting a true sense of what the school is like takes time, and challenging one’s own assumptions and biases is hard. But a school leader who works on making their imagined school as close to reality as it can be will focus their efforts on the things which most need their attention, make better decisions about what needs to be done, and know sooner if their plans are not having the desired effect. Spending time weighing standards not only comes at the cost of knowing the school better; it might actually lead to a false belief that things are better than they are. Furthermore, if your time is spent judging, you will have less time to do the things that improve standards.
Of course we want to know ‘how good’ our school is. But, paradoxically, by setting out to measure this we may become less informed. That is not to say we should just rely on intuitions or feelings. We should be deliberate about what data we collect and intentionally build a multi-layered and nuanced understanding of ‘quality’. Thinking of this as map-making rather than weighing will be more likely to lead us, I suggest, to greater insights and therefore more productive action.
Weighing gives the (false) impression of certainty while mapping tentatively seeks to reduce uncertainty and ignorance. Weighing provides answers while mapping raises more questions. Weighing prioritises the easily measurable and the visible while mapping makes us inquisitive about uncharted regions. Weighing uses a single scale to measure a single factor while mapping positions a feature in the context of its environment.
Weighing seeks uniformity; mapping looks for exceptions. We should set out to find diversity and hope to learn from it. Weighing tells us if a single metric is getting better or worse. Mapping gives us a helicopter view of the landscape so that we may navigate it better. One is judging, the other learning.
The young man in our story needs to learn that there is a middle way between judging everything with simple metrics and lapsing into subjective experientialism. We don’t have to rate, label and compare things to know whether they have innate quality. If we want school leaders to truly understand their schools and make better decisions about how to improve them, perhaps we should also stop weighing them and their schools like pigs on a scale. Getting a measure of our schools isn’t as simple as some think.