Exam Results, Fairness, & Trust: A (Final?) Response

By Karen Lancaster

This post is available as a PDF here, and as an audio recording below:

This is the fourth post in a PESGB Blog series focussed on educational assessment and the UK Government’s handling of England’s national examinations system in the wake of Covid-19. The first post, by Karen Lancaster, is available here; the second, by Mary Richardson, is here; the third, by Emile Bojesen, is here.


Mary Richardson and Emile Bojesen have raised some interesting points in response to my argument that awarding students teacher-predicted A-Level grades in 2020 was unfair. Richardson suggests that politics was the problem in 2020, and Bojesen examines the myth of educational meritocracy. Exams will be cancelled again this year, and so there will be (at least) two ‘pandemic cohorts’ who benefit from having teacher-ascribed grades. Below I shall defend my original suggestion that teacher-ascribed A-Level grades are unfair to non-pandemic cohorts, and I will suggest that pandemic cohorts’ qualifications are not equivalent to the A-Levels of non-pandemic cohorts.

As Bojesen points out, and as others have suggested, the strong correlation between family wealth and educational attainment often results in the reproduction of class inequality. Although the education system may not have done enough to narrow (or remove) the gap between the attainment of rich and poor, at least the A-Level exams involved anonymised marking by someone who did not know the students, resulting in a fair assessment of what the student demonstrated in the exam.

I am not, however, arguing that exams are necessarily the best way to assess students’ abilities. A-Level exams test recall and application of knowledge under pressurised conditions: they only demonstrate who is good at A-Level exams. So long as students, employers, and universities grasp that, everyone is singing from the same hymn-sheet. I am not entirely convinced that exams are a good indicator of student ability or university-readiness – but I am convinced that they are impartial: everybody is assessed in the same way.

In my original post, my primary objection was not against teacher-ascribed grades per se, but against awarding overly-optimistic teacher-predicted grades and supposing that they were equivalent to exam-attained grades. The former is far more generous than the latter. However, even if teachers aren’t overly optimistic again with their grade “predictions” this year, there will still be no parity between pandemic cohorts’ grades and non-pandemic cohorts’ grades. If, in a normal year, we decided to assess half the A-Level students by an exam, and the other half were simply awarded a grade based on the opinion of their teacher, this would be grossly unfair. Yet this is the situation when we compare the pre-pandemic cohorts’ results with pandemic cohorts’ results. The same qualifications (A-Levels) have been awarded to pandemic and non-pandemic cohorts, despite the two groups not having been assessed against the same criteria. Exams are not necessarily a better measure of ability than teachers’ opinions, but they are different. It is a mistake to call the pandemic cohorts’ qualifications ‘A-Levels’ when they assess ability in such disparate ways compared to previous years.

As if this weren’t enough to be concerned about, consider the fact that predicted grades (just like those awarded in 2020) do not, in many cases, go through any process of standardisation or moderation. Teachers can predict as they see fit. Teacher-ascribed grades can be a worthwhile and just alternative to exams. However, such a system can only be robust if teachers are appropriately trained and prepared to ascribe grades. For centre-assessed coursework, there are robust processes of standardisation, moderation, and external verification, making it fairly resistant to conscious or unconscious teacher bias. However, teachers were not adequately prepared to ascribe grades in 2020, and there was no standardisation or moderation – instead, teachers (over-)predicted grades as they had done in previous years, and these aspirational wishes were universally granted. It is worth noting that many PGCE courses suitable for A-Level teaching do not contain any modules on grading examinations or predicting grades, meaning that a large number of A-Level teachers have no formal assessment training. Experience can go some way towards mitigating this lack of formal training, but with large numbers of teachers quitting each year (only to be replaced by NQTs), experience is in short supply too.

Mary Richardson hits the nail on the head when she writes that it is our confidence in educational assessment which matters, moving forwards. Some students in 2020 were given grades far beyond what they would have achieved had they sat the exams. These students may have progressed to Russell Group universities and may ultimately live more successful lives than they would have done if they had had to sit their exams. This may give us confidence in teacher-ascribed grades, as they may give greater opportunities to students who would otherwise have left education straight after their A-Levels. If this is true, then I am right to claim that dishing out teacher-ascribed grades to pandemic cohorts gives them significant advantages over pre-pandemic cohorts.

Only time will tell whether the pandemic cohorts go on to achieve greater, equal, or lesser success at university than the pre-pandemic cohorts. Their success post-A-Level will be the barometer by which we can see whether the anonymised A-Level examination system really was a good indicator of university-readiness. If it turns out that teacher-ascribed grades are a better measure of university-readiness than exam results, then we should change our assessment systems accordingly. However, we will still require formalised standardisation and moderation procedures to guard against the potential bias and widespread over-prediction which epitomised the 2020 A-Level debacle.

About the Author

Karen Lancaster

PhD student, University of Nottingham

Karen is a PhD Philosophy student at the University of Nottingham. Her thesis examines some of the ethical issues surrounding the use of care robots in residential homes for the elderly, including whether a robot can care, and how robots should conceive of consent. She is interested more broadly in social, political, and moral philosophy, and philosophical issues arising from our use of emerging technologies. Prior to PhD study, Karen taught A-Level Philosophy and Sociology for 13 years.

Website: https://karenlancaster.weebly.com/

By this Author