Improving the quality of GCSE mathematics examinations

GCSE mathematics examinations have been criticised for comprising short, procedural questions. This has a detrimental effect on mathematics classrooms, which come to be characterised by the rote learning of facts and procedures. The main objections to using longer, more conceptual questions are that they could not be marked reliably, and that pupils would perform very poorly on them.

A recent study funded by the Nuffield Foundation and the Royal Society addressed the marking reliability objection. It tested an alternative to marking, called Adaptive Comparative Judgement (ACJ), in which examiners compared pairs of exam scripts and decided, for each pair, which candidate was the more able mathematician. The outcomes of many such pairings were then used to construct a rank order of scripts. The project showed that ACJ correlated strongly with marking, and produced very high reliabilities, even for scripts comprising long, conceptual questions.
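The step from many pairwise judgements to a single rank order can be sketched in code. ACJ rank orders are commonly fitted with a Bradley-Terry-style model; that choice, and all names below, are assumptions for illustration, as the summary above does not describe the fitting method.

```python
def bradley_terry(n_scripts, comparisons, iters=100):
    """Estimate a relative quality score for each script from pairwise outcomes.

    comparisons: list of (winner, loser) index pairs from the judges.
    Uses the classic iterative (Zermelo) update for the Bradley-Terry
    model, in which script i is judged better than script j with
    probability p_i / (p_i + p_j). The rank order of the returned
    scores is the estimated rank order of the scripts.
    """
    p = [1.0] * n_scripts
    wins = [0] * n_scripts
    for winner, _ in comparisons:
        wins[winner] += 1
    for _ in range(iters):
        new_p = []
        for i in range(n_scripts):
            # Sum 1/(p_i + p_j) over every comparison involving script i.
            denom = 0.0
            for winner, loser in comparisons:
                if i == winner:
                    denom += 1.0 / (p[i] + p[loser])
                elif i == loser:
                    denom += 1.0 / (p[i] + p[winner])
            new_p.append(wins[i] / denom if denom > 0 else p[i])
        # The scale is arbitrary; normalise so scores sum to n_scripts.
        total = sum(new_p)
        p = [x * n_scripts / total for x in new_p]
    return p


# Tiny demo: four scripts, every pair judged once, higher index always wins.
demo = [(max(a, b), min(a, b)) for a in range(4) for b in range(a + 1, 4)]
scores = bradley_terry(4, demo)
ranking = sorted(range(4), key=lambda i: scores[i])  # weakest to strongest
```

The "adaptive" part of ACJ, not shown here, is that each new pairing is chosen to be maximally informative given the current score estimates, which reduces the number of judgements needed.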

The current project, also funded by the Nuffield Foundation and the Royal Society, will test whether ACJ can be scaled up cost-effectively, and whether it can cope with poor pupil performance. Four experienced GCSE examiners have been commissioned to develop an innovative, unstructured exam paper suitable for assessment using ACJ. The paper will then be administered to 1,000 Year 11 pupils, and GCSE markers will be trained and commissioned to assess the resulting scripts using ACJ. Scalability will be investigated by comparing the costs of ACJ with the estimated costs of marking such an exam. We hypothesise that ACJ will discriminate well between pupils who perform poorly, producing a highly reliable rank order that correlates strongly with pupils' predicted grades. This would demonstrate the potential of ACJ for overcoming the "performance dip" associated with innovations in assessment.

Project details

Dr Ian Jones and Dr Matthew Inglis, Loughborough University

Grant amount and duration

21 Nov 2011 - 20 Nov 2012