Comparable outcomes, comparable performance

With exam season approaching, back-channel conversations will no doubt soon revisit the issue of comparable outcomes and comparable performance, so I thought I would refresh my thinking and understanding. I am sure it will be a topic of conversation at next month's SISRA data conference, and again after each results season, ad infinitum.

Here is how I had defined comparable outcomes and comparable performance – in my own thinking and in the ‘Making sense of data’ info document.

Comparable Outcomes – Proportion of students achieving each grade stays the same.

This approach means that if the cohort of students taking the qualification is similar in terms of ability, then we would expect the outcomes – the proportions of students achieving each grade – to be similar. The aim of this approach is to minimise any advantage or disadvantage for students who are the first to sit a new qualification, given the difficulty of maintaining standards through a period of change. – Ofqual

Comparable Performance – Demand of the knowledge, skills and attitudes that students must show in exams stays the same.

Here is what Ofqual’s regulatory approach is designed to achieve over time:

  • assessments are valid, reliable, comparable, manageable and minimise bias
  • the content of assessments which comprise knowledge, skills and understanding is fit for the statutory or specified purposes of National Assessments and ensures appropriate coverage of the curriculum and, for the Early Years Foundation Stage, also addresses children’s attitudes and dispositions
  • confidence in the reported outcomes of National Assessments
  • so far as possible, all pupils should be able to access assessment arrangements on an equal basis
  • responsible bodies’ processes lead to continuous quality improvement and ensure valid outcomes at each point of the national assessment delivery process.

It would therefore follow that if exam boards set standards following those principles, and if the group of students entering a qualification is similar in ability to the previous year's cohort, then the results ought to be similar to the previous year's results? You would think so.

As far back as 2001/02, exam boards and regulators were discussing how to ensure standards remained constant as reformed A levels were being sat for the first time. That said, according to Glenys Stacey, chief regulator at Ofqual, the watchdog “started using” comparable outcomes for AS levels only in 2010, with the approach applied to new A levels and GCSEs a year later. Grade inflation promptly slowed in AS/A level qualifications. And so it continued: “we expect that outcomes for any particular subject in summer 2012 will be very similar to the outcomes for that subject in summer 2011.” (Setting and maintaining standards in GCSE and GCE qualifications in summer 2012 – letter to awarding bodies*) The pressures of maintaining standards?

As I understand it, the comparable outcomes approach inevitably leads to a norm-referenced approach to grading, which ultimately means students are not awarded the grade their ability justifies so much as the grade that membership of a particular cohort determines. Surely most school leaders would favour meritocracy over demography?
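To make that mechanism concrete, here is a toy sketch (in Python, with entirely hypothetical numbers) of what pegging outcomes to the previous year's distribution looks like. It simply places grade boundaries at whatever marks reproduce last year's cumulative percentages; the real awarding process also weighs prior attainment and examiner judgement, but the cohort-determines-grade effect is the same.

```python
# Toy illustration of "comparable outcomes" grading. Everything here is
# hypothetical: real awarding uses prior-attainment-based predictions and
# examiner judgement, not bare percentile matching.
import numpy as np

def set_grade_boundaries(marks, last_years_cumulative_pct):
    """Place boundaries so the proportion of the cohort at or above each
    grade matches last year's cumulative percentages."""
    marks = np.asarray(marks, dtype=float)
    boundaries = {}
    for grade, pct in last_years_cumulative_pct.items():
        # The mark reached by the top pct% of this year's cohort becomes
        # the boundary for that grade.
        boundaries[grade] = float(np.percentile(marks, 100 - pct))
    return boundaries

# Hypothetical 2011 outcomes: 5% at A*, 20% at A or above, 45% at B or above...
last_year = {"A*": 5, "A": 20, "B": 45, "C": 70}

rng = np.random.default_rng(42)
marks_2012 = rng.normal(60, 12, size=1000)  # toy mark distribution

print(set_grade_boundaries(marks_2012, last_year))
# However the cohort scores, roughly 5% get A*, 20% get A or above, and so
# on: a student's grade depends on their rank in the cohort, not their mark.
```

Note how the boundaries move with the marks: raise every student's mark by ten and the boundaries rise by ten too, leaving the grade distribution untouched.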

Furthermore, for any school leader flirting with floor standards, which assume that large national year-on-year improvements in results are possible, comparable outcomes are somewhat contradictory. Not to mention that we face Ofsted inspections in which exam results, and the expectation of continuous improvement, weigh on an inspector's overall verdict.

Here’s the rub: there are good reasons to aim for comparable outcomes in the first year of a new syllabus. Students taking a new specification in any particular year will be competing with those from other years for access to higher education and employment. It would give some students an undeserved advantage if they got better results simply because they were taking an exam their teachers were used to preparing them for. This business of exam regulation is not a simple process.

This August

I first encountered the “comparable debate” following the 2012 English #GCSEfiasco. New to leading school data, I found it a baptism of numbers. Four months of professional focus, summed up in a solitary sentence in the judicial review – June 2013.

106.  Ofqual also acknowledges that the comparable outcomes approach “is not well understood by schools and colleges, and not generally trusted.”[146] – Education Committee – First Report – 2012 GCSE English results

Working closely with our English department, I remain unsettled about the English qualifications. Re-emphasising the terms of Ofqual’s regulatory responsibility (valid, reliable, comparable), the summer 2014 English GCSEs will now be linear, with speaking and listening marks removed. The design is meant to achieve comparable outcomes for students, yet it is once again open to criticisms of inconsistency or blurring. As a school leader, this concerns me.

Next August?

“I am opposed to norm-referencing,” Gove said when discussing his GCSE reform plans earlier this year. Comparable outcomes, currently used to set standards, are off the menu. The Government has expressly demanded that outcomes not be comparable. Gove made it clear that he wants the new, more demanding qualifications to be able to recognise the rising standards he expects to come from improved teaching. Given what I have learnt this morning, this is by no means an easy task.

* Where key examination criteria were met.
