Baseline versus aptitude

Measuring and analysing achievement and attainment from Key Stage 2 to Key Stage 4 is one part of my role. Raising ‘achievement and progress’ is another part. (I am still not sure why we use different wording, but there you go.) Like many teachers I have used targets, which I now try to refer to as estimates, and chance graphs. What I didn’t understand was the contentious, historical and confusing background to the use of Key Stage 2 data that I could infer from the #SLTdata conversations I engaged in. I was also aware of the ongoing debate over the relative merits of Key Stage 2 data versus cognitive aptitude tests (CATs or Yellis) for setting those estimates. And further still, I have encountered the scepticism of creative and design colleagues about the use of FFT targets.

Here is what I have learnt over the past two days. To cut to the quick: first, National Curriculum (NC) levels were originally considered ‘inappropriate’ for use as baseline data, and the School Curriculum and Assessment Authority (SCAA, 1994) advised that levels should not be used in value-added studies. The DfE (1995) concluded that progress from one NC level of attainment to the next could be ‘too large a step’ to be used for accurate measurement of value added; more fine-grained measures were likely to be needed. Then, ‘right hand turn Clyde,’ and it was suggested that ‘from 1998, schools should be provided with value-added scores based on KS2 data predicting KS3 outcomes’ (SCAA, 1997, p98). In particular, professional concern focused on what was termed ‘a discontinuity in the assessment of level 4’ at the end of the two key stages (basically, a lack of agreement on what a Level 4 looked like), and on the absence of confidence bands around the reported value-added measure. Hardly a reassuring starting point, and this lack of confidence, one assumes, encouraged #SLTdata colleagues to seek alternative predictive measures, or combinations of measures. In fact, later investigation showed that averaging the test levels (KS2 APS, the average point score) gave a sufficiently fine-grained measure, and this is what we work with to date. And to be fair, there is a strong correlation reported here between KS2 APS and both KS3 and GCSE outcomes.
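As a rough illustration of why averaging helps, here is a minimal sketch of how a KS2 APS might be calculated. It assumes the old NC point-score mapping (level n worth 6n + 3 points, so level 4 = 27 points); check your own data source before relying on that mapping.

```python
# Illustrative sketch only: assumes the old NC point-score convention
# (level n -> 6n + 3 points, e.g. level 4 -> 27). Verify against your MIS.

def level_to_points(level):
    """Convert an NC level to its point score under the assumed mapping."""
    return 6 * level + 3

def ks2_aps(levels):
    """Average point score across a pupil's KS2 tests."""
    points = [level_to_points(lvl) for lvl in levels]
    return sum(points) / len(points)

# A pupil with levels 4, 5 and 4 across the three tests:
print(ks2_aps([4, 5, 4]))  # 29.0
```

The point of the averaging is visible here: two pupils who both ‘hold’ level 4 overall can still be separated (27.0 versus 29.0), which is why APS is finer-grained than a single whole level.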

In the red corner, weighing in at £400–£850 (pronounced lbs),* the current UK and LA data champion… Fischer Family… Trust. And in the blue corner, our challenger, weighing in at a confusing mix of pupil booklets, answer booklets, guides, reports and technical manuals (not to mention the cost of administering the test)… CAT3.

Of course, I am not discounting Yellis or other tools, or even academic grit measurements. Interestingly, Yellis also offers an Attitudinal Questionnaire (not unlike GL’s PASS), which covers attitudes to school, particular lessons and homework, quality of school life, feelings of fear in school, home background and support for education from parents/guardians, career plans and aspirations for the future. With Yellis, the survey costs £1.55 per pupil (subject to a minimum cost of £130 and a maximum of £395 per cohort).

Cost (computer-based) – 101+ students per year group: £370 + £2.30 per pupil

Cost (paper-based) – 101+ students per year group: £695 + £2.60 per pupil
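To make the pricing structures above easier to compare, here is a quick sketch using only the figures quoted in this post (the 180-pupil cohort is a hypothetical example, and the totals ignore VAT, staff time and LA discounts).

```python
# Sketch of the quoted pricing only; real quotes may differ.

def cat3_computer(pupils):
    """Computer-based testing: £370 flat + £2.30 per pupil (101+ pupils)."""
    return 370 + 2.30 * pupils

def cat3_paper(pupils):
    """Paper-based testing: £695 flat + £2.60 per pupil (101+ pupils)."""
    return 695 + 2.60 * pupils

def yellis_survey(pupils):
    """Yellis attitudinal survey: £1.55 per pupil, min £130, max £395."""
    return min(max(1.55 * pupils, 130), 395)

cohort = 180  # hypothetical year-group size
print(cat3_computer(cohort))  # 784.0
print(cat3_paper(cohort))     # 1163.0
print(yellis_survey(cohort))  # 279.0
```

Even on these headline numbers, the flat fee dominates for a typical year group, which is worth bearing in mind when weighing the blue corner against the red.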

I have been reading around the topic and listening to the #SLTdata conversation. The relative predictive reliability of prior attainment versus cognitive reasoning tests is unclear. As I see it, we are debating baseline attainment versus cognitive aptitude.

*The costs are for access to data and support direct from FFT. For details of costs via your LA, please contact them directly.
