What every teacher needs to know about assessment (Q2)

The first post covered the first half of “What every teacher needs to know about assessment.” Unsurprisingly, this post covers the second half: what assessment practices schools should do ‘more’ or ‘less’ of.

What assessment practices should schools do more of, and what should they do less of?

Fewer decision-making tests. Perhaps fewer gate-keeping tests. What Dr Ruth Dann (Associate Professor, Curriculum, Pedagogy and Assessment at UCL Institute of Education) framed as bottleneck tests: tests that result in some children falling back into the bottle.

We need good tests.

Assessment is not a one-off event. A test is a sample; assessment is an aggregation. (Dr Becky Allen has shared some clear posts that support this point.)

With regards to “more of,” the focus should be on generating evidence. Incidentally, Christine has banned the term “marking” with her pre-service teachers. As a biologist,

…marking is what animals do, to define their territory. And we shouldn’t be doing that to children’s books.

I can see a few assessment leads borrowing that soundbite.

Phil Stock questioned the purpose and usefulness of heavy summative assessment schedules. Schools should be doing less summative assessment: not just the summative assessment process itself, but the writing, marking and acting upon of assessments. Fewer assessments would enable greater focus on the quality of those that remain, with time and resource better directed towards developing formative assessment practice.

Let’s not overlook the growing swell of criticism directed towards testing at 16, now that education is required until 18.

Is once a year sufficient? It is an interesting debate. (What impact would that have on accountability?)

Amie Barr echoed much of Phil’s commentary, focusing on marking, feedback and grades: less marking; more precise, directed feedback; and directed time for students to respond, redrafting or revisiting topics to improve the quality of their work.

Jon Hutchinson was sequential in his explanation.

  1. Set a clear learning goal.
  2. Give pupils a chance to put that learning into practice.
  3. Check to see how pupils performed on the task.
  4. Respond to this information.

One final point: for Jon, formal assessment should not be constructed to try to prove learning, but rather to reveal misconceptions.

Becky Allen argues for more aggregated, extended tests, as she has in her recent blog posts.

She was more animated when proposing that teachers use assessment to help students learn (metacognitively and motivationally), and as a prerequisite for teachers to co-plan the curriculum: know what is to be assessed, and you know what you need to teach.

What do we want students to learn? What does success look like? What do we need students to know before this lesson starts? What do we want students to remember in one month, six months, one year, five years? Importantly, constructing, planning and writing assessments offers an explicit opportunity to define what we want and intend the curriculum to do. (Paraphrased from Becky Allen’s comments.)

Clare Sealy walked the line between fine-grained assessment and unwieldy tracking and monitoring systems. Not an easy task.

Clare’s ‘less of’ was more of a stop list. More definitive. Don’t run tests that don’t match the curriculum.

  1. Anything that purports to ‘measure’ progress.
  2. Anything that averages results of different children together.
  3. Anything that simplifies rich but unwieldy, complex data into a neat chart with coloured boxes because it gives the illusion of rigour.
  4. Reading comprehension tests, as these are a test of general knowledge rather than of reading.
  5. Tests that don’t reflect the curriculum. (ebook)

One final swing of the sword: stop being led by accountability. There it is again. The elephant in the room.

The session close validated Prof Becky Allen’s opening statements. Assessment is as diverse as the environments in which it is employed: age range, domain and contact time all influence what, how and when we, as teachers, choose to assess. The panelists’ responses were framed by their expertise, their subject domain, the age range of the students they taught or led, their contact time with those students and, to a lesser extent, the role they fulfil. All influencing what, how, when and why they assess.

The second observation: an increased reference to assessment methodologies, particularly comparative judgement.

Finally, the panelists are all connected teachers. Does that imply there is a benefit to being connected?
