I have been interested and invested in assessment for as long as I have been a teacher, initially as part of the “assessment, marking and feedback” debate that encircles pedagogical practice. The tendrils of assessment are vast: the purpose of assessment itself; reliability and validity; moderation and standardisation practice; curriculum planning and the balance of teaching and assessment; marking-feedback cycles; the role of metacognition; the role of students in assessment; the use of gap analysis by teachers; forms of assessment (long form, short form, MCQs, comparative analysis, evidence based, verbal); assessment cycles and retrieval practice; building momentum and motivation through assessment (success-motivation-success). More recently I have been reviewing assessment as learning, pre-testing and retrieval practice within a lesson cycle, and spaced-repetition algorithms such as those used by the Anki app.
Moving into school leadership, I became interested in assessment as a core component of “teaching and learning” and of the leadership of teaching, learning (quality assurance) and reporting: staff workloads; the use of prior attainment, target grades, and class and year-group forecasts; and, as part of school improvement, school priorities and the targeting of support and intervention. At whole-school level, and across a MAT or group of schools, that means assessment validity and reliability (forecasts and predictions), moderation and standardisation practice, and comparability; at government level, accountability, performance measures and their unintended consequences. I am also interested in global comparisons, the OECD and PISA – when there is time and space to think about such things.
Assessment as part of reporting – providing parents with information with which they can support their child’s learning – remains a professional interest. Recent presentations from Dr Becky Allen and conversations with Matthew Benyohai on whole-school standardised assessment have got me thinking very hard about how, what and when we assess, and what we report. What is more, on Question Level Analysis (QLA) Matthew and I have different views, and that is fertile ground for debate.
Question Level Analysis (QLA), or “gap analysis,” is now available from many learning platforms, exam services and AI tutoring tools. QLA explores and presents the analysis of students’ responses to individual questions, typically against the cohort mean. I am not advocating QLA on single questions as a diagnostic tool; sub-sets of questions are more reliable, as of course is the overall test score itself.
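As a minimal sketch of the idea (the students, topics and 0/1 mark scheme below are illustrative, not from any particular platform), comparing each student's topic sub-set scores against the cohort mean might look like:

```python
# A minimal sketch of question-level analysis (QLA), assuming one 0/1
# mark per question. All names and data are illustrative only.

# Marks per student: one 0/1 entry per question (Q1..Q5).
marks = {
    "Student A": [1, 0, 1, 1, 0],
    "Student B": [1, 1, 0, 1, 1],
    "Student C": [0, 1, 1, 0, 0],
}

# Group questions into sub-sets (topics), since sub-sets of questions
# are more reliable diagnostically than single items.
topics = {"Algebra": [0, 1], "Geometry": [2, 3, 4]}  # question indices

def cohort_means(marks):
    """Mean score for each question across the cohort."""
    rows = list(marks.values())
    n = len(rows)
    return [sum(row[q] for row in rows) / n for q in range(len(rows[0]))]

def topic_gaps(marks, topics):
    """Each student's topic average minus the cohort topic average."""
    q_means = cohort_means(marks)
    gaps = {}
    for student, row in marks.items():
        gaps[student] = {
            topic: sum(row[q] for q in qs) / len(qs)
                   - sum(q_means[q] for q in qs) / len(qs)
            for topic, qs in topics.items()
        }
    return gaps

for student, by_topic in topic_gaps(marks, topics).items():
    print(student, {t: round(g, 2) for t, g in by_topic.items()})
```

A positive gap flags a relative strength, a negative gap a topic worth revisiting – exactly the kind of signal that is noisy for one question but usable over a sub-set.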
QLA dovetails with my interest in the design and use of diagnostic and multiple-choice questions (MCQs), and with my ambassadorial work with the exciting assessment tool QuickKey.
But what if you wish to use this form of metacognition and feedback as part of your own teaching and feedback practice? Back in 2016 I was introduced to Peter Atherton’s (@dataeducator) excellent Exam Feedback Tool #EFT. It is available for you to use too.
Following extensive trials in 2016, across various curriculum areas, we started to explore how the #EFT could be used to investigate not only QLA but also class-level analysis, teacher effectiveness and the impact of remedial interventions. Since then we have developed the #EFT to include automatically generated targets within the personalised pupil feedback sheets, directing students’ next actions. It is these metacognitive aspects of using the tool – identifying strengths and prioritising areas for improvement – that lead to outcome improvements for students and to their actively requesting feedback.
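To make the “automatically generated targets” idea concrete, here is a hedged, hypothetical sketch of turning per-topic scores into a strength and a priority action for a pupil's feedback sheet. The topic names, percentages, action bank and threshold are all illustrative assumptions; this is the general technique, not the #EFT's actual implementation.

```python
# A hypothetical sketch of auto-generating a feedback target from
# per-topic scores. Data, actions and threshold are illustrative only.

# Illustrative per-topic percentages for one student.
topic_scores = {"Algebra": 80, "Geometry": 45, "Statistics": 60}

# Illustrative bank of revision actions, one per topic.
actions = {
    "Algebra": "practise rearranging formulae",
    "Geometry": "revise angle rules and redo the geometry questions",
    "Statistics": "redo the averages questions from the mock",
}

def generate_target(topic_scores, actions, threshold=50):
    """Return a strength statement and a priority target for a student."""
    strength = max(topic_scores, key=topic_scores.get)
    weakest = min(topic_scores, key=topic_scores.get)
    if topic_scores[weakest] >= threshold:
        target = "No topic below threshold: consolidate across all topics."
    else:
        target = (f"Priority: {weakest} ({topic_scores[weakest]}%), "
                  f"{actions[weakest]}.")
    return f"Strength: {strength} ({topic_scores[strength]}%).", target

strength, target = generate_target(topic_scores, actions)
print(strength)
print(target)
```

The design point is the metacognitive pairing: every sheet names a strength as well as a priority, so the target directs action rather than simply reporting a score.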
You can review our 18 month investigation and conclusions here.
If you are interested in hearing more, join me at a conference session or access the learning resources below.
Data is not information, information is not knowledge, knowledge is not understanding, understanding is not wisdom. – Clifford Stoll
Of course, you know that already, and it is how you use the #EFT that makes it powerful.