More on MARCKS and concurrent metacognition
Metacognitive monitoring concerns learners’ ability to assess the progress of their learning. The accuracy of their metacognitive monitoring influences study choices and, consequently, how well information is learned and retained.
Learners misinterpret the momentary accessibility of knowledge as a marker of long-term storage strength. Swayed by the concurrent illusions of familiarity and fluency, pupils prefer massed and blocked practice and restudy over more effective strategies. Moreover, these beliefs and feelings can be difficult to overcome, with students “insensitive to their own performance” even when presented with contrary results.
Thus, identifying and understanding the conditions that promote accurate metacognition is critical for promoting efficient and effective learning. ‘End of Year Assessments’ offer an excellent opportunity to encourage retrospective judgements of learning, or, even better, concurrent judgements and marking with MARCKS.
Let’s tackle those terms.
Judgements of learning are either predictions of future performance (prospective) or reflections on how well something was thought to have been learnt (retrospective). Both are helpful, because they can help learners identify topics that need further clarification. Monitoring accuracy improves with practice and feedback on those judgements.
Improvement is exactly what is required, as regrettably, “students’ predictions are almost always higher than the grade they earned and this was particularly true for low-performing students” (Miller & Geraci, 2011). Knowing that, as teachers, is helpful.
Moreover, the accuracy of judgements of learning can play a large role in determining how adaptive (or maladaptive) study decisions end up being (Kornell & Metcalfe, 2006).
One of the biggest metacognitive monitoring inaccuracies surrounds learners’ perception of the effectiveness and efficiency of restudying. It is one of the key reasons that test-enhanced learning is so valuable.
A recent area of interest is metacognitive concurrent judgements.
In between prospective and retrospective judgements are real-time concurrent judgements (although technically retrospective, they are taken immediately after each item or question, as opposed to capturing beliefs formed before or after an exam).
In-the-moment judgements
Nietfeld et al. (2005) found that real-time (concurrent) confidence judgements were strongly associated with objective accuracy in a multiple-choice test. Couchman et al. (2016) reported that confidence ratings for each individual question accurately predicted performance and were a much better decisional guide than retrospective judgements; as such, the best strategy for learning is to “record confidence, as a decision is being made, and use that information when reviewing”.
And while we are here, on self-assessment: two meta-analyses (Graham et al., 2015; Sanchez et al., 2017) demonstrated a positive association between self-assessment and learning. On average, “students who engaged in self-grading performed better on subsequent tests than did students who did not”.
Tell your pupils this and develop a self-assessment routine for marking assessments. That is exactly what Paul Spenceley discussed with Ollie Lovell here.
First, let me remind you that, broadly speaking, metacognition is our beliefs about learning, the monitoring of one’s own learning (and adjusting accordingly), and the decisions about when, what, why, and how to study. Here we are using assessment marking as a form of monitoring.
Paul talks about asking his pupils to forecast their individual question performance at the end of the exam (a retrospective judgement). It is worth noting that concurrent (in-the-moment) judgements are even more accurate, and this task could easily be carried out during the exam. (It takes minimal time.)
The important metacognitive process is comparing the forecast with the actual marks awarded. Not only is that comparison itself informative, but the discussion it potentially elicits is rich and directive. He talks about the outcomes on the podcast. Second, in writing this follow-up, it could also encourage a quick check against the MARCKS acronym itself, particularly “S – Statement per mark: how many marks did I target? How many statements did I make?”
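The forecast-versus-actual comparison can be sketched in a few lines of code. This is a minimal illustration, not anything from the podcast: the question labels, forecasts, and marks are invented, and a positive gap is read as overconfidence.

```python
def calibration_report(forecasts, awarded):
    """Compare a pupil's per-question forecasts with the marks awarded.

    Both arguments are dicts mapping question label -> marks.
    Returns (question, forecast, awarded, gap) tuples, sorted so the
    largest overestimates (forecast above the mark earned) come first.
    """
    rows = []
    for question, predicted in forecasts.items():
        actual = awarded[question]
        rows.append((question, predicted, actual, predicted - actual))
    # Positive gap = overconfidence; negative = underconfidence
    return sorted(rows, key=lambda row: row[3], reverse=True)


# Hypothetical pupil: forecasts made during the exam vs. marks awarded
forecasts = {"Q1": 3, "Q2": 4, "Q3": 2}
awarded = {"Q1": 1, "Q2": 4, "Q3": 2}

for question, forecast, actual, gap in calibration_report(forecasts, awarded):
    label = "overestimate" if gap > 0 else ("underestimate" if gap < 0 else "accurate")
    print(f"{question}: forecast {forecast}, awarded {actual} ({label})")
```

Sorting by the size of the overestimate puts the questions most worth discussing at the top, which is where the rich conversation tends to start.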
“Both the teacher and students can find out so much information. It is absolutely astonishing.” – ERRR #064, Paul Spenceley on Formative Assessment
Coded marking with ‘MARCKS’ (1h25:10)
When marking, add a code.
- M – Maths/Graphs or Marks per min
- A – Application of knowledge
- R – not-Reading the question (circle the command word, underline the keyword)
- C – Clarity (subject-specific vocab / drop it)
- K – Knowledge
- S – Statement per mark
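One practical payoff of coding every script is that the codes can be tallied across a class to show which error types dominate. A hypothetical sketch, assuming each script's codes are jotted down as a simple string of letters (the data here is invented):

```python
from collections import Counter

# The MARCKS codes as listed above
MARCKS = {
    "M": "Maths/Graphs or Marks per min",
    "A": "Application of knowledge",
    "R": "not-Reading the question",
    "C": "Clarity",
    "K": "Knowledge",
    "S": "Statement per mark",
}


def tally_codes(scripts):
    """scripts: one string of codes per marked paper, e.g. "RRK".

    Returns (code, count) pairs, most frequent first.
    """
    counts = Counter(code for script in scripts for code in script)
    return counts.most_common()


# Codes noted on three hypothetical pupils' papers
scripts = ["RRK", "RS", "KKA"]
for code, count in tally_codes(scripts):
    print(f"{code} – {MARCKS[code]}: {count}")
```

If ‘R’ tops the tally, the next lesson starts with reading the question, not with reteaching content.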
Get more from your ‘End of Year Assessment’ marking. Get more learning. Get better decision-making for the future. Get recipients doing more with the assessment than the marking donor.
Couchman, J. J., Miller, N. E., Zmuda, S. J., Feather, K., & Schwartzmeyer, T. (2016). The instinct fallacy: The metacognition of answering and revising during college exams. Metacognition and Learning, 11(2), 171-185.
Graham, S., Hebert, M., & Harris, K. R. (2015). Formative assessment and writing. The Elementary School Journal, 115, 523–547.
Nietfeld, J. L., Cao, L., & Osborne, J. W. (2005). Metacognitive monitoring accuracy and student performance in the postsecondary classroom. The Journal of Experimental Education, 74(1), 7–28.
Miller, T. M., & Geraci, L. (2011). Training metacognition in the classroom: The influence of incentives and feedback on exam predictions. Metacognition and Learning, 6. https://doi.org/10.1007/s11409-011-9083-7
Sanchez, C. E., Atkinson, K. M., Koenka, A. C., Moshontz, H., & Cooper, H. (2017). Self-grading and peer-grading for formative and summative assessments in 3rd through 12th grade classrooms: A meta-analysis. Journal of Educational Psychology, 109(8), 1049.