A ‘good’ reason for students to read the entire exam paper, as you asked them to.

Having searched through a truckload of chapters on metacognition, looking to deepen my understanding of the accuracy of confidence-based assessment, I have found the past few months enlightening. Much of that has to do with the generosity of Dr Justin Couchman, Associate Professor of Psychology at Albright College.

Following a series of emails and a handful of research paper recommendations, here is what I now know, and what I think could have a significant impact on students’ learning, their preparation for exams, and their performance in them.

First, a quick summary of the three main confidence-based assessments or metacognitive judgements (prospective, concurrent and retrospective), defined by when these judgements are made in the learning process.

  • Prospective judgements are elicited before studying or testing, and provide insight into how students self-assess their ability to perform, retrieve information from memory, and manage the time and effort needed for the task.
  • Concurrent judgements are elicited during the task at hand and, because they are recorded during the task, they are usually fine-grained and refer to specific items within the task, eg item-by-item or question-by-question judgements.
  • Finally, retrospective judgements are elicited after the task and therefore usually refer to the whole task; they are sometimes referred to as composite judgements.

Why are we interested in metacognitive judgements?

Metacognitive judgements can be predictors of academic success; however, learners are often unable to monitor their own knowledge accurately, and a range of studies have shown that the type of judgement used can also affect predictions of academic performance. This self-regulatory ability can be refined and improved, most notably through self-assessment.

The other reference term you need is calibration. Calibration describes the relation between metacognitive judgements and actual performance – in other words, the monitoring accuracy of one’s metacognition. Calibration can refer to absolute or relative accuracy. Absolute accuracy measures the agreement between metacognitive judgements and performance, while relative accuracy measures how well a set of judgements discriminates between correct and incorrect responses – whether higher confidence really does go with better performance.
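To make the distinction concrete with made-up numbers: if a student’s confidence ratings across a paper average 75% but their actual mark works out at 60%, the absolute accuracy gap is 15 percentage points – they are overconfident overall. Relative accuracy instead asks whether the questions they rated most confidently were also the ones they actually answered correctly, regardless of any overall over- or under-confidence.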

That is a lot of information to digest – prospective, concurrent and retrospective judgements, and calibration.

The focus of this post from here on in is concurrent metacognition and absolute accuracy measures.

Knowing how to assess and manage one’s own learning is critical for becoming an efficient and effective learner. However, research on self-regulated learning suggests that learners often develop incorrect beliefs about how people learn best and are prone to misassessing and mismanaging their own learning (Bjork et al., 2013). Rivers (2020) reported that 58% of students adopt ineffective, low-utility learning techniques (rereading 43%, copying notes 11%, highlighting 4%). And that is why I am interested in this area of cognitive psychology. Self-regulation plays an ever-growing role in the academic lives of our pupils as they move towards high-stakes terminal exams – at the end of secondary school and onwards. (I would be interested to hear from Primary colleagues also.)

Students often gauge their performance before and after an exam, usually in the form of rough grade estimates or general feelings. Are these estimates accurate? Should they form the basis for decisions about study time, test-taking strategies, revisions, subject mastery, or even general competence? Cue Dr Couchman.

Dr Couchman tracked the metacognitions of undergraduates, also recording their performance on each question, and noted any revisions or possible revisions. Beliefs formed after the exams were poor predictors of performance. In contrast, real-time or concurrent metacognitive monitoring (measured by confidence ratings for each individual question) “accurately predicted performance and were a much better decisional guide” than retrospective judgements (those made after the exam).

The simplest of simple revision strategies

Students are always advised to read the entire paper – IMHO they rarely do. Now we have more evidence to leverage.

Advise students to rate the individual questions or items on an exam paper. (Now, I have discussed metacognitive scales more than anyone should have, due to my involvement with RememberMore – two, four or five points, sliding scales, criterion-referenced… for ease and speed, I would recommend a simple four-point scale.)

After reading and rating the individual questions, students answer:

  • the high confidence, high mark questions or items first, then
  • the high confidence, low mark questions, then
  • the low confidence, low mark questions, and finally
  • the low confidence, high mark questions.

More than happy to discuss or debate that order.
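For the RememberMore-minded, here is a rough sketch in Python of that ordering – hypothetical questions, a four-point confidence scale and an arbitrary mark threshold, purely to illustrate the bucketing rather than prescribe it.

# Hypothetical exam questions: marks available and a 1-4 confidence rating
# recorded while reading the whole paper.
questions = [
    {"name": "Q1", "marks": 8, "confidence": 4},
    {"name": "Q2", "marks": 2, "confidence": 4},
    {"name": "Q3", "marks": 2, "confidence": 1},
    {"name": "Q4", "marks": 8, "confidence": 1},
]

def answer_priority(q):
    """Return 0-3: the bucket order suggested above (0 is answered first)."""
    high_confidence = q["confidence"] >= 3
    high_mark = q["marks"] >= 5          # arbitrary threshold for this sketch
    if high_confidence and high_mark:
        return 0                         # high confidence, high mark
    if high_confidence:
        return 1                         # high confidence, low mark
    if not high_mark:
        return 2                         # low confidence, low mark
    return 3                             # low confidence, high mark

for q in sorted(questions, key=answer_priority):
    print(q["name"], "-", q["marks"], "marks, confidence", q["confidence"])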

Following the mock exam, it is then very easy to calibrate the concurrent metacognitive ratings against actual performance. If nothing else, it would provide a very insightful post-mock discussion.
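As a sketch of that calculation (assuming item-by-item confidence has been converted to a 0-1 scale and each item is simply marked right or wrong), something like the following gives both an absolute accuracy gap and a simple relative accuracy (Goodman-Kruskal gamma):

# Hypothetical item-by-item data from one student's mock.
confidence = [0.9, 0.75, 0.5, 0.25, 1.0]   # concurrent ratings, 0-1 scale
correct    = [1,   1,    0,   1,    1]      # 1 = item answered correctly

# Absolute accuracy: how close the judgements sit to actual performance
# (a smaller gap means better calibration).
absolute_gap = sum(abs(c - k) for c, k in zip(confidence, correct)) / len(correct)

# Relative accuracy: do the higher-confidence items tend to be the correct ones?
# Goodman-Kruskal gamma over all item pairs (+1 = perfect discrimination).
concordant = discordant = 0
for i in range(len(correct)):
    for j in range(i + 1, len(correct)):
        if correct[i] != correct[j] and confidence[i] != confidence[j]:
            if (confidence[i] > confidence[j]) == (correct[i] > correct[j]):
                concordant += 1
            else:
                discordant += 1
gamma = (concordant - discordant) / (concordant + discordant) if (concordant + discordant) else 0.0

print(f"absolute accuracy gap: {absolute_gap:.2f}   relative accuracy (gamma): {gamma:.2f}")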

The best strategy for learning is to “record confidence, as a decision is being made, and use that information when reviewing.” Developing this metacognitive accuracy is important. Coutinho et al. (2020) demonstrated that students who scored higher in monitoring accuracy performed better on the exam than those who scored lower. Why? “It prompted the students to engage in an analysis of knowledge. They evaluated their knowledge.”

Couchman, J. J., Miller, N. E., Zmuda, S. J., Feather, K., & Schwartzmeyer, T. (2016). The instinct fallacy: The metacognition of answering and revising during college exams. Metacognition and Learning, 11(2), 171-185.
Coutinho, M., Papanastasiou, E., & Couchman, J. J. (2020). Metacognitive monitoring in test-taking situations: A cross-cultural comparison of college students. International Journal of Instruction, 13. https://doi.org/10.29333/iji.2020.13127a
Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417-444.
