Self-assessment – yes and…

It is rarely a good start when, in the opening ten minutes, you read

“Without exception, reviews of self-assessment (Sargeant, 2008; Brown and Harris, 2013; Panadero et al., 2016a) call for clearer definitions: What is self-assessment, and what is not?”

A Critical Review of Research on Student Self-Assessment – Heidi L. Andrade (2019).

Typically, I have used self-assessment as part of the teaching-learning-meso-assessment cycle (a fortnightly responsive teaching task with feedback tagged to the criterion, which students then reflect upon), much as Brown and Harris (2013) define it: “a descriptive and evaluative act carried out by the student concerning his or her own work and academic abilities” (p. 368). More recently, I have focused my investigations on the validity of those students’ self-assessments, or “confidence,” frequently referred to as “judgement-based learning.”

We know, for example, that complex tasks or “stuff” are more difficult to assess than easy ones. Students’ self-assessments can be self-serving when attached to grades, whereas they are relatively consistent when framed within a learning-oriented purpose. We know that males are more likely to overrate and females to underrate, that more academically competent learners tend to be more realistic and consistent, and that older learners are more conservative (Butler, 2018). Familiarity, expertise and competence are self-assessment’s friends.

As Professor Gavin Brown states, the more experience you have, the less optimistic and the more realistic your self-assessments are.

“The less you know, the more you think you know.”

Critically, I wanted to know how valid these judgements are.

I found myself swimming in muddy waters: two statements stuck out like sore thumbs. The first was a challenge to the assumption that “accuracy is necessary for self-assessment to be useful.” The second was the question: “Do students whose self-assessments match the valid and reliable judgments of expert raters (hence my use of the term accuracy) make better decisions about what they need to do to deepen their learning and improve their work?”

Or should we, as educators, focus instead on supporting the subsequent decisions and actions of learners? Is there, or is there not, a relationship between accuracy in formative self-assessment, students’ subsequent study and revision behaviors, and their learning?

Is self-assessment time well spent?

There is a wealth of research supporting a positive relationship between self-assessment and learning: on average, “students who engaged in self-grading performed better on subsequent tests than did students who did not” (Sanchez et al., 2017). Is it time well spent? I still think it depends on the teacher’s support for the process and guidance on what to do with the assessments.

Onwards. Still looking for evidence on the validity, or accuracy, or confidence of students’ self-assessments.

Brown, G. T., and Harris, L. R. (2013). “Student self-assessment,” in Sage Handbook of Research on Classroom Assessment, ed J. H. McMillan (Los Angeles, CA: Sage), 367–393. doi: 10.4135/9781452218649.n21
Butler, Y. G. (2018). “Young learners’ processes and rationales for responding to self-assessment items: cases for generic can-do and five-point Likert-type formats,” in Useful Assessment and Evaluation in Language Education, eds J. Davis et al. (Washington, DC: Georgetown University Press), 21–39. doi: 10.2307/j.ctvvngrq.5
Sanchez, C. E., Atkinson, K. M., Koenka, A. C., Moshontz, H., and Cooper, H. (2017). Self-grading and peer-grading for formative and summative assessments in 3rd through 12th grade classrooms: a meta-analysis. J. Educ. Psychol. 109, 1049–1066. doi: 10.1037/edu0000190
