Low, low, low-stakes bests desirable difficulties

Students have limited time to study, and thus the total time it costs is a vital consideration. In fact, we found that practicing efficiently had an even larger effect on memory than spacing usually does!

Dr Luke Eglington

Last week I happened to stumble upon the work of Luke Eglington. If you are interested in retrieval practice or test-enhanced learning, his work is fascinating. In a nutshell, Luke develops and tests computational models of learning to estimate the optimal difficulty at which to practise retrieval, so as to maximise learning of complex, educationally relevant content.

It is a big nutshell.

Before we go any further: I am not an expert in computational modelling, nor am I a complete novice. I have read widely on the personalisation of retrieval practice and I am able to access both the theoretical and the applied concepts of test-enhanced learning. So here goes.

Optimizing Student Practice Scheduling

Buckets full of research have shown that spacing retrieval practice over time can improve later memory performance. Eglington's work argues that most of this one-size-fits-all research ignores, or at least is insensitive to, the time costs of retrieval. What we might refer to as the efficiencies of learning:

  • easier items are recalled faster, permitting more trials, though each trial may provide less learning,
  • more practice leads to faster responding, which in turn permits more practice,
  • harder items may provide more learning, but take longer (especially if the student fails to remember), permitting fewer trials,
  • harder items are more likely to be answered incorrectly, which is frequently more time consuming due to reviewing corrective feedback, permitting fewer trials.
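To make that tradeoff concrete, here is a toy calculation (the helper function and every number in it are my own illustration, not from Eglington's model): the expected learning gain per minute of practice, when failure trials may teach more per trial but cost far more time in corrective feedback.

```python
def gain_per_minute(p_success, gain_success, gain_failure,
                    secs_success, secs_failure):
    """Expected learning gain per minute of practice.

    Failure trials may teach more (corrective feedback) but take
    longer, so fewer of them fit into the same study time.
    """
    expected_gain = p_success * gain_success + (1 - p_success) * gain_failure
    expected_secs = p_success * secs_success + (1 - p_success) * secs_failure
    return 60 * expected_gain / expected_secs

# Easy items: 90% success and quick trials; hard items: 50% success,
# with each failure costing 20 s of feedback (illustrative numbers only).
easy = gain_per_minute(0.9, 1.0, 1.5, 5, 20)
hard = gain_per_minute(0.5, 1.0, 1.5, 5, 20)
```

On these made-up numbers the easier items win on efficiency (roughly 9.7 versus 6.0 gain per minute), even though each failure "teaches" more per trial, which is the intuition behind the bullets above.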

In teaching with RememberMore, lower failure rates following initial encoding meant we could practise more retrieval cards in the same allotted time. Relearning is highly efficient. Previously, Vaughn et al. (2016) demonstrated that just seven minutes was required to relearn 70 pre-learned word pairs, and Rawson et al. (2018) commented that "Relearning had pronounced effects on long-term retention with a relatively minimal cost in terms of additional practice trials." So Eglington is in good company.

In class, we found that, following regular practice, pupils could identify 10 from a possible 34 characters from Shakespeare's plays comfortably in less than 20 seconds, in a one-pupil-versus-the-clock verbal quiz time trial.

Lastly, as any teacher will tell you, addressing incorrect answers is time consuming (Eglington does not address motivation in the paper).

What amount of difficulty balances speed, learning gains, and failure risk?

His work uses spacing schedules to manipulate item difficulty in paired-associate (Japanese–English) learning. It also investigated "how much difficulty should be imposed on the learner, and how can we enforce a particular difficulty level?" To determine what difficulty was optimal, Eglington developed a quantitative model of learning to track student learning that accounted for the effects of test-enhanced learning and spacing.

Low failure rates struck the best balance between successes (which can be efficient) and failures (which can be time consuming), with the model generating an expanding schedule in the simulation unique to student and item attributes. The amount of difficulty that is desirable was less than typically thought, with up to 40% more items recalled in conditions where practice was optimally scheduled.

How low? An optimal efficiency threshold of ≥0.8 provided 39–55% better memory at final test than the next best conventional schedule.
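As a rough sketch of what threshold-based scheduling might look like, here is a minimal toy assuming an exponential forgetting curve. The function names, the data structure, and the way the 0.8 threshold is applied are all my own illustration, not Eglington's actual model:

```python
import math

def predicted_recall(strength, elapsed):
    """Toy exponential forgetting curve: p = exp(-elapsed / strength)."""
    return math.exp(-elapsed / strength)

def next_item(items, now, threshold=0.8):
    """Select the item to practise next.

    Prefer the item whose predicted recall has fallen furthest below
    the threshold; if every item is still above it, reinforce the
    weakest (lowest-strength) item instead.
    """
    scored = [(predicted_recall(it["strength"], now - it["last_seen"]), it)
              for it in items]
    below = [pair for pair in scored if pair[0] < threshold]
    if below:
        return min(below, key=lambda pair: pair[0])[1]
    return min(items, key=lambda it: it["strength"])
```

Because each successful practice would raise an item's strength, selecting at a high threshold naturally yields an expanding schedule: well-learnt items take ever longer to drift below 0.8, so they come up less and less often.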

Of course, with test-enhanced learning, it is rarely simple. There are always going to be moderators.

Moderators

The optimal difficulty threshold differs across learning contexts and where learners are on their journey of expertise. He was also keen to point out that:

…the optimal difficulty likely varies according to the type of content and how feedback is implemented.

Sometimes, corrective feedback may be especially beneficial for mastering a topic, so getting it wrong may be more efficient. Metacognition, reflecting on why you got something wrong, matters to differing degrees depending on the topic.

Finally, unlike word pairs, learning from one example may help "transfer" over to learning another, or it may impede it (what we might refer to as retrieval-induced interference). Of course, low, low failure rates deter interference: at any given moment, most items that could interfere with retrieval are already well learnt, which is not necessarily the case in traditional practice schedules.

My thanks to Dr Luke Eglington @L_Eggo_ for his conversation. You can read more about his fascinating work here.

Takeaway

And it is a big takeaway. As teachers, do we consider the economic impact of pupils getting questions wrong?

Eglington, L.G., Pavlik Jr, P.I. Optimizing practice scheduling requires quantitative tracking of individual item performance. npj Sci. Learn. 5, 15 (2020). https://doi.org/10.1038/s41539-020-00074-4

Yan, V.X., Eglington, L.G., Garcia, M.A. (2020). Learning better, learning more: The benefits of expanded retrieval practice. Journal of Applied Research in Memory and Cognition. https://doi.org/10.1016/j.jarmac.2020.03.002
