In the SRS community, there are a few commonly repeated bits of advice that fall into the category of "received wisdom" - things that are assumed to be true, but may not be. This article calls some of these things out, and explains why we think that they are wrong.
The article has two sections: outright myths, and received wisdom that is more nuanced. The myths are beliefs that are just plain wrong. We have to say so definitively, because they have had profoundly negative consequences - discouraging, or even preventing, the majority of SRS-aware people from successfully learning via the technique.
This myth - that students can and should make their own SRS decks - gets a whole article on its own. TL;DR:
We elaborate on each of these in Students Cannot Make Their Own SRS Decks.
You can broadly group learning tasks into a few categories:
Associative learning tasks are about learning lists of things (e.g. vocab, foreign languages, Presidents, state capitals...). Concept-intensive tasks are subjects where you're mostly learning concepts (e.g. just about all of STEM). Procedural/process tasks are about learning, um, procedures or processes (e.g. how to query a database in Python, or how the Krebs Cycle works).
One of the reasons this myth persists is that associative tasks are really easy to write decks for. (As a consequence, it's often easy to find free decks for associative learning tasks.)
Concept-intensive subjects, OTOH, are really hard to write decks for, because teaching and learning them is inherently hard, regardless of the medium or technique.
Procedural/process tasks fall somewhere in the middle, sharing characteristics of both associative and concept-intensive subjects.
All of these tasks can be accomplished via SRS, but they require different techniques. As task complexity increases, you need a much deeper understanding of SRS as an instructional medium and, ideally, a deep understanding of how students learn in general.
These are things that are more nuanced - they depend on context. They apply in some situations, but not in others.
The reasoning behind the rule against True/False cards is the same as the reasoning for not using True/False questions on classroom tests - the student can guess the answer with a 50% probability of being correct. After repeated viewings, the student will also come to recognize the question itself, without necessarily achieving either understanding or retention.
So far, so good.
The nuance here is that there are lots of situations where a student needs to decide definitively whether something is, in fact, true or false. For example, present the student with a Python class definition and ask "Is this a valid class definition?" If it is not valid, the student must be able to recognize that. This is a situation that a programmer may encounter daily while debugging their own, or someone else's, code.
So the problem is not with the question pattern, but with the implementation. It is not sufficient to ask whether something is true or false; you must also test whether the student can justify their answer. Continuing with the prior example, the card should ask "Is this a valid class definition? Why or why not?" This forces the student to articulate why it is, or is not, valid.
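As a minimal sketch of what such a card might look like (the class and the specific flaw are our own hypothetical choices, not taken from any particular deck), the flawed definition is kept inside a string precisely because it deliberately will not compile:

```python
# A hypothetical True/False-style card, stored as a (front, back) pair.
card_front = '''Is this a valid class definition? Why or why not?

class Stack:
    def __init__(self):
        self.items = []

    def push(self, item)
        self.items.append(item)
'''

card_back = ("No. The line 'def push(self, item)' is missing its trailing "
             "colon, so Python raises a SyntaxError as soon as the module "
             "containing this class is imported.")
```

The point is that the back of the card demands the justification, not just the yes/no answer.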
(There is almost never a place for multiple-choice cards. We would really like to say, definitively, that there is never a place, but there may be an edge case that we have not yet considered.)
The reasoning behind the rule that cards should be presented in random order is that clustering related cards (for example, successively quizzing all of the bones in the foot) is less effective than seeing them in random order. Randomizing the order forces the student to jump around between different areas of a subject, which prevents them from using knowledge of prior cards as context for answering the current card.
There is indeed a benefit to this rule when learning an associative subject (vocab, foreign language, lists of things) via SRS.
But concept-intensive subjects need to be learned in a structured, ordered way. If understanding concept C depends on a prior understanding of concepts A and B, you simply must teach/learn A and B before you can teach/learn C.
So SRS decks for concept-intensive subjects must be structured, and the cards must be delivered to the student in the necessary order.
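One way to picture this ordering - purely as an illustrative sketch, with hypothetical concept names and a made-up dependency map - is as a topological sort of the concept dependency graph:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependency map: each concept lists the concepts that must
# be understood before it can be taught.
dependencies = {
    "A": [],          # a foundational concept with no prerequisites
    "B": [],          # another foundational concept
    "C": ["A", "B"],  # C can only be taught once A and B are understood
}

# static_order() yields concepts with prerequisites first - the order in
# which their cards should be introduced to the student.
introduction_order = list(TopologicalSorter(dependencies).static_order())
print(introduction_order)  # e.g. ['A', 'B', 'C'] (A and B may swap)
```

However the ordering is derived, the deck must encode it, rather than relying on fully randomized presentation.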
Michael Reichner is the Founder of SRSoterica (this site), and the author of spaced repetition system (SRS) flashcard decks for absorbing the concepts that underlie complex programming subjects.
Discuss this article on the site that it was linked from, or at the SRSoterica subreddit.
Last updated: April 18, 2020
SRSoterica makes peerless Spaced Repetition System (SRS) flashcard decks for learning complex technology subjects. With our Python decks, you don't just memorize facts; you learn the concepts behind those facts. You learn when, why and how to use Python functionality in everyday situations.
Our products are targeted at the Absorption stage of the learning process. We bring subject matter expertise, SRS expertise and instructional design expertise to every deck that we craft. You will come to understand Python at a much deeper level than most other programmers.