I was reminded recently why my near obsession with precise definitions haunts much of my writing. Twitter kindly directed me to this paper from 2019 on teacher-led versus student-led learning. My curiosity was piqued not necessarily by the actual study but by the discussion of the term constructivism, one that I have somewhat grappled with since I first encountered it more than thirty years ago (see the extract below for some context).
At first I believed it to be a relatively straightforward idea, that learning is a process of knowledge construction. But, while I viewed constructivism as a theory of learning that invariably involved a rather tight teacher-learner dynamic, I was also aware that many others believed it to be a process whereby learners build their own knowledge through activities that are ostensibly student-centred.
Constructivism, therefore, seems to have been claimed by both those on the traditional extreme of the scale and those on the progressive extreme (although, I hasten to add, I have little love for this dichotomy - you can read my thoughts on it here). Such debates often play out on Twitter and rarely manage to resolve themselves. Indeed, much of the conflict on Twitter involving teachers and other educationalists appears to stem from differences related to a single question: What is the purpose of education? If we can’t agree on that, it’s unlikely we can agree on anything related to it.
The topic of education is fraught with such conflict, less so the topic of learning, although many remain sceptical of the role cognitive psychology (and the wider cognitive sciences) has to play in either. I understand this scepticism while not sharing it entirely (I am sceptical by nature, I think). It doesn’t help that much of the research cited by advocates of the cognitive approach to learning (myself included) has a tendency to rely on studies that involve the learning of lists of items, both real and invented, a branch of psychology generally referred to as verbal learning, a term that’s rarely used nowadays but was popular in the United States during the middle of the twentieth century.
Understandably, cognitive science in education has become synonymous with memory, yet even definitions of memory differ. I’ve lost count of the times I’ve read an education book or blog and found memory described as the residue of thought, with little further elaboration. This definition comes from psychologist Daniel Willingham, whose highly successful book Why Don’t Students Like School? is often the first port of call for teachers who want to know a little about cognitive psychology (I’d highly recommend you read it because it’s good, but don’t necessarily make it the only book on cognitive psychology you read). Of course, this quote is incomplete and Willingham goes on to explain that the more you think about something, the more likely you are to remember it. The residue of thought part is poetic and makes for a good soundbite, but it means very little without the second, explanatory note.
How, then, does this compare with other definitions of memory? Willingham’s definition is very specific and related to a very precise type of memory, that is, memory as it applies to formal learning. I can remember lots of things without thinking too much about them, and I can learn certain behaviours through implicit methods such as various types of conditioning, so perhaps this definition doesn’t describe memory more generally.
Definitions of memory
The Oxford English Dictionary defines memory as the faculty by which the mind stores and remembers information, while William James, writing in the late 1800s, defines it as knowledge of an event, or a fact, which is currently out of conscious awareness, implying that our memories of things exist even if we can’t, or have little desire to, recall them. Endel Tulving and Fergus Craik describe memory as the ability to recollect past events and bring learned facts and ideas back to mind (Tulving and Craik, 2000). Other explanations are less detailed, as in …memory means stored information: nothing more and nothing less (Murray, Wise, and Graham, 2017).
The generally agreed definition amongst researchers goes a little like this: the faculty of encoding, storing and retrieving information (Squire, 2009). This latter description includes the processes by which people remember: information is encoded (transformed into a form that can be stored), held in this newly created form, and later recalled (or retrieved). The process of encoding, storing and recalling information lends itself to the information processing tradition, likening the cognitive system to a computer and highlighting the flow of information through that system.
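Since this description literally likens the cognitive system to a computer, the analogy can be spelled out in a few lines of code. The toy sketch below is purely illustrative - the class and its methods are invented for this post, not drawn from any of the definitions above - but it captures the flow: information is transformed into a storable form, held in a store, and brought back via a cue.

```python
# A deliberately crude, invented illustration of the encode -> store -> retrieve flow.

class ToyMemory:
    def __init__(self):
        self.store = {}  # the "long-term store": cue -> encoded trace

    def encode(self, item: str) -> str:
        # "Encoding": transform the raw input into a storable form.
        # Here that just means normalising the text; real encoding is far richer.
        return item.strip().lower()

    def learn(self, cue: str, item: str) -> None:
        # "Storage": keep the encoded trace, filed under a retrieval cue.
        self.store[cue] = self.encode(item)

    def retrieve(self, cue: str):
        # "Retrieval": bring the stored trace back, if the cue matches anything stored.
        return self.store.get(cue)


memory = ToyMemory()
memory.learn("capital of France", "  PARIS ")
print(memory.retrieve("capital of France"))  # -> "paris" (the encoded form, not the raw input)
print(memory.retrieve("capital of Spain"))   # -> None (never encoded, so nothing to retrieve)
```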
Now, other explanations (often ascribed to neuroscience) suggest that this isn’t the end of the road. Once we’ve encoded and stored, there comes a period of consolidation, where new information and prior knowledge merge (this process is hypothesised to take place during sleep and wakeful rest). After consolidation the new or updated knowledge can be retrieved; however, retrieved items are vulnerable and go through a process of re-consolidation. This means that, in most cases, the re-consolidated memory isn’t the same as the retrieved memory - the act of retrieval changes it. This can be bad (such as in the creation of misconceptions and false memories) or good (in that any initial errors or misconceptions can be corrected). Re-consolidation might take place only once or several times and may even continue for years after the original event was encoded. The by-product of this memory cycle is the emergence of schemas.
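To make that cycle a little more concrete, here is another invented sketch (again, nothing in it comes from the neuroscience literature): each retrieval returns the stored trace and then re-stores a version of it, and the re-stored version need not match what was retrieved.

```python
# A toy illustration of consolidate -> retrieve -> re-consolidate:
# retrieval re-stores the trace, and the re-stored trace can differ from the retrieved one.

def retrieve_and_reconsolidate(trace, correction=None):
    """Return (what was retrieved, what gets re-stored)."""
    retrieved = trace
    # While the trace is "active" it is vulnerable: it can be corrected or distorted.
    reconsolidated = correction if correction is not None else retrieved
    return retrieved, reconsolidated


consolidated = "the battle was in 1066"  # the trace after initial consolidation

# First retrieval: no new input, so the trace is re-stored unchanged.
retrieved, consolidated = retrieve_and_reconsolidate(consolidated)

# Second retrieval: feedback arrives, and the re-stored trace now differs from
# the retrieved one - the act of retrieval has changed the memory.
retrieved, consolidated = retrieve_and_reconsolidate(
    consolidated, correction="the Battle of Hastings was in 1066"
)
print(retrieved)     # the battle was in 1066
print(consolidated)  # the Battle of Hastings was in 1066
```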
Back to the residue of thought.
In the 1970s, Fergus Craik and Robert Lockhart described memory as the by-product of processing - very similar to Willingham’s residue of thought. However, in their Levels of Processing (LoP) theory, Craik and Lockhart hypothesise that retention depends less on the time we dedicate to the material and more on how deeply we process it. Although their ideas arose from work on selective attention carried out by Anne Treisman, the theory doesn’t reject the importance of rehearsal; it does, however, imply that rehearsal is only part of the process.
But what do we mean by depth of processing?
We can think about things in different ways, or at different levels. If I were to present you with the word HOUSE, for example, you might think of it in terms of it being written in capital letters, or that it rhymes with mouse. But house is more than just a word (capitalised or not); it also means something. If, on the other hand, I presented you with the nonsense word YAT (which is, incidentally, an Ebbinghaus-type consonant-vowel-consonant trigram) you could process it in terms of it being written in capital letters, but not much else - it can only be processed in a shallow way. We can learn a string of trigrams by rote, but does this represent knowledge? HOUSE, on the other hand, has meaning and context; there are different types of house, we have episodic memories of houses and, more importantly, we can carry out elaborative interrogation on the word: what are the types of house? What is the purpose of a house? Can I draw a house? We have a house schema, into which we can assimilate new instances of house.
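As a loose illustration of the same point (the dictionary and functions below are invented for this post, not anything proposed by Craik and Lockhart), shallow processing deals only in surface features, which are available for HOUSE and YAT alike, while deep processing depends on the item hooking into existing knowledge - a schema - which YAT simply doesn’t have.

```python
# Invented sketch: surface (shallow) features versus meaning-based (deep) features.

word_knowledge = {
    # A tiny stand-in for a "house schema"; YAT has no entry because the
    # trigram carries no meaning to connect to.
    "house": {
        "is_a": "building",
        "rhymes_with": "mouse",
        "purpose": "somewhere to live",
        "examples": ["bungalow", "terraced house", "cottage"],
    },
}


def shallow_processing(item: str) -> dict:
    # Surface features only: letter case and length - possible for any string.
    return {"uppercase": item.isupper(), "length": len(item)}


def deep_processing(item: str):
    # Meaning-based features: only possible when the item connects to prior knowledge.
    return word_knowledge.get(item.lower())


for word in ("HOUSE", "YAT"):
    print(word, shallow_processing(word), deep_processing(word))
# HOUSE gets both a surface description and a meaningful entry;
# YAT can only be processed shallowly (deep_processing returns None).
```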
The popularity of the Levels of Processing approach has waxed and waned since its initial introduction, but it has remained a highly influential theory within memory research. It’s much less precise than traditional models of memory, perhaps emphasising the complexity of the human cognitive system. While the theory retains the long-term–short-term memory distinction of Atkinson and Shiffrin (and James and Galton before them), Craik and Lockhart actually view short-term working memory as activated portions of long-term memory (similar to Cowan’s Embedded Processes Model). This means that there is no box-and-arrow schematic that could adequately model their ideas in the same way as Atkinson and Shiffrin’s model or Baddeley and Hitch’s Working Memory model. This, perhaps, makes it less attractive to individuals from a non-psychology background and to those hoping that cognitive psychology can explain learning in one simple soundbite with a couple of diagrams thrown in for good measure. Intriguingly, Levels of Processing does tend to fit the evidence better than either of the two most popular memory models. It also has many more practical applications than either when it comes to learning, such as the use of elaborative interrogation.
Definitions of memory, like many definitions, are often bound up with the lens through which we wish to view what they define. We can, therefore, view cognitive load through the lens of Cognitive Load Theory (which is very precise) or a more general cognitive resources model; memory through the lens of formal learning (e.g. how we learn new things in school) or through a wider episodic framework (e.g. the processes by which we create and maintain memories of significant life events), or even schema theory (e.g. from a cognitive, self, social or socio-cultural perspective).
This does make psychology a particularly flexible discipline, but for someone overly concerned with definitions, it’s also very frustrating.
Other stuff
New on the Blog: Can drawing enhance learning?
I’m on Twitter @marcxsmith
Or my Bookshop recommended books.
More free learning resources at The Emotional Learner.
Free EBook - How To Get Stuff Done!
If you like what I do, you might like to buy me a coffee :)