I have a problem with boxes. No, it’s not because it’s getting close to Christmas and every delivery person in the UK is expecting me to keep hold of packages for my absent neighbours. It’s because there appears to be genuine confusion over the boxes that don’t exist in our heads.
Let me explain.
Cognitive theories involving boxes (and quite a few arrows) have dominated my learning and teaching since I became involved in psychology in the mid-90s.
Multi-component models of memory, for example, assume the flow of information from one component to another. Information can flow back and forth between short-term (or working) memory and long-term memory. A very simplistic explanation is that we manipulate information in working memory and store it permanently in long-term memory.
But we also access information in long-term memory, temporarily transfer it to working memory, do something with it and then re-consolidate the slightly altered information back into long-term memory (the same is true for models of selective attention).
There’s an awful lot of to-ing and fro-ing - and so many bloomin’ boxes.
Of course, the boxes and arrows are part of a conceptual model - you won’t find boxes and arrows in the brain no matter how hard you search. This means it’s pretty difficult to map these models onto specific brain regions, but then that’s not really what the models are for. However, not all memory models are quite so boxy, indeed contemporary accounts appear to be moving away from the box and arrow models, at least a little, while neuroscience is filling in many of the gaps.
Nelson Cowan’s Embedded Processes Model of memory differs from the usual box and arrow models, as does his greater emphasis on the role of attention. Cowan’s theory can be explained a little like this:
Imagine all the memories you have as stars in a pitch-black universe. At the moment, you’re not bringing any of them to mind, so they are quite faint against the dark background. Now, imagine the faint one on the far right of the black canvas is a memory you wish to recall; as you focus your attention on it, it becomes brighter and clearer, as if you’ve shone a spotlight onto it. In the language of Cowan’s model, what you’ve done is activate a portion of long-term memory, transforming it temporarily into an item in working memory.
Working memory, therefore, is activated long-term memory, held in our focus of attention while we do whatever it is we need to do with it. Rather than components being discrete boxes, they are embedded within each other, hence the Embedded Processes Model. The flexibility of the model and its emphasis on interconnectivity also support recent research suggesting that working memory may not always be necessary for the retrieval of long-term memories (Li, Theeuwes & Wang, 2022). If future evidence supports this notion, many of the principles on which we base our assumptions about learning may begin to crumble.
It’s clear that, although most theoretical models of memory share many basic principles, they don’t agree on everything. These models are certainly influential - but they are still models that attempt to explain how memory works by employing graphic representations that do not exist in the world of the living brain.
Guy Claxton (2021), drawing on earlier comments from Ulric Neisser (one of the founders of cognitive psychology), is critical of this ‘boxology.’ According to Claxton, ‘The boxes and the arrows beloved of the information processing generation look like they might be due for a trip to the dump - and that… includes long-term and short-term memory’ (p101).
I’m not as convinced as Claxton about the demise of long- and short-term memory. Admittedly, however, advances in brain imaging, along with a greater understanding of how memory and learning work, might lead to a radical re-think in the not-too-distant future. So Claxton might be right to be critical.
And he’s in good company. According to Cowan, ‘Practically nobody literally believes that there are boxes inside the head doing the work,’ even poking fun at his own model by adding, ‘let alone boxes within boxes.’
I’m not convinced of this either. It’s not that I’m claiming that people genuinely believe in boxes in the head, only that it’s often hard for some to get their head around how these models map onto actual brains - because they generally don’t.
Yet Cowan goes on to defend these models in terms of their ability to graphically represent processes that are often abstract, relating them to domains that are easier to think about such as plumbing, electricity and computer science (Logie, Camos and Cowan, 2021, p57).
So boxes serve a purpose, even though they explain little about what is actually going on in the brain. What the models do attempt to explain are the findings from laboratory research and case studies of people with severe memory impairments (individuals like ‘HM’ and ‘KF’). How long these models remain relevant perhaps depends on how quickly we move towards brain-based models of memory and learning.