We often don’t know how little we know. Let me explain…
On a scale of 1 to 10, how confident are you in your ability to answer the following questions (10 = very confident; 1 = I haven’t a clue):
What is the capital of France?
What is the world’s largest land mammal?
What was the middle name of Wolfgang Mozart?
What year was the Battle of Hastings?
The Velocipede was a nineteenth-century prototype of what?
Which actor starred in The Matrix?
It’s likely that some of the answers will come instantly to mind. However, some you might not know at all, while others you might know you know but still be unable to access the information - you might be able to see the face of the actor from The Matrix or list some other films they’ve been in but still fail to recall their name. In other words, there is a clear destination, even if that destination is ‘I don’t know.’
Now, again, on a scale of 1 to 10, how confident are you that you can explain the following topics:
Climate change
Vaccines
Depression
Memory
This is a different type of task because it requires you to explain rather than simply answer. You may have opted for 10 on some of these, but I suspect your confidence is lower than for the first list.
It might also surprise you to know that you’ve probably overestimated your ability to explain the topics in the second list. You may, of course, have specialist knowledge in one or more of these subject areas - I’m quite confident in my ability to explain memory (or at least specific aspects of memory) because I’ve been studying it, teaching it or writing about it for around 25 years. I thought I knew a lot about climate change, until my 76-year-old mother asked me to explain it to her and I was gobsmacked by how difficult I found the task to be. If I had to score my confidence in being able to explain climate change to my mum before I actually tried, I would probably have given myself a 7 or 8. After the attempt, this confidence would have taken a nose-dive.
This act of trying to explain a phenomenon reveals to participants how little they actually understand about the workings of that phenomenon, resulting in a pre- to post-explanation reduction in self-reported belief confidence.
The illusion of explanatory depth
In studies, most people believe they know more about a topic than they actually do - a phenomenon known as the illusion of explanatory depth. Facts (the first list) have a specific end point - we know what the answer will look like and we immediately know whether we can answer it. However, when we are expected to explain something, we often have little idea of what that explanation will look like in the end, so we overestimate our knowledge of it.
The illusion of explanatory depth is a good example of the differences between shallow and deep learning. Shallow learning creates the illusion of knowledge: we begin to believe we know more than we actually do. We certainly witness this in others, but rarely stop to think about our own levels of understanding. During the COVID-19 pandemic it did seem that there were suddenly an awful lot of virology experts, especially ones who didn’t appear to have any training or experience in virology. They truly believed that they had deep knowledge of diseases and global pandemics, vaccine manufacture and testing, yet their own understanding was far removed from that of those who had spent decades studying and researching these areas. Indeed, so high was this confidence that they would enter into arguments with people who were overwhelmingly considered to be experts.
According to Rozenblit and Keil (2002):
Laypeople rarely have to offer full explanations for most of the phenomena that they think they understand. Unlike many teachers, writers, and other professional “explainers,” laypeople rarely have cause to doubt their naïve intuitions. They believe that they can explain the world they live in fairly well. They are novices in two respects. First, they are novice “scientists”—their knowledge of most phenomena is not very deep. Second, they are novice epistemologists—their sense of the properties of knowledge itself (including how it is stored) is poor and potentially misleading (p.522).
The science of cycology
But the illusion of explanatory depth doesn’t only apply to potentially complex topics. Rebecca Lawson at the University of Liverpool asked her participants to complete a drawing of a bicycle. Over half of her volunteers were unable to complete the task correctly, mainly because they didn’t seem to know where to put the chain in relation to the pedals. Some drew the pedals attached to the front wheel, while others had the chain connected to both the front and rear wheels (see the images below, taken from Lawson’s study). It’s worth noting that over ninety per cent of the participants were very familiar with bicycles but, despite believing they knew what one looked like, they couldn’t get the drawings right.
They were then given a series of pictures of bicycles and asked to indicate the most accurate representation. They did better, but still made errors. More interestingly, perhaps, when non-cyclists were shown a real bike and asked to draw it, twelve per cent of them still made errors. Regular cyclists, on the other hand, did much better. When asked about the task, Lawson’s participants were genuinely surprised to discover they knew less about the workings of a bicycle than they thought they did.
Puncturing the illusion
This puncturing of the illusion (drawing attention to the gaps in our knowledge) may be useful in reducing the illusion of explanatory depth. However, puncturing seemed to have little impact when applied to vaccine sceptics during the COVID-19 pandemic. Even as adults, we often find the differences between evidence and opinion troubling. Rozenblit and Keil, for example, point out that:
Individuals will… discount high correlations that do not conform to an intuitive causal model but overemphasize weak correlations that do.
In other words, we have a tendency to cherry-pick the evidence that most closely reflects the views we hold anyway, a phenomenon we tend to describe as confirmation bias. If, for example, we believe that electronic devices are having a detrimental impact on wellbeing or academic attainment, we focus on the evidence that reveals such correlations without a great deal of critical analysis. On the other hand, we are more likely to approach evidence that fails to find such a connection with greater scepticism and may even make a greater effort to critique its methodology. We stress that correlation doesn’t imply causation only when the evidence contradicts our beliefs, yet we are more likely to accept correlational findings when they support them. We may even fall back on statements such as ‘it’s obvious’, ‘it’s just common sense’, or even describe particular studies as ‘silly.’
If psychology has taught us anything over the past century, it’s that what we think is obvious often turns out not to be.
We then accuse others of confirmation bias when they don’t agree with us, while they accuse us of the same.
Professional researchers are encouraged to critique their own findings, which is why you’ll usually find a section in academic papers that addresses weaknesses and areas for further investigation.
Fast-food learning
In the main, we are all guilty of shallow thinking and drawn to explanations that require little mental effort. We want to be told the facts or be given a quick memorable definition or brief soundbite. Perhaps we are asked: What is learning? We can approach this question in several ways. First of all, we can pick out a ready-made definition that we have managed to memorise. So we might answer: ‘Learning is a change in long-term memory*.’ This is accurate but shallow, because learning encompasses much more than this. I could memorise a series of nonsense words, which would result in a change in long-term memory, yet learning is also about utility. Being able to recite a list of nonsense words serves little purpose other than proving they have been memorised. This is useful for memory research, but not so much in the real world.
Would the answer have been different if I were asked to explain what learning is?
Definitions of learning rarely include notions of utility, so it’s up to us to use other aspects of our knowledge to elaborate. Also, there appears to be an assumption that changes in long-term memory are always permanent, which is not always the case.
Some definitions are certainly more complex, such as this from psychologist and computer scientist John Anderson (1995), who states that learning is:
the process by which relatively permanent changes occur in behavioural potential as a result of experience
This is a better definition, I think, because it identifies changes as being ‘relatively permanent’, yet for the layperson it might be less attractive.
Elaborate
Deep learning requires elaboration and, I would argue, if we can’t elaborate then we haven’t learned anything. Yet not all learning requires such elaboration (I can memorise a telephone number, which has utility and leads to relatively permanent changes in long-term memory, but there’s not much here to elaborate). Perhaps we could then adjust this to: some aspects of learning should progress from shallow to deep processing via elaboration.
(But see this from Daniel Willingham on definitions of learning if you’re curious about such things).
Elaboration arises from the consolidation process (when raw information becomes knowledge). The new information merges with previous learning and forms networks of interrelated ideas, words, facts and so on. We can call these schemas, or neural networks or whatever (they are, after all, hypothetical constructs). However, consistency is useful and I usually refer to these as schemas. The activation of schemas allows us to explain complex concepts within an interconnected framework. But schemas can also reinforce confirmation bias (see this article for a more elaborate (!) explanation).
We need to think smarter
There are perhaps wider consequences to this shallow learning. Global challenges are becoming more complex, from pandemics to global warming and geo-political instability. If we are to face these challenges, we all need to become smarter. While advances in technology have allowed us to access information quickly, they don’t necessarily make us smarter.
Indeed, in a 2008 edition of The Atlantic, technology writer Nicholas Carr controversially suggested the Internet is, in fact, making us all stupid. I’m not sure I’d go that far, but there may well be some truth in it.
*This definition of learning is from Kirschner, Sweller & Clark (2006). See also David Didau’s piece on the definition debate here.