The Factors that Affect Learning: 4 ways in which evidence can get in the way of learning happening
Evidence is essential to learning happening, but it's not always helpful.
4 WAYS IN WHICH EVIDENCE CAN GET IN THE WAY OF LEARNING
Before we begin, here’s that definition of learning again. (Repeated experiences are everything!)
Learning, getting better, happens successfully when our brains, through repeated experiences, engage in a good struggle to a) create a new neuronal chain or extend an existing one, b) make a neuronal constellation more complex, or c) hardwire an existing chain so that it fires automatically and becomes sticky. When this happens we acquire knowledge, develop our skills and deepen our understandings in different ways and over different periods of time.
What this means is that we are looking for evidence that lets us know whether:
A neuronal chain has been created, extended or made more complex;
The chain has been made sticky or not and, if not, why;
We have acquired knowledge, developed our skills or deepened our understandings in different ways;
Learning has happened over different time periods.
The tag line to these current posts is that ‘evidence is essential to learning happening but it is not always helpful.’ In last week’s post we looked at five ways in which evidence can help learning happen. In this week’s post we are going to look at four ways in which evidence can be unhelpful. Here goes.
Evidence is not helpful when:
1
It only focuses on what is easily testable
We said in last week’s post that one of the ways in which evidence helps learning is that it helps to focus minds around what is important. In an ideal world, schools would decide what academic, social, emotional and physical learning is truly important at different ages and stages of schooling and then look for evidence to see whether getting better is happening, or not.
In other words, in an ideal world, the evidence we look for through assessment and evaluation comes after a choice of what is important and not before it.
But this isn’t as straightforward as it might sound.
Schools are under immense time pressure through lesson length, the length of the school day, the complexities of classroom life, the multiple demands placed on them and more. Finding space to do what should be done is an unavoidable victim of those pressures.
One of the ways schools (and governments) respond to the pressures is to reduce the amount of time they spend looking for any evidence that takes time to gather. In doing so, these schools inevitably put less emphasis on more complex kinds of evidence.
As a result, evidence of knowledge will tend to take precedence over evidence of all other kinds of learning. It’s simply quicker to gather. This allows evidence to fulfil a small part of our definition of learning (knowledge) but nowhere near all of it, and often not the most important parts.
There’s nothing wrong with evidence of knowledge acquisition, of course. It is important. But there is something very wrong from a learning perspective when this is the only - or most common - evidence collected, for two reasons.
First, because many of the demands on schools are not only about knowledge acquisition. We want children and students to be developing their skills and deepening their understandings, in conventional academic subjects and elsewhere. In subjects such as music, physical education and art, it is difficult to imagine what learning would even look like if it only focused on knowledge. Focusing on evidence about knowledge in these subjects obviously takes away evidence about their very essence. It also takes away the less obvious essence of mathematics, literature, science, the humanities and so on.
Second, because schools also exist to help children and students develop positive attitudes to their work, to make sure that they find learning enjoyable, to help them learn how to live as a community, and more.
These are all things that many societies value; they are important. But in many schools, the actual evidence about learning (getting better) in these important areas is not gathered well, if at all. The temptation (or pressure) is to focus only on easily gatherable evidence about knowledge learning and, at best, pay lip service to the development of skills and deepening of understanding. When this happens, when schools, teachers, students and parents don’t have an evidence base from which to discuss these kinds of learning, the learning inevitably becomes less important.
2
It only focuses on demands for quick, easily analysable evidence, available in digestible form
The situation is made worse by the fact that not only do schools want data that is quickly and easily collected, they are also under time pressure to provide data that is quick to analyse. Not only that, stakeholders often want the results of that analysis in an easily digestible and often publishable form.
(That need for simplicity is why, in my country, the conclusions of our Government’s OFSTED school inspections were, until a few weeks ago, unhelpfully reduced to a few short words: Outstanding, Good, Requires Improvement and Inadequate. These simple categories feel useful and easy to comprehend. The absence of complexity is why they were popular with governments, inspectors, parents and others, but it came at a price. Almost no school is totally outstanding, totally good, totally requiring improvement or, usually, totally inadequate. These easy, quick judgements hid complex evidence that would have been more helpful in many different ways.)
There’s a good reason for this. Many of the receivers of this evidence - governors, Board members, parents, government representatives, inspection teams and others - are themselves time poor, and don’t have the time available to look at complex analyses of complex issues. Four- and five-year governments, keen to show that they have been effective as the next election looms before them, simply don’t have the inclination for complex data or, by extension, for more complex learning.
You can guess what happens as a result of all this. The most analysable and easy-to-report evidence is the same evidence as that which is simplest to collect - Knowledge. The least easily analysable and reportable evidence is that which is more difficult to collect - Skills, Understanding, Attitudes and so on. Schools can end up with analysable evidence about just one small part of all those important things around which we want children and students to get better.
Right now, this is an almost unresolvable conundrum for many schools. What children, students, teachers, parents and others are left with, though, is a set of repeated experiences of mostly assessments (as opposed to evaluations) that help them learn that:
Some supposedly important things are actually nowhere near as important as knowledge;
We don’t need to concentrate on these apparently important things too much because evidence of them is not going to be asked for or acted upon;
There is little evidence available to analyse that will inform how to help children and students get better in important areas that matter to them and their parents.
A sidebar moment
I have had close connections with a school that had been described as ‘outstanding’ in its previous UK government-led inspection. It is a well-liked and over-subscribed school. I would be thrilled if my child went there. When it was re-inspected, post-Covid and in the first month of the school year, it was downgraded to ‘Requires Improvement’, a much worse description and the second-worst available to inspection teams.
How did this happen?
First, the whole inspection took four inspectors just two days. There simply wasn’t enough time to gather or analyse much of the evidence.
Second, the government changed the rules and required more ‘evidence’ of knowledge acquisition. You may be thinking that this is because the government valued knowledge more highly than skills and understanding. You may be right. But it is also because in two days, there is only time to gather the slender evidence of knowledge learning and no time to look at skills, understanding or the learning that happens over time.
Third, in faithfully trying to carry out the inspection criteria, the team asked 7-year-old children about factual information they had ‘learned’ four months earlier, before the summer break, and about which they had had no recently repeated experiences to keep their knowledge sticky. Obviously - because this is what happens even with sticky knowledge if left untouched for a time - its stickiness diminished. Students simply couldn’t remember facts that had been learned months earlier. Because they couldn’t remember random facts asked of them by the inspection team, the school was downgraded.
This downgrading of the school caused the staff and others all the issues you might imagine. Had the evidence been a) sufficient, b) wide-ranging, c) reliable and d) analysable over a big enough sample, the staff might have had to accept the conclusions of the inspection and deal with them. But poor data that could not be properly analysed led to very bad evidence and some unpleasant consequences.
This is a brief aside about one school. It could also be a brief aside about what happens in some classrooms. We’ll come to that soon.
3
There is too much focus on it, creating excitement about outcomes rather than subjects
There’s an old phrase beloved of teachers who query whether evidence gathering is really important. It’s this: you can’t make a pig grow by weighing it. The sentence isn’t, of course, entirely accurate. Weighing a pig occasionally might give us clues about whether its growth is on a normal path; if it isn’t, it will likely cause us to think about why not. (You will notice that this is not too different from using evidence in schools and classrooms.) The sentence would be more accurate if it read: you can’t enable a pig to grow by only weighing it. Too much weighing gets in the way of all the other activities that make up pig rearing.
Another sidebar moment.
I am in a secondary classroom in an international school, just a few years ago. The students, around 15 to 16 years old, are in the two-year programme leading to their GCSE examinations, a system common to schools that work within an English system of education. The school wants its students to do well in these exams, partly because it cares for them and partly because it is in a competitive space with other schools and needs to demonstrate how well its students perform in exams. Parents want their children to do as well as they can in these exams. Their children, too, mostly want to do well because their exam scores will have some effect on their future years at school.
I am in a chemistry lesson. The students are well-behaved and the lesson is well organised. The teacher is taking the students through the writing-up of an investigation that had, apparently, taken place in a previous lesson. In the first few minutes of my visit it all seems to be going well.
But…every comment (and I do mean every comment) the teacher makes to the students about their learning is along the lines of ‘If you write your report like this, it will get you a Level 5, but if you write it like this it will get you a Level 6’. It’s not bad advice in itself (and there’s nothing wrong with exam preparation), but it’s the only kind of comment the teacher makes for pretty much the whole 45 minutes of the lesson.
After the lesson, I ask some of the students if the lesson I was a part of was typical or just a one-off. ‘That’s how it usually is’, they tell me. ‘In fact, in these two years before our exam, that’s how most lessons are in most subjects.’
Let’s just think of repeated experiences for a moment. As these students move through this two-year programme, they aren’t really having chemistry lessons at all. They are having exam prep lessons. There is nothing wrong with exam preparation; there is something wrong with only exam preparation, week after week after week. The potential excitement about chemistry (or any other subject) that I hope my child might experience and perhaps get attracted by is pushed into a siding and ‘chemistry’ simply becomes a vehicle for learning how to pass this particular exam.
What’s happened in this lesson, and possibly across this school, is not that evidence is being ignored but that one kind of evidence is being focused on too much. There’s too much pig weighing going on and not enough pig feeding and rearing happening.
4
Teacher judgements are inadequately supported and moderated, so poor evidence is the result
Evidence about learning comes from different places and in different forms. We have discussed how the most straightforward - in terms of time to administer, ease of collection, marking and analysing - is the assessment of knowledge.
But we also saw in the first part of this book that assessment overlaps quickly into evaluation, into a place where judgement and not just confirmation becomes important. You’ll remember that skills assessment is a mix of both knowing and judging. It is judgement that provides the evidence of whether skills learning is at Beginning, Developing or Mastering levels and whether those levels are differently appropriate for different age levels. (What is Mastering for a 7 year-old may only be Beginning for a 16 year-old.)
You’ll also remember that the very nature of ‘understanding’ makes the sort of assessment we use to gather evidence about knowledge learning pretty useless. Understanding requires judgement over time about the personal sense someone is making of something.
This sounds obvious and relatively unproblematic. It isn’t. Knowledge assessment is unproblematic precisely because most of us agree that the formula of water is H2O or that the 37th President of the USA was Richard Nixon. But judging whether a film has been ‘good’ is very different, as any conversation in a bar after a film can attest. The exchange - ‘What do you mean, you thought it was weak? I loved it!’ - has surprised me on many occasions.
In a bar, this might not matter. Nothing much depends on you and me disagreeing about the quality of a film. But it matters in schools because for them to make any sense, the evaluations must have some kind of validity and reliability, whoever has made them. The judgements a teacher makes about a student’s art, writing, abilities as a scientist or performance on the basketball court can’t be based on wildly different criteria than those used by her colleagues. If they are, the judgements are just a view. We can’t use them as evidence because we can’t be confident that they mean the same for other teachers and other students.
Teachers need two kinds of support in order to make evaluations a reliable source of evidence that can be used to make decisions about how much a student has learned and what can be done to help them learn better.
The first kind of support is descriptors about what skills performance looks like at different ages and stages of a student’s school life. In other words, what does being at Mastering level look like in a particular skill if you are aged 5, 7, 9, 11 and 15 and how does it progress and differ? As we have already seen, riding a bike ‘brilliantly’ at age 5 is going to look different from riding a bike ‘brilliantly’ at age 15.
The second kind of support teachers need is moderation. Moderation is a process through which groups of those responsible for evaluating a student’s learning get together to make sure that they are interpreting the descriptors similarly. In the jargon, they are trying to make sure that their judgements are reliable: increasingly consistent across the school, whoever is making them.
When good descriptors and effective moderation are in place, the evidence base about student learning in skills in particular, and in understanding, becomes a very powerful tool: it helps us see where learning is now, decide what needs to be done to help it get better, and later see whether it has got better.
When good descriptors and programmes of moderation are not available, evidence about skills learning and the deepening of understanding is at least problematic and almost always unreliable.
Even schools that are trying to gather evidence about a wide range of learning can waste an enormous amount of time that they can ill afford to lose. We can end up in the unhelpful position where a school accepts the importance of assessing and evaluating broadly, including skills and understanding. This is great news. But without descriptors and moderation, the evidence that is gathered and analysed becomes invalid and unreliable across the school. What gets passed from teacher to teacher as students move through the school - and what teachers might base their own planning for learning around - becomes unhelpful. Ironically, a school that has taken time to gather and analyse more evidence is in danger of misusing it so that, instead of helping learning happen, it gets in the way.
Let’s conclude this week’s post. The need for enough good, appropriate, analysable evidence of each of the components of our definition of learning should be baked into the process. When we have good evidence, students, teachers and others are able to influence what happens so that learners get better more often.
When we have ‘evidence’ that is too narrow and isn’t good, appropriate or analysable, we a) waste a huge amount of time, b) prevent students, teachers and others from influencing how much learning can happen and c) send repeated messages to students and others about our lack of commitment to some areas of learning, and to their learning in particular.
Next week, we’ll be looking at the crucial evidence of learning that many classrooms and schools don’t have but should have. It’s really important and a game-changer.
See you next Monday.
Martin
AN UPDATED EMBRYONIC CHECKLIST
Here is a revised embryonic checklist that takes into account both last week’s and this week’s posts. You can now reflect on the extent to which the evidence you have and the way you use it is likely to be helping or not helping learning happen in your classrooms and school. (Please look at our definition at the top of this post as a reminder of what we mean by learning.)
Whatever kinds of assessments and evaluations you use, do they:
The good news
Compare then and now in some way?
Provide help in identifying where learning breakdown is happening and why?
Enable you to provide specific feedback to your students, parents and others?
Focus on and signify what really matters or are they trying to produce evidence of learning about everything?
Allow you to compare the learning in different subjects, in different classrooms, in different year-groups and between different schools?
The unhelpful news
Focus only on that which is easily testable?
Risk devaluing other important aspects of learning?
Focus on that which is easily analysed and reported?
Risk only having evidence on a limited number of important things?
Take up too much time to the detriment of learning?
Risk giving students inappropriate repeated experiences of what matters?
Fail, when assessing more than knowledge, to be based on good descriptors and moderation?