When I was training as an accountant and running audits, we were taught that there were different levels of reliability of evidence. The least reliable was oral evidence from the client. The most reliable was written evidence provided by an external, independent organisation in response to a specific request made by the audit team. So, for example, a bank statement given to us by the client was deemed to be fairly reliable, but not as reliable as a bank statement sent directly to us by the bank in response to a written request made by us.
During the course of an audit you would find yourself relying on all sorts of evidence, but a large part of the audit judgement (particularly for the smaller client) was wrapped up in that bank statement. Oral representations made by the client that couldn’t be linked back in some way to the statement were given much less weight than evidence that could be. Our job as auditors was to take a body of evidence and judge whether or not it enabled us to say that the accounts presented by the client represented a true and fair picture of their business. No one item made the case. And if the client said something was true, and the bank statement said it wasn’t, then it was game over, no contest, the bank statement won.
It’s not that much of a leap from there to looking at evidence about education, what works and what doesn’t. Independent, verifiable evidence about specific activities will clearly hold sway over a single classroom teacher who claims “it works for me” with no other evidence to support the claim.
But it is more complex than that. For example, is a peer-reviewed piece of research into cognitive phenomena, which can only indirectly be used to support a certain type of teaching, more reliable as evidence than a non-peer-reviewed piece of work into that specific type of teaching itself? If ten thousand teachers say “this works for me” and one peer-reviewed piece of research says it shouldn’t, which should we trust as evidence? The answer, of course, is usually that no one piece of evidence should make the case, but we do need to think about how we can assign different weights to individual pieces of evidence. A good example here is in the field of learning technology. I tend to give much less weight to research carried out before 2000 than after, mainly because most types of technology have changed so much since then. This isn’t a hard and fast rule, but it does provide some help when considering the mass of evidence. Quality and relevance must be weighed alongside any preponderance of evidence.
Almost as an aside, I would like to touch on the idea of the “expert”. It is usually suggested that it is wrong to make any claims about the quality or otherwise of the evidence based on who is providing it. I’m not convinced that this is always possible, nor that it is always desirable. Going back to the audit example, you would automatically give more audit weight to a bank statement from the Bank of England than you would to one from the Bank of Outer Nowhere. Similarly, I would place more weight on a statement by Stephen Hawking about black holes than I would on a post-grad paper from a student at the University of Wherever. This can never, however, be a simple decision, and it requires you to have more than a passing knowledge of what is being claimed. Great mistakes can be made this way, a good example being what happened when people listened to a supposed ‘expert’ on MMR vaccines. The issue there was that the people using the information did not have sufficient knowledge to discriminate between the experts being offered to them. One would hope that most teachers would have the capacity to make the right choices here, but I do recognise that not everyone would agree with that assertion. So there is perhaps a clear need for more professional development here.
There is also the issue of where the onus of proof lies. If a teacher has for many years been using a particular approach, with great success, then the onus must be on anyone who suggests the approach is the wrong one to provide a sufficient level and quantity of evidence to convince the teacher that they are in error. This will always be a hard thing to do. For example, to step on the third rail of educational research: how do you convince a teacher who for years has successfully taught children to read without using a systematic synthetic phonics approach that they should do so? And as a profession, at what point should we say that failure to accept the evidence is unprofessional?
So, I’m finally going to get to an opinion (which is almost entirely un-evidenced). Too often research into educational issues is not remotely scientific. Too often it lacks a hypothesis. Too often (and this does continue, even with EEF-funded research), research is based around a commercial product, with the aim being to show the efficacy of that product. That’s not science, it’s marketing. I think we are going to see a lot more of this over the coming months and years – “Use our assessment system, proven to make greater progress” – I can read it now.
We are on the cusp of knowing a great deal about how people learn – the basic mechanisms – which can give us much greater insight into how to teach in a way that takes best advantage of this knowledge. I think there is a consensus around the need for research. But before rushing headlong into it, I would like to see a discussion about evidence.