Throughout my life, many a person has asked me, often in an exasperated tone, “why don’t you just pick something and stick to it?!”. I’ve always found everything a little too interesting, though, and vaguely justify this position with my high school chemistry teacher’s assertion that all the interesting stuff happens in the cross-over between disciplines. That ultimately inspired me to study biochemistry at university, which, if nothing else, shows the incredible influence teachers can have. Or that I’m easily led…

Two of my many passions in life are startups and education, occasionally intersecting as edtech. In the startup world there is an enormous movement called “The Lean Startup”, which I have blogged about before. It is an idea and book by a guy called Eric Ries which has had enormous influence on the way technology companies run, but also on projects within large corporates and even at government level, notably under Barack Obama.

People who are applying lean methodologies are generally trying to launch products. These products are generally technology based and often a little crazy! Sometimes, though, the crazy ideas are the best ones; the trouble is you just don’t know which. What Lean does is say “let’s accept that we don’t really know all of what will happen”. That is a pretty hard thing for some people to do, but if you can state your assumptions you are at least a step closer to knowing what you don’t know. Now that you hopefully know what you don’t know (still following?), you can try to test your ideas and get on with the inevitable failures so you can start to succeed as soon as possible.

So, in the Lean Startup the idea is that you take your key, most basic requisite assumption and you test it by doing an experiment. That might be to answer “do people want to share images of kittens with moustaches?”, for example. So, how do you test that? Well, what you need is something which gives you a fair test. That might be something simple like an image or social post for people to react to, or it might be a full-on mobile app. Whatever form it takes, it is a “minimum viable product” in that it is the simplest viable test case for your product idea. Then you move on to the next question as you work towards “product-market fit”. Or “pivot” if indeed people don’t want to share images of kittens with moustaches; maybe you’ll try poodles in sunglasses instead…

Anyway, the key thing amongst the Silicon Valley jargon is that you are being rigorous and scientific. You are testing your assumptions by gathering evidence in a dispassionate way. One important thing to note is that “Lean” doesn’t mean stingy, just efficient in your route to market, finding out quickly what doesn’t work and why.

How does this relate to education? Good question. One of the great things about teachers and the education world in general is that everyone genuinely cares massively about what they do. The slightly unfortunate consequence is that people will sometimes argue vehemently for what they see as “best” in a very subjective way. I was reading on Twitter how some have misinterpreted the new College of Teaching’s statement that it wants to encourage “evidence-based practice” amongst teachers. In trying to understand what that actually means, I realised that it shares a lot with the Lean Startup’s ideas.

In Evidence-Based Practice a teacher (or health worker, in that context) essentially uses evidence to validate the approach they are taking. For example, a teacher might be using number lines to teach addition. They might think this is the best approach, but unless their pupils’ progress is compared with that of a similar cohort taught, say, with the column method, that is purely opinion. If the comparison shows a significant difference in progress, there is some actual evidence that, for that teacher and those pupils, this is a fair assumption and an awesome way to teach.
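
To make that comparison idea a little more concrete, here is a minimal sketch in Python of the kind of test that could sit behind it, assuming you have end-of-term progress scores for the two cohorts. All the numbers, names and the 0.05 threshold below are illustrative assumptions, not real data or a prescribed method.

```python
# A minimal sketch: compare progress between two hypothetical cohorts.
# All scores and names are made up for illustration.
from scipy import stats

# Progress made by each pupil over the term (e.g. change in assessment score)
number_line_cohort = [4, 6, 5, 7, 5, 6, 4, 8, 5, 6]
column_method_cohort = [3, 5, 4, 4, 6, 3, 5, 4, 4, 5]

# Welch's t-test: is the difference in mean progress bigger than
# chance alone would comfortably explain?
t_stat, p_value = stats.ttest_ind(number_line_cohort, column_method_cohort,
                                  equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Some evidence the approaches differ for these pupils.")
else:
    print("No significant difference detected; it could just be noise.")
```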

What we need is the Lean Teacher. The Lean Teacher will identify the assumptions they have about the way they teach a subject. They will seek evidence from colleagues and professional bodies. They will aim to understand why what they do works and why it might not for certain pupils. The Lean Teacher will look dispassionately at new teaching practices and compare their impact in a scientifically rigorous fashion, ensuring a fair sample size and an adequate control. The Lean Teacher will not have an opinion on the best way to teach a given subject; they will know why the chosen pedagogy is effective and be able to prove it!
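
On the “fair sample size” point, here is a rough sketch of how you might estimate how many pupils each group would need before a comparison says anything useful, using a standard power calculation. The effect size, power and significance level are assumptions chosen purely for illustration.

```python
# Sketch: roughly how many pupils per group to detect a medium-sized effect?
# Effect size, power and alpha are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
pupils_per_group = analysis.solve_power(effect_size=0.5,  # "medium" effect
                                        alpha=0.05,       # significance level
                                        power=0.8)        # chance of detecting it
print(f"Roughly {pupils_per_group:.0f} pupils needed in each group")
```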

The challenge of course is that no two groups of pupils are the same. Also, differences in impact can take a long time to show up, and no-one wants to run the control and teach in a way they don’t believe to be the best. This is where a culture of Evidence-Based Practice comes in, and I guess the new College of Teaching. Were the whole teaching profession to pool their evidence somehow, they would be able to “fail faster” together and build a collective knowledge of which approaches work for different teachers and pupil groups, in order to reach “pedagogy-pupil fit” in the best and fastest fashion.

UPDATE

I’ve been thinking about all this a bit, and I guess the challenge is that education is a pretty complicated place. In most cases there probably isn’t a single best way to teach a given subject across the board, as different groups within different classes for different teachers will respond differently to different approaches.

There are two situations we can definitely avoid though. The first is where a teacher thinks a new idea will be amazing but it actually isn’t. In that case it is the pupils who ultimately suffer, and it can take a very long time to identify the drop in progress and rectify it. With an evidence base it would be easier to look before you leap and make sure unproven, faddy approaches don’t creep in without merit.

The second situation is where a teacher knows they need to change something but is nervous about trying something new, so doesn’t, as they don’t want to make things even worse (as above). If there were a rigorous body of evidence, I think it would be far easier for a teacher to risk-assess a new approach and have some evidence that it was actually worth a shot.

A good example from my school would be the introduction of setting for maths in KS2. We knew we needed to change something to raise results for more able pupils but really struggled to find any evidence that spoke clearly in either direction. We decided to trial setting for two terms and monitor the results over that period. It turns out we managed a 25% increase in APS in 1.5 terms, so it seems setting has worked for us. If we had been able to access data like that beforehand (including the cases where things didn’t go so well), it would have been far easier to make that decision. I’m sure there are plenty of schools sitting on the fence who would benefit from that kind of change and who need that evidence base to help them take the plunge.

What I’ve decided we need is a database of some kind that compiles approaches to teaching key concepts against outcomes. If you could correlate pupil progress outcomes with pedagogical changes in a statistically significant way (across lots of schools), you would have a really valuable resource that would play a major part in driving improved educational outcomes for a load of children. So, any volunteers?
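
As a rough starting point for any volunteer, here is a sketch of the shape of record such a database might hold and the kind of question it could answer. Every concept, approach and number here is hypothetical; it is only meant to illustrate compiling approach against outcome.

```python
# Hypothetical sketch of the records a pedagogy-outcome database might hold.
# All concepts, approaches and numbers are made up.
from collections import defaultdict

# One record per school per trial of an approach to a concept.
trials = [
    {"concept": "addition", "approach": "number lines",
     "start_aps": 14.0, "end_aps": 16.5, "terms": 2.0},
    {"concept": "addition", "approach": "column method",
     "start_aps": 13.8, "end_aps": 15.2, "terms": 2.0},
    {"concept": "addition", "approach": "number lines",
     "start_aps": 15.1, "end_aps": 17.0, "terms": 1.5},
]

# Average APS progress per term for each (concept, approach) pair.
totals = defaultdict(lambda: [0.0, 0])
for trial in trials:
    key = (trial["concept"], trial["approach"])
    totals[key][0] += (trial["end_aps"] - trial["start_aps"]) / trial["terms"]
    totals[key][1] += 1

for (concept, approach), (progress_sum, count) in sorted(totals.items()):
    print(f"{concept} via {approach}: "
          f"{progress_sum / count:.2f} APS per term over {count} trial(s)")
```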