A question about the “53% of *all* teachers plan to quit” business.

It’s been popping up on Twitter quite a bit today and, it seems, all over the national press. The Guardian claims that “Half of all teachers in England threaten to quit as morale crashes”, whilst similar headlines can be found in The Telegraph, The Independent, The Daily Mail and the BBC.

Whilst the NUT press release does not use the word “all”, it certainly implies it.

Now, I’m certainly not suggesting that there isn’t a morale crisis in state schools at the moment. And I’m not suggesting that I know for certain that 53% of all teachers aren’t thinking of leaving. But this survey and, more specifically, the way in which it has been reported are seriously misleading.

Before I go any further let me make something clear: I am not a statistician. I don’t really do numbers. In fact, numbers and I cross the road to avoid each other. Numbers is the language that Satan uses to confuse mortals. So, please do help me to see things differently if I’ve got all this terribly wrong.

Using the numbers given by the NUT (this is an .xlsx file), we see a problem straight away. The total number of participants is 1020. This may sound like quite a lot, until we see that, according to the most recently available data from the government, “in November 2011, there were 438,000 teachers in state-funded schools in England on a full-time equivalent basis”.

I realise, of course, that that figure will have changed. But even if the teacher shortage crisis means we’ve lost 38,000 teachers since then, that would still leave 400,000 teachers in England. Using the published number, the NUT survey is dealing with a sample of 1020 out of a population of 438,000. That is, 0.23% of the teaching population responded to the survey. Less than one whole percent.

Of those 1020 participants, the NUT data table tells us that 53% said they were thinking of leaving the profession. That’s 53% of 0.23% of the total teaching population in England. Incidentally, the number given in the data table is 536.
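For anyone who wants to check the arithmetic, here is a minimal sketch in Python (the population figure is the government’s November 2011 count quoted above):

```python
population = 438_000   # FTE teachers in state-funded schools, England, Nov 2011
sample = 1020          # respondents to the NUT/YouGov survey

print(f"Sample as a share of the population: {sample / population:.2%}")  # 0.23%

# 53% of 1020 is about 541; the data table's 536 works out at 52.5%,
# which rounds to the 53% in the headlines.
print(f"53% of {sample} respondents: {0.53 * sample:.0f}")
print(f"536 of {sample} respondents: {536 / sample:.1%}")
```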

The data also tell us that, of those 536 teachers who are planning to leave the profession, around a third (34%) are aiming to retire.

I’ve been told, and have read, that such a sample size for such a population is deemed to be quite good. I find this, in itself, quite staggering, and reason enough to doubt the efficacy of such surveys and of such an approach to social science. But it is only deemed good if the sample is random.
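For what it’s worth, this is the standard calculation behind the “quite good” verdict: the margin of error of a simple random sample, which depends almost entirely on the sample size and hardly at all on what fraction of the population was surveyed. A minimal sketch, assuming simple random sampling, which is precisely the assumption at issue:

```python
import math

n = 1020   # survey respondents
p = 0.53   # observed proportion "thinking of leaving"
z = 1.96   # z-score for a 95% confidence level

# Margin of error for a proportion under simple random sampling
moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: ±{moe:.1%}")  # about ±3.1%

# Finite population correction: the population size barely matters
# once it dwarfs the sample
for N in (10_000, 100_000, 438_000):
    fpc = math.sqrt((N - n) / (N - 1))
    print(f"Population {N:>7,}: ±{moe * fpc:.2%}")
# -> ±2.91%, ±3.05%, ±3.06%
```

So, on the standard account, the headline 53% would carry a margin of error of roughly three percentage points. But all of that holds only if the 1020 respondents really are a random sample, which is the question below.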

The NUT website gives no indication that I can find about how the participants were recruited to the survey. And so far I’ve been unable to locate any reference to it on the YouGov website. My suspicion is that the survey was conducted online, probably via a link emailed to NUT members, or perhaps through other member communications. This in itself would raise questions for me about how securely warranted any claim that this survey reflects “all teachers” might be. There is an assumption, if I’m right, that the NUT membership is genuinely reflective of the general teaching population. Furthermore, this survey only reflects the views of those NUT members who could be bothered to take it. What views are such teachers likely to hold?

I’m not saying there isn’t a genuine issue at the heart of this. There probably is. But can anyone out there tell me how this survey can really tell us anything? Is it typical of social science? If so, educational research is buggered.


4 thoughts on “A question about the ‘53% of *all* teachers plan to quit’ business”

  1. There are two issues here, as I see it.

    The first is where I think you are mistaken. The sample size does not, on its own, invalidate a study. The discipline of sampling, and of randomly selecting a smaller number from a population in order to make a calculated statement about the whole, has a long history in pretty much all fields of research. There are various statistical calculations which allow you to attach a probability to your sample to see whether it is likely to be representative (to a given margin, usually 1%, 3% or 5% in surveys of this nature). There is lots of material online on the history, application and criticisms of these methods.

    But what it does mean, in effect, is that the 0.23% of the teaching population you mention might be seen to be representative of the overall teaching population, despite the small absolute number.

    The second, and really critical, point is that the above only holds as long as we can be sure that appropriate means have been taken to eliminate bias, i.e. to ensure that the people who responded aren’t going to respond in a way that is not representative of the population as a whole. What methodology has been published suggests they have at least attempted to weight for certain factors, which means they understand this principle.

    However, by failing to publish any more detailed methodology than that in its press release, the NUT has failed to assure us that the base sample (i.e. those who were given an opportunity to respond) was itself free of such bias.

    But yes, regardless of this subject, educational research is buggered and has probably always been fairly buggered.

    • Thank you for taking the time to reply.

      I can’t help but feel that the statistical tools applied to this type of work are spurious. But that’s just a hunch.

      As for the method – yes, without more details about how people were recruited to the survey, it’s difficult to judge the efficacy of the results. Indeed, the very lack of any account of the method leads me to dismiss the claim.

      • My instinct as I read this is to scream at you out of sheer frustration. On several points you are badly misinformed, and I would really urge you to ask a statistician about your uncertainty rather than publishing a blog post which clearly implies that this is not a valid statistical study when you have no evidence for that other than your own instincts.

        To reply to your specific points:
        1. YouGov survey methodologies are very well established and standardised across surveys of this type. 1000 respondents is a perfectly respectable number if the methodology is solid, and YouGov’s methodologies are better than most.

        2. In surveys of this type YouGov has complex weighting mechanisms, so that the weight given to a particular respondent’s answers is in proportion to how many respondents of that type there are in the relevant population, in this case the teaching profession (a toy sketch of this kind of weighting follows at the end of this reply). Unless specifically requested, YouGov will send out the link to the survey, not the NUT – and I am sure (having commissioned surveys from YouGov in the past) they would refuse to allow the NUT to send out the link, given the obvious bias this would introduce.

        There is enough misinformation in relation to educational research without you contributing to it with a poorly informed blog post. Have you asked YouGov or the NUT to clarify? I’d suggest you do that before imputing something for which you have no evidence.
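        As promised, a toy sketch of that kind of weighting. All the numbers below are made up for illustration, and this is emphatically not YouGov’s actual scheme; it just shows the principle of scaling each respondent’s answer so the weighted sample matches known population proportions.

```python
# Toy post-stratification weighting. All numbers are invented for
# illustration; this is NOT YouGov's actual weighting scheme.

# Known (hypothetical) population shares, e.g. primary vs secondary teachers
population_share = {"primary": 0.55, "secondary": 0.45}

# Hypothetical respondents as (group, thinking_of_leaving) pairs
respondents = ([("primary", True)] * 200 + [("primary", False)] * 150
               + [("secondary", True)] * 340 + [("secondary", False)] * 330)

n = len(respondents)  # 1020, matching the survey's sample size

# Share of the sample in each group
sample_share = {g: sum(1 for grp, _ in respondents if grp == g) / n
                for g in population_share}

# Each group's weight rebalances the sample toward the population
weight = {g: population_share[g] / sample_share[g] for g in population_share}

raw_yes = sum(1 for _, leaving in respondents if leaving)
weighted_yes = sum(weight[g] for g, leaving in respondents if leaving)

# Weighted counts still sum to n, because the population shares sum to 1
print(f"Unweighted 'thinking of leaving': {raw_yes / n:.1%}")       # 52.9%
print(f"Weighted   'thinking of leaving': {weighted_yes / n:.1%}")  # ~54.3%
```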

  2. Feel free to scream away; that’s pretty much how I feel about this kind of work.

    There is an issue with the fact that the method of recruitment isn’t at all clear. I have asked the NUT via Twitter, though I admit I haven’t asked YouGov or emailed the NUT. The weighting is irrelevant if the sample is only drawn from NUT members participating in an online survey.

    As an aside, how do you define “methodology”?
