First assignment: What have I learnt?

I’ve had the result from my first assignment: I got a merit. This is OK, but there’s clearly some way to go. The assignment was to conduct a literature review – a critique of a single article reporting on a piece of research. I chose this:

Top, Yukselturk and Cakir (2011) Gender and Web 2.0 technology awareness among ICT teachers, British Journal of Educational Technology 42(5), E106-E109

This seemed to be right up my research interest alley, but I found it flawed in a few respects. I’m not going to repeat my assignment here, but would like to summarise what I have learnt from doing the review, and some more general thoughts that spin out of it.

Firstly, having a clearly defined design is crucial, especially when it comes to being clear about how, and why, you are selecting samples. There seemed to be no consideration of this here, and the report raised a number of questions for me about the participants of the study. The biggest issue is that, when comparing results by gender, it’s probably best to have comparable numbers of males and females – or at least to acknowledge and account for the imbalance.

The second thing that I have learnt is that using Likert scales is problematic, especially if (a) you don’t provide the items of your scale in your report, meaning that any findings you claim cannot be scrutinised effectively, and (b) the items are all worded in such a way as to lead respondents towards a particular perspective. The article under review used two Likert scales, neither of which was provided in a table or appendix. One of the scales had been “adapted” from elsewhere, and the original source did list the items. There were two additional issues that I could see here. Firstly, the authors did not say how they had “adapted” the scale. Secondly, the items in the original scale were couched in such a way as to presuppose that respondents would think positively about the topic. The conclusion for me here is that, as part of the design of your research, it is pretty important to have appropriate tools for the job. And it’s important to be honest and clear about your tools in your reporting.

I think the biggest lesson for me, though, is a more fundamental one: why are we so reliant on quantitative data? My lesson here is that I simply don’t know. People will say that quantitative data is more valid than qualitative data – that it is measurable and verifiable. There is a notion out there that “science” has to manifest in numbers for it to be real, somehow. This is horribly obvious in education itself – how do we measure success? What places a school high on a league table?

I’m going to use a sporting analogy here. This is something which I generally don’t make a habit of doing because sport and I cross the street to avoid each other. However, here I think I might get away with it.

Imagine two football matches. They both end with the same score, let’s say 2-2. What can you tell me about each match, other than that they were both games of two halves? This is, of course, what we do with children and young people. We clump them together in order to generate a string of numbers which are, ultimately, meaningless. And then we judge the effectiveness of teaching on these generalised, meaningless numbers. The new league tables do, at least, add an extra layer of numbers to the picture with the “value added” measure which, it could be argued, shows the progress of learners. But that still comes down to a numbers game.

And this is what a lot of people seem to prefer in science: a bunch of meaningless numbers. For me, the really interesting stuff would come out of what the participants might actually say about the topic. What is their perspective? What can my research actually learn from the participants? One of the issues I have is that attempts to quantify such things in social science risk losing the “social”; we de-humanise the subjects of the study. And the same is true of the education system.

But I understand the need for something concrete, something measurable. I can see how pretty graphs can look. I’m a sucker for a 3D pie chart.

So, where am I on this particular issue? I think I might be advocating a mixed-methods approach.

Now, I know I’m being naive here. So, I am prepared to see my views change as I progress through the course. The journey, I guess, is never ending.



2 thoughts on “First assignment: What have I learnt?”

  1. Often researchers try to use statistics to prove their point, and many are selective as to which information they provide (lies, damned lies and statistics). Incorrect interpretation of results can cause more damage than ignorance of them. However, correct interpretation of statistics can be invaluable – e.g. Florence Nightingale’s correct analysis of military deaths in the Crimean War occurred after she returned to Britain. Properly presented statistics can help people to look at the larger picture rather than the fine detail.

  2. I guess it is a case of whatever design and data answer your research question. The question that you are seeking to answer may reflect your philosophical stance: whatever data you collect (be it words or numbers) should be the best way to measure and find out your answer. Simple.
