You can download a full pdf of this blog post, including the three examples, here. Please feel free to share it with others, though preferably direct them to this page to download it!
How do you analyse qualitative data? You code it, right? Not always. And even if you do, chances are coding has only taken you a few steps in the long journey to your most important analytical insights.
I’m not dismissing coding altogether. I’ve done it many times and blogged about it, and expect I will code again. But there are times when coding doesn’t work, or when it doesn’t make sense to code at all. Problems with coding are increasingly being recognised (see this paper by St Pierre and Jackson, 2014).
I am often asked: if not coding, then what? This blog post offers a concrete answer, in terms of a logic and a set of principles; the full pdf gives examples from three studies.
Whatever you do in qualitative analysis is fine, as long as you’re finding it helpful. I’m far more worried about reaching new insights, seeing new possible meanings, making new connections, exploring new juxtapositions, hearing silences I’d missed in the noise of busy-work, and so on, than I am about following rules, procedures, or methodological dogma.
I’m not the only one saying this. Pat Thomson wrote beautifully about how we can feel compelled into ‘technique-led’ analysis, avoiding anything that might feel ‘dodgy’. Her advocacy for ‘data play’ brings us into the deliciously messy and murky realms where standard techniques might go out of the window: she suggests random associations, redactions, scatter-gun, and side-by-side approaches.
An approach where you are a strength, not a hazard
The best qualitative analyses are the ones where the unique qualities, interests, insights, hunches, understandings, and creativity of the analyst come to the fore. Yes, that’s right: it’s all about what humans can do and what a robot or algorithm can’t. And yes, it’s about what you can do that perhaps no-one else can.
Sound extreme? I’m not throwing all ideas of rigour out of the window. In fact, the first example below shows how the approach I’m advocating can work really well in a team scenario where we seek confirmation among analysts (akin to inter-rater reliability). I’m not saying ‘anything goes’. I am saying: let’s seek the analysis where the best of us shines through, and where the output isn’t just what is in the data, but reflects an interaction between us and the data – where that ‘us’ is a very human, subjective, insightful one. Otherwise we are not analysing, we are just reporting. My video on ‘the, any or an analysis’ says more about this.
You can also check out an #openaccess paper I wrote with Prachi Srivastava that highlights reflexivity in analysis by asking: (1) What are the data telling me? (2) What do I want to know? And (3) What is the changing relationship between 1 and 2? [There is a video about this paper too]
The process I am about to describe is one in which the analyst is not cast out in the search for objectivity. We work with ‘things’ that increasingly reflect the interaction between the data and the analyst, not the data alone.
An alternative to coding
The approach I’ve ended up using many times is outlined below. I don’t call it a technique because it can’t be mechanically applied from one study to another. It is more a logic that follows a series of principles and implies a progressive flow in analysis.
The essence is this:
- Get into the data – systematically and playfully (in the way that Pat Thomson means).
- Systematically construct synoptic units – extractive summaries of how certain bits of data relate to something you’re interested in. These are not selections of bits of data; they are written in your own words. (You can keep track of juicy quotations or vignettes you might want to use later, but the point is that this is your own writing.)
- Work with the synoptic units. Now instead of being faced with all the raw data, you’ve got these lovely new blocks to work and play seriously with. You could:
- Look for patterns – commonalities, contrasts, connections
- Juxtapose what seems to be odd, different, uncomfortable
- Look again for silences
- Use a priori concepts or theoretical ideas to see similarity where on the surface things look different, to see difference where on the surface things look the same, or to see significance where on the surface things seem unimportant
- Ask ‘What do these units tell me? What do I want to know?’
- Make a mess and defamiliarise yourself by looking again in a different order, with a different question in mind, etc.
- Do more data play and keep producing artefacts as you go. This might be:
- Freewriting after a session with the synoptic units
- Concept mapping key points and their relationships
- An outline view of an argument (e.g. using PowerPoint)
- Anything that you find helpful!
In some cases you might create another layer of synoptic units to work at a greater analytical distance from the data. One of the examples below illustrates this.
The key is that we enable ourselves to reach new insights not by letting go of the data completely, but by creating things to work with that reflect both the data and our insights, our determinations of relevance, and so on. We can be systematic by going through all the data in producing the synoptic units. We remain rigorous in our ‘intellectual hygiene’ (confronting what doesn’t fit, what is less clear, our analytical doubts, etc.). We do not close off opportunities for serious data play – rather, we expand them.
If you’d like to read more, including three examples from real, published research, download the full pdf.