Category Archives: Research Design

When coding doesn’t work, or doesn’t make sense: Synoptic units in qualitative data analysis

You can download a full pdf of this blog post including the three examples here. Please feel free to share with others, though preferably direct them to this page to download it!

 

How do you analyse qualitative data? You code it, right? Not always. And even if you do, chances are coding has only taken you a few steps in the long journey to your most important analytical insights.

I’m not dismissing coding altogether. I’ve done it many times and blogged about it, and expect I will code again. But there are times when coding doesn’t work, or when it doesn’t make sense to code at all. Problems with coding are increasingly being recognised (see this paper by St Pierre and Jackson 2014).

I am often asked: if not coding, then what? This blog post offers a concrete answer to that in terms of a logic and principles, and the full pdf gives examples from three studies.

Whatever you do in qualitative analysis is fine, as long as you’re finding it helpful. I care far more about reaching new insights, seeing new possible meanings, making new connections, exploring new juxtapositions, hearing silences I’d missed in the noise of busy-work etc. than I do about following rules or procedures, or methodological dogma.

I’m not the only one saying this. Pat Thomson wrote beautifully about how we can feel compelled into ‘technique-led’ analysis, avoiding anything that might feel ‘dodgy’. Her advocacy for ‘data play’ brings us into the deliciously messy and murky realms where standard techniques might go out of the window: she suggests random associations, redactions, scatter gun, and side by side approaches.

 

An approach where you are a strength, not a hazard

The best qualitative analyses are the ones where the unique qualities, interests, insights, hunches, understandings, and creativity of the analyst come to the fore. Yes, that’s right: it’s all about what humans can do and what a robot or algorithm can’t. And yes, it’s about what you can do that perhaps no-one else can.

Sound extreme? I’m not throwing all ideas of rigour out of the window. In fact, the first example below shows how the approach I’m advocating can work really well in a team scenario where we seek confirmation among analysts (akin to inter-rater reliability). I’m not saying ‘anything goes’. I am saying: let’s seek the analysis where the best of us shines through, and where the output isn’t just what is in the data, but reflects an interaction between us and the data – where that ‘us’ is a very human, subjective, insightful one. Otherwise we are not analysing, we are just reporting. My video on ‘the, any or an analysis’ says more about this.

You can also check out an #openaccess paper I wrote with Prachi Srivastava that highlights reflexivity in analysis by asking: (1) What are the data telling me? (2) What do I want to know? And (3) What is the changing relationship between 1 and 2? [There is a video about this paper too]

The process I am about to describe is one in which the analyst is not cast out in the search for objectivity. We work with ‘things’ that increasingly reflect the interaction between data and analyst, not the raw data alone.

 

An alternative to coding

The approach I’ve ended up using many times is outlined below. I don’t call it a technique because it can’t be mechanically applied from one study to another. It is more a logic that follows a series of principles and implies a progressive flow in analysis.

The essence is this:

  1. Get into the data – systematically and playfully (in the way that Pat Thomson means).
  2. Systematically construct synoptic units – extractive summaries of how certain bits of data relate to something you’re interested in. These are not selections of bits of data, but summaries written in your own words. (You can keep track of juicy quotations or vignettes you might want to use later, but the point is that this is your writing.)
  3. Work with the synoptic units. Now instead of being faced with all the raw data, you’ve got these lovely new blocks to work and play seriously with. You could:
    1. Look for patterns – commonalities, contrasts, connections
    2. Juxtapose what seems to be odd, different, uncomfortable
    3. Look again for silences
    4. Look for a priori concepts or theoretical ideas
    5. Use a priori concepts or theoretical ideas to see similarity where on the surface things look different, to see difference where on the surface things look the same, or to see significance where on the surface things seem unimportant
    6. Ask ‘What do these units tell me? What do I want to know?’
    7. Make a mess and defamiliarize yourself by looking again in a different order, with a different question in mind etc.
  4. Do more data play and keep producing artefacts as you go. This might include:
    1. Freewriting after a session with the synoptic units
    2. Concept mapping key points and their relationships
    3. An outline view of an argument (eg. using PowerPoint)
    4. Anything that you find helpful!

 

In some cases you might create another layer of synoptic units to work at a greater analytical distance from the data. One of the examples below illustrates this.

The key is that we enable ourselves to reach new insights not by letting go of the data completely, but by creating things to work with that reflect both the data and our insights, determinations of relevance etc. We can be systematic as we go through all the data in producing the synoptic units. We remain rigorous in our ‘intellectual hygiene’ (confronting what doesn’t fit, what is less clear, our analytical doubts etc.). We do not close off opportunities for serious data play – rather we expand them.

If you’d like to read more, including three examples from real, published research, download the full pdf.


A metaphor and a simple framework for thinking about research design

Hi

I’ve published a video, freely available on youtube, outlining a framework I’ve been using for thinking about research design, particularly in social sciences.

It is based on four central ideas that gradually adopt a more fine-grained focus, plus a necessary gesture towards analysis: hence the idea of a 4+ part framework.

The parts are:

1. Strategy – the big picture, how you name the kind of research you are doing. This does a lot of work in setting the tone and character of your research, signals to others what they might expect, and from this, many implications for other parts of design flow.

2. Sampling – not necessarily implying positivistic / quantitative notions, but pointing to the need to think seriously about who is involved in research (or what, if you’re looking at documents for example), who isn’t, what your relationships with these people or objects are, what inclusions and exclusions there are, whether and how these matter etc.

3. Methods – the broad tools you use in your research to gather information (or generate data, if you’re coming from a more constructivist paradigm where things aren’t out there waiting to be discovered…). I raise the question of alignment between these and your research questions, but distinguish methodological issues from…

4. Techniques – this is how you use the tools in (3). What kind of interview are you doing? How are you observing? What is your survey like? Here I point explicitly to aesthetic aspects of the accomplishment or performance of research methods, the art that goes with the (social) science.

4+ – Analysis. Learning from my own mistakes in the (now dim and distant) past: going and getting data, or designing research, without thinking through how the analysis will proceed is a no-go. It doesn’t mean you can anticipate exactly what you’ll do analytically, but it’s better to think ahead than to get what looks like great data and then realise there’s no sensible way to analyse it that links to your research question (I say this from experience!).

 

The tree metaphor

The video makes use of a tree metaphor, talking about research as planting seeds and growing a tree.

Why plant the seeds here and not there? (ie. why this topic / question and not another? what other trees are growing here? what else has been done?)

How tall does your tree have to be? (ie. what do you have to do to stand out and make a new contribution in this field?)

How thick is your trunk? (ie. how do you make your research sturdy, able to withstand the odd thing going wrong, and the gusty winds of academic critique?)

How wide are your branches? (ie. how far can your analysis take you beyond what you studied to saying something of wider relevance? This doesn’t mean empirical generalisability necessarily!)

How tasty is your fruit? (ie. how palatable are your conclusions? or at least, how inviting is what you have to say in terms of capturing people’s attention. You don’t necessarily want to say what people want to hear, but you’ve got to get them enticed somehow!).

 

The prezi itself can be viewed at http://prezi.com/kzirzw3yhl9m/?utm_campaign=share&utm_medium=copy&rc=ex0share but this will be without the audio commentary on the video.

 

I hope you find this helpful!

PS.

I should acknowledge that this post and the video float in a void in terms of references to methods literature. I’m not claiming anything revolutionary here and am sure that many people talk about similar issues in research design. The tree metaphor is probably new (at least as far as I’m aware), and I think some clarity around saying design involves thinking about strategy, sampling, methods, techniques (oh, and analysis!) may be helpful. These terms are used in many different ways in the literature. This is simply how I find it useful to think about them.

More on Hammersley, critique, and a taster on research design

Hi

I’ve done a couple of prezis, elaborating my adaptation and interpretation of Hammersley’s framework for critical reading of ethnography, which, as you know by now, I think has currency as a framework for critique more generally.

Hopwood’s interpretation of Hammersley’s framework

The first prezi explains, with a visual accompaniment (partly prompted by a nudge in the comments to one of my posts about a lack of visuals!), how I think the whole thing works. It goes alongside the podcast I’ve published previously. See below for some brief comments on research design.

Hammersley-Hopwood framework for researchers to complete

The second prezi is very simple, and is essentially a ‘fill in the blanks’ version aimed at helping social science researchers (maybe others, too! I’d be fascinated to know if it works in other areas), to think about their research. It has deliberate (but for now unexplained) ‘fit’ with Kamler’s approach to writing abstracts (Locate, focus, report, argue). If you haven’t finished your research yet, it’s still very possible to complete the whole thing. Imagining or projecting what your claims and conclusions might be is really important. Normally we have some sense of what kind of things we might find and why they might be important. Of course we still want to leave space for the empirical world to surprise us!

Hopwood research design 4-part framework

I just want to explain some references in the first prezi to a 4-part design framework. This is my system for clarifying different components of research design. I’m not claiming it’s totally unique or original. It’s just the way of working / use of terms I’ve come to find useful.

1. Research Strategy – this is the big picture. Are you doing a case study? What kind of cases?  Ethnography? What of? What kind of ethnography? Longitudinal design? How long? What sequence?

2. Sampling and selection – this goes down a level to think about sampling. Samples include who your participants are (if any) and how they relate to a wider population. But let me be clear: this isn’t a kowtow to positivism. It’s about the fact that we nearly always study something smaller than the phenomenon of interest. This involves selections or samples in space, time, people, documents etc.

3. Methods for data generation – what it sounds like it is! Are you using a survey? interviews? observation? document collection? visual methods? why?

4. Techniques for data generation – this goes into a bit more detail, and refers to the craft and artistry involved. Within a survey, what kinds of items are you using? how have they been piloted? what kind of interview are you doing (semi-structured? life history?), and how good an interviewer are you? (probing questions, noting body language etc). If you’re observing, what are you looking for, what are you noticing? how structured is your observation? what kind of notes are being produced? when are they being written up?

This is rather cursory, but is there in case the references to the framework in the prezi are a bit cryptic!