
Overwhelmed by your lit review? How to read less and get away with it

This is a post for any researchers who might be feeling:

  1. That they simply don’t have time to read enough
  2. That their ‘to read’ pile is getting bigger and bigger, no matter how much they read
  3. That (being totally honest) their reading choices are shaped by fear of missing out (FOMO), or fear of getting caught not having read something.

I know these feelings all too well.

For those of you who prefer a video, all the key points below are in a lovely 10-minute video on my YouTube channel.

The ideas that follow are very much shaped by what I heard my colleague Julie Robert explain to research students a few years ago. The four labels are not my invention; I heard them from her.

Don’t read lots, read smart

I eventually recognised that my reading practices were not serving me well. I was reading based on FOMO rather than on my needs. I was defaulting to reading every article from start to finish, and feeling extremely guilty whenever I skipped sections.

What I needed was a way to spend quality time with the literature, but to make sure I wasn’t wasting time either.

Reading smart isn’t about just reading the same number of books, papers, paragraphs or words in less time. It is about choosing purposefully how to spend that time.

 

A blunt reality

Time you spend reading something in full that doesn’t justify a full read is time wasted. You cannot get that time back.

But…

If you spend only a little bit of time on a particular book or paper, it isn’t going anywhere. You can always come back to it later – if, and only if, you have good reasons to spend more time on it.

 

The key is to read in different ‘modes’ – each with a different purpose and different practices

 

Discovery reading

This is the reading you do when you don’t know what is out there. It is about becoming familiar, getting a landscape view. It can answer questions such as:

  • Who are the key researchers who are frequently cited?
  • Where is the consensus and where is the disagreement?
  • What feels more ‘done’, and what is left open for you to do?
  • What range of methods or theories is used?

It does not make sense to Discovery read texts from start to finish, nor to make copious detailed notes. To answer questions such as those above you might only need to read abstracts, dip into other sections, scan the conclusions and peruse the reference list.

And yes, you can cite something if you have only read the abstract or part of the text, provided (and this is a big point here) you are not leaning heavily on that text. It’s okay, in my view, to cite an abstract-only read if your citation is ‘light touch’, e.g. ‘Several studies have used x method to investigate y topic (Singh 2019; Hopwood 2017); however, my study departs from these by adopting a different approach.’ If you are going to report, and trust, their findings, or build on concepts they use, then after discovery, you’re going to need to go back for…

 

Archaeology reading

This is the slow, detailed, fine-tooth comb reading. When you need to fully understand a theory, method, or how a particular study reached its conclusions (so you can critique them, accept them, question them, build on them), there is no choice but to go in deep.

Archaeology reading cannot be short-cut or sped up. I often archaeology read the same text multiple times.

My point is, you can save time by only archaeology reading what you need to read in this way. It will not make sense for you to read everything in this fashion.

 

Quest reading

This is another lighter-touch approach to reading. Unlike Discovery, where you don’t really know what is out there, in Quest, you are going looking for something specific. Quest is a very extractive way of reading. Maybe you have found a concept and you want to read examples of people who have used it in research to help you get a better understanding. Maybe you need to plug a little hole in your methods by finding details of how particular data collection or analysis techniques are accomplished. Maybe you’re at the late stages and just need to check that the big picture you got through your discovery hasn’t changed in the months you’ve been working hard on your own writing.

This is not about reading on the author’s terms. You’re not there, sitting down, ready to hear everything they have to say. It’s like you’ve got a metal detector. You are sweeping methodically through the text waiting for a ‘ping!’ – a moment where you can find what you are looking for.

Of course we have to maintain our ethics and rigour and make sure we don’t go quoting things out of context. But the point remains, when on a quest, reading in detail from start to finish and producing heaps more notes is unlikely to be serving your interests.

Nor will it be leaving you time to watch Game of Thrones, go to the gym, spend time with your kids, visit your parents, have dinner with friends, or whatever it is that matters to you other than your research.

 

Cheat

Yes, I said it. We are human beings. Sometimes we ‘cheat’. By this I mean we read and cite in a way that might leave people with an impression we have read in more detail than we actually have.

There are clearly times when this is not okay – especially in relation to archaeology. We can’t ‘fake’ the understandings of and intimacy with the literature that come from archaeology reading.

But sometimes in order to have high quality time with the texts that really matter, we do a bit of a drive through on some others.

Cheat is okay as long as we don’t compromise our ethics and start heavily leaning on texts we have ‘cheat’ read. The texts aren’t going anywhere, and we can always go back in with more of an archaeology approach.

 

In conclusion

The lesson I learned from years of poor reading practices that didn’t serve my needs was that default start-to-finish reading of everything was doing more harm than good. It was denying me extended quality time with the texts that really mattered.

I don’t feel guilty now that I have given these other ways of reading names. Thanks Julie!

I’d love to hear from you about your own ways of reading – anyone ready to confess to some cheat reading? Do these labels make sense to you? What other ways of reading do you use?

Video about SuCCEED research with families of children with complex feeding difficulties

I am part of a team doing action research with parents of children who have to feed through a plastic tube.

This is funded by Maridulu Budyari Gumal / SPHERE and you can read more about the study in the summary page here on my blog.

This post is just to share the link to the awesome video that we made – with additional support from Maridulu Budyari Gumal.

It is totally humbling and a privilege to work with Chris, Kady and Ann, as well as the generous and inspiring parents and children who have been involved.

This video is pretty cool in explaining what we are doing and why we think it matters! Hope you enjoy 🙂

Anxiety in academic work

Hi everyone

This is a short blog post to accompany a YouTube video I posted recently, about anxiety in academic work and particularly among research students. It’s a fairly simple video in which I talk mainly about my own personal history and experiences of anxiety, and what I’ve learned about it along the way. No flashy data, no promises of solutions. Just an honest sharing of experience that puts anxiety out there as something that happens and is okay to talk about.

Why did I write it? Because of the work I do, I come into contact with students from lots of different universities and countries. I got an email from a student who had experienced anxiety in relation to her studies. Part of what she wrote was:

It is a learning process, right? I’m still figuring out what works for me, like walking for long time is really good. But just recognizing that this anxiety is a problem, like a broken finger, for example, and that it needs some time, maybe medicine, to heal, has been a big step. And I know it goes away. Just being able to put a name on it, has helped me a lot. And what also help is to talk to people who experience such things, and realizing that it is so normal. For me, I’m having the ups and downs, and I have had some therapy. But I now somewhat accept this part of me, and that is why I want to make it normal for people to talk about.

This made me think. Anxiety is out there among research students. And I agree with her about how helpful it can be to recognise it and talk about it with others. I also agree with her about how unhelpful it is to sweep things like anxiety under the carpet and hide them away.

So, I wanted to make a video about anxiety. But it’s not my area of expertise, either in terms of research I’ve done on doctoral students or in any medical or clinical sense. So I have to be careful. I thought it might at least be useful to reflect on my own anxiety, and lay out publicly what happened, what I tried to do in response, what worked, what didn’t, and how I view it all now.

If you want to follow up with a serious academic paper on this topic, I would recommend this as a good place to start: Wisker & Robinson (2018) ‘In sickness and in health, and a “duty of care”: PhD student health, stress and wellbeing issues and supervisory experiences’. It is a chapter in a book called Spaces, journeys and new horizons for postgraduate supervision, published by SUN Academic Press.

 

 

When coding doesn’t work, or doesn’t make sense: Synoptic units in qualitative data analysis

You can download a full pdf of this blog post including the three examples here. Please feel free to share with others, though preferably direct them to this page to download it!

 

How do you analyse qualitative data? You code it, right? Not always. And even if you do, chances are coding has only taken you a few steps in the long journey to your most important analytical insights.

I’m not dismissing coding altogether. I’ve done it many times and blogged about it, and expect I will code again. But there are times when coding doesn’t work, or when it doesn’t make sense to code at all. Problems with coding are increasingly being recognised (see this paper by St Pierre and Jackson 2014).

I am often asked: if not coding, then what? This blog post offers a concrete answer to that in terms of a logic and principles, and the full pdf gives examples from three studies.

Whatever you do in qualitative analysis is fine, as long as you’re finding it helpful. I’m far more concerned with reaching new insights, seeing new possible meanings, making new connections, exploring new juxtapositions, and hearing silences I’d missed in the noise of busy-work than I am with following rules, procedures, or methodological dogma.

I’m not the only one saying this. Pat Thomson wrote beautifully about how we can feel compelled into ‘technique-led’ analysis, avoiding anything that might feel ‘dodgy’. Her advocacy for ‘data play’ brings us into the deliciously messy and murky realms where standard techniques might go out of the window: she suggests random associations, redactions, scatter gun, and side by side approaches.

 

An approach where you are a strength, not a hazard

The best qualitative analyses are the ones where the unique qualities, interests, insights, hunches, understandings, and creativity of the analyst come to the fore. Yes, that’s right: it’s all about what humans can do and what a robot or algorithm can’t. And yes, it’s about what you can do that perhaps no-one else can.

Sound extreme? I’m not throwing all ideas of rigour out of the window. In fact, the first example below shows how the approach I’m advocating can work really well in a team scenario where we seek confirmation among analysts (akin to inter-rater reliability). I’m not saying ‘anything goes’. I am saying: let’s seek the analysis where the best of us shines through, and where the output isn’t just what is in the data, but reflects an interaction between us and the data – where that ‘us’ is a very human, subjective, insightful one. Otherwise we are not analysing, we are just reporting. My video on ‘the, any or an analysis’ says more about this.

You can also check out an #openaccess paper I wrote with Prachi Srivastava that highlights reflexivity in analysis by asking: (1) What are the data telling me? (2) What do I want to know? (3) What is the changing relationship between 1 and 2? [There is a video about this paper too]

The process I am about to describe is one in which the analyst is not cast out in the search for objectivity. We work with ‘things’ that increasingly reflect the interaction between the data and the analyst, not the data itself.

 

An alternative to coding

The approach I’ve ended up using many times is outlined below. I don’t call it a technique because it can’t be mechanically applied from one study to another. It is more a logic that follows a series of principles and implies a progressive flow in analysis.

The essence is this:

  1. Get into the data – systematically and playfully (in the way that Pat Thomson means).
  2. Systematically construct synoptic units – extractive summaries of how certain bits of data relate to something you’re interested in. These are not selections of bits of data, but summaries written in your own words. (You can keep track of juicy quotations or vignettes you might want to use later, but the point is that this is your own writing.)
  3. Work with the synoptic units. Now instead of being faced with all the raw data, you’ve got these lovely new blocks to work and play seriously with. You could:
    1. Look for patterns – commonalities, contrasts, connections
    2. Juxtapose what seems to be odd, different, uncomfortable
    3. Look again for silences
    4. Look for a priori concepts or theoretical ideas
    5. Use a priori concepts or theoretical ideas to see similarity where on the surface things look different, to see difference where on the surface things look the same, or to see significance where on the surface things seem unimportant
    6. Ask ‘What do these units tell me? What do I want to know?’
    7. Make a mess and defamiliarise yourself by looking again in a different order, with a different question in mind, etc.
  4. Do more data play and keep producing artefacts as you go. This might be:
    1. Freewriting after a session with the synoptic units
    2. Concept mapping key points and their relationships
    3. An outline view of an argument (e.g. using PowerPoint)
    4. Anything that you find helpful!

 

In some cases you might create another layer of synoptic units to work at a greater analytical distance from the data. One of the examples below illustrates this.

The key is that we enable ourselves to reach new insights not by letting go of the data completely, but by creating things to work with that reflect both the data and our insights, determinations of relevance, etc. We can be systematic as we go through all the data in producing the synoptic units. We remain rigorous in our ‘intellectual hygiene’ (confronting what doesn’t fit, what is less clear, our analytical doubts, etc.). We do not close off opportunities for serious data play – rather, we expand them.

If you’d like to read more, including three examples from real, published research, download the full pdf.

Selective lowlights from my many rejections

This post relates to my rejection wall blog, tweet, and series of videos about rejection in academia.

My rejection wall is there for all to see – ‘all’ being people who happen to come to my office.

While there is fun to be poked at the ridiculousness of academic rejection, there is a serious point to it, so this blog post makes the details of my rejection wall available for all to see, and adds some commentary. This isn’t all my rejections, just some of the juicier bits – selected lowlights, if you like – that I think might be the most rewarding for others to read. There are other examples of people doing this in blogs, and of people doing something similar in lab meetings (both of which I think are awesome!).

Maybe reading it just makes you feel a bit better by seeing how crap I’ve had it on several occasions with grumpy journal reviewers and months of grant-writing that came to nothing.

Maybe others can learn from my mistakes (and there have been plenty) – but I can’t promise that reading what follows will avert the inevitable misfortune of an unwarranted or nastily delivered academic rejection.

I have divided it up into rejected research proposals and rejected journal articles.

Some research rejections

Check out my video on why research grant applications get rejected, and then spot lots of those reasons in my failures below 🙂

[Screenshot: reviewer comments from a rejected grant proposal]

It was clear that these reviewers really didn’t think our project should be funded. The language needs to be glowing to even stand a chance. Looking back at the proposal now, I can see where the reviewers were coming from. I think we suffered a bit from not having met regularly as a team to discuss things, and also a bit of group-think: we tended to see and say what we liked about what we were doing. There was no ‘red team’ asking the awkward questions. And I agree (now) that we framed the project around an issue that wasn’t obviously worth caring about in itself. And we didn’t make a good enough case for alignment in what we were proposing.

 

[Screenshot: reviewer comments from another rejected grant proposal]

This was another big team effort. I think part of our problem was that the proposal swelled and got more complex as we fed in bits that represented what each of us offered (read: wanted to do, so we all felt important). The team was very diverse – and we all felt we needed each other. None of us, nor any subset of us, could have made a case alone. But somehow it all became too much. Hence the reviewers’ point about the relationship between the parts being weak. The point about not offering clear understanding reflects this general problem, plus a second major weakness: we were not in the bullseye of what this particular funding round was about. We were not giving the funders what they wanted.

[Screenshot: reviewer comments from a rejected funding proposal]

This was a proposal to a funding body specifically about women’s safety. To this day I think our basic idea was a good one: to do a kind of action research. However, with reviewer comments like this, our proposal flew pretty wide of the target. We went too heavy with unfamiliar theory, and they couldn’t see how it would work in practice. They also couldn’t see how it would generalise. Lacking content was a big no-no – too much methodology. At the same time, we didn’t give enough about the methods in terms of site details. And then we fell foul of the feasibility hurdle. So we misfired on multiple fronts.

[Screenshot: reviewer comments from a rejected funding proposal]

Several more months of work down the tubes with this one! Among the many issues the reviewers found, the two above were the most catastrophic. Being ambitious is okay if your project looks really feasible and the reviewers don’t get lost in complexity. In this case we failed on both counts. And then we failed to make the case for adding to knowledge. Who in their right mind would fund something that wasn’t going to find anything new? I still think the project as it existed in my mind would have been original and great. But what was in my mind clearly didn’t reach the reviewers. Finally, the ‘reality check’ was a vicious blow! But it pointed to how wrong we got it. The reviewer felt we had an over-inflated budget, to produce a measly evidence base that wasn’t going to reveal anything new. Brutal.

[Screenshot: reviewer comments from a rejected funding proposal]

Ah – not giving concrete details about the methodology. That old chestnut. Old it might be, but a disaster for this funding proposal! I realise no-one is going to give out money – even for a study on a really important topic by brilliant people – if there isn’t a clear, feasible and persuasive business plan for how it is going to be pulled off (on time, on budget). The methodology section is key to this.

[Screenshot: reviewer comments from a rejected funding application]

Again falling foul of methodological detail – in this case not explaining how we would do the analysis. The Field of Research point is really important – this is how these applications get directed to reviewers and bigger panels. We badged it as education but the readers didn’t see themselves or their field in what we proposed. I speak about getting the ‘wrong’ reviewer in the video about funding rejections.

 

And now some really lovely journal rejections

I’ve picked these to illustrate different reasons for getting rejected from academic journals – these connect with the reasons I talk about in a video focused on exactly this!

[Screenshot: desk rejection letter from journal editors]

Ouch! This definitely wasn’t ‘a good paper went to the wrong journal’. The editors hated it and couldn’t even bring themselves to waste reviewers’ time by asking them to look at it. There was no invitation to come back with something better. Just a ‘get lost’. In the end the paper was published somewhere else.

[Screenshot: journal rejection letter]

This fell foul of the half-baked problem. The editor thought I was halfway through. My bad for leaving him with this impression. The paper was published somewhere else without any further data collection or analysis, but with a much stronger argument about what the contribution was.

[Screenshot: journal rejection letter]

This was living proof for me that writing with ‘big names’ doesn’t protect you from crappy reviews. The profs I was writing with really were at the leading edge of theory in the area, and so we really did think it added something new. This paper was rejected from two journals before we finally got it published.

[Screenshot: journal reviewer comments]

This is one of my favourites! This reviewer really hated my paper by the time she finished reading it. The problem was I got off on the wrong foot by writing as if the UK was the same as the whole world. My bad. Really not okay. But then things got worse because she didn’t see herself and her buddies in my lit review. All the studies she mentioned were missing were ones I’d read. They weren’t relevant, but now I learn to doff my cap to the biggies in the field anyway. How dare a reviewer question my ethics this way (the students involved asked to keep doing the logs as they found them so useful). How dare a reviewer tell me what theory I need to use? And what possible relevance do the names of her grand-daughter’s classmates have to my paper and its worthiness for publication?! Finally, on the issue of split infinitives, I checked (there was precedent in the journal). When this was published (eventually as a book chapter) I made sure there were plenty still in there. A classic case of annoying the reviewer who started with a valid point, then tried to ghost-write my paper the way she wanted it, and ended up flinging all sorts of mud at me.

 

[Screenshot: list of journals that have rejected my papers]

The only thing I can say with certainty about this list is: it will get longer! I’ve published in quite a few of these too – showing a rejection doesn’t mean the end of the road for you and a particular journal.

More to follow (inevitably!)

 

Video on activist and change methodologies

I have been working with Ilaria Vanni on a new Designing Research Lab. We are approaching research methods teaching by conceptualising research approaches in terms of: deep dive, place-based, textual, and activist/change methodologies.

I made a short video summarising some key points. There are many different ways of doing activist or change-based research, and I don’t try to cover them all. After explaining some key ideas, the video focuses on action research, change laboratories, and practice-based approaches. These were chosen to illustrate different ways of going about research that can deliver change and create new possibilities for change.

You can download the PowerPoint used in the video here. Many of the images and speech bubbles link to other useful resources or original sources.

Some of the highlights in terms of framing ideas include these quotations:

“There is no necessary contradiction between active political commitment to resolving a problem, and rigorous scholarly research on that problem.” (Hale 2001)

“Activist research is about using or doing research so that it changes material conditions for people or places. It is different than cultural critique, where texts are written with political conviction, but no concrete changes are made on the ground.” (Hale 2001)

“We are all participating in, and contributing to, the making of history and of our common future, bearing responsibility for the events unfolding today and, therefore, for what is to come tomorrow. The social structures and practices exist before we enter them… yet it is our action (or inaction), including our work of understanding and knowing, that helps maintain them in their status quo or, alternatively, to transform and transcend them.” (Stetsenko 2017)

Let me know in the comments below – Are you involved in activist research? What other approaches are you using to deliver concrete change in the world?

The great wall of rejection

The saga of my #rejectionwall continues! The tweet that spawned 250,000 impressions and 1,000 retweets is still making new things happen.

The latest instalment is here – from UTS’ U: Magazine.

It was preceded by my shadow CV and followed up with a post on another blog (It’s all pretty funny), a post as part of the ‘How I Fail’ series, a re-post on an Indian website, and two different posts on the UTS Futures Blog (one is a video interview, and the other a written reflection), a piece in Failure Magazine, and then the interview and video for U: Magazine – produced by UTS, where I work.

The write-up brings together a few bits of the rejection story so far, and weaves them into the long and protracted story of rejections and failures in my career. There are some new thoughts in there too.

What I really liked about it was that they got me and other academics to read out our rejections on camera. A bit like the ‘mean tweets’ thing where celebrities read out insulting tweets about them, I found it really helped to step back, laugh, but also ‘own’ the rejection in a productive way. Kind of what the rejection wall did for me, but more so.

I love the idea of a YouTube channel just full of academics reading out their rejections, commenting on how unreasonable (or badly written, or unethical, etc.) they are.

Anyone interested in making this a thing?!