Delivering on what research says is needed… an activist action research initiative

Beyond description: an activist approach to research

This blog post is about an action research study. It aimed initially to produce a new website for families of children who cannot feed orally, and so feed using a tube. We delivered on our promise – childfeeding.org – but, as is typical in action research, the more we listened to families, the more we realised needed to be done. So, we have launched a fundraising campaign to raise the $10,000 we need to take the next step and make a bigger, positive difference to families of children who tube feed. If you want to skip straight to the end, here’s the link to the fundraising page. Please consider sharing the link with people you think might be interested in donating!

Introducing the SUCCEED study

The SUCCEED Study is funded by Maridulu Budyari Gumal / Sydney Partnership for Health, Enterprise, Research and Education. Our purpose is to use collaborative research, in which families, clinicians and researchers work together with equal input and ownership of the project.

Summary of SUCCEED Study video produced with our funders Maridulu Budyari Gumal

The first thing we did was listen to families. We heard stories of what it is like parenting a child who is tube fed. We heard how important it is to maintain social activities – playdates, picnics, birthday celebrations, and physical activities (kids who tube-feed can run, jump, and swim!). We also heard how difficult this can be, not just because of the logistics of tube feeding, but because of the way members of the public often react to tube feeding.

What’s wrong with her? Oh gosh, how awful, how long does he have?

That’s the kind of thing parents told us other people have said to them. They also told us how they can end up isolated from friends, because their friends worry their own kids might hurt the ‘fragile’ (so they think) child who tube-feeds.

It wasn’t all doom and gloom! Parents told us a lot about the MacGyver-type strategies they use to get out of the house, help their children join in social activities with others, and respond confidently and positively when the public are curious (or worse) about the tube.

So, the next thing we did was build a website: childfeeding.org. This is full of content that came directly from parents, sharing their tips and tricks for everyday life, how they navigated important decisions, and also presenting their Real Stories – showing how every tube-feeding journey is different.

We also did things like organise Australia’s first tube-feeding picnic, which was featured on Channel 7 News. This was a chance for families to get together and celebrate. Many people offered support to make this happen, including bubble-makers and a group of superheroes who came all the way from Newcastle to bring some extra special joy to children at the picnic!

SUCCEED’s Irene and Connor, and our Tube Feeding Picnic on Channel 7 News

But our job is far from done. Parents tell us time and time again that it would make a really big difference if tube feeding was everyone’s business. Many people have never met or even seen a child who feeds using a tube. Tube feeding is often unfamiliar, scary and confronting.

Parents have asked us to work with them to put together a public awareness campaign. This will be based on positive values, helping to normalise tube feeding, and showing that kids who tube feed can be happy, healthy and thriving – as can their families.

Instead of tube feeding being something horrible that happens to other people, we want tube feeding to be something that everyone understands and can respond to appropriately: not with shock, fear or pity, but with a sense of connection and positivity, and the knowledge that they can do something to be part of happy, thriving lives for kids and families.

That’s what we need $10,000 for. We’ve set up a (totally legitimate!) fundraising drive through UTS Causes. People have been incredibly generous so far, but we have a long way to go. And if we exceed our target, we will simply do more of what parents tell us we should be doing!

So please: consider donating – every cent counts! If you can’t or prefer not to donate, you can still help by sharing this link – on Facebook, Twitter, with people you know: https://tube-feeding.fundraising.uts.edu.au/

Everything I do on this blog is done in my own time. Everything on it is freely available to anyone with an internet connection. If you’ve ever read something on this blog and found it useful, this is a cool opportunity to give back – not to me, but to a cause that I care dearly about 🙂

Reclaiming rejection from the shadows of silence and shame

It is almost three years since I posted a picture of my Rejection Wall on Twitter. That started a whole series of events, blogs, and videos. I thought it might be time to revisit and reflect. Importantly, this update post includes heaps of links to what others have been doing to reclaim rejection from the shadows of silence and shame.

Rejection walls

Since then, more than 285,000 people have seen it. Seems like the thing I’m best known for is being rejected.

The response has been overwhelming! Lots of people are saying they feel encouraged and heartened, perhaps when facing rejection themselves.

There’s a list of links to interviews and blog posts relating to the rejection wall below (right at the bottom of this post!), including a video explaining why I did it and why I think it matters.

People are joining in the ‘reclaim rejection’ movement by ‘confessing’ their own rejection histories.

@RoseGWhite: I’m sure I could cover a whole corridor like this!

@JRobinHighley: I would start my own display, but not sure I have a wall big enough

Similar comments from @Liam_Wagner @SimmsMelanie @TrevorABranch @naynerz @mathewjowens @SJC_fishy @RobHarcourt

@AlexaDelbosc posted a picture of her own rejection wall on twitter

People are advancing the ‘reclaim rejection’ movement through crazy, wonderful ideas

Max Mueller has told us about a rejection garden – not just reclaiming rejection, but doing some good for the environment and greening spaces we inhabit and work in as we go!

Caitlin Kirby attended her PhD Viva exam in a dress made from the rejections she’d had along the way. What a fabulous way to reclaim – to embody the reality of rejection!

@roomforwriting told us about a #wallofcourage in a writing workshop in Brazil, inspired by the rejection wall. I love the framing of reclaiming rejection around courage.

Reclaiming rejection: In the style of Mean Tweets

After the rejection wall I was asked by UTS if I would join a group of academics reading out their rejections on video. Kind of like the way some celebrities do with #meantweets

How cool is it that I work for an institution that values and celebrates rejection in this way, and that I have colleagues who are ready to shake off the shame and go public!

Reclaiming rejection through Shadow CVs

I love the idea of Shadow CVs. I just updated mine (May 2020) with extra-special new additions – more grants not awarded, more journals rejecting my papers, and some juicy new sections bringing other failings and failures out into the light. A rejection extravaganza!

I also wrote a post about all the ‘non-academic’ jobs I’ve had along the way (troubling the idea of an ‘academic’ career).

The Shadow CV is a growing genre and an important thread in the move to reclaim rejection. Other examples I’ve come across include:

Devoney Looser asks: What would my vita look like if it recorded not just the successes of my professional life, but also the many, many rejections?

Jeremy Fox 2012 | Bradley Voytek 2013 | Princeton Professor @JHausfoher 2016

Reclaiming rejection through reflecting on our experiences and responses

I’ve tried to address the issue of rejection from different angles. Sometimes it is about the overt story that rejection happens. Sometimes it is more about how we feel and respond when it does. Some posts you might enjoy include:

Follow-ups to the rejection wall

The rejection wall triggered quite a bit of activity – people asking for interviews, other blog posts. They share a common, core message.

The lovely people at UTS also came to my office and helped make this video

 

Reclaim rejection from the shadows of silence and shame!

I would love to hear from you about things you or others are doing to help reclaim rejection! How are people normalising rejection? Responding to rejection?

Please get in touch – comment below or send me an email!

Why journal abstracts are really high stakes, and why they are (and should be) really hard to write

If you find writing abstracts easy, you are not writing good abstracts.

An uncomfortable truth?

This is something I was convinced of by Barbara Kamler a few years ago. I didn’t like it. But I was persuaded it is true.

Before then, and (worryingly!) since, I have been guilty of writing abstracts that aren’t great.

If you’re more of a video person you can watch a summary of the key points on the video below. If you’re more a words and text person, scroll down and read on!

 

Why abstracts matter

I’m talking about abstracts for journal articles (but also book chapters, which increasingly have their own abstracts these days).

These abstracts are really really high stakes. A lot rides on them. They can do a huge amount of positive work, adding value, widening readership. But a poor abstract, or even a mediocre one, can undermine good work in the full paper.

Why?

  1. Abstracts are generally available to anyone with an internet connection. Full papers are often (at least for now) behind paywalls. Yes, lots of university libraries offer access, but this doesn’t cover all readers – by a long shot. Your abstract – those 200-250 words – might be the whole of your interaction with a reader. Every word, every sentence, every idea matters.
  2. Plenty of people might cite your work based only on reading the abstract. Hands up if you’ve cited something based on the abstract alone. I certainly have. And will do again. Not crucial texts, but things I need to have a passing knowledge of. If your abstract meets certain criteria (see below), people might just cite you based on it.
  3. Abstracts influence peer review, significantly.
    1. Editors get a first impression, and often make a preliminary judgement based on the abstract (perhaps to desk reject, perhaps to send for review and who might be a good reviewer). They might then dip into the full paper, but the abstract is their first contact and preliminary feelings might be hard to change (particularly if negative, eg feeling a paper is out of scope, offers nothing new).
    2. Reviewers agree or not to review based in part on the abstract. The abstract is usually all they get to see before saying yes or no. If it is written in a clear, engaging, persuasive way, promising something interesting and new, a reviewer is more likely to say yes. And you want reviewers to say yes, because the more who say no, the further down the list of potential reviewers you fall, and this means they are likely to be less ‘ideal’, less well-placed to review your paper, and more simply whoever says yes.
    3. Then if you do get a reviewer saying yes, your abstract has set up their expectations. If your paper drifts away from it, they are likely to get grumpy and be less favourable, less tolerant of minor errors – because they have given up their time, for free, based on a mis-sold premise. A lie.
  4. Abstracts can persuade readers to read the full paper (download it, contact you, look for a free version in an institutional repository).

 

So, abstracts are far from a summary of a larger text for the time-short reader. They really, really matter. So, we should be aiming to write really really good abstracts. Not just okay ones.

To understand why good abstracts are hard to write, we have to first understand the work that abstracts need to do.

 

What do abstracts need to do?

A good abstract:

  1. Captures your argument, message, or point – leaving readers with a clear understanding of what you are saying – while also encouraging people to read the whole paper.
  2. Communicates why this argument is important and distinctive; not leaving them to figure out what the novelty is here; nor leaving them to figure out why it matters.
  3. Persuades readers why this argument is robust – why they should take it, and take you, seriously. This often relies on methodological details.
  4. Sets up expectations so when people read the full paper, they get what they were hoping for.

 

Abstracts as tiny texts

One thing that Barbara Kamler said in the workshop I attended, was that abstracts are tiny texts (an idea she has written about with Pat Thomson; Pat has also written about an expanded version of the idea here).

My understanding of this is that abstracts are not simply a cut and paste shorter version or summary of a longer piece.

They are a genre of their own. They have their own rules, logics, criteria for excellence.

 

"Good papers have good abstracts behind and in front of them"

 

As tiny texts abstracts can do work not just for your readers, but for you. The thinking that goes into writing a really good abstract (or as Pat says, a range of tiny texts) can make the ‘full’ writing better. I think a good abstract underpins good paper-writing. But it can also lead it, be out ahead of it. Which is why I would advocate putting some serious time into writing an abstract, with a particular journal in mind, before starting to write the whole paper. (And then revising the abstract to make sure the two are aligned with one another).

 

Why are abstracts so difficult to write?

Abstracts are, and should be, hard to write. If you’re finding it hard because you are struggling over word choice, sentence structure, flow and other things amid the tight word economy of the abstract: yay! It means you’re in with a chance of writing a good one.

Abstracts are inherently dense texts that should be experienced by readers as clear, logical and persuasive.

No reviewer ever said ‘that abstract was too clear’.

The trick – one of the key demands and difficulties – is to convey a lot of information (meeting all those needs mentioned above), while leaving readers thinking it was easy to read, understand and follow.

Dense does not often sit effortlessly alongside easy and clear when we describe writing.

 

Semantic waves

An idea I find really appealing and helpful here is the semantic wave. An abstract has to have both general, abstract (!) content, as well as specific, concrete content. The semantic wave is about smooth movement between these two. I find it useful to look at my abstract drafts and try to find where I am in one part of the wave, where I’m in another, and how I move between them. No rough, sudden breaks.

 

Wordsmithing

When I read really good abstracts I am struck by how particular words work, and the work they do. Words like ‘essential’ or ‘vital’ attach value. Words like ‘So…’ convey logical flow (I’ve written about that elsewhere). Words that announce novelty and originality are key. Words that assert a position and voice, especially in the conclusions and argument, can make all the difference.

In a tight word economy, it is easy to be occupied with the number of words. But the choice of words is so important, too.

And in this economy, I would argue that every sentence should say something about your study (perhaps excluding a very short opener about a general topic). Sentences 100% about existing work are risky: they communicate nothing about your work. Consider the difference between:

  • Existing research shows that poor abstracts are a common reason for rejection in academic publishing.
  • This paper builds on evidence that poor abstracts are a common reason for rejection in academic publishing.

The second one is only two words longer, but says something (valuable) about the study.

 

Structure and flow

What (rhetorical) moves are you making? Are they in the best order? Do you leave readers in doubt and then fix it later? (not good, I’d argue). Does your final comment resonate and echo your opener?

 

Some things that really annoy me in abstracts

I read a lot of abstracts – as an editor, and as someone who is frequently asked to review for other journals. I probably read more abstracts than any other kind of academic text.

And I see patterns. And I develop reactions to things (justified or not). Here are some things that particularly irritate me.

  1. “Findings will be presented and implications discussed”. No sh!t Sherlock. I see statements bordering on this from time to time. Utterly vacuous (except perhaps if you’re writing for a conference and don’t know your findings yet, but still, it’s a cop-out).
  2. “Four themes were identified and will be outlined”. What four themes? I am not going to cite someone because they found four themes. The number is irrelevant. I need to know what they are so I can judge their relevance, interest and novelty.
  3. “I will save my argument for people who can (afford to) access the full paper”. I don’t see this directly, but I do read plenty of abstracts that don’t give the final argument and landing point. That is essentially withholding key information and messages for a select group of people. Unfair on the others, and damaging to your chances of being cited.

 

So that is why I say:

If you find writing abstracts easy, you are not writing good abstracts. But it is really worth the effort to meet the many demands of good abstract-writing.


What do ‘academic careers’ look like? Part 1: Dirty laundry

I was prompted to write this by a series of tweets in which people currently working in academia shared their first 5 jobs.

These included things like: McJob, Deserted Hotel Barman, Arthouse Cinema Dogsbody (@marklester), street musician, software quality assurance (@3blue1brown), cat caretaker, garden centre plant arranger, golf course bartender (@acapellascience), children’s theatre actor, chocolate store employee, bagel slinger, museum specimen preparator (@ehmee). You get the feel.

Two things struck me:

  1. I didn’t see anything that went: school > undergrad > masters > PhD > postdoc > full time / tenure track > permanent job.
  2. Even my own path, which I’d always thought to be pretty academic, wasn’t actually that way.

 

Why is this important?

Because, like our Shadow CVs (the CV with all the jobs we didn’t get, all the funding refused, articles never published) – see mine here – they give us a fuller, more realistic, picture of academic work and careers.

Because, they make us more human. Academics are like everyone else. Struggling at times, or perhaps all the time. Taking opportunities, including unglamorous ones. It reminds me of the wonderful title of Steven Shapin’s book: Never Pure: Historical Studies of Science as If It Was Produced by People with Bodies, Situated in Time, Space, Culture, and Society, and Struggling for Credibility and Authority.

 

So what does my ‘academic’ career look like? 

First, a caveat: while I am currently employed in a full time, ‘permanent’ job (though we can all still get fired or made redundant, or have our departments close etc), I am not at the end of my career. I have no idea what will become of it, and whether at the end of the day, it will merit the word ‘career’.

The point here is not to list all the ‘academic’ bits of my working life but the other bits.

So here it is. I hope it makes academic careers seem like the fuzzy, unglamorous mess that they really are.

Job 1: Looking up articles on microfiche for my mum. 

When I was 16 my mum got me 2 weeks’ work going to the library to find articles and copy them from microfiche to paper. This is important because it says something of the often invisible privilege that has powered the glass escalator I have ridden in many aspects of my life. Even before I had finished school, I knew what journal articles were, some of them even had my mum’s name as an author. At the time my dream job was to be a roller coaster designer, and I had no idea that this was normalising and introducing me to a world that would become my job for decades!

(As a side note, mum and I have since published a paper together!)

 

Job 2: Brushing up off-cuts from the floor in a flower shop

My best friend’s parents ran a flower shop. At Christmas they were crazy busy. They gave me a job sometimes sweeping the floor. To be honest, it felt like a fun way to be with my mate, and get some free money in the process.

 

Job 3: Temp for an agricultural charity.

I failed at doing a mail merge in my test at the temp agency, so I couldn’t get a decent office job. But I did get 7 days work at a charity for farmers. For a couple of days I was phoning vets for references so we could send out money to help farmers (it was at a time of crisis with Foot and Mouth disease). After that, I was in the basement shredding documents. All. Day.

 

Job 4: Working at a party and joke shop

‘Celebrations’ in Oxford was an institution. Sadly it closed down in 2019 😦

I walked in one day and the owner, who was not quite as tall as me (by a long way), asked if I could reach a mask of Tony Blair down off a high shelf. I did so. She asked if I was interested in working for them, and I said yes.

I think I got just over 4 pounds an hour, in a little brown envelope on the last shift I did each week. I kept this going during my Masters and PhD, often working Saturdays, but also the odd morning or afternoon during the week.

Halloween was crazy fun. We had people queuing out of the door all day. Other memorable moments included the day we got to try on the new costumes for hire and have our photos taken for the ‘catalogue’ (which was a scrappy album we gave to customers to help choose a costume). I also got to explain the three different types of fake turd we had in stock (straight, curly, and floating), and workshop trickier costumes like ‘jellyfish’. Then there were shifts when I didn’t ring the till once. I just counted the minutes in a tally chart.

 

Job 5: Teas, coffees & biscuits (and washing up) for my Department

I don’t remember how I got this one. Probably because I got on well with the admin staff, and they saw me around all the time.

When my Department had a function, I would get an instruction like this: TCBx50 out 1030 clear 11. I would get the keys to the kitchen cupboards, get the cistern going for the tea and coffee, and put the biscuits out. Sometimes I would take deliveries from other providers for lunch. I would top up the water if needed. And when everyone went back to their work, I would clear it all up and do the washing up.

Glamorous? Not really. Academic? Not quite. But it did give me extra contact with some senior people in my Department, and ‘soften’ some of those academic relationships. And that mattered.

 

Conclusion?

I nearly got to the end of my PhD without being paid a penny to do anything academic at all. The closest I got was a couple of one-off tutoring gigs (for a couple of hours).

Since then I have had a Research Assistant job, a postdoc, then a lectureship.

But I have also worked as a freediving instructor on weekends.

All these jobs were part of how I have become me, and have woven into the fabric of my working life in some way. They’ve all appeared on my CV at some point (though not all of them would be listed now!).

So there you go. My academic ‘career’ isn’t so academic after all.

My next post will be about how I’ve tried to make the more ‘academic’ bits of my so-called ‘career’ work.

 

Why (qualitative) analysis is like catching findings with a net, not building a wall of evidence

This is a short post using the idea of a fishing net to think about qualitative data analysis. It’s not a practical strategy in itself, but it can help you cope with some of the common difficulties in analysis.

I say (qualitative) analysis because qualitative analysis is what I have most experience of. But I have a feeling the same might apply to quantitative analysis too, or at least aspects of it.

[And for the environmentally minded among you, consider the fishing referred to here as very sustainable, selective fishing, not mega-trawlers catching everything! I’m thinking of the small nets a person might throw into the ocean by hand seeking a modest catch.]

What are the relevant features of a fishing net?

Here is a picture drawn by Kate Hughes (and used with her permission). Think of a fishing net. It is a series of thin strings tied together.  The net is not a solid sheet of material. Most of it is actually holes.

Despite being way more ‘gaps’ than substance, a net can catch and hold lots of fish.

Indeed, nets can only work because they are mainly holes. You couldn’t drag a solid sheet through the water.

The Net.jpg

What does this have to do with qualitative data analysis?

A common struggle I see qualitative researchers confronting is a feeling that they don’t have enough ‘proof’ for the claims they want to make.

Now, of course it is good intellectual hygiene to doubt one’s claims, asking: Could it be otherwise? What other interpretations might be possible? How could I challenge this claim? etc.

But it is often not healthy to feel that to make any claim you have to build up a solid wall of evidence, if this means everything you claim has its direct match in the data.

Yes, I know this sounds odd. Stay with me.

The point of analysis is to do something with the data. To go beyond the data. To find new meanings. To say something that the data themselves don’t say directly. Otherwise you are just (selectively) reporting. In which case, you might as well just publish your transcripts, observation notes (etc.) in raw form.

I am not advocating a free-for-all where you can claim anything. We remain bound to some degree by what we can and can’t say because of what is and isn’t in the data.

But I do think it helpful to imagine analysis being like tying threads together to catch ideas and insights.

Some ideas and insights, like the tiny fish that pass happily through the net, won’t hold. They may be slippery; stable meanings, confident insights or robust claims might evade us.

Some might just be too much. Like the big, heavy fish that could break the net, some claims might be more than the data and any analytical method can bear.

But the ones that work will be like the fish the net is designed to catch. Big enough to get caught, not too big to break the net.

 

Yes, but what does this mean?

Well, it means in analysis, and its writing up, we are not trying to prove everything with a direct quotation from data. That’s a symptom of quotitis, and not a good thing.

What we are trying to do is to find strong threads that can withstand the forces needed to keep the insights or interpretations (fish) in place. And strong knots to keep the holes in the right shape so it doesn’t all slip about.

I like the net idea because a net is not rigid. It moves, flexes, bulges, sways. Just like our analysis should. But it also holds tight. Firm but flexible. Robust not rigid. The threads might be bits of data, or lines of questioning we apply to the data. The knots might be theories or conceptual frameworks that allow us to tie this data to that data, this meaning to that meaning.

When you think about it, imagine how heavy a net is when you first throw it in. Not heavy at all – light enough to pick up and throw. Now, think about when it is full of fish. Much heavier. But still what a person might (with some effort) pull in and feed off.

The same goes for analysis. Being able to haul in great claims is not about big heavy machinery. It’s about using agile, efficient tools. The right kinds of questioning, theory and procedures (including play – see Pat Thomson on this!) can be just enough.

The fisherperson doesn’t use the biggest possible net or the thickest, strongest possible line. They do not tie the strongest, most complex possible knots. They choose the optimal balance between size, weight, strength and stretch.

I find this a refreshing way to think about analysis. It’s about weaving and tying threads together, just enough to hold things in place so you can haul out your claims, based on rich insights.

Just like the fisher chooses the net, using materials that are available, adaptable and suitable, you, the analyst, choose your threads and knots. There is nothing automatic about this. Nothing given.

If you’re after very detailed, up-close findings, you’ll need an analytical approach to match. The fisher would choose a net with small holes. Conversation analysis strikes me as one of those very fine-grained approaches to analysis, where every um, and ah, and pause etc is transcribed. A fine analytical weave indeed.

If you’re after grander claims, about big social phenomena, then a different approach might serve you better. Other kinds of discourse analysis might allow you to ‘see’ things like racism, sexism, injustice – catching larger fish – but your net will be quite okay with relatively large spaces between the knots.

Of course there are heaps of theories around, and heaps of analytical approaches, and they don’t match simply to scale in the way I’ve suggested above. But hopefully you get the point!

In summary

Evidence or data can only ever be provisional, a pattern of well woven strings that hold up a bulkier mass, rather than a seamless continuous entity. Analysis is more like weaving threads to make a net than building a wall of evidence. Agile, flexible, light and strong. Not immutable, rigid and opaque.

Let me know

Do you find this idea useful? The opposite? How do you think about analysis in ways that allow for agility and escaping the burden of needing to ‘prove’ everything with a quote or extract from data?


Overwhelmed by your lit review? How to read less and get away with it

This is a post for any researchers who might be feeling:

  1. That they simply don’t have time to read enough
  2. That their ‘to read’ pile is getting bigger and bigger, no matter how much they read
  3. (Being totally honest) their reading choices are shaped by fear of missing out (FOMO) – or fear of getting caught for not having read something.

I know these feelings all too well.

For those of you who prefer a video, all the key points below are in a lovely 10 minute video on my YouTube channel.

The ideas that follow are very much shaped by what I heard my colleague Julie Robert explain to research students a few years ago. The four labels are not my invention; I heard them from her.

Don’t read lots, read smart

I eventually recognised that my reading practices were not serving me well. I was reading based on FOMO rather than on my needs. I was defaulting to reading all articles from start to finish, and feeling extremely guilty when I skipped sections.

What I needed was a way to spend quality time with the literature, but to make sure I wasn’t wasting time either.

Reading smart isn’t about just reading the same number of books, papers, paragraphs or words in less time. It is about choosing purposefully how to spend that time.

 

A blunt reality

Time you spend reading something in full that doesn’t justify a full read is time wasted. You cannot get that time back.

But…

If you spend only a little bit of time on a particular book or paper, it isn’t going anywhere. You can always come back to it later, if and only if you have good reasons to spend more time on it.

 

The key is to read in different ‘modes’ – each with a different purpose and different practices

 

Discovery reading

This is the reading you do when you don’t know what is out there. It is about becoming familiar, getting a landscape view. It can answer questions such as:

  • Who are the key researchers who are frequently cited?
  • Where is the consensus and where is the disagreement?
  • What feels more ‘done’, and what is left more open for you to do?
  • What range of methods and theories is being used?

It does not make sense to Discovery read texts from start to finish, nor to make copious detailed notes. To answer questions such as those above you might only need to read abstracts, dip into other sections, scan the conclusions and peruse the reference list.

And yes, you can cite something if you have only read the abstract or part of the text, provided (and this is a big point here) you are not leaning heavily on that text. It’s okay, in my view, to cite an abstract-only read if your citation is ‘light touch’, e.g. ‘Several studies have used x method to investigate y topic (Singh 2019; Hopwood 2017), however my study departs from these by adopting a different approach’. If you are going to report, and trust, their findings, or build on concepts they use, then after discovery, you’re going to need to go back for…

 

Archaeology reading

This is the slow, detailed, fine-tooth comb reading. When you need to fully understand a theory, method, or how a particular study reached its conclusions (so you can critique them, accept them, question them, build on them), there is no choice but to go in deep.

Archaeology reading cannot be short-cut or sped up. I often archaeology read the same text multiple times.

My point is, you can save time by only archaeology reading what you need to read in this way. It will not make sense for you to read everything in this fashion.

 

Quest reading

This is another lighter-touch approach to reading. Unlike Discovery, where you don’t really know what is out there, in Quest, you are going looking for something specific. Quest is a very extractive way of reading. Maybe you have found a concept and you want to read examples of people who have used it in research to help you get a better understanding. Maybe you need to plug a little hole in your methods by finding details of how particular data collection or analysis techniques are accomplished. Maybe you’re at the late stages and just need to check that the big picture you got through your discovery hasn’t changed in the months you’ve been working hard on your own writing.

This is not about reading on the author’s terms. You’re not there, sitting down, ready to hear everything they have to say. It’s like you’ve got a metal detector. You are sweeping methodically through the text waiting for a ‘ping!’ – a moment where you can find what you are looking for.

Of course we have to maintain our ethics and rigour and make sure we don’t go quoting things out of context. But the point remains, when on a quest, reading in detail from start to finish and producing heaps more notes is unlikely to be serving your interests.

Nor will it be leaving you time to watch Game of Thrones, go to the gym, spend time with your kids, visit your parents, have dinner with friends, or whatever it is that matters to you other than your research.

 

Cheat

Yes, I said it. We are human beings. Sometimes we ‘cheat’. By this I mean we read and cite in a way that might leave people with an impression we have read in more detail than we actually have.

There are clearly times when this is not okay – especially in relation to archaeology. We can’t ‘fake’ the understandings of and intimacy with the literature that come from archaeology reading.

But sometimes in order to have high quality time with the texts that really matter, we do a bit of a drive through on some others.

Cheat is okay as long as we don’t compromise our ethics and start heavily leaning on texts we have ‘cheat’ read. The texts aren’t going anywhere, and we can always go back in with more of an archaeology approach.

 

In conclusion

The lesson I learned from years of poor reading practices that didn’t serve my needs was that default start-to-finish reading of everything was doing more harm than good. It was denying me extended quality time with the texts that really mattered.

I don’t feel guilty now that I have names for these other ways of reading. Thanks, Julie!

I’d love to hear from you about your own ways of reading – anyone ready to confess to some cheat reading? Do these labels make sense to you? What other ways of reading do you use?

Video about SUCCEED research with families of children with complex feeding difficulties

I am part of a team doing action research with parents of children who have to feed through a plastic tube.

This is funded by Maridulu Budyari Gumal / SPHERE and you can read more about the study in the summary page here on my blog.

This post is just to share the link to the awesome video that we made – with additional support from Maridulu Budyari Gumal.

It is totally humbling and a privilege to work with Chris, Kady and Ann, as well as the generous and inspiring parents and children who have been involved.

This video is pretty cool in explaining what we are doing and why we think it matters! Hope you enjoy 🙂

Anxiety in academic work

Hi everyone

This is a short blog post to accompany a YouTube video I posted recently, about anxiety in academic work and particularly among research students. It’s a fairly simple video in which I talk mainly about my own personal history and experiences of anxiety, and what I’ve learned about it along the way. No flashy data, no promises of solutions. Just an honest sharing of experience that puts anxiety out there as something that happens and is okay to talk about.

Why did I write it? Because of the work I do, I come into contact with students from lots of different universities and countries. I got an email from a student who had experienced anxiety in relation to her studies. Part of what she wrote was:

It is a learning process, right? I’m still figuring out what works for me, like walking for long time is really good. But just recognizing that this anxiety is a problem, like a broken finger, for example, and that it needs some time, maybe medicine, to heal, has been a big step. And I know it goes away. Just being able to put a name on it, has helped me a lot. And what also help is to talk to people who experience such things, and realizing that it is so normal. For me, I’m having the ups and downs, and I have had some therapy. But I now somewhat accept this part of me, and that is why I want to make it normal for people to talk about.

This made me think. Anxiety is out there among research students. And I agree with her about how helpful it can be to recognise it and talk about it with others. I also agree with her about how unhelpful it is to push things like anxiety under the carpet, to hide them away.

So, I wanted to make a video about anxiety. But it’s not my area of expertise, either in terms of research I’ve done about doctoral students, nor in any medical or clinical sense. So I have to be careful. I thought it might at least be useful to reflect on my own anxiety, and lay out publicly what happened, what I tried to do in response, what worked, what didn’t, and how I view it all now.

If you want to follow up with a serious academic paper on this topic, I would recommend this as a good place to start: Wisker & Robinson (2018) In sickness and in health, and a ‘duty of care’: PhD student health, stress and wellbeing issues and supervisory experiences. It is a chapter in a book called Spaces, journeys and new horizons for postgraduate supervision published by SUN Academic Press.

 

 

When coding doesn’t work, or doesn’t make sense: Synoptic units in qualitative data analysis

You can download a full pdf of this blog post including the three examples here. Please feel free to share with others, though preferably direct them to this page to download it!

 

How do you analyse qualitative data? You code it, right? Not always. And even if you do, chances are coding has only taken you a few steps in the long journey to your most important analytical insights.

I’m not dismissing coding altogether. I’ve done it many times and blogged about it, and expect I will code again. But there are times when coding doesn’t work, or when it doesn’t make sense to code at all. Problems with coding are increasingly being recognised (see this paper by St Pierre and Jackson 2014).

I am often asked: if not coding, then what? This blog post offers a concrete answer to that in terms of a logic and principles, and the full pdf gives examples from three studies.

Whatever you do in qualitative analysis is fine, as long as you’re finding it helpful. I’m far more concerned with reaching new insights, seeing new possible meanings, making new connections, exploring new juxtapositions, hearing silences I’d missed in the noise of busy-work, etc. than I am with following rules or procedures, or methodological dogma.

I’m not the only one saying this. Pat Thomson wrote beautifully about how we can feel compelled into ‘technique-led’ analysis, avoiding anything that might feel ‘dodgy’. Her advocacy for ‘data play’ brings us into the deliciously messy and murky realms where standard techniques might go out of the window: she suggests random associations, redactions, scatter gun, and side by side approaches.

 

An approach where you are a strength, not a hazard

The best qualitative analyses are the ones where the unique qualities, interests, insights, hunches, understandings, and creativity of the analyst come to the fore. Yes, that’s right: it’s all about what humans can do and what a robot or algorithm can’t. And yes, it’s about what you can do that perhaps no-one else can.

Sound extreme? I’m not throwing all ideas of rigour out of the window. In fact, the first example below shows how the approach I’m advocating can work really well in a team scenario where we seek confirmation among analysts (akin to inter-rater reliability). I’m not saying ‘anything goes’. I am saying: let’s seek the analysis where the best of us shines through, and where the output isn’t just what is in the data, but reflects an interaction between us and the data – where that ‘us’ is a very human, subjective, insightful one. Otherwise we are not analysing, we are just reporting. My video on ‘the, any or an analysis’ says more about this.

You can also check out an #openaccess paper I wrote with Prachi Srivastava that highlights reflexivity in analysis by asking: (1) What are the data telling me? (2) What do I want to know? And (3) What is the changing relationship between 1 and 2? [There is a video about this paper too]

The process I am about to describe is one in which the analyst is not cast out in the search for objectivity. We work with ‘things’ that increasingly reflect interaction between the data and the analyst, not the data itself.

 

An alternative to coding

The approach I’ve ended up using many times is outlined below. I don’t call it a technique because it can’t be mechanically applied from one study to another. It is more a logic that follows a series of principles and implies a progressive flow in analysis.

The essence is this:

  1. Get into the data – systematically and playfully (in the way that Pat Thomson means).
  2. Systematically construct synoptic units – extractive summaries of how certain bits of data relate to something you’re interested in. These are not selections of bits of data, but summaries written in your own words. (You can keep track of juicy quotations or vignettes you might want to use later, but the point is that this is your writing here.)
  3. Work with the synoptic units. Now instead of being faced with all the raw data, you’ve got these lovely new blocks to work and play seriously with. You could:
    1. Look for patterns – commonalities, contrasts, connections
    2. Juxtapose what seems to be odd, different, uncomfortable
    3. Look again for silences
    4. Look for a priori concepts or theoretical ideas
    5. Use a priori concepts or theoretical ideas to see similarity where on the surface things look different, to see difference where on the surface things look the same, or to see significance where on the surface things seem unimportant
    6. Ask ‘What do these units tell me? What do I want to know?’
    7. Make a mess and defamiliarize yourself by looking again in a different order, with a different question in mind etc.
  4. Do more data play and keep producing artefacts as you go. This might include:
    1. Freewriting after a session with the synoptic units
    2. Concept mapping key points and their relationships
    3. An outline view of an argument (eg. using PowerPoint)
    4. Anything that you find helpful!

 

In some cases you might create another layer of synoptic units to work at a greater analytical distance from the data. One of the examples below illustrates this.

The key is that we enable ourselves to reach new insights not by letting go of the data completely, but by creating things to work with that reflect both the data and our insights, determinations of relevance, etc. We can be systematic as we go through all the data in producing the synoptic units. We remain rigorous in our ‘intellectual hygiene’ (confronting what doesn’t fit, what is less clear, our analytical doubts, etc.). We do not close off opportunities for serious data play – rather, we expand them.

If you’d like to read more, including three examples from real, published research, download the full pdf.

Selective lowlights from my many rejections

This post relates to my rejection wall blog, tweet, and series of videos about rejection in academia.

My rejection wall is there for all to see – ‘all’ being people who happen to come to my office.

While there is fun to be poked at the ridiculousness of academic rejection, there is a serious point to it, so this blog post makes the details of my rejection wall available for all to see, and adds some commentary to it. This isn’t all my rejections, just some of the juicier bits – selected lowlights if you like – that I think might be the most rewarding for others to read. There are other examples of people doing this in blogs and doing something similar in lab meetings (both of which I think are awesome!).

Maybe reading it just makes you feel a bit better by seeing how crap I’ve had it on several occasions with grumpy journal reviewers and months of grant-writing that came to nothing.

Maybe others can learn from my mistakes (and there have been plenty) – but I can’t promise that reading what follows will avert the inevitable misfortune of an unwarranted or nastily delivered academic rejection.

I have divided it up into rejected research proposals and rejected journal articles.

Some research rejections

Check out my video on why research grant applications get rejected, and then spot lots of those reasons in my failures below 🙂

[Screenshot: reviewer comments on a rejected grant proposal]

It was clear that these reviewers really didn’t think our project should be funded. The language needs to be glowing to even stand a chance. Looking back at the proposal now, I can see where the reviewers were coming from. I think we suffered a bit from not having met regularly as a team to discuss things, and also a bit of group-think: we tended to see and say what we liked about what we were doing. There was no ‘red team’ asking the awkward questions. And I agree (now) that we framed the project around an issue that wasn’t obviously worth caring about in itself. And we didn’t make a good enough case for alignment in what we were proposing.

 

[Screenshot: reviewer comments on a rejected grant proposal]

This was another big team effort. I think part of our problem was that the proposal swelled and got more complex as we fed in bits that represented what each of us offered (read: wanted to do so we all felt important). The team was very diverse – and we all felt we needed each other. None of us, nor any subset of us, could have made a case alone. But somehow it all became too much. Hence the relationship between parts being weak. The point about not offering clear understanding reflects this general problem, plus a second major weakness: we were not in the bullseye of what this particular funding round was about. We were not giving the funders what they wanted.

[Screenshot: reviewer comments on a rejected grant proposal]

This was a proposal to a funding body specifically about women’s safety. To this day I think our basic idea was a good one: to do a kind of action research. However, with reviewer comments like this, our proposal clearly flew pretty wide of the target. We went too heavy on unfamiliar theory and they couldn’t see how it would work in practice. They also couldn’t see how it would generalise. Lacking content was a big no-no – too much methodology. At the same time we didn’t give enough about the methods in terms of site details. And then we fell foul of the feasibility hurdle. So we misfired on multiple fronts.

[Screenshot: reviewer comments on a rejected grant proposal]

Several more months of work down the tubes with this one! Among the many issues the reviewers found, the two above were the most catastrophic. Being ambitious is okay if your project looks really feasible and the reviewers don’t get lost in complexity. In this case we failed on both counts. And then we failed to make the case for adding to knowledge. Who in their right mind would fund something that wasn’t going to find anything new? I still think the project as it existed in my mind would have been original and great. But what was in my mind clearly didn’t reach the reviewers. Finally, the ‘reality check’ was a vicious blow – but it pointed to how wrong we got it. The reviewer felt we had an over-inflated budget to produce a measly evidence base that wasn’t going to reveal anything new. Brutal.

[Screenshot: reviewer comments on a rejected grant proposal]

Ah – not giving concrete details about the methodology. That old chestnut. Old it might be, but a disaster for this funding proposal! I realise no-one is going to give out money – even for a study on a really important topic by brilliant people – if there isn’t a clear, feasible and persuasive business plan for how it is going to be pulled off (on time, on budget). The methodology section is key to this.

[Screenshot: reviewer comments on a rejected grant proposal]

Again falling foul of methodological detail – in this case not explaining how we would do the analysis. The Field of Research point is really important – this is how these applications get directed to reviewers and bigger panels. We badged it as education but the readers didn’t see themselves or their field in what we proposed. I speak about getting the ‘wrong’ reviewer in the video about funding rejections.

 

And now some really lovely journal rejections

I’ve picked these to illustrate different reasons for getting rejected from academic journals – these connect with the reasons I talk about in a video focused on exactly this!

[Screenshot: rejection letter from a journal editor]

Ouch! This definitely wasn’t ‘a good paper went to the wrong journal’. The editors hated it and couldn’t even bring themselves to waste reviewers’ time by asking them to look at it. There was no invitation to come back with something better. Just a ‘get lost’. In the end the paper was published somewhere else.

[Screenshot: rejection letter from a journal editor]

This fell foul of the half-baked problem. The editor thought I was halfway through. My bad for leaving him with this impression. The paper was published somewhere else without any further data collection or analysis, but with a much stronger argument about what the contribution was.

[Screenshot: rejection letter from a journal editor]

This was living proof for me that writing with ‘big names’ doesn’t protect you from crappy reviews. The profs I was writing with really were at the leading edge of theory in the area, and so we really did think it added something new. This paper was rejected from two journals before we finally got it published.

[Screenshot: reviewer comments from a journal rejection]

This is one of my favourites! This reviewer really hated my paper by the time she finished reading it. The problem was I got off on the wrong foot by writing as if the UK was the same as the whole world. My bad. Really not okay. But then things got worse because she didn’t see herself and her buddies in my lit review. All the studies she said were missing were ones I’d read. They weren’t relevant, but now I know to doff my cap to the biggies in the field anyway. How dare a reviewer question my ethics this way (the students involved asked to keep doing the logs as they found them so useful)? How dare a reviewer tell me what theory I need to use? And what possible relevance do the names of her grand-daughter’s classmates have to my paper and its worthiness for publication?! Finally, on the issue of split infinitives, I checked (there was precedent in the journal). When this was published (eventually as a book chapter) I made sure there were plenty still in there. A classic case of annoying a reviewer who started with a valid point, then tried to ghost-write my paper the way she wanted it, and ended up flinging all sorts of mud at me.

 

[Screenshot: list of journals that have rejected my papers]

The only thing I can say with certainty about this list is: it will get longer! I’ve published in quite a few of these journals too – showing that a rejection doesn’t mean the end of the road for you and a particular journal.

More to follow (inevitably!)