This post relates to my rejection wall blog, tweet, and series of videos about rejection in academia.
My rejection wall is there for all to see – ‘all’ being people who happen to come to my office.
While there is fun to be poked at the ridiculousness of academic rejection, there is a serious point to it, so this blog post makes the details of my rejection wall available for all to see, and adds some commentary to it. These aren't all my rejections, just some of the juicier bits – selected lowlights, if you like – that I think might be the most rewarding for others to read. There are other examples of people doing this in blogs, and of doing something similar in lab meetings (both of which I think are awesome!).
Maybe reading it just makes you feel a bit better by seeing how crap I’ve had it on several occasions with grumpy journal reviewers and months of grant-writing that came to nothing.
Maybe others can learn from my mistakes (and there have been plenty) – but I can’t promise that reading what follows will avert the inevitable misfortune of an unwarranted or nastily delivered academic rejection.
I have divided it up into rejected research proposals and rejected journal articles.
Some research rejections
Check out my video on why research grant applications get rejected, and then spot lots of those reasons in my failures below 🙂
It was clear that these reviewers really didn’t think our project should be funded. The language needs to be glowing to even stand a chance. Looking back at the proposal now, I can see where the reviewers were coming from. I think we suffered a bit from not having met regularly as a team to discuss things, and also a bit of group-think: we tended to see and say what we liked about what we were doing. There was no ‘red team’ asking the awkward questions. And I agree (now) that we framed the project around an issue that wasn’t obviously worth caring about in itself. And we didn’t make a good enough case for alignment in what we were proposing.
This was another big team effort. I think part of our problem was that the proposal swelled and got more complex as we fed in bits that represented what each of us offered (read: wanted to do so we all felt important). The team was very diverse – and we all felt we needed each other. None of us, nor any subset of us, could have made a case alone. But somehow it all became too much. Hence the relationship between parts being weak. The point about not offering clear understanding reflects this general problem, plus a second major weakness: we were not in the bullseye of what this particular funding round was about. We were not giving the funders what they wanted.
This was a proposal to a funding body specifically about women's safety. To this day I think our basic idea was a good one: to do a kind of action research. However, with reviewer comments like this, our proposal flew pretty wide of the target. We went too heavy with unfamiliar theory and they couldn't see how it would work in practice. They also couldn't see how it would generalise. Lacking content was a big no-no – too much abstract methodology, not enough substance. At the same time, we didn't give enough detail about the methods in terms of the sites involved. And then we fell foul of the feasibility hurdle. So we misfired on multiple fronts.
Several more months of work down the tubes with this one! Among the many issues the reviewers found, the two above were the most catastrophic. Being ambitious is okay if your project looks really feasible and the reviewers don't get lost in complexity. In this case we failed on both counts. And then we failed to make the case for adding to knowledge. Who in their right mind would fund something that wasn't going to find something new? I still think the project as it existed in my mind would have been original and great. But what was in my mind clearly didn't reach the reviewers. Finally, the 'reality check' was a vicious blow – but it pointed to how wrong we got it. The reviewer felt we had an over-inflated budget, to produce a measly evidence base that wasn't going to reveal anything new. Brutal.
Ah – not giving concrete details about the methodology. That old chestnut. Old it might be, but a disaster for this funding proposal! I realise no one is going to give out money – even for a study on a really important topic by brilliant people – if there isn't a clear, feasible and persuasive business plan for how it is going to be pulled off (on time, on budget). The methodology section is key to this.
Again falling foul of methodological detail – in this case not explaining how we would do the analysis. The Field of Research point is really important – this is how these applications get directed to reviewers and bigger panels. We badged it as education but the readers didn’t see themselves or their field in what we proposed. I speak about getting the ‘wrong’ reviewer in the video about funding rejections.
And now some really lovely journal rejections
I’ve picked these to illustrate different reasons for getting rejected from academic journals – these connect with the reasons I talk about in a video focused on exactly this!
Ouch! This definitely wasn’t ‘a good paper went to the wrong journal’. The editors hated it and couldn’t even bring themselves to waste reviewers’ time by asking them to look at it. There was no invitation to come back with something better. Just a ‘get lost’. In the end the paper was published somewhere else.
This fell foul of the half-baked problem. The editor thought I was halfway through. My bad for leaving him with that impression. The paper was published somewhere else without any further data collection or analysis, but with a much stronger argument about what the contribution was.
This was living proof for me that writing with ‘big names’ doesn’t protect you from crappy reviews. The profs I was writing with really were at the leading edge of theory in the area, and so we really did think it added something new. This paper was rejected from two journals before we finally got it published.
This is one of my favourites! This reviewer really hated my paper by the time she finished reading it. The problem was that I got off on the wrong foot by writing as if the UK were the same as the whole world. My bad. Really not okay. But then things got worse because she didn't see herself and her buddies in my lit review. All the studies she said were missing were ones I'd read. They weren't relevant, but now I've learned to doff my cap to the biggies in the field anyway. How dare a reviewer question my ethics this way (the students involved asked to keep doing the logs as they found them so useful)? How dare a reviewer tell me what theory I need to use? And what possible relevance do the names of her granddaughter's classmates have to my paper and its worthiness for publication?! Finally, on the issue of split infinitives, I checked (there was precedent in the journal). When this was published (eventually as a book chapter) I made sure there were plenty still in there. A classic case of annoying a reviewer who started with a valid point, then tried to ghost-write my paper the way she wanted it, and ended up flinging all sorts of mud at me.
The only thing I can say with certainty about this list is: it will get longer! I've since published in quite a few of these journals too – showing that a rejection doesn't mean the end of the road for you and a particular journal.
More to follow (inevitably!)