Category Archives: Academic writing

My Shadow CV: Updated 2020

The idea of the shadow CV

I was inspired to write this blog post by Devoney Looser’s article in the Chronicle of Higher Education, in which she asks: What would my vita look like if it recorded not just the successes of my professional life, but also the many, many rejections?

After doing some digging I found Shadow CVs going back to 2012 by Jeremy Fox, another by Bradley Voytek from 2013, and a piece by Jacqueline Gill from the same year in which she mooted the idea (but refrained from sharing the dirt, yet). There’s also this piece, about the Princeton professor @JHausfoher, who shared his dirty career laundry in April 2016.

I have long been an advocate for more candid and open sharing of the often harsh realities of academic work. Here is my attempt to model the sort of warts and all honesty that I advocate and wish to see in others.

Nick Hopwood

What the ‘Main CV’ doesn’t highlight is that I’m white, male, middle class – and all the privilege that comes with that. (It doesn’t mention that I am gay, either.)

Education

Okay, I don’t have a whole lot of failures in terms of totally flunking or failing degrees. I guess I could mention that I remember getting a D in Maths in primary school. Given how the English school system worked when I went through it, I tended to select out what I wasn’t so good at.

I could mention how what are presented as ‘my’ achievements on my CV are also a product of systemic privilege. I went to a private school (with a music bursary, which is itself a reflection of the fact that I had the kind of parents who encouraged and paid for violin lessons).

That private school took me down to Oxford where I met someone from my school studying Geography (the subject I wanted to apply for). It also helped me prepare for my Oxford interviews, drawing on its extensive history of sending students to Oxbridge.

I got an ESRC 1+3 Scholarship for my Masters and PhD. What my CV doesn’t say is how much I think that was down to where my degree came from (Oxford) rather than the idea for my research.

Employment

My CV has a lovely little paragraph talking about an internationally recognized research profile. It all seems wonderfully coherent, planned, deliberate.

My Shadow CV would say something more like this: Nick started education research doing an MSc and PhD focusing on young people’s learning about geography and sustainability. However, there were no jobs in this area when he graduated (see ESRC failure #1 below), so he had to look elsewhere. He got a job looking at doctoral education, and so there was then a period when this was his main focus. When that (4-year contract) job ended, again there were no jobs in that field (or none he could get in a place he was willing to live), so he applied for a postdoc at UTS. To be successful in that, he had to change fields again. In short: Nick’s research interests have gone where the jobs and money are. True, there are some consistent questions and approaches that I’ve been exploring and developing through these broad contexts. But a lot of it was to do with opportunity and constraint.

My employment history

My CV shows how I went from a funded postgrad scholarship to a full time job on a project at Oxford, to my UTS Chancellor’s Postdoctoral Research Fellowship, which was converted into an ongoing position at UTS.

My Shadow CV could also mention all the non-academic jobs I’ve had (you can read about these in more detail in a separate blog post).

My Shadow CV also mentions:

  • ESRC failure #1 – I applied for an ESRC postdoc, but didn’t get it. I found that out 6 weeks before I was due to finish my PhD, and had no job lined up. Panic stations.
  • Not getting interviewed 1: about 3 years into my postdoc job at Oxford, I applied for a job advertised at Lecturer/Senior Lecturer level. I felt I had a pretty good publication track record, and relevant teaching experience. I wasn’t even called for interview. I had no idea how small a fish I looked in such a big, competitive pond.
  • Not getting interviewed 2: with the clock ticking on my fixed-term postdoc contract, I applied for a second job, again at Lecturer/Senior Lecturer level. No interview.

Research income

My CV proudly announces a six-figure sum of research funding I have managed to cajole out of various sources. And it lists some lovely-sounding projects I have done or am doing.

My Shadow CV mentions:

  • ARC failure #6 – Part of an interdisciplinary team wanting to find out what fathers can do to help mothers with child feeding. Reviewers seemed to like the idea. No funding.
  • NHMRC failure #1 – Part of a team wanting to collect much-needed data on complex feeding difficulties in infants and young children, and then to improve care. No funding.
  • ESRC failure #3 – Part of a team applying for money to look at the education system in Bhutan. Mixed reviews. No funding.
  • ESRC failure #2 – I was part of a team that applied for funding for a project on doctoral education. The reviews were pretty blunt. No cash registers ringing anywhere near me this time!
  • ARC failures #1-5 – Australian Research Council funding is highly prestigious, and undoubtedly a tough nut to crack. I heard of success rates around 17%. If that is true, then I’m no better than average. I was involved in two Linkage submissions that were not funded, and two Discovery submissions that were not funded. I was also part of a proposal that started as a Linkage, fell over before it got submitted, came back to life as a Discovery, got submitted, and then was not funded.
  • Spencer Foundation – Particularly galling because I’d roped in some key international people to join in, and they put some time in… I feel it all falls on my shoulders. Interestingly, both the key people stuck by me and are now involved in my DECRA.
  • ANROWS – yup, you guessed it: another detailed proposal that took months to put together and resulted in $0.
  • Office of Learning & Teaching – didn’t get through to second round.
  • Norwegian Research Council – a project on innovation, but they didn’t think it was innovative enough.
  • STINT – application for funding to support research collaboration on simulation. $0.

 

Publications 

My CV proudly shows off a number of book, journal article, and book chapter publications, alongside complimentary citation metrics.

My Shadow CV acknowledges that I still get plenty of papers rejected (some of which I blog about).

Off the top of my head I can say I’ve been (sometimes quite rightly!) rejected by:

  • British Educational Research Journal,
  • Oxford Review of Education,
  • Learning, Culture & Social Interaction (twice)
  • Journal of Advanced Nursing,
  • Qualitative Research,
  • Vocations and Learning,
  • Advances in Health Sciences Education,
  • Journal of Curriculum Studies,
  • Studies in Higher Education,
  • Australian Journal of Primary Health,
  • Journal of Child Health Care,
  • Higher Education,
  • Nurse Education Today.

Some of these have previously or subsequently accepted papers I’ve been involved with, too, proving a rejection doesn’t mean you’re marked for life as useless.

I also mention the folder called ‘Going nowhere’, which is full of papers I started that have never got to submission.

My book proposals didn’t all sail through at the first attempt either. I would hope that my rejections these days tend to be for ‘good’ reasons (foibles of peer review, the fact that I’m presenting complex, sometimes challenging arguments) rather than ‘bad’ reasons (failure to do my homework, Early Onset Satisfaction, etc.). My shadow CV would also point to the many papers that haven’t been cited by many people, including those that have only been cited by me. My published work is clearly not of uniform or universal appeal or value in the eyes of others.

My CV presents a suite of strong-looking metrics – highlighting my most-cited papers etc. It doesn’t mention that one of my books has sold astonishingly few copies. So few, that I have still to earn enough for the publishers to bother sending me a cheque (which they do if my Royalties get to about 60 Euro – so that will give you a sense of how few copies have been sold!).

Non-Awards

My CV mentions Awards I have received. My Shadow CV mentions the following (all things I did enter, so could have won if I was that good):

  1. Not winning any of the awards at the EARLI SIG 14 conference in Geneva 2018
  2. Not winning the ACGR Award in Graduate Research Leadership (twice)
  3. Not winning UTS Supervisor of the Year 2019
  4. Not winning UTS Research Excellence Award 2013.

 

Non-completions

My CV proudly states how many doctoral students I have successfully supervised. What it doesn’t mention is that there are two whom I co-supervised who never finished, and a Masters by Research student to whom that applies too.

In all these cases, the non-completion was due to tough life circumstances affecting the students, not their academic weakness or failings. But I think these should go on the Shadow CV if only to break some of the silence around attrition in higher degrees by research, and to make the case that sometimes not finishing is nothing to be ashamed of – either for the student or their supervisor.

 

Teaching evaluations

My CV says student feedback is available on request, and anyone asking for it will receive a lovely PDF showing (true) positive evaluations and high scores.

I might leave out (or hope the reader misses) some comments that really cut the other way. They still hurt! One student ticked ‘strongly disagree’ for every item in evaluating a class I taught (where that means ‘super bad’). So I clearly got something wrong there, for example.

 

In conclusion

I could add sections about awards (the Shadow CV mentioning those I applied or was nominated for but didn’t do so well in), about reviewing (the times I’ve said no, I’m too busy; the reviews where I have been harsher than was warranted), etc. etc.

Well, I doubt this post has achieved much except echoing Devoney’s brilliant piece. I’m just trying to say “Yes, she’s totally right! We need to do more of this kind of thing!”.

 

Aren’t I nervous about making this kind of stuff public?

Academia is a highly competitive and often insecure work environment. While I currently have the privilege of an ongoing, full time contract, who knows what the future will bring. It seems reasonable to expect that someday, someone might be looking at my CV and doing some digging around my online scholarly identity, considering whether to appoint me to another job, or perhaps even just as part of a promotion panel.

Devoney wrote about the tendency for us to hide our rejections, arguing: “That’s a shame. It’s important for senior scholars to communicate to those just starting out that even successful professors face considerable rejection.”

All academics face considerable rejection. I’m not revealing anything that I wouldn’t expect to be broadly true of any colleagues competing with me for whatever job or promotion it might be.

More importantly, if a prospective employer thinks twice about offering me a job because of what they read here, then I probably don’t want to be working for or with that person.

The values I see reflected in presenting a public shadow CV are ones of honesty, openness, and trust. Success in academia is not about never failing, never being rejected. It is about not allowing rejections to take hold of you. If I preach this but don’t have the gall to match generalisations with concrete detail, I should just shut up.

A PhD student receives a rejection from a journal. Here is how she and her supervisors responded

I was talking with a colleague recently who described an interaction with one of her students whose paper had been rejected by a journal. The response of her supervisors sounded really interesting, so I asked if she’d mind forwarding the emails on to me for a blog post. Which she kindly did! There’s a lot here that is useful in thinking about how to respond when you get rejected. I should point out this is in a country where many students complete a PhD through publications, and in this case the article was written by the student, with all the supervisors helping her and named as authors.

First the student wrote to her supervisors

Dear supervisors,

At last I have got response from the journal regarding my second manuscript. Unfortunately they are not interested to publish it.

I’m very disappointed about that. I can agree with a lot of the comments, it is useful for me in the future process but it has taken over 6 months to deliver that answer and right now I don’t have so much positive energy to restart the work.

I think I can interpret their comments (at least from the first reviewer) as if I rewrite the manuscript I can try to resubmit it but I’m not really sure if that is their suggestion.

Then one supervisor replied, cc’ing the others

Thank you for your email. Yes that is somewhat disappointing, but from the comments, perhaps it is good that it isn’t published in its current form: because from what the reviewers saw, I don’t think the paper did full justice to your work and your thinking! Better to have a stronger paper published, even if it is later.

I have had similarly prickly experiences, particularly in this journal, with reviewers who really want accounts of research to feel as if the research was quantitative (a bit like reviewer 1 worrying about interpretation in ethnographic research etc).

On the plus side:

  1. Both reviewers appear to have read your paper in quite a bit of detail! (which is not always the case)
  2. Both reviewers have offered well-written comments that are quite easy to understand (which is not always the case)
  3. There is lots in the comments that will help to improve the paper.

I think both the reviewers offer largely helpful comments – they are not fighting the kind of story you want to tell, or questioning its importance. They do want to know more concrete detail about the study methods, want a clearer alignment between the question, theory, findings and discussion, and a very clear argument as to what is new and why it matters. They are all very achievable without having to go back and do more analysis!

I think the process now should be to wait a few days until you feel a bit less fed up, and then to start:

  1. Thinking of alternative journals (although R1 seemed to invite this, the journal is definitely not asking for a resubmission as I interpret the email). XXX might be one possibility. Or YYY?
  2. Coming up with your own to-do list in terms of changes you think are worth making to the paper – and perhaps differentiating those that are small/easy, and those that require a bit more thought and work. You can also list those points the reviewers made that you’re not so bothered about and don’t want to make big changes for.

So, when you’re feeling you have the energy to take it up again, there are my suggestions 🙂

Then another supervisor added her voice

I understand that it feels a bit disappointing, particularly since they kept you waiting so long for the decision. But I can only echo what [Supervisor 1] is suggesting: once you have worked through the comments, your paper will be much stronger. I think you should let it sit while you are completing the paper on the [different analysis], you are in a good flow with that one at the moment! And we should think of an alternative journal, I agree, we need to aim for one that is included in Web of Science.

And then a third supervisor added his voice

This is the kind of experience that is not only sometimes happening, but rather a rule than an exception. And just as S1 and S2 state; it will in the end improve the paper. But I do agree they could have given us this feedback at least half a year earlier….

I also think S2’s advice is right; go on with the paper on [different analysis] and let this paper rest (just like a wine; it will become better with time and maturation – ask your husband!).

So let this experience take its time and aim for a journal that is indexed in Web of Science, although the IF is not too important.

Then the student replies

Thanks for the support!

I totally agree with you all and as I said, the comments from the reviewers are very good for me in the future process and also for my paper regarding the [different analysis]. I  struggle with the same issues here I guess; clear arguments for the study, evidence for my findings and how to discuss that much more clear.

Brief comment from me

What I like here is:

  1. That we end up with the student being able to take the rejection letter as a way to identify some things that she needs to look out for in another paper
  2. That S3 normalises this kind of experience
  3. That S2 provides very concrete suggestions in terms of not getting distracted by the rejection when work is going well on another paper
  4. That S1 finds positive things to appreciate in the reviewers’ comments, even though it was a rejection
  5. That the student felt comfortable sharing this, and got such strong and immediate support.

New video on effective feedback / peer review in academic writing

Hi

I’ve just posted a short video on some principles and practices for giving effective feedback / peer review on academic writing.

It could be relevant to people working together in a writing group – perhaps providing a focus for discussion of your own principles and working rules for commenting on each other’s work.

It is also relevant to anyone who is involved in reviewing / refereeing for academic journals, where, let’s be honest, the feedback authors get isn’t always delivered in the most ethical, constructive and professional way :-0

I argue that effective feedback / reviewer comments can be understood as pedagogic in their effects – helping the writer develop the text and as a writer – rather than only judgemental.

To start I suggest thinking about the effects you want to have on the original author when you review someone’s work or give feedback on their writing. I then suggest some principles and practices that might help achieve certain effects. Important among these are:

  1. Mirroring first (just saying what you think it is that the person is trying to achieve and what they are arguing)
  2. Being specific in praise and critique, and giving reasons to explain your judgements
  3. Being careful and sensitive with language, including avoidance of unnecessary emphasis, and use of more speculative phrasing where appropriate
  4. Critiquing the text rather than the person
  5. Being blatantly subjective – by which I mean writing in a way that acknowledges the review or feedback is the result of an interaction between the text and you (with your own values, history, knowledge, ignorance, privilege, preferences)… not an objective discovery of flaws in text, argument, research, or the scholar at the receiving end!

Current trends in academic publishing and where things might be heading

WARNING! This post may well be out of date already, and if not now, then quite possibly by the time you’ve finished reading it! Not because it’s long, but because things are changing very quickly!

This is my attempt to identify some of the big changes that are happening in academic publishing, and to point to where I think things are going. This is not based on extensive research or systematic reviews of literature, nor amazing insider insights through industry contacts (my industry contact seems as uncertain as me about much of this)… it’s more a combo of gazing into a crystal ball and, well, not exactly wishful thinking, but perhaps my instinct to resist cynicism and hope for a palatable outcome.

Open access

What’s the change? There is more than a groundswell of opinion that academic research should not be locked away behind pay-walls, but freely available to everyone. A crude summation of the logics and values at play here goes something like this:

1. The view from the ‘outside’… Where taxpayers pay for research (through government grants etc) they shouldn’t pay to access the outcomes of that research. The person just diagnosed with cancer should be able to go online and read about treatments and the latest trials without being hit with a bill for doing so. After all she ‘paid’ for the research in the first place through her taxes.

2. The view from the ‘inside’… Hey! There’s heaps of money being made in academic publishing but none of it is coming to me, the poor academic who wrote all the stuff in the first place! So I’m going to thwart those greedy publishers by publishing in open access journals (even though I still make no money!)

In practice what this means is that some researchers or their institutions are now paying a fee to publishers to make their articles open access (no fee, no ‘free’ access for others). Or, some journals (often the more ‘indy’ types) ask authors to pay a fee up front (no fee, no publish).

Where do I see it heading?

Hard to call. Like most of the changes I discuss here, the status quo is pretty much a big mess, and difficult to predict. I’ll start with the most certain: the journals that are both free-to-publish and free-to-access will soon be extinct. Often hosted on university websites, it’s hard to see how these will survive the cut and thrust of contemporary higher education funding. Either these will end up charging to publish (as happened with one that I published in while it was still free, phew!), or they’ll get bought out by commercial publishers (when they are established enough that the publishers think people will pay to access content, or pay to have the content opened ‘freely’).

What about stopping people having to pay to read research when they paid for it through taxes, or have some other innate ‘right’ to access it? This argument has gone a fair way in the UK, such that now some funding bodies build in costs for paying the open access fee to publishers. The political winds may mean this catches on, with funding bodies basking in the warm glow of ‘everyone can read what our researchers publish’ feelings. But I don’t see this becoming the norm. Why? Several reasons.

  1. Because it doesn’t change the fact that people are still paying for access, they’re just paying as a collective one step further upstream.
  2. Who wants to read what’s in journal articles anyway? Are there really masses of people desperate to read academic papers? I very much doubt it (even in medical fields). Academic papers work to inform academic debate and are not our most effective or primary means of engaging wider non-academic audiences. (I expect you may disagree with me here). And anyway, will making all our papers open access actually improve things for the masses? I’ve been doing educational research for over a decade now and I still find many if not most papers pretty hard going. Hey, I struggle with understanding and motivation a lot of the time, and I’m paid to be interested in this stuff, and extensively trained to read it, with a masters degree, doctorate, years of practice and thousands of references in my EndNote library. Why should I expect the proverbial woman or man on the street to be champing at the bit to read this stuff? And even if she or he is keen now, send them a few dozen papers and see if they’re as keen later on. My guess is Game of Thrones or re-reading Harry Potter will probably look more enticing. I’m not about denying people access to knowledge. I do doubt whether open access journal articles will result in masses of the masses relishing their newly found right to roam the academic literature for free.
  3. Because universities paying for open access when they already pay to subscribe to a journal is a hard pill to swallow. Harder still when universities in many countries are facing unprecedented budget cuts, perceived threats from MOOCS (though I think we’ve been unnecessarily spooked by MOOCS, as a sector, but don’t get me started), and uncertain futures. There simply isn’t the proverbial money down the sofa for universities to start paying for open access or paying to publish in the first place. And academics aren’t going to do it out of their own pockets. At least, I’m not.
  4. And research funding bodies are often facing funding cuts, too. And why should they give out less money for research because they’re having to pay more to make it free? Is it better for cancer patients to read journal articles for free, or for that open access fee (which is often not inconsiderable) to have paid for more research to develop and trial treatments? I’m just saying…

The question is, who’s going to blink first? Universities aren’t universities if they’re not producing publications. Commercial publishers can’t exist without profits. And academics are, of course, greedy money-grabbing tight-arses, who refuse to pay a mere few hundred or thousand dollars for every paper so the plebs down below can read their inaccessible waffle. I haven’t blinked yet. Have you?

But there are other changes afoot, and more reasons why I think paid-for publishing or paid-for open access are not going to become the norm very soon.

  1. Institutional repositories: the content (ie the pre-proof version) of many papers can already be made freely available to anyone who can be bothered to read it, through institutional repositories. The cancer patient can read your paper, just without the fancy DOI numbers and typesetting etc, without paying anything. But institutional repositories are proving a bit slow to catch on, unless institutions mandate their staff to submit.
  2. Maybe the publishers have not quite blinked, but squinted. One BIG publisher has recently lifted its embargo on the pre-proof version of a paper (the one the academic typed and was accepted by the editor) – we’re now free to put these documents on our blogs and departmental websites. If you don’t know which publisher this is, do some digging!
  3. Heaps of stuff is already open access, although it shouldn’t be if you pay attention to the copyright. If you’re any good at ‘the internet’, it’s not hard to find free versions of papers you’re ‘supposed’ to pay for. Not every paper is freely available this way, but lots are, and the number isn’t getting smaller. I expect academics publish their papers this way out of ignorance of copyright, naivety, as a way to give the evil publishers the proverbial finger gesture, or to enhance their citations and h-index. Or maybe because they lie awake at night worrying about all the people also lying awake because they found an article on the latest poststructural deconstruction of liminality, or a miraculous formula for predicting nearly-prime numbers, and they couldn’t afford the $30 fee to read it.

Vanity publishers, predatory publishers, and the in-between

Vanity publishers are nothing new – paying someone to publish your work (particularly in book form). What is new is the fact that the ‘publish or perish’ climate in academia is leading some researchers to secure their moment in the sun by flexing their credit cards rather than their intellectual muscles. Will this become the norm? Screw peer review. Screw the big commercial publishers, screw the fact it won’t end up on amazon and no-one will ever know it exists, I’ll pay this lovely boutique press to print 200 copies of my book. I think not.

Predatory publishers. “Dear Dr Dr Hopwood Nicholas. I recently read your paper entitled… and know you are an expert in this area. I invite you to submit a manuscript in this new international, peer reviewed journal, with this stellar international editorial board…” Click the url and something’s not quite right. Not only is the email clearly automated (“Dr Dr Hopwood Nicholas,” pah!) but this journal has a mysterious 10 volumes published in the last 2 years by academic celebrities you’ve never heard of who are citing works you’ve never read… Need I say more?

The in-between. I’m not going to name names. You know who they are. They’re the ones saying they’d like to publish your PhD as a book, before they’ve even read it, or who manage to conduct a ‘thorough’ review of your manuscript in about 8 seconds. An interesting business model for now. Is it the future? Put it this way, if I were playing the stockmarket, I’d be selling my shares in these companies quicksmart.

 

Peer review

Another trend, or perhaps a fad, is to claim that peer review is broken. Peer reviewers are getting it wrong, causing embarrassment for journal editors and their publishers, who have to retract papers, apologise to the public, and lick their wounds as their reputation takes a knock (forget the stupid authors who did dodgy research in the first place, they should have been caught earlier!).

Peer review is also showing symptoms of ill health, and the prevailing winds do not look favourable. Most reviewers aren’t paid, and the other ‘rewards’ for doing reviews are slim. Our university employers want us to do more, better, faster, for less, and doing reviews isn’t counted very highly (or at all) in the grand scheme of things. So we feel we have less time to do reviews, meaning we may do fewer of them, and do them less well when we do say ‘yes’. Neither of these is good for our disciplines – the fewer people who do reviews, the narrower (and more tired and frustrated) the gates controlling and supporting the expression of new knowledge become.

Peer review has historically happened under a cloak of anonymity, often ‘double-blind’, where neither reviewer nor reviewee knows who the other is (as if it’s not often blatantly obvious, or we can’t take an educated guess or do a bit of digging on google)… this anonymity has well-rehearsed benefits, but also results in some otherwise decent and professional folk unleashing torrents of abuse at their peers.

In natural science fields now it is becoming increasingly common for reviews (and authors’ responses to them) to be published, and even for the reviewers to be named. This, it is argued, makes the whole process more transparent, enhances the quality of reviews (referees are more careful writing comments when they know they will be made public), and enables readers to see how the paper came to take the form it reached, and what doubts or criticisms were raised along the way.

Of all the trends I reckon this is the most likely to catch on. It doesn’t have huge cost implications, or many drawbacks as far as I can see (though I admit I’ve not looked hard enough into this and haven’t yet experienced it in my field, so I may well revise this view later!). I can see it spreading through the natural sciences pretty quickly, particularly in the current climate where retractions appear to be becoming more common, and there is a seemingly strong sense that because some reviewers are getting it wrong the peer review system can’t be trusted. Even if peer review isn’t ‘broken’ and therefore doesn’t need fixing, this is an interesting idea that seems to have legs. I can imagine the social sciences coming round to this (or perhaps not fighting when norms from natural sciences are inherited or imposed on us). Who will be the last ones standing on the island of opacity as the waves from the sea of transparency lick higher and tides of change push forward? Anyone? Bueller? Anyone? Humanities? Anyone? Anyone?

If peer review is broken, why not pay reviewers? Then they’d review heaps more papers, treat the process seriously, and do it all on time too. Brilliant idea! Except there’s no money. Even if there was some money to pay for this (which there isn’t), it would be like saying “Hey, you know that thing you used to get for free? Well screw you! You’re going to have to pay for it now!” (the fact that this is precisely what has happened in relation to undergraduate tuition fees in many countries is not lost on me, in case you were worried).

Let’s say we do find some extra cash down the back of the lecture seats (which we won’t; I looked, it had already been pillaged by the big publishers, greedy tenured academics, overpaid managers and busybody bureaucrats), I don’t think it would make any difference. In fact it might make things worse – if people were incentivised to do reviews for money, it could distort things quite significantly. And I like to believe that academics still do things for the good of their discipline or field rather than for money anyway. So even if it was a good idea (which it isn’t) and there was the money for it (which there isn’t), it wouldn’t catch on.

Democra-truth

This kind of brings together all the issues so far. The idea that universities should stop being so elitist in claiming their exclusive rights to knowledge. Forget the elbow-patched professors festering slowly amid their piles of self-citing, self-aggrandising and self-plagiarising books full of interminable critique and concluding that “everything is more complex than we thought, so there!”. Let’s storm the university and take knowledge back into our own hands! Vive la revolution!

Except, when made ‘democratic’ or left to the ‘market of the masses’ to sort it out, it doesn’t always go so well. Do some searching about errors in a certain large internet encyclopaedia and you’ll see what I mean. Furthermore, the masses will tend to coalesce around the knowledge they want to know, the knowledge they are comfortable with.

Do you really believe democra-truth wouldn’t end up being ‘media-mogul-truth’ instead? The media would have us believe there is a ‘debate’ about climate change, for example. If by ‘debate’ you mean overwhelming scientific consensus on a global scale, versus vocal and vociferous, cherry-picking dissent, then okay, you’ve got me. [If you’re one of those dissenters, you can still see the point I’m making, just choose any topic where the media holds palpable sway over public opinion]. But we often trust the public with other important things, like in judicial systems with juries, right? Yes, but see how that would work if the judges, clerks, and lawyers were all pulled off the street too. Oh.

I strongly believe there should be places preserved and reserved where we can ask the really awkward questions that no-one else wants to face up to (particularly governments and the general public), and present the arguments no matter how unpalatable they may be. We also need to cherish the pursuit of knowledge and discovery without necessarily knowing where it will take us. No, I’m not sold on democra-truth (but of course I’m biased, my job kind of depends on universities maintaining certain kinds of rights to generating and policing what counts as knowledge).

So, there you have it. As the pilot says when a huge storm appears on the radar screen: “Please fasten your seat belts, it may get a little bumpy”.

Video about journal publishing basics

I’ve been preparing for some workshops on journal publishing for postgraduate research students and early career researchers. Following the idea of Flipped Learning, and the ‘Learning 2014’ strategy at UTS, my home university, I’ve been trying to minimise the time participants spend in the workshops sitting listening to me talk, and to create more time for group discussion and activities instead.

So I created a 30 minute video covering some basic points – many of which I’ve written about in other posts. Although readers of this blog won’t by default be able to come to the workshops I’m running, I thought I’d share the video anyway in the hope it might still be useful. One day I might even put my face in front of the camera!

If you’re interested, the workshops will then go on to look at: why papers get rejected, what reviews look like and how to respond to nasty ones (which are a sad inevitability in academic life), how to frame a response letter when you’re asked to revise and resubmit, and the ethics of peer review.

The main video can be viewed here

https://www.youtube.com/watch?v=1wGIieGeQ9U&feature=youtu.be

There are two supplementary videos

1. How to find out the ‘zombie’ rank of a journal. https://www.youtube.com/watch?v=19b1z50E5Js

2. A bit more about researching the relative rather than absolute impact factor (or other status measure) of a journal. http://youtu.be/z3HhUtfXxUQ

The second one gets a bit more into the technical side of using Excel once you’ve imported relevant journal metrics data from an external source such as Scopus or SciMago SJR.

Please do add feedback and comments below! Are the videos useful? Do you disagree? Do you choose journals in a different way? Do you assess journal status differently? Am I out of date about copyright issues?

On this last point, a big BUYER BEWARE warning: copyright things are changing very fast. Only this week Taylor and Francis announced AAM (author accepted manuscripts) can be put on personal or departmental websites, free of embargo (this doesn’t mean you can make the final paper pdf freely available, but the pre-proofed word version)… so some of my comments will get out of date quite quickly if things keep changing!

 

A guide to choosing journals for academic publication

The key is the match between your paper and the journal

Choosing a journal for your paper is a complex and nuanced process. Don’t expect to be able to ask anyone else off the cuff and get a sensible answer. Only people who know what you want to say and what you want to achieve in saying it can provide guidance, and even then it’s up to you to judge. In writing this I hope to make the process more transparent, and to help you be as informed as possible about your decisions. If you disagree, or can add more things to consider, or more measures of status, please leave a response at the bottom!

Chicken and egg

Which comes first, the paper or the choice of journal? Neither. Both. In my view you can’t write a good paper without a sense of the journal you are writing for. How you frame the argument / contribution, how long it is, which literature you locate it within, how much methodological detail, how much theoretical hand-holding is needed for readers, what kind of conclusions you want to present, what limitations you should acknowledge: ALL of these are shaped by the journal. But how do you know the answers to these questions? Usually by writing a draft! See the chicken-and-egg problem? My process is as follows:

  1. Come up with a rough idea for a paper – what data am I going to analyse, with what theoretical focus, presenting what new idea?
  2. Come up with a short list of potential journals (see below)
  3. Plan the paper down to paragraph level: this helps me think through the ideas and make good judgements about the fit between it and journals in the short list.
  4. Choose a journal. If in doubt write the abstract and send it to the editor for initial comment: what’s the worst that could happen? She or he could ignore it!

An ongoing conversation

Most journal editors want to publish papers that join and extend a dialogue between authors that is already happening in their journal. This gives the journal a certain shape and develops its kudos in particular fields or lines of inquiry. If no-one has even come close to mentioning your topic in a particular journal in the last 5 years, I’d think twice about targeting that outlet. Unless you really are planning a major disruption and claiming woeful neglect of your topic (which says something about the editors…)

Check out the editors, and stated aims and scope

Editors have the ultimate say over whether or not to accept your paper. Check out who they are, and do some research. What are their interests? How long have they been on the editorial board? If it’s a new editorial board, are they signalling a broadening, narrowing, or change in scope perhaps? What special issues have come out?

Don’t be stupid

Don’t get the journal equivalent of ‘bright lights syndrome’ and choose somewhere just because it is uber-high status (like Nature). Don’t be a ‘sheep’ either and choose a journal just because someone you know has got their paper accepted in it. Don’t send a qualitative paper to a major stats / quantitative journal. Don’t send a piece of policy analysis from (insert your random country of choice here) to a major US journal (for example) when your paper has nothing to say to a US audience.

The devil is in the detail: yes – more homework

Check out things like word limits, and whether they include references. If the journal allows 3,000 words including references, and your argument takes 5,000 to develop, either change your argument or change the journal. Simples. Also check out the review process. Look under abstracts in published papers for indications as to the timeline for review, and check if there are online preview or iFirst versions published (which massively reduce the time to publication). Don’t be caught out with a whopping fee for publication if your paper is accepted. And don’t be shocked when you read the copyright form and find it costs $3,000 for open access. Some journals publish their rejection rates: you’d be foolish to plough on not knowing 90% of papers are rejected even before review (if that were the case).

Publish where the people you want to be visible to are reading

Think who you want to read your paper. Forget dreams of people from actual real life reading academic journals. The only people who read them (except some health professionals) are, on the whole, other academics. This isn’t about getting to the masses: there are other, better venues for that. This is about becoming visible among your disciplinary colleagues. Where are the people you like and want to be known to in your field publishing? What journals do they cite in their papers?

Understand the status of the journal you are submitting to and its implications for your career

This is the biggie. So big I’ve written a whole section on how to do this below. But for now a few key points.

  1. It pays to know what will be counted by universities in terms of outputs, and what will have kudos on your CV. In Australia, for example, journals not on the ERA list are pretty much no-go. In some fields (particularly hard science and health), journals not indexed in Web of Science aren’t recognised as worth the paper (or pixels) they are printed on.
  2. Remember that status measures only measure what can be measured. A really prestigious journal in your field – with lots of top people publishing lots of great papers in it – might be lower (or not even register at all) in all the various indices and metrics.
  3. There is no single flawless measure of status. Take a multi-pronged approach to suss out where a particular journal lies between ‘utter crap that publishes anything’ to ‘number 1 journal in the world for Nobel Laureates only’.
  4. There are many good reasons for publishing deliberately in lower status journals. It may be they have the ‘soft’ status I mentioned above. Maybe that is where you can actually say what you want to say without having to kow-tow to ridiculous reviewers who don’t understand or accept your innovative approach (which they view as floppy, oddball etc.).

How journal status is measured and how to find this information out

A whole book could be written on this, so please forgive my omissions.

Impact Factor

This is the one everyone talks about. It is also the bane of many people’s lives outside natural and health sciences. Impact Factor is a measure of the mean number of citations to recent articles published in a particular journal, excluding citations in other papers in the same journal. So an Impact Factor of 2.01 in Journal X means that each paper in X has been cited a mean of 2.01 times in all the other indexed journals, except X, over the past two years (five year figures are also used). The higher the impact factor, the higher the status, because it shows that the papers are not only read but they are cited lots too. Excluding the ‘home’ journal stops editors bumping up their own Impact Factor by forcing authors to cite papers in their journal. Why is this problematic? Where do I start?!

  1. Not all citations are for the same reason but they all get counted the same. If you cite paper P as one of several that have investigated a topic, and paper Q as a hopeless study with flawed methods, and paper R as hugely influential and formative, shaping your whole approach, they all get counted the same. In theory, publishing a terrible paper that gets cited lots for being terrible can boost an Impact Factor.
  2. The key is in the reference to other indexed journals. The issue is: what gets to be indexed? There are strict rules governing this, and while it works okay in some fields, lots of important, robust journals in social sciences and humanities aren’t indexed in the list used to calculate Impact Factor; at least that is my experience. This can deflate Impact Factor measures in these fields because lots of citations simply don’t get counted. The formal ‘Impact Factor’ (as in the one quoted on Taylor and Francis journal websites, for example) is based on Journal Citation Reports (Thomson Reuters), drawing on over 10,000 journals. Seems like a lot? In my field, many journals are missed off this index.
  3. The time taken to be cited is often longer than two years (google ‘citation half-life’ for more). Let’s say I read a paper today in the most recent online iFirst. I think it’s brilliant, and being a super-efficient writer, I weave it into my paper and submit it in a month’s time. It takes 9 months to get reviewed, and then another 3 months to get published online. Then someone reads it. The process starts again. If the world was full of people who read papers the day they came out, and submitted papers citing them almost immediately, still the lag-time to publication in many fields prevents citations within the magic 2-year window. There are versions of Impact Factor that take five years into account to try to deal with this problem. This is better, but doesn’t benefit the journals that publish the really seminal texts that are still being cited 10, 15, 20 years later.
  4. Impact Factors are not comparable across disciplines. An Impact Factor of 1.367 could be very low in some sciences, but actually quite high in a field like Education. So don’t let people from other fields lead your decision making astray.
  5. Impact Factor may work very well to differentiate highly read and cited journals from less highly read and cited ones in some fields (where the value range is great, say from 0 to over 20), but in fields where the range for most journals is between 0 and 1.5 its utility for doing so is much lower.
  6. Editors can manipulate Impact Factors to a degree (eg by publishing lots of review articles, that tend to get cited lots). See Wikipedia’s page on impact factor for more.
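If the arithmetic behind the headline number helps, here is a minimal sketch of the two-year mean described above, using entirely made-up citation counts rather than real journal data:

```python
# A toy Impact Factor calculation: mean citations (from other indexed
# journals) in the census year to articles published in the previous
# two years. The numbers below are purely hypothetical.
citations_from_other_journals = 142        # hypothetical count
articles_published_last_two_years = 70     # hypothetical count

impact_factor = citations_from_other_journals / articles_published_last_two_years
print(f"Toy Impact Factor: {impact_factor:.2f}")  # ~2.03
```

The point of spelling it out is that the number is just a ratio of counts: it says nothing about why anything was cited, which is exactly problem 1 above.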

How do you find out the Impact Factor for a journal? If you don’t know this you haven’t been using your initiative or looking at journal webpages closely enough. Nearly all of them clearly state their Impact Factor somewhere on the home page. What can be more useful though is knowing the Impact Factors for journals across your field. In this case you need to go to Web of Science. I recommend downloading the data and importing it into Excel so you can really do some digging. In some cases it may not be so obvious to find, in which case try entering ‘Journal title Research Gate’ into Google eg ‘Studies in Higher Education Research Gate’. The top result should give the journal title and Research Gate, and a url like this: http://lamp.infosys.deakin.edu.au/era/?page=jnamesel12f . Immediately on clicking the link you will find data on Impact Factor, 5 year Impact Factor and more (based on Thomson Reuters). Note this is not an official database and may be out of date at times.

Alternatives to Impact Factor: SJR

An alternative that may work better in some fields is the Scopus Scimago Journal Rankings (SJR). This includes a range of metrics or measures, and I have found it includes more of the journals I’ve been reading and publishing in (in Education). The SJR indicator is calculated in a different way from Impact Factor (which I admit I don’t fully understand, see this Wikipedia explanation). It has a normalising function as part of the calculation which reduces some of the distortions of Impact Factor and can make it more sensitive within fields where there are close clusters. SJR also has its own version of impact called the ‘average citations per document in a 2-year period’. When I compare the SJR and Thomson Reuters measures for journals in my field, some are very similar and some are quite different. So it pays to do your homework. SJR data are also easily exportable to Excel and you can then easily find where journals lie in a list from top to bottom by either of these measures (or others that SJR provides). Finding the SJR data for a particular journal is simple: type the journal name and SJR into Google, eg ‘Studies in Higher Education SJR’. Almost always the top result will be from SCImago Journal & Country Rank, something like http://www.scimagojr.com/journalsearch.php?q=20853&tip=sid . If you go there you’ll find a little graph on the left hand side showing the SJR and cites per doc tracking over 5 years, given to 2 decimal places. There is also a big graph, with a line for each of these two metrics. If you hover over the right hand end, you get the current figure to 3 decimal places. See the screen shot below.

Scimago info

A screen shot from SJR showing the Indicator and cites per paper data
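If you do export both the SJR data and the Impact Factor data, here is a rough sketch of the kind of side-by-side digging I mean, done in Python/pandas rather than Excel. The file names and column names are my own assumptions about what your exported spreadsheets might contain, not the actual Web of Science or SCImago export formats:

```python
import pandas as pd

# Assumed exports: one file per source, each with a journal title column
# and the relevant metric. Adjust the names to whatever your downloads contain.
wos = pd.read_csv("wos_export.csv")       # assumed columns: Journal, ImpactFactor
sjr = pd.read_csv("scimago_export.csv")   # assumed columns: Journal, SJR

merged = wos.merge(sjr, on="Journal", how="inner")

# Rank the journals in your field by each measure, then flag the ones
# where the two measures tell quite different stories.
merged["IF_rank"] = merged["ImpactFactor"].rank(ascending=False)
merged["SJR_rank"] = merged["SJR"].rank(ascending=False)
merged["rank_gap"] = (merged["IF_rank"] - merged["SJR_rank"]).abs()

print(merged.sort_values("rank_gap", ascending=False).head(10))
```

Nothing here that Excel can’t do, of course; the point is simply to get both measures for the same journals into one table so you can see where they agree and where they diverge.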

Alternatives to Impact Factor: Zombie Journal Rankings

In Australia, lots of journals were, at one time, ranked A*, A, B or C. This was done using a pool of metrics and also peer-based data, with groups of academics providing information based on their expertise. For various reasons (don’t get me started) these have been abolished. However they are still a common reference point in many fields in Australia and New Zealand, and so I call them ‘zombie rankings’. Even if you’re not in Australasia, it might be useful to look up what the rank was, to see if it confirms what you’re finding from other measures. The quickest way is to go to the Deakin University-hosted webpage and to check under Historical Data, then Journal Ranking Lists, then 2010 (the rankings were alive in 2010, and abolished shortly afterwards). The direct URL is here: http://lamp.infosys.deakin.edu.au/era/?page=fnamesel10 . Type in the journal name, or a keyword, and ta-dah! If you just type in keywords you will get multiple results and may be able to see a range of options. I’ve put an image of what it looks like below. Pretty easy stuff.

Zombie Ranks

A screen shot from the Deakin website showing former ERA journal rankings

Alternatives to Impact Factor: ERA list

Now that there are no rankings, ‘quality’ is indicated in a binary way: a journal is either included in the ERA list or not. We’ve just had a process in Australia of nominating new journals to be included in the list for 2015. But the current 2012 list is also available through Deakin. http://lamp.infosys.deakin.edu.au/era/?page=jnamesel12f .

Alternatives to Impact Factor: rejection rates

The more a journal rejects, the better it must be, right? Well that is the (dubious, in my view) logic underpinning the celebration of high rejection rates in some journals. I’m more interested in what gets in and what difference that makes to scholarly discourse, than what is thrown out. But hey, if you can find this information out (and it’s not always easy to do), then it may be worth taking into consideration. More for your chances of survival than as a status indicator, perhaps.

Alternatives to Impact Factor: ask people who know!

While only you can judge the match between your paper and a journal, lots of people in your field can give you a sense of where is good to publish. This ‘sense’, in my view, is not to be dismissed just because it cannot be expressed in a number or independently verified. It is to be valued because it draws (or should do) not only on knowledge of all the metrics, but also on years of experience and reading.

Conclusions

Choosing journals is tricky. If you’re finding it quick and easy it’s probably because you’re not doing enough homework, and a bit more time making a really well informed decision will serve you well in the long run. As I said earlier this post is not exhaustive either in terms of things to consider in your choice, or status indicators. But I hope this is useful as a starting place.

Do you have quotitis? How to diagnose, treat, and prevent!

What is quotitis?

Quotitis is a common disease among qualitative researchers. It’s a name I have started using to refer to the tendency for people writing about qualitative data to over-rely on raw quotes from interviews, fieldnotes, documents etc.

 

Why is this a problem?

I used the term ‘over-rely’ deliberately, implying not only more than is necessary, but so much that it becomes counter-productive by virtue of its excess.

The basic point is this: whether in a journal article, thesis or other scholarly publication, people are giving their time (and quite often paying money, too) to read what you have to say, not what others have said. The value add in your work comes from expressing your thoughts, interpretations, arguments, and ideas.

 

How do I know I have quotitis?

Quotitis can be diagnosed both through its manifestations in writing and through reflective questioning of the (often tacitly held) assumptions underpinning your writing.

Symptoms to spot in writing

Look at your findings / discussion section. How much is indented as quotes from raw data? How much is “quoting the delicious phrases of your participants” within a sentence? It would be daft of me to give a fixed proportion to limit this, so I’m not going to. Do you give multiple exemplars to illustrate the same theme? Look at the text around the quotes. Have you given yourself (word) space to introduce quotes appropriately, and to comment on them in detail?
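If you want a rough number to go alongside that eyeballing, here is a minimal sketch that estimates what share of a plain-text draft sits in indented block quotes. It assumes your draft is saved as plain text and that data quotes are marked by indentation, both assumptions you would need to adjust to your own formatting:

```python
def quote_share(path: str, indent: str = "    ") -> float:
    """Return the fraction of words that sit on indented (block-quote) lines."""
    quoted_words, total_words = 0, 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            words = len(line.split())
            total_words += words
            if line.startswith(indent):  # treat indented lines as raw-data quotes
                quoted_words += words
    return quoted_words / total_words if total_words else 0.0

# e.g. print(f"{quote_share('findings_draft.txt'):.0%} of the words are quoted data")
```

No script can tell you what proportion is right, but if the figure surprises you, that is a prompt to look harder at the assumptions below.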

Underlying causes (assumptions)

A full diagnosis requires you to consider what frames your approach to writing up qualitative research. Any of the following assumptions might well give the writing doctor cause for concern:

  1. No-one will trust or accept your claims unless you ‘prove’ each one with evidence in the form of quotes from raw data
  2. Participants express themselves perfectly, and your own words are never as good, and lack authenticity
  3. Not to quote participants directly is to deny them appropriate ‘voice’
  4. Raw data is so amazingly powerful it can ‘speak for itself’.

All of these assumptions are false. Perhaps at times, in certain kinds of research that place high emphasis on sharing knowledge production with participants, you may take issue with point 3. But still, I would suggest that an academic text will be more valuable by virtue of you developing ideas around data rather than just reproducing it.

Of course, the really uncomfortable truths around some cases of quotitis are as follows:

  1. You may have a fear of your own voice and words (whether self-doubt, uncertainty, insecurity), and prefer to rest in the safety of the words of others
  2. Simple laziness, for example using quotes to pad out a text and increase the number of words.
  3. Lack of analytic insight. Lots of cases of quotitis seem to reflect the fact that the researcher hasn’t gone much further than coding her or his data, coming up with a bunch of themes, and wishing to illustrate them with quotes from data in the text. Coding is sometimes useful as a starting point. It is rarely an outcome of analysis.

Prevention rather than treatment or cure

It is better to address underlying causes than to treat surface symptoms, so I’ll deal with this first, before presenting some tips for treatment/cure for an existing text.

Let’s challenge those underlying assumptions.

Raw data are needed to convince readers to believe your claims

This is about the ‘evidential burden’ placed on quotes from raw data. Think about it. Does a sentence or two from an interview really prove (or establish the credibility of) anything by itself? Surely we have to think about where the quote came from, how it was treated as part of a sophisticated analytic process, how it relates to other features of the data, and what features of it readers are supposed to notice and interpret in particular ways.

Moreover, placing the burden of proof on quotes may be utterly illogical and force (or be a symptom of) highly reductive analyses. I doubt very much that many of the most interesting analytical insights into qualitative datasets can be accurately conveyed in someone else’s words (in the case of an interview), or in your own field notes (in the case of observation). In my experience the real value-add ideas can’t be pinpointed to one bit of data or another. They come from looking across codes, themes, excerpts etc.

To prove my point I wrote a paper based on analysis of interviews with doctoral students. It was about relationships they have with other people and their impact on learning and experience. The paper does not contain one single quote from raw data. Admittedly one of the reviewers found this odd, but I argued my case to the editor and the paper stands with no raw data quoted whatsoever. Don’t believe me? Check it out here at the publisher’s website, or here (full text free) from ANU.

The justification was this: I did my analysis by identifying all the relationships between each participant and others around them (supervisors, students, family etc). I then went through and looked for all the data relating to that relationship. After several readings, I was able to write a synoptic text, summarising everything I knew about that relationship, its origins, importance and so on. This drew on all available data, and was shaped by a holistic and synthetic reading of the data. There was no one line or even paragraph from an interview that could demonstrate, illustrate, or even support what I had to say. Because what I had to say was at a different level from what students told me directly.

This is an extreme example, and I’ve written plenty of other papers where I use quotes from raw data. But I use them sparingly and I don’t operate from misplaced assumptions about evidential burden. The problem is, many referees do apply these unfortunate ideas, so be ready to defend yourself when they do!

Participants express themselves perfectly, your words are worse

Do people really speak in the most considered, informed and evocative ways? Sure, sometimes the odd gem of a quote comes out. But I’d suggest that the craft we can put into our written text, playing around with word order, phrasing, vocabulary, emphasis and so on, means we can reach much tighter and more considered words than the on-the-spot responses in interviews, or madly rushed field notes.

What are raw data ‘authentic’ expressions of that your words in the paper are not? They may authentically capture what someone said or what you wrote in the field. But is that really what your paper is about? Is it not about reading into what people say, constructing a new argument out of those comments? In which case authenticity lies at a different level: what is authentic to your argument or contribution may not be what is authentic to a participant. Unless, of course, your contribution rests solely on reproducing what others say or feel about something.

Not to quote is a denial of participant voice

I never promise participants they will be ventriloquized in my writing about them (though I know in some qualitative approaches this can be important). And anyway, I would never get the chance to quote from all participants equally, so there would always be some who are denied more than others. Why should those who happen to say something in a particular way (the ‘real gem’ quotes) be given voice, while those who are less articulate are silenced? Not a useful or valid basis for my writing. Neither is giving everyone the same blanket ‘voice’, because that doesn’t seem likely to be a sound foundation for a balanced, well structured text either.

What’s more, as I’ve hinted above, there’s another denial going on when you over-quote from raw data: denying readers access to your opinions and insights. You’re the author of the paper: it’s your interpretations and arguments I’m interested in. Don’t deny me, the reader, the chance to benefit from your thoughts by hiding behind the words of others.

Raw data speaks for itself

No it doesn’t. Or at best, this is rarely the case. This is a continuation of the point above. If raw data really were that powerful and self-evident, we would simply present interview transcripts as papers and let it be. But we don’t. Why? Because readers need help and guidance in making sense of those data. You need to hold my hand, shine the light on relevant features, make links, show connections, read between the lines, and provide contextual information that is not contained in the quote itself.

So the way you introduce quotes is important – is this ‘typical’, ‘illustrative’, or chosen for some other reason? How does it relate to other quotes you could have chosen?

And you need to provide a commentary on each quote. What work is it doing in the development of your argument? What do you want readers to take from it? Why is it important?

Raw data speaks most powerfully when you speak on its behalf.

 

Treatment and cure of quotitis

Maybe you’re working on a text and you can diagnose a likely case of quotitis: the symptoms are there in the text itself, and your assumptions are in need of some serious questioning. What can you do? Here are some tips:

Ask yourself some really difficult questions, and be ready for answers you don’t want to hear: Are you over-reliant on quotes because your analysis is half-baked? Are you presenting a list of themes or categories but not doing much with them? Are you hiding behind your data because you aren’t clear about what you actually have to say or want to add to them?

Challenge yourself to sort the wheat from the chaff: are any of your quotes absolutely essential? I promise you, not all of them will be. So bin the ones that aren’t, and start adding better introductions and commentaries on those that are most crucial. A good way to start the sorting process is by asking: am I giving three (or more) quotes when one would do? You don’t have to prove that three (or more) people said something relating to a theme by presenting three (or more) quotes. You can quote once and say something about the occurrence of that theme across your dataset.

Ask yourself ‘what is going on here?’ when you read a bunch of quotes. I mean: what do these quotes collectively say about a particular phenomenon or idea? How can you read between the lines, analyse, synthesise, interpret them together? Perhaps you can swap heaps of raw data for paraphrasing and making a higher-level argument.

Address your anxiety about evidential burden by being really clear in your methods section why readers should trust in your evidence (because your methods of data generation were appropriate and high quality) and what you have to say about it (because your methods of analysis are clearly explained so people have a sense of how you arrived at the claims you make without having to have everything ‘proved’ with a quote).

 

In conclusion

Quotitis can be painful, especially for readers. Left undiagnosed and untreated, it can be deadly (for your publications, scholarly reputation etc). Fortunately it is easy to spot, treatable, and its underlying causes can be addressed with some critical and honest reflection. Over to you…

A few things you’ve always wanted to know about academic publication but were too afraid to ask

 

This might have been titled ‘Academic publishing for dummies’ or ‘The idiots’ guide to publishing’. But I don’t think of the readers of this blog as dummies or idiots. I do know, though, that among research student and early career researcher populations there are often lots of myths about publication, aspects of academia that are rather opaque, and lots of understandable reluctance to ask others the most basic questions.

This is an accompaniment to other posts I’ve done about getting published and getting cited.

Is it for everyone?

Yes. It’s for everyone

There is nothing whatsoever stopping students (undergrads, masters, doctoral) at any stage from submitting something for publication, providing you have something new to say that other people will care about. Yes, when you register with journals’ online submission systems you often provide information about your degree(s), role etc. But this is not available to the reviewers. I published three papers based on my master’s research and not once did I encounter any resistance because I didn’t have ‘Dr’ in front of my name.

Book publishing is a bit different – contracts for monographs require a different kind of work, and publishers often look at CVs, expecting evidence to show that you’ve been active in the publication game and to give them confidence you will deliver.

 

No. It’s not for everyone

I wish I could sit and write otherwise, but it continues to be a sad reality that academic publishing is not as equitable as it should be. Historical relations of power, exclusion and privilege continue to exert force. Publishing in English matters (in terms of getting jobs, promotion, research funding) in many countries where English is not an official or even widely spoken language. Academic discourses in many fields still implicitly work on assumptions of a core (call it Global North, Anglo-European, Western) and a periphery. I was recently reviewing a paper based on research in Turkey, and found myself asking ‘Why Turkey?’ But when I write about the UK or Australia (countries where I’ve lived and worked), this context seems automatically acceptable (to me). So I pressed the delete button a few times and tried to engage more openly with the Turkish work. I’m not saying academia is closed to non-English, non-core publication. But I’d be lying and misleading you if I painted a picture of a globally equal and fair game. Cos it ain’t. I and many others continue to benefit from historical imbalances at the expense of others.

 

Is there money in it?

No. There’s no money in it

Pretty much the only link between academic publishing and your bank account is the fact that you won’t get a job if you don’t publish (discounting the impact buying books has on your bank balance). You don’t get paid for articles you publish. The reviewers don’t get paid for their reviews. The editors (by and large) don’t get paid for the hours they spend editing journals. If you’re lucky you might get a single figure % of royalties for an academic book, but unless you’ve got the academic equivalent of Game of Thrones in the pipeline, this is going to change your income to the degree of the odd Mars bar here and there. Perhaps a nice haircut once in a while.

It pays to remember that reviewers and editors aren’t paid, if for no other reason than to realise that, ethically, you owe the academic community your free services at least as much as you have received them. Send a paper off and get 3 reviews? Better make sure you do at least 3 reviews in return. Later in your career, when you’re asked to be an associate editor, join an editorial board, or be lead editor, you’ll be tempted to say no, I’m too busy. But ask yourself whether the people who edited all the journals you’ve been publishing in were waiting round all day with nothing to do.

 

Yes. There’s heaps of money in it (just not for you)

Only a fool thinks academic publishing is all about ideas and nothing about money. As I’ve written before (how not to get published), academic publishing is (at least for now) big business. It’s just that the money doesn’t flow to academics or to universities. It goes to publishers, and increasingly fewer of them. Universities pay to subscribe to journals, they pay their academics to do research and write papers, they allow their staff time to do reviews and editing, and then sometimes they even pay journals again for open access (see below). Some publishers have recently moved into the academic field because they see the profits as more stable: it’s rather uncertain where the next Harry Potter is going to come from, but a steady stream of academics submitting papers to proliferating journals (etc) is quite nice thank you. If you’re publishing with a commercial publisher, don’t forget that their bottom line is profit. Simples.

 

Yes and no but maybe… it’s all changing

Open access. Wow, this is a biggie. In many countries now, people are cottoning on to what has been happening. Taxpayers are saying: hang on, if I funded this research through my taxes, why do I have to pay again to read it. Now I’ve been diagnosed with [whatever] I’d quite like to read up on the research without paying again. Often what this means is paying commercial publishers again to release copyright so papers can be made freely available (and some funding councils require budgets for this up front). It can also mean universities checking copyright very closely and putting pre-print versions on open repositories. And, excitingly, it can mean academics choosing to publish in open access journals where there are no barriers to access whatsoever (though some ask authors to pay for the right to publish, which is another matter). I’m getting more and more emails from big publishers each week telling me about their open access offerings. Something has got the system spooked.

Established, high-ranking journals published commercially aren’t going to disappear overnight. But I think we’re experiencing minor tremors of what will amount to a major tectonic shift. The point here: beware, and be aware. You’ve got to be legally savvy, know what you’re signing copyright wise. Beware: there are plenty of crappy open access journals. But be aware that open access is gaining kudos rapidly.

 

Is academic publishing fair?

Yes. It’s a fair game

Overall, I think the system of peer review does a remarkably good job of managing the frontiers of knowledge. To those uber-cynics who point out conservatism and policing of the status quo, I point to innumerable, radical differences between scholarship today and even five or ten years ago. Compared to what I hear from friends working outside academia, I’m heartened by the non-hierarchical and open nature of academic publishing. And I cherish the principle of peer review. Yes I’ve been frustrated and annoyed at times by rejection. But every paper I’ve written, without exception, has been improved through the process. I’ve always been given a fair go, rightly dismissed when I wrote crap, and given the chance to improve where I’ve shown glimpses of potential (even if that means taking a rejection on the chin, working on my paper, and sending it somewhere else; the fact I have the chance to do this is worth noting).

 

No. It’s really not a fair game

If you think publishing decisions are made purely on the basis of scholarly merit, think again. Scholarly merit comes into it, but so do a heap of other things (I’m going to blog about these and the peer review process soon).

 

 

10 ways to make sure your journal article never gets read, or worse, cited

Since posting this I have created a slideshow highlighting some of the key points, along with those from the previous post on not getting published in the first place.

You’ve gone through the tortuous process of peer review, and now your work is finally published. Of course the last thing you want is for people to go around citing your work, spreading your ideas, or worse, actually using them! But don’t worry, there are some very simple and easy things you can do to make sure this doesn’t ever happen. You’re in good company – plenty of people implement these easy to follow steps with nearly every piece of their published research.

The first few relate to making sure your work never gets read – thus ensuring it can’t be cited. Then we consider how to manage the unpleasant risks if someone actually reads your paper.

1. Give your paper a truly awful title

“A dull and irrelevant waste of time” – how does that grab you? That’s what a surprising number of titles I read in my ToC alerts seem like. To make sure I don’t click on the link, or develop any interest in what you have to say, (i) make what you have to say sound disconnected from any of the big ideas, concepts or issues I’m working on; and (ii) make it sound dull. I don’t have a psychic ability to detect interesting and relevant points buried in your paper. Neither am I stuck on a desert island with nothing else to do but read everything on the offchance something fun crops up. So a poor title will work wonders in ensuring I never read your work. Contrastingly, my last paper was called ‘Harry Potter and the child and family health nurse’, and the one before was ‘Fifty shades of practice theory’. Both promise to be exciting romps full of magic or steamy sex. And they’ve sent my h-index into the stratosphere. I’ve got an h-index of googol on Google (Scholar).

2. Follow up your awful title with a horrendous abstract

You caught me in a rare moment when I could be bothered to forgive your poor titling skills and I proceeded to read your abstract. From this, I’ve learned nothing of your argument, or why I should read anything more. You’ve definitely forgotten to tell me how your paper joins an ongoing conversation or body of research. You’ve left me clueless as to what your methods or findings were. And I’ve no idea at all why I should care about what you’ve found out. Awesome. Instead you’ve regurgitated existing literature, or barraged me with terms or concepts I don’t know, and dense text, so that reading your abstract feels like trying to swim through cold porridge. I forgot my snorkel and prefer eating porridge to drowning in it, so I think I’ll do something more productive like watching Celebrity Splash or Weakest Link on repeat.

3. Keep it a secret

Your paper has just been published online in the preview section, or maybe it’s actually come out in hard copy. Of course the last thing you should do is tell anyone about it! Definitely don’t put the details of it or a hyperlink on your email signature (ugh! reeks of crass self-promotion). Don’t mention it on academia.edu or your blog. Don’t update your university web profile. Don’t put a copy on your office door or tell your colleagues (you never want your Dean to know you’ve been so productive, do you?). Don’t put pre-prints in your university repository, and don’t make copies available via your own website (if copyright allows). And never do anything so stupid as to announce it on facebook or twitter! You’ve heard how putting stuff on social media can make it available to the plebeian masses: imagine that! Hundreds of people, thousands maybe, reading your abstract, and maybe even downloading your paper! No, better leave it to chance and hope it crops up in search engines every now and then.

4. Publish in a really obscure journal

Some relatively young, open access journals do quite well (one of my most cited papers is in a free-to-all, online qualitative methods journal). But luckily, if you’re a citaphobe, there are some wonderfully obscure academic backwaters whose location in scholarly life is the equivalent of the dark side of the moon. There, your paper can Rest In Peace, free of the interfering gaze of interested readers. Home in on those over-keen editors – she or he is probably trying to fill up unused slots for the next issue. What are those crazy academics doing leaving those slots unused? They must all be bonkers! I, however, see this opportunity for the amazing thing that it is (too good to be true?). If you’re in health or hard science disciplines, publishing in journals that aren’t indexed in Scopus is a pretty fool-proof way to make sure your paper isn’t read or treated as worth the paper it’s unlikely ever to be printed on.

5. Put in a great plot twist

Leave the best to the end, right? Wait until the last minute before your magical big reveal, where you make connections between your research and a wider issue, or link it into the big debate that’s raging in your field. Er… no, actually. It pays to treat your readers as if they were slightly more interested yet considerably more time-poor than readers of the British tabloid press (The Sun, The Daily Mail etc). If you read articles in those papers, you can see the authors don’t assume readers will get to the end of each piece. They barely assume readers will get much beyond the first sentence or two. So all the important information is captured succinctly as soon as possible. There is no secret pot of gold that rewards the readers who slog it out to the last full stop. In academic journals, every line you write is another few seconds of your readers’ time, competing for her or his attention with other much better articles, piles of marking, work on a research grant, or just buggering off home or to bed. If you’re saving the best til last, chances are your reader will have lost patience and think you and your research are no good.

6. Make your article as uninteresting and full of jargon as possible

Okay, your paper has been read, but the danger of citation can still be headed off. One way of doing this is to do your readers a favour and present them with the intellectual equivalent of a marathon-meets-decathlon-meets-Tour-de-France. Make the paper as long as possible (use all available words, preferably more), and better still, make it feel even longer! Your readers’ brains should be sweating by the time they finish. After all it took you ages to go through all the drafts and revisions – why should your readers get off scot free? The decathlon element can easily be incorporated by making your paper address multiple ideas, concepts, methods, and arguments. Readers will feel short-changed if all you do is present a clear line of argument, a concise package of new knowledge, justifying your claims and their importance. More ideas! More complexity! More references! No-one ever said ‘That paper was just too clear for me’. No, they complain: ‘Pah! One well-presented and nicely explained idea. What do they think I am? A moron?’.

7. Hide your arguments in waffle

You’ve had a genius idea, or your data show something unexpected but really important. Worrying stuff. How will you hide forever in obscurity if someone actually finds this out?! No worries, there is extensive precedent for how to avoid this unpleasant eventuality.  Rather than making clear statements, and making it clear when you are arguing new ideas or presenting new material, you can bury your original thinking and novel claims in waffle. Pile it on! Never start a paragraph by announcing your great idea. Never conclude a paragraph by reinforcing your message. Preferably, hang your new claim off the end of a sentence with multiple clauses.

8. Therefore it can be seen that to a certain extent the statement is true

After all those months or years generating data, and those hours of tedium doing analysis, you’ve had your Eureka! moment. There’s strong empirical basis here to say something really bold, exciting, interesting, or controversial. Oops! Better manage this carefully. The best way is to utterly play down your claims or arguments. Hedge them like hell! Place them in multiple caveats! Belts and braces! Say more about your limitations than your diligent methods. If you do this, you can make your claims sound so inconsequential that no-one will give them second thought. Phew.

9. Therefore the earth is flat and revolves around the sun (and no-one ever said otherwise)

An alternative to point 8 is to make ridiculous, unsubstantiated claims. Better still, present them as if they are not controversial. Never anticipate a counter-claim. Never acknowledge alternative views or existing understandings. That defeats your purpose! No, stand firm and blast your audience with your findings. POW! Remember, whatever your field (science, social science, humanities, arts) what readers really want is reason to dismantle their entire discipline or maybe the whole of human history or contemporary society. Give them any less and they’ll think you and your research are as worthless as an inflatable dartboard.

10. Turn robust research into polemic

In some ways an extension of point 9. Don’t waste time giving details about your methods of data collection. No-one cares about analysis (qualitative interpretive methods, statistical tests etc), so leave those out too. Existing literature already exists, so no need to repeat it here, either. Use your valuable word space to present your view of the world, and elaborate on it fully. Yes, scholarly journals are really just expensive newspaper columns. Jeremy Clarkson gets to rant about the world, so why can’t you? If you’re unable to totally erase any empirical origins of your work, you can do the next best thing by describing it in fuzzy, unclear ways. Or by not presenting any data. Or by presenting data but allowing it to speak for itself. Or by presenting data that has no clear connection to your argument.

Coda

After the responses to point 7 in my previous post about avoiding publication altogether, I would like to reiterate here: it is my strongly held view that scholars have an obligation to make their work available to a range of audiences, not all of whom are academics and not all of whom can or ever feel inclined to access academic journals. This post focuses on publication in peer-reviewed journals because it is a crucial part of the academic endeavour. There are other crucial parts, too.

10 easy ways to make sure you have no publication record when you finish your PhD and forever after

Since posting this I have created a slideshow highlighting some of the key points, along with those from the subsequent post about not getting read or cited.

There is a lot of pressure on doctoral students and early career academics to publish. Want even the slightest chance of getting a job? Publish. Want anyone other than your examiners to read your work? Publish. Want to actually contribute to knowledge? Publish. Want to do the ethical thing and deliver what was promised to the people who funded your work, or those who contributed to it through support, helping with data etc? Publish.

Now, some of you may wish to do those things, but in my experience there seem to be plenty of people out there who don’t. They see publication as the ultimate stain on their good reputation, the catastrophe to end all catastrophes, the academic apocalypse. They are the publishaphobes.

Well there is good news! By following these few and easy rules, you too can make sure your work gathers dust on library shelves (or better still in the basement), so that no-one ever reads it, and the labour of love that has invaded the last 3+ years of your life can all come to nothing more than some letters before or after your name. Perhaps the non-publishing option makes sense because you’re an intellectual fraud and are afraid of getting found out.

1. Keep your papers locked away in your computer / desk drawer

By far the easiest way to make sure you never have anything published is to never actually send anything off for review. Reasons for this may be fear of critical feedback and perfectionism (see below), but it’s worth making this simple but powerful point: NOT sending your paper (or book proposal etc) off is the only 100% safe guarantee to make sure you NEVER get published. Simples. When you wonder how those stellar professors, or the students / postdocs who seem to be on a fast-track to tenured jobs and academic stardom got so many publications, the answer is: they sent lots of stuff off for review (notwithstanding all the rejections they got along the way).

2. Wait until your paper is perfect before you submit it

You’ve realised that you have to submit something in order for it to get published. Well done you! But you know it has to be good to stand a chance, so you’re going to let it sit for a while and come back and tweak it later. You know you don’t take rejection or harsh feedback well, so better to get it perfect first, right? WRONG. Perfectionism is the enemy of publication. You’ll never write anything perfect, so stop trying.

3. Send half-baked crap off while suffering EOS

The perfect counterpart to perfectionism. Or should that be imperfect? Pat Thomson has written an excellent blog post about ‘early onset satisfaction’ (EOS) – a bad thing for writing and writers: “feeling too happy with a piece of writing meant that you didn’t rewrite and rewrite as often and as hard as you ought to” (the phrase being attributed to Mem Fox). Pat recalls a time when she was reviewing an article for a journal and came to the conclusion that the author had been struck with EOS, and probably hadn’t given it to anyone else to read, or ‘if they had, I’d have taken bets that they hadn’t asked anyone to ask them the hard questions – like – so what, and why should I care?’. Atta boy! Way to go! The peer review process isn’t 100% foolproof, so there is a small chance that someone will publish the rubbish that your bout of EOS has duped you into regarding as brilliant; but by and large reviewers will pick it up and ensure a quick and firm rejection (or major revisions). Phew!

4. Be crushed by rejection and negative feedback

Second only to not sending your written work out is this: sending it out, but then buckling completely when it gets rejected. There must be hundreds of (potentially) good papers stuck in limbo because their authors are defeated by something as inconsequential as rejection from one or more journals. So the editors and reviewers didn’t like your paper? EITHER: yes, they’ve pronounced true judgement on your intellectual worthlessness and the irrelevance of your research (in which case by all means leave your paper to rot in the depths of your hard drive); OR perhaps you went for the wrong journal, need to clarify your argument etc, (in which case get cracking on finding a different journal / making revisions, and get it out there again. no excuses).

5. Ignore word limits and reference styles

A fantastic way to get your paper bounced back to you before the editor has even read a word. The journal has a limit of 4,000 words including references, but your study is special, so all the rules for being succinct and equality of space in the issue should be disregarded just for you. Maybe you’ve used qualitative data so need long quotes from interviews (wow! what a pioneering thing you’re doing! Interviews!). Maybe there’s a lot of literature in your field, so you need 2,000 words just of lit review (wow! no-one else has read as much as you!). Maybe your theoretical framework is complex and requires detailed, lengthy explanation (wow!… [you get the message]). A journal editor worth their salt will open your paper, check the word count and bounce it right back to you if it is over.

Perhaps you’ve actually bothered to think about a key argument, and redrafted your paper so it is now a succinct argument that fits within the word limit (or is even well below it so when the reviewers ask for more explanation you have some room for manoeuvre). But fear not – you can still make sure you get rejected quicksmart. Each journal has a clearly specified reference style. But formatting references is boring. Or maybe you haven’t learned to use Endnote properly. Or maybe you think even though all other academics format their own references, the copyeditors should do this for you. Maybe you think the doi numbers in the new APA 6th reference style can be ignored (because you don’t have them and can’t be arsed to go and look them up for all the references in your bibliography). Way to go! You just got yourself a rejection! [I’m not joking: I foolishly neglected to look up the differences between APA 5th and 6th, and had a paper de-submitted from a journal and was smartly told to get the references right if I wanted my paper to be considered].

6. Pay no regard to the aims, scope, and recent content of the journal

Another brilliant way to avoid your work getting into the public domain is to do everything you can to secure a resounding rejection from the editor. Better still, you can get yourself rejected before your paper even gets sent out for review. By some miracle of accident or adversity you’ve got a paper under the word limit, with correct references. You heard from a friend that the Polynesian Quarterly is a highly respected journal, so you send your paper about political resistance in the slums of Detroit off to the editor. You’re not stupid, you see it isn’t a direct fit, but your research is just so good, they’ll want the paper. And anyway, this journal has a big word limit which you need. BOING! Back it comes with a: thanks, but no thanks (the first of these thanks really means: ‘what were you thinking?! why did you waste my precious time?’). Now this is a fairly drastic example, but time and again I hear editors (and experience myself as an editor and reviewer) saying a prime reason for rejection is lack of fit with the journal.

There is a parallel here for book proposals. Your mate published her PhD through Publisher X, so you send your proposal in to them, too. A bigger BOING. Publishers have lists, scope, and priorities just like journals. (Except the fishing ones (often from Germany) who emailed you and said they’d like to publish your PhD; but you’re not considering them, are you?).

(If, on the other hand, you’d like your paper to go out for review, see the end of my previous post on selecting journals).

6. Write one title / abstract, and then a completely different paper

Almost as effective as a complete mismatch between your paper and the journal, is a complete mismatch between your title / abstract, and the main text. If a rejection is what you’re looking for, promising one thing and delivering another is a fairly safe way to go. Set the editor and reviewers up with grand yet specific expectations, but then write something that drifts off course completely and concludes in an utterly surprising way. That way you will confuse, disappoint, frustrate and irritate all the important people in one go.

With book proposals, a great way to get no interest at all in your work is to get it sent to the wrong sub-department. I did this brilliantly in a recent proposal I sent off to Routledge. The book I had in mind was about professional practice and learning, firmly within established fields of educational research. However my proposal clearly left the first reader at Routledge with the impression that it was a book about early childhood development. (It’s about child and family health practices). It got sent to the early childhood people and was swiftly rejected. As of course I would expect. This is not me moaning about Routledge: this is me saying I should have done a better job of making it clear where my work is located academically.

7. Give it all away for free

Please note: a number of people have taken issue with the points I make below. I won’t edit them here, so that the replies and comments make sense. But I will re-quote from the journal submission process to clarify what it is I am warning about. I am essentially saying that you need to make sure you can tick this box: “Confirm that the manuscript has been submitted solely to this journal and is not published, in press, or submitted elsewhere.” I have approved and published the replies because I think it’s important to be open and to be clear that there are different views on this matter. What’s really crucial is that you think carefully and seek informed advice.

Publishers publish to make money. They’re in it for profit. By and large they are not charities. All the big publishers gobbling up all the journals do so because they see there’s money to be made. How do they make their money? Because people, or libraries, pay for access to journals, because people want to read them. And why do people want to read them? Because they can read something there that they can’t read elsewhere: something new!

So a great way to avoid anyone ever wanting to publish your work (in book or journal form) is to make sure that it’s all already out there in the public domain, preferably on a blog or academia.edu or an open access conference website. That way, when you’re asked to tick the box about original work, you can’t do so and your publishing treadmill grinds to a sticky, rusty halt. (Yes, conference papers that get developed into articles are fine, and your thesis can be turned into a book; but you’ve got to be careful about it).

There’s a middle ground here. Before you finish your PhD, or perhaps shortly afterwards, you’re likely to get an email from a publishing company, saying they’d like to publish your PhD as a book. You’re asked to send your manuscript in, and miraculously, within a short time you’ve got the offer of a contract. No proposal. No reviewers’ comments. Just the offer. Your work will be out there, in a book with an ISBN, for sale on amazon etc within days. Problem is, other academics won’t really take this seriously as an academic book, because they’re not convinced a thorough peer review process was undertaken. I’ve used one of these publishers to publish a report that otherwise would have been printed in-house at my uni. Neither are great academic coups, but the published version is at least available online and reaches a wider audience. It doesn’t count as a book on my CV or for my research output. So if you want to show off your shiny book to your friends, and feel good about having got your work out there, but don’t care about your long term academic reputation and publishing prospects, go ahead.

8. Trap your paper in inter-author disputes

Many of us co-author journal papers with colleagues. If you’re hoping to avoid publication, a strong tactic is to make sure there is no clarity around authorial roles and sign-off. Not discussing what contributions, rights and responsibilities are expected from each author is a great way to start. Then, all being well, your draft can get stuck in limbo as authors keep adding changes, undoing the changes their colleagues have just made, and no-one knows who ultimately says ‘Enough! Let’s just send it off!’.

9. Only the best will do

Other students publish in poxy journals with low impact factors. You, however, are the next Einstein / Piaget / [insert relevant superstar here]. You’re head and shoulders better than all the other students around you who frankly, probably barely even qualify for MENSA, and can write their IQ without using standard form. You don’t want to pollute your academic CV with low- or mid-status journals. High status might not even match your utter brilliance. No, for you, it’s got to be Nature, New Scientist, BMJ, [insert your field’s top journal with uber-high rejection rates here]. Nothing else will do. You can say one thing to your publication track record: byeeeeee! [except it doesn’t exist anyway]

10. Cheat: send your article off to more than one journal at once

When the journal submission system asks you if you’ve sent the same paper off to any other journals, they don’t really care, do they? Luckily for all you publishaphobes out there, sending off the same paper to two (or more) journals at once doesn’t double (or triple) your chances of publication. It annihilates them. If you get found out (and chances are you will, because, guess what, editors talk to each other, know and use the same reviewers etc), not only is your work in an article-shaped coffin, but the dirt is being piled on the remains of what was (potentially) your academic career. (This point neglects the idiocy of sending the same paper to two journals: they all have different aims, scope, length, styles, conversation histories – you’d have to be pretty naive to think that this is a way to go anyway, even if it wasn’t one of the seven deadly academic sins).

(NB. With book proposals it may be acceptable to make contact with multiple publishers at once, but check with your supervisor and others first as to how this might play out; also remember that different publishers mean your proposal will be different anyway).

To all the publishaphobes: have a go at diagnosing your phobia. While I’d secretly love you all to remain as you are and lower the competition in journals and books for the rest of us, I think scholarship will be the better for your participation. To those who are up for it, remember these 10 easy steps, but above all, remember never to take them!