Tag Archives: academic publishing

How to make sure people care about your research

No-one cares about your research. Particularly if it’s your PhD (or any other kind of doctorate). In fact if someone knows it’s the latter, or you mention it, they probably care less, or at least have alarm bells ringing that you’re about to launch into a prolonged account of your scholarship woes, the fact your supervisor hasn’t replied to any emails for 17 hours now, the horrible ethics committee, and the impossibility of writing only 100,000 words when it’s taken you 7 years and you’ve just got so much to say…

Of even more concern is the journal reviewer, or the assessor of your grant proposal, who is put off and frustrated before they’ve finished reading the first paragraph.

Fear not, for help is at hand! Fortunately, there is a really easy and effective way to avoid all these problems. Admittedly, this assumes your research does actually matter in some way, in the sense that it connects with something wider and non-trivial.

My solution will cost you nothing: no hard currency, no bitcoins, and no sleepless nights. Probably not even any extra words. In fact you may end up telling and selling the story of your research in fewer words than before! All it takes is a bit of trust, and a few minutes of your time.

My solution is this: when introducing your research, use a sequence that follows a ‘so’ logic rather than a ‘why?’ logic. This may well involve reversing the order of your ideas and sentences. If so, rejoice! – because this means you’ve already had all the right ideas, made all the right connections. You just need to turn it all upside down.

So what on earth is a ‘so’ logic, or a ‘why?’ logic, and why do these matter?

A ‘why’ logic is based on a sequence of sentences where each sentence is followed by one that explains it. Example:

My research is about improving generic skills of university graduates.

This is important because employers increasingly look for generic skills in recruiting new staff, and repeatedly report shortcomings among graduates.

This matters because generic skills are known to be crucial to successful business innovation.

This looks great, right? It’s clear, follows a nice logical order, and explains to the reader why your research is important. I’ll admit, it’s not bad. It’s just that I think it could be better. What’s really going on in the sequence above is an unwritten conversation with the reader. Let’s look at it again, this time with the silent responses inserted:

My research is about improving generic skills of university graduates.

[So what?]

This is important because employers increasingly look for generic skills in recruiting new staff, and repeatedly report shortcomings among graduates.

[Yeah. And? Why should I care about that?]

This matters because generic skills are known to be crucial to successful business innovation.

[Oh! Now I get it!]

Look at it from the reader’s point of view. Your first sentence left them unconvinced, and probably rang all the alarm bells of dread, foreboding the terrors I outlined at the beginning of this post. Only after pushing you twice for more information are they rewarded with something that they actually ‘get’, and might even care about. To them your research, in only three sentences, has been an uphill slog, full of doubt, experienced as some kind of puzzle that leaves them guessing. After each sentence they are left asking themselves: “why?”. This is the reason I call this a ‘why?’ logic.

But it doesn’t have to be this way. We can swap ‘why?’ for ‘so’. And we barely have to change a word. In fact we delete quite a few!

Generic skills are known to be crucial to successful business innovation.

Employers increasingly look for generic skills in recruiting new staff, and repeatedly report shortcomings among graduates.

My research is about improving generic skills of university graduates.

In this logic, you start with the idea that the reader really ‘got’ in the first scenario. The thing that matters most universally, directly and immediately to your readers. The kind of thing that they will accept as obvious, perhaps even unquestionable. There’s nothing wrong with showing a reader that you are both on the same wavelength. Take a shared assumption about something that you know to be a common concern. Something you don’t have to convince them to care about. Exploit what’s already there between you!

Then simply follow up with a sentence that leads from that towards your research, gradually narrowing down. What’s happening this time is something more like this:

Generic skills are known to be crucial to successful business innovation.

[Absolutely! You sound like a sensible sort of person who knows what I care about. I’m curious. Tell me more].

Employers increasingly look for generic skills in recruiting new staff, and repeatedly report shortcomings among graduates.

[Yes. That makes sense.]

My research is about improving generic skills of university graduates.

[Seriously?! Wow! That’s wonderful! It’s just what we need. And it sounds very focused too. Tell me all about it in intricate detail!]

At each step you carry the reader with you, and one sentence follows on from the next, exploiting this. Sentence 1 [brilliant!] so… sentence 2 [amazeballs!] so… sentence 3 [no way! Where’s that Nobel prize nomination form?]

That’s it. It may take you more than 3 sentences (hopefully not too many more, though).

Give it a try. I dare you. What have you got to lose?

Acknowledgement

I would like to acknowledge the influence of Martyn Hammersley’s framework for reading ethnographic research (see my video and podcast), Pat Thomson and Barbara Kamler’s miraculous ‘tiny texts’ approach to writing abstracts, the group of UTS Doctor of Education students based in Hong Kong, and Lee Williamson from UTS’ Research Office. Without you all this would never have come to fruition.

A PhD student receives a rejection from a journal. Here is how she and her supervisors responded

I was talking with a colleague recently who described an interaction with one of her students who had been rejected from a journal. The response of her supervisors sounded really interesting, so I asked if she’d mind forwarding the emails on to me for a blog post. Which she kindly did! There’s a lot here that is useful in thinking about how to respond when you get rejected. I should point out this is in a country where many students complete a PhD through publications, and in this case the article was written by the student, with all the supervisors helping her and named as authors.

First the student wrote to her supervisors

Dear supervisors,

At last I have got response from the journal regarding my second manuscript. Unfortunately they are not interested to publish it.

I´m very disappointed about that. I can agree with a lot of the comments, it is useful for me in the future process but it has taken over 6 months to deliver that answer and right now I don´t have so much positive energy to restart the work.

I think I can interpret their comments (at least from the first reviewer) as if I rewrite the manuscript I can try to resubmit it but I´m not really sure if that is their suggestion.

Then one supervisor replied, cc’ing the others

Thank you for your email. Yes that is somewhat disappointing, but from the comments, perhaps it is good that it isn’t published in its current form: because from what the reviewers saw, I don’t think the paper did full justice to your work and your thinking! Better to have a stronger paper published, even if it is later.

I have had similarly prickly experiences, particularly in this journal, with reviewers who really want accounts of research to feel as if the research was quantitative (a bit like reviewer 1 worrying about interpretation in ethnographic research etc).

On the plus side:

  1. Both reviewers appear to have read your paper in quite a bit of detail! (which is not always the case)
  2. Both reviewers have offered well-written comments that are quite easy to understand (which is not always the case)
  3. There is lots in the comments that will help to improve the paper.

I think both the reviewers offer largely helpful comments – they are not fighting the kind of story you want to tell, or questioning its importance. They do want to know more concrete detail about the study methods, want a clearer alignment between the question, theory, findings and discussion, and a very clear argument as to what is new and why it matters. They are all very achievable without having to go back and do more analysis!

I think the process now should be to wait a few days until you feel a bit less fed up, and then to start:

  1. Thinking of alternative journals (although R1 seemed to invite this, the journal is definitely not asking for a resubmission, as I interpret the email). XXX might be one possibility. Or YYY?
  2. Coming up with your own to-do list in terms of changes you think are worth making to the paper – and perhaps differentiating those that are small/easy, and those that require a bit more thought and work. You can also list those points the reviewers made that you’re not so bothered about and don’t want to make big changes for.

So, when you’re feeling you have the energy to take it up again, those are my suggestions 🙂

Then another supervisor added her voice

I understand that it feels a bit disappointing, particularly since they kept you waiting so long for the decision. But I can only echo what [Supervisor 1] is suggesting: once you have worked through the comments, your paper will be much stronger. I think you should let it sit while you are completing the paper on the [different analysis]; you are in a good flow with that one at the moment! And we should think of an alternative journal, I agree; we need to aim for one that is included in Web of Science.

And then a third supervisor added his voice

This is the kind of experience that is not just something that happens sometimes, but rather the rule than the exception. And just as S1 and S2 state, it will in the end improve the paper. But I do agree they could have given us this feedback at least half a year earlier….

I also think S2’s advice is right; go on with the paper on [different analysis] and let this paper rest (just like a wine; it will become better with time and maturation – ask your husband!).

So let this experience take its time and aim for a journal that is indexed in Web of Science, although the IF is not too important.

Then the student replies

Thanks for the support!

I totally agree with you all and as I said, the comments from the reviewers are very good for me in the future process and also for my paper regarding the [different analysis]. I struggle with the same issues here I guess; clear arguments for the study, evidence for my findings and how to discuss that much more clearly.

Brief comment from me

What I like here is:

  1. That we end up with the student being able to take the rejection letter as a way to identify some things that she needs to look out for in another paper
  2. That S3 normalises this kind of experience
  3. That S2 provides very concrete suggestions in terms of not getting distracted by the rejection when work is going well on another paper
  4. That S1 finds positive things to appreciate in the reviewers’ comments, even though it was a rejection
  5. That the student felt comfortable sharing this, and got such strong and immediate support.

A guide to choosing journals for academic publication

The key is the match between your paper and the journal

Choosing a journal for your paper is a complex and nuanced process. Don’t expect to be able to ask anyone else off the cuff and get a sensible answer. Only people who know what you want to say and what you want to achieve in saying it can provide guidance, and even then it’s up to you to judge. In writing this I hope to make this process more transparent, and to help you be as informed as possible about your decisions. If you disagree, can add more things to consider, or can add more measures of status, please leave a response at the bottom!

Chicken and egg

Which comes first, the paper or the choice of journal? Neither. Both. In my view you can’t write a good paper without a sense of the journal you are writing for. How you frame the argument / contribution, how long it is, which literature you locate it within, how much methodological detail, how much theoretical hand-holding is needed for readers, what kind of conclusions you want to present, what limitations you should acknowledge: ALL of these are shaped by the journal. But how do you know the answers to these questions? Usually by writing a draft! See the chicken-egg problem? My process is as follows:

  1. Come up with a rough idea for a paper – what data am I going to analyse, with what theoretical focus, presenting what new idea?
  2. Come up with a short list of potential journals (see below)
  3. Plan the paper down to paragraph level – this helps me think through the ideas and make good judgements about the fit between it and the journals on the short list.
  4. Choose a journal. If in doubt write the abstract and send it to the editor for initial comment: what’s the worst that could happen? She or he could ignore it!

An ongoing conversation

Most journal editors want to publish papers that join and extend a dialogue between authors that is already happening in their journal. This gives the journal a certain shape and develops its kudos in particular fields or lines of inquiry. If no-one has even come close to mentioning your topic in a particular journal in the last 5 years, I’d think twice about targeting that outlet. Unless you really are planning a major disruption and claiming woeful neglect of your topic (which says something about the editors…)

Check out the editors, and stated aims and scope

Editors have the ultimate say over whether or not to accept your paper. Check out who they are, and do some research. What are their interests? How long have they been on the editorial board? If it’s a new editorial board, are they signalling a broadening, narrowing, or change in scope perhaps? What special issues have come out?

Don’t be stupid

Don’t get the journal equivalent of ‘bright lights syndrome’ and choose somewhere just because it is uber-high status (like Nature). Don’t be a ‘sheep’ either and choose a journal just because someone you know has got their paper accepted in it. Don’t send a qualitative paper to a major stats / quantitative journal. Don’t send a piece of policy analysis from (insert your random country of choice here) to a major US journal (for example) when your paper has nothing to say to a US audience.

The devil is in the detail: yes – more homework

Check out things like word limits, and whether they include references. If the journal allows 3,000 words including references, and your argument takes 5,000 to develop, either change your argument or change the journal. Simples. Also check out the review process. Look under abstracts in published papers for indications as to the timeline for review, and check if there are online preview or iFirst versions published (which massively reduces the time to publication). Don’t be caught out with a whopping fee for publication if your paper is accepted. And don’t be shocked when you read the copyright form and find it costs $3,000 for open access. Some journals publish their rejection rates: you’d be foolish to plough on not knowing 90% of papers are rejected even before review (if this was the case).

Publish where the people you want to be visible to are reading

Think who you want to read your paper. Forget dreams of people from actual real life reading academic journals. The only people who read them (except some health professionals) are, on the whole, other academics. This isn’t about getting to the masses: there are other, better venues for that. This is about becoming visible among your disciplinary colleagues. Where are the people you like and want to be known to in your field publishing? What journals do they cite in their papers?

Understand the status of the journal you are submitting to and its implications for your career

This is the biggie. So big I’ve written a whole section on how to do this below. But for now a few key points.

  1. It pays to know what will be counted by universities in terms of outputs, and what will have kudos on your CV. In Australia, for example, journals not on the ERA list are pretty much no-go. In some fields (particularly hard science and health), journals not indexed in Web of Science aren’t recognised as worth the paper (or pixels) they are printed on.
  2. Remember that status measures only measure what can be measured. A really prestigious journal in your field – with lots of top people publishing lots of great papers in it – might be lower (or not even register at all) in all the various indices and metrics.
  3. There is no single flawless measure of status. Take a multi-pronged approach to suss out where a particular journal lies between ‘utter crap that publishes anything’ to ‘number 1 journal in the world for Nobel Laureates only’.
  4. There are many good reasons for publishing deliberately in lower status journals. It may be they have the ‘soft’ status I mentioned above. Maybe that is where you can actually say what you want to say without having to kow-tow to ridiculous reviewers who don’t understand or accept your innovative approach (which they view as floppy, oddball etc.).

How journal status is measured and how to find this information out

A whole book could be written on this, so please forgive my omissions.

Impact Factor

This is the one everyone talks about. It is also the bane of many people’s lives outside natural and health sciences. Impact Factor is a measure of the mean number of citations to recent articles published in a particular journal, excluding citations in other papers in the same journal. So an Impact Factor of 2.01 in Journal X means that each paper in X has been cited a mean of 2.01 times in all the other indexed journals, except X, over the past two years (five year figures are also used). The higher the impact factor, the higher the status, because it shows that the papers are not only read but they are cited lots too. Excluding the ‘home’ journal stops editors bumping up their own Impact Factor by forcing authors to cite papers in their journal. Why is this problematic? Where do I start?!
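To make the arithmetic above concrete, here is a minimal back-of-envelope sketch. The journal name and all the numbers are made up for illustration; they are not real citation data.

```python
# Illustrative sketch of the two-year Impact Factor arithmetic described
# above. The figures below are invented, not real journal data.

def impact_factor(citations_to_recent_articles: int,
                  articles_published: int) -> float:
    """Mean citations per article over the two-year window."""
    return citations_to_recent_articles / articles_published

# Suppose hypothetical Journal X published 150 articles over two years,
# and those articles were then cited 302 times in the other indexed
# journals. Its Impact Factor would be:
print(round(impact_factor(302, 150), 2))  # -> 2.01
```

The same division underlies the five-year variant: a longer citation window on the numerator, and the articles published in that window on the denominator.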

  1. Not all citations are for the same reason but they all get counted the same. If you cite paper P as one of several that have investigated a topic, and paper Q as a hopeless study with flawed methods, and paper R as hugely influential and formative, shaping your whole approach, they all get counted the same. In theory, publishing a terrible paper that gets cited lots for being terrible can boost an Impact Factor.
  2. The key is in the reference to other indexed journals. The issue is: what gets to be indexed? There are strict rules governing this, and while it works okay in some fields, lots of important, robust journals in social sciences and humanities aren’t indexed in the list used to calculate Impact Factor; at least that is my experience. This can deflate Impact Factor measures in these fields because lots of citations simply don’t get counted. The formal ‘Impact Factor’ (as in the one quoted on Taylor and Francis journal websites, for example) is based on Journal Citation Reports (Thomson Reuters), drawing on over 10,000 journals. Seems a lot? In my field, many journals are missed off this index.
  3. The time taken to be cited is often longer than two years (google ‘citation half-life’ for more). Let’s say I read a paper today in the most recent online iFirst. I think it’s brilliant, and being a super-efficient writer, I weave it into my paper and submit it in a month’s time. It takes 9 months to get reviewed, and then another 3 months to get published online. Then someone reads it. The process starts again. If the world was full of people who read papers the day they came out, and submitted papers citing them almost immediately, still the lag-time to publication in many fields prevents citations within the magic 2 year window. There are versions of Impact Factor that take five years into account to try to deal with this problem. This is better, but doesn’t benefit the journals that publish the really seminal texts that are still being cited 10, 15, 20 years later.
  4. Impact Factors are not comparable across disciplines. An Impact Factor of 1.367 could be very low in some sciences, but actually quite high in a field like Education. So don’t let people from other fields lead your decision making astray.
  5. Impact Factor may work very well to differentiate highly read and cited journals from less highly read and cited ones in some fields (where the value range is great, say from 0 to over 20), but in fields where the range for most journals is between 0 and 1.5 its utility for doing so is limited.
  6. Editors can manipulate Impact Factors to a degree (eg by publishing lots of review articles, which tend to get cited lots). See Wikipedia’s page on impact factor for more.

How do you find out the Impact Factor for a journal? If you don’t know this you haven’t been using your initiative or looking at journal webpages closely enough. Nearly all of them clearly state their Impact Factor somewhere on the home page. What can be more useful though is knowing the Impact Factors for journals in your field. In this case you need to go to Web of Science. I recommend downloading the data and importing it into Excel so you can really do some digging. In some cases it may not be so obvious to find, in which case try entering ‘Journal title Research Gate’ into google eg ‘Studies in Higher Education Research Gate’. The top result should give the journal title and research gate, and a url like this: http://lamp.infosys.deakin.edu.au/era/?page=jnamesel12f . Immediately on clicking the link you will find data on Impact Factor, 5 year Impact Factor and more (based on Thomson Reuters). Note this is not an official database and may be out of date at times.

Alternatives to Impact Factor: SJR

An alternative that may work better in some fields is the Scopus Scimago Journal Rankings (SJR). This includes a range of metrics or measures, and I have found it includes more of the journals I’ve been reading and publishing in (in Education). The SJR indicator is calculated in a different way from Impact Factor (which I admit I don’t fully understand, see this Wikipedia explanation). It has a normalising function as part of the calculation which reduces some of the distortions of Impact Factor and can make it more sensitive within fields where there are close clusters. SJR also has its version of impact called the ‘average citations per document in a 2-year period’. When I compare the SJR and Thomson Reuters measures for journals in my field, some are very similar and some are quite different. So it pays to do your homework. SJR data are also easily exportable to Excel, and you can then easily find where journals lie in a list from top to bottom by either of these measures (or others that SJR provide).

The easiest way to find out the SJR data for a particular journal is simple: type the journal name and SJR into google eg ‘Studies in Higher Education SJR’. Almost always the top result will be from SCImago Journal & Country Rank, something like http://www.scimagojr.com/journalsearch.php?q=20853&tip=sid . If you go there you’ll find a little graph on the left hand side showing the SJR and cites per doc tracking over 5 years, given to 2 decimal places. There is also a big graph, with a line for each of these two metrics. If you hover over the right hand end, you get the current figure to 3 decimal places. See the screen shot below.

Scimago info

A screen shot from SJR showing the Indicator and cites per paper data

Alternatives to Impact Factor: Zombie Journal Rankings

In Australia, lots of journals were, at one time, ranked A*, A, B or C. This was done using a pool of metrics and also peer-based data, with groups of academics providing information based on their expertise. For various reasons (don’t get me started) these have been abolished. However they are still a common reference point in many fields in Australia and New Zealand, and so I call them ‘zombie rankings’. Even if you’re not in Australasia, it might be useful to look up what the rank was, to see if it confirms what you’re finding from other measures. The quickest way is to go to the Deakin University hosted webpage and to check under Historical Data, then Journal Ranking Lists, then 2010 (the rankings were alive in 2010, and abolished shortly afterwards). The direct URL is here: http://lamp.infosys.deakin.edu.au/era/?page=fnamesel10 . Type in the journal name, or a keyword and ta-dah! If you just type in keywords you will get multiple results and may be able to see a range of options. I’ve put an image of what it looks like below. Pretty easy stuff.

Zombie Ranks

A screen shot from the Deakin website showing former ERA journal rankings

Alternatives to Impact Factor: ERA list

Now that there are no rankings, ‘quality’ is indicated in a binary way: either a journal is included in the ERA list or it is not. We’ve just had a process in Australia of nominating new journals to be included in the list for 2015. But the current 2012 list is also available through Deakin: http://lamp.infosys.deakin.edu.au/era/?page=jnamesel12f .

Alternatives to Impact Factor: rejection rates

The more a journal rejects, the better it must be, right? Well that is the (dubious, in my view) logic underpinning the celebration of high rejection rates in some journals. I’m more interested in what gets in and what difference that makes to scholarly discourse, than what is thrown out. But hey, if you can find this information out (and it’s not always easy to do), then it may be worth taking into consideration. More for your chances of survival than as a status indicator perhaps.

Alternatives to Impact Factor: ask people who know!

While only you can judge the match between your paper and a journal, lots of people in your field can give you a sense of where is good to publish. This ‘sense’, in my view, is not to be dismissed because it cannot be expressed in a number or independently verified. It is to be valued because it draws (or should do) not only on knowledge of all the metrics, but on years of experience and reading.

Conclusions

Choosing journals is tricky. If you’re finding it quick and easy it’s probably because you’re not doing enough homework, and a bit more time making a really well informed decision will serve you well in the long run. As I said earlier this post is not exhaustive either in terms of things to consider in your choice, or status indicators. But I hope this is useful as a starting place.

10 easy ways to make sure you have no publication record when you finish your PhD and forever after

Since posting this I have created a slideshow highlighting some of the key points, along with those from the subsequent post about not getting read or cited.

There is a lot of pressure on doctoral students and early career academics to publish. Want even the slightest chance of getting a job? Publish. Want anyone other than your examiners to read your work? Publish. Want to actually contribute to knowledge? Publish. Want to do the ethical thing and deliver what was promised to the people who funded your work, or those who contributed to it through support, helping with data etc? Publish.

Now, some of you may wish to do those things, but in my experience there seem to be plenty of people out there who don’t. They see publication as the ultimate stain on their good reputation, the catastrophe to end all catastrophes, the academic apocalypse. They are the publishaphobes.

Well there is good news! By following these few and easy rules, you too can make sure your work gathers dust on library shelves (or better still in the basement), so that no-one ever reads it, and the labour of love that has invaded the last 3+ years of your life can all come to nothing more than some letters before or after your name. Perhaps the non-publishing option makes sense because you’re an intellectual fraud and are afraid of getting found out.

1. Keep your papers locked away in your computer / desk drawer

By far the easiest way to make sure you never have anything published is to never actually send anything off for review. Reasons for this may be fear of critical feedback and perfectionism (see below), but it’s worth making this simple but powerful point: NOT sending your paper (or book proposal etc) off is the only 100% safe guarantee to make sure you NEVER get published. Simples. When you wonder how those stellar professors, or the students / postdocs who seem to be on a fast-track to tenured jobs and academic stardom got so many publications, the answer is: they sent lots of stuff off for review (notwithstanding all the rejections they got along the way).

2. Wait until your paper is perfect before you submit it

You’ve realised that you have to submit something in order for it to get published. Well done you! But you know it has to be good to stand a chance, so you’re going to let it sit for a while and come back and tweak it later. You know you don’t take rejection or harsh feedback well, so better to get it perfect first, right? WRONG. Perfectionism is the enemy of publication. You’ll never write anything perfect so stop trying.

3. Send half-baked crap off while suffering EOS

The perfect counterpart to perfectionism. Or should that be imperfect? Pat Thomson has written an excellent blog post about ‘early onset satisfaction’ (EOS) – a bad thing for writing and writers: “feeling too happy with a piece of writing meant that you didn’t rewrite and rewrite as often and as hard as you ought to” (the phrase being attributed to Mem Fox). Pat recalls a time when she was reviewing an article for a journal and came to the conclusion that the author had been struck with EOS, and probably hadn’t given it to anyone else to read, or ‘if they had, I’d have taken bets that they hadn’t asked anyone to ask them the hard questions – like – so what, and why should I care?’. Atta boy! Way to go! The peer review process isn’t 100% foolproof, so there is a small chance that someone will publish the rubbish that your bout of EOS has duped you into regarding as brilliant; but by and large reviewers will pick it up and ensure a quick and firm rejection (or major revisions). Phew!

4. Be crushed by rejection and negative feedback

Second only to not sending your written work out is this: sending it out, but then buckling completely when it gets rejected. There must be hundreds of (potentially) good papers stuck in limbo because their authors are defeated by something as inconsequential as rejection from one or more journals. So the editors and reviewers didn’t like your paper? EITHER: yes, they’ve pronounced true judgement on your intellectual worthlessness and the irrelevance of your research (in which case by all means leave your paper to rot in the depths of your hard drive); OR perhaps you went for the wrong journal, need to clarify your argument etc, (in which case get cracking on finding a different journal / making revisions, and get it out there again. no excuses).

5. Ignore word limits and reference styles

A fantastic way to get your paper bounced back to you before the editor has even read a word. The journal has a limit of 4,000 words including references, but your study is special, so all the rules for being succinct and equality of space in the issue should be disregarded just for you. Maybe you’ve used qualitative data so need long quotes from interviews (wow! what a pioneering thing you’re doing! Interviews!). Maybe there’s a lot of literature in your field, so you need 2,000 words just of lit review (wow! no-one else has read as much as you!). Maybe your theoretical framework is complex and requires detailed, lengthy explanation (wow!… [you get the message]). A journal editor worth their salt will open your paper, check the word count and bounce it right back to you if it is over.

Perhaps you’ve actually bothered to think about a key argument, and redrafted your paper so it is now a succinct argument that fits within the word limit (or is even well below it, so when the reviewers ask for more explanation you have some room for manoeuvre). But fear not – you can still make sure you get rejected quick smart. Each journal has a clearly specified reference style. But formatting references is boring. Or maybe you haven’t learned to use Endnote properly. Or maybe you think even though all other academics format their own references, the copyeditors should do this for you. Maybe you think the DOI numbers in the new APA 6th reference style can be ignored (because you don’t have them and can’t be arsed to go and look them up for all the references in your bibliography). Way to go! You just got yourself a rejection! [I’m not joking: I foolishly neglected to look up the differences between APA 5th and 6th, and had a paper de-submitted from a journal and was smartly told to get the references right if I wanted my paper to be considered].

6. Pay no regard to the aims, scope, and recent content of the journal

Another brilliant way to avoid your work getting in the public domain is to do everything you can to secure a resounding rejection from the editor. Better still, you can get yourself rejected before your paper even gets sent out for review. By some miracle of accident or adversity you’ve got a paper under the word limit, with correct references. You heard from a friend that the Polynesian Quarterly is a highly respected journal, so you send your paper about political resistance in the slums of Detroit off to the editor. You’re not stupid, you see it isn’t a direct fit, but your research is just so good, they’ll want the paper. And anyway, this journal has a big word limit which you need. BOING! Back it comes with a: thanks, but no thanks (the first of these thanks really means: ‘what were you thinking?! why did you waste my precious time?’). Now this is a fairly drastic example, but time and again I hear editors (and experience myself as an editor and reviewer) saying a prime reason for rejection is lack of fit with the journal.

There is a parallel here for book proposals. Your mate published her PhD through Publisher X, so you send your proposal in to them, too. A bigger BOING. Publishers have lists, scope, and priorities just like journals. (Except the phishing ones (often from Germany) who emailed you and said they’d like to publish your PhD; but you’re not considering them, are you?).

(If, on the other hand, you’d like your paper to go out for review, see the end of my previous post on selecting journals).

6. Write one title / abstract, and then a completely different paper

Almost as effective as a complete mismatch between your paper and the journal, is a complete mismatch between your title / abstract, and the main text. If a rejection is what you’re looking for, promising one thing and delivering another is a fairly safe way to go. Set the editor and reviewers up with grand yet specific expectations, but then write something that drifts off course completely and concludes in an utterly surprising way. That way you will confuse, disappoint, frustrate and irritate all the important people in one go.

With book proposals, a great way to get no interest at all in your work is to get it sent to the wrong sub-department. I did this brilliantly in a recent proposal I sent off to Routledge. The book I had in mind was about professional practice and learning, firmly within established fields of educational research. However my proposal clearly left the first reader at Routledge with the impression that it was a book about early childhood development. (It’s about child and family health practices.) It got sent to the early childhood people and was swiftly rejected. As of course I would expect. This is not me moaning about Routledge: this is me saying I should have done a better job of making it clear where my work is located academically.

7. Give it all away for free

Please note: a number of people have taken issue with the points I make below. I won’t edit them here, so that the replies and comments make sense. But I will re-quote from the journal submission process to clarify what it is I am warning about. I am essentially saying that you need to make sure you can tick this box: “Confirm that the manuscript has been submitted solely to this journal and is not published, in press, or submitted elsewhere.” I have approved and published the replies because I think it’s important to be open and to be clear that there are different views on this matter. What’s really crucial is that you think carefully and seek informed advice.

Publishers publish to make money. They’re in it for profit. By and large they are not charities. All the big publishers gobbling up all the journals do so because they see there’s money to be made. How do they make their money? Because people, or libraries, pay for access to journals, because people want to read them. And why do people want to read them? Because they can read something there that they can’t read elsewhere: something new!

So a great way to avoid anyone ever wanting to publish your work (in book or journal form), is to make sure that it’s all already out there in the public domain, preferably on a blog or academia.edu or an open access conference website. That way, when you’re asked to tick the box about original work, you can’t do so and your publishing treadmill grinds to a sticky, rusty halt. (Yes, conference papers that get developed into articles are fine, and your thesis can be turned into a book; but you’ve got to be careful about it).

There’s a middle ground here. Before you finish your PhD, or perhaps shortly afterwards, you’re likely to get an email from a publishing company, saying they’d like to publish your PhD as a book. You’re asked to send your manuscript in, and miraculously, within a short time you’ve got the offer of a contract. No proposal. No reviewers’ comments. Just the offer. Your work will be out there, in a book with an ISBN, for sale on Amazon etc. within days. Problem is, other academics won’t really take this seriously as an academic book, because they’re not convinced a thorough peer review process was undertaken. I’ve used one of these publishers to publish a report that otherwise would have been printed in-house at my uni. Neither are great academic coups, but the published version is at least available online and reaches a wider audience. It doesn’t count as a book on my CV or for my research output. So if you want to show off your shiny book to your friends, and feel good about having got your work out there, but don’t care about your long term academic reputation and publishing prospects, go ahead.

8. Trap your paper in inter-author disputes

Many of us co-author journal papers with colleagues. If you’re hoping to avoid publication, a strong tactic is to make sure there is no clarity around authorial roles and sign-off. Not discussing what contributions, rights and responsibilities are expected from each author is a great way to start. Then, all being well, your draft can get stuck in limbo as authors keep adding changes, undoing the changes their colleagues have just made, and no-one knows who ultimately says ‘Enough! Let’s just send it off!’.

9. Only the best will do

Other students publish in poxy journals with low impact factors. You, however, are the next Einstein / Piaget / [insert relevant superstar here]. You’re head and shoulders better than all the other students around you who frankly, probably barely even qualify for MENSA, and can write their IQ without using standard form. You don’t want to pollute your academic CV with low- or mid-status journals. High status might not even match your utter brilliance. No, for you, it’s got to be Nature, New Scientist, BMJ, [insert your field’s top journal with uber-high rejection rates here]. Nothing else will do. You can say one thing to your publication track record: byeeeeee! [except it doesn’t exist anyway]

10. Cheat: send your article off to more than one journal at once

When the journal submission system asks you if you’ve sent the same paper off to any other journals, they don’t really care, do they? Luckily for all you publishaphobes out there, sending off the same paper to two (or more) journals at once doesn’t double (or triple) your chances of publication. It annihilates them. If you get found out (and chances are you will, because, guess what, editors talk to each other, know and use the same reviewers etc), not only is your work in an article-shaped coffin, but the dirt is being piled on the remains of what was (potentially) your academic career. (This point neglects the idiocy of sending the same paper to two journals: they all have different aims, scope, length, styles, conversation histories – you’d have to be pretty naive to think that this is a way to go anyway, even if it wasn’t one of the seven deadly academic sins).

(NB. With book proposals it may be acceptable to make contact with multiple publishers at once, but check with your supervisor and others first as to how this might play out; also remember different publishers means your proposal will be different anyway).

To all the publishaphobes: have a go at diagnosing your phobia. While I’d secretly love you all to remain as you are and lower the competition in journals and books for the rest of us, I think scholarship will be the better for your participation. To those who are up for it, remember these 10 easy steps, but above all, remember never to take them!

Self-sabotage your academic career

I’ve been doing lots of workshops about academic careers, doctoral study, publications, perfectionism, study habits etc. recently.

Noah Riseman (of Australian Catholic University) pointed out this article in the Chronicle of Higher Education to me, and it is well worth a read. Be honest with yourself when you read it.

My big take home lessons (in a deliberately blunt style):

1. Don’t wait around for someone to pat you on the back and give you wonderful opportunities / blank research funding cheques / book contracts / tenured jobs. If you’re not doing anything about this, you can pretty much assume no-one else is either.

2. Don’t delay by seeking perfection. Nothing you write will ever be perfect. Deal with it and get it out there. But don’t rush it all either. Hit the sweet spot (and I would add: be ready to accept that much if not all of what we do at least in part reflects what we can do in particular times and circumstances).

3. Don’t mope and self-victimise in the face of failure and harsh reviews. Sure it will feel rubbish for a while. But if you’re not able to cope with criticism and rejection, academia probably isn’t for you. Sorry but that’s pretty much the size of it. And in case you doubt: I’m pretty happy to say I’ve been rejected by plenty of journals, research funders, and job panels in my time. Yes, it didn’t feel great when it happened. But no, I’m not embarrassed by it, or ashamed. Nor do I allow it to fuel self-doubt.

4. Be visible (and as per point 1, don’t expect others to go around shining the light on you), but be ready to step aside as personal and political storms pass.

5. Be flexible and coherent at the same time. Chances are the job that consists of lecturing and researching on the topic of your PhD does not, and probably never will, exist. Be ready to go where the money is or jobs are. I moved from geography in secondary schools to a project about doctoral education, and now am researching health. But I can tell a coherent story about pursuing questions of learning, consistent methodologies, and developing theoretical approaches. Be ready to teach courses that aren’t in your direct area. It’s super-competitive out there so you can’t be precious. And you can’t be stuck in what was interesting / good for you at one moment in time. The world and academic disciplines will move (on) regardless of how much you still love your doctoral topic and paradigm.

Journal impact factors, rankings, and citations: why I do and don’t bother about them!

I was nudged into writing this blog sooner rather than later by a tweet from a doctoral student who had been encouraged to target high impact factor journals.

Why I care about impact factors

Impact factors are a measure of how many times articles in a particular journal get cited within a particular group (not all) of journals within a particular period of time. They are an average metric and give some useful information. A journal with a really low impact factor, it would seem, either isn’t read by many people, or is read, but people don’t then cite articles from that journal in their own work. A high impact factor suggests lots of readers, and that lots of them cite those articles when they themselves write papers.
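For the curious, the arithmetic behind the common two-year impact factor is straightforward (this is a rough sketch; the function name and the numbers are illustrative, not taken from any real journal):

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Toy two-year impact factor: citations received this year to items
    the journal published in the previous two years, divided by the
    number of citable items it published in those two years."""
    if articles_prev_two_years == 0:
        raise ValueError("no citable items in the window")
    return citations_this_year / articles_prev_two_years

# e.g. 150 citations this year to the 100 articles published
# across the previous two years gives an impact factor of 1.5
print(impact_factor(150, 100))  # → 1.5
```

Note that the numerator counts every citation the same, whichever journal it lands in and whatever the citing author thought of the paper, which is exactly the crudeness discussed below.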

It’s important to know the quality of the journals we are submitting to. Why?

  1. As part of our general professionalism and maintenance of standards, rigour and pride in our work
  2. Because we cannot escape being measured ourselves against performance indicators – it affects our institutional standing, chances of promotion, getting research grants etc
  3. Because it’s not only nice, but good to be publishing in the best venues alongside other scholars as brilliant as yourself – your work can receive a kind of status boost by virtue of its being in a good journal. Like moving to a posh suburb – people think you’re posh because you live there.

Why impact factors don’t make (much) sense

Impact factors are a very crude metric. They look at the bibliography part of a journal article, to see who has been cited. They do not take into account that we cite other works for very different reasons.

In a paper I’ve been working on, I use Theodore Schatzki’s practice theory as a central conceptual and methodological framework, and so cite his work (lots) in my paper. I also cite a few people who have looked at practice in my field, including noting the limitations of this work. Schatzki’s papers influenced my work profoundly; the others did not. But they get counted the same.

A journal could boost its impact factor by publishing a highly controversial paper that everyone disagrees with and so it gets cited lots.

Impact factors are often based only on citations in other journals. If a paper gets cited lots in books and book chapters, but less in journal articles, the impact factor goes down. (Reflecting the science-based nature of these metrics, where the proportion of publications in journals is higher than in many other fields.)

In sciences, journal papers can be reviewed, accepted and published in very short times (weeks, even). In many social sciences, it can take 3-6 months to get a review, more weeks to make revisions, and then weeks or months before something is in print. Then someone has to read it, write a paper, and submit it, so the whole process starts again. Realistically, from the moment you write, it could be years before you get a chance to be cited.

And the architecture of knowledge isn’t the same: some sciences work on a cumulative model, where the latest trial result or findings are built on very quickly. Social sciences, in my experience, tend to develop more laterally, and patchily. And we tend to cite old things a lot too.

Impact factors have a finite temporal window that simply cannot attend to or reflect this complexity. Impact factors in social science will invariably be lower because it’s a slower field in terms of publishing timescales.

Impact factors are a form of bean counting. The problem is the ‘beans’ that get counted aren’t particularly meaningful.

Finally, impact factors don’t make (much) sense in my experience, because they don’t give a very valid or consistent measure of what I care about in terms of journal quality.

One of my most cited papers is in quite a low-impact journal (and it’s cited meaningfully: it presents a methodological framework that other people are actually using). Some of the papers in higher-impact journals aren’t being cited so much. So aggregate measures like impact factors don’t translate into personal victories, nor do they work as proxy measures for the quality of your article. They say something about the journal you’ve published in as it was at a time before you published in it. They say nothing about your writing or its impact.

[But don’t get me wrong: I admit to loving it when I check google scholar and see my citations or h-index have gone up; and it bothers me to see hard work not getting cited too]
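The h-index mentioned above is itself a simple calculation: the largest number h such that h of your papers have at least h citations each. A minimal sketch (the citation counts here are made up for illustration):

```python
def h_index(citation_counts):
    """h-index: the largest h such that h papers
    have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still 'qualifies'
        else:
            break
    return h

# Five papers with 10, 8, 5, 4 and 3 citations:
# four of them have at least 4 citations, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Like the citation counts it is built from, the h-index says nothing about why any of those papers were cited.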

So what about citation counts then?

At least these are directly tied to your own work. I look at google scholar a lot (because it includes publication forms that scopus and other more science-based citation systems ignore, but which are important and valued in social science). So I can see which of my papers are most cited, and by whom. All very nice for selling myself and showing off how many times things have been cited.

But the same problem applies: this is still only looking at bibliographies and not how or why my work is cited. It could be that everyone is slating my work as rubbish, or that they think it’s amazing. The citation count would be the same.

Rejection rates?

Some journals (particularly those in the USA) champion their rejection rates: “we reject 75% of our papers, so we must be good!” I’d be worried about a journal with a very low rejection rate, because it would imply sloppy reviewing and/or low demand from authors to publish in it. And high rejection rates might mean tough reviewing and therefore quality in papers. But the really popular journals get loads of submissions, and my guess is many of those will be rubbish to start off with. So why conclude that quality reflects how much dross you turn away? Does not compute.

And journal rankings?

In Australia we used to have a system in which journals were ranked A*, A, B, C. This was in theory an improvement on impact factor because it reflected discussions held among experienced researchers in particular fields. Knowledge based on collective decades of working in a field led to classifications of some journals as better than others. Potentially a much richer and more valid set of evidence being drawn upon here. But there was inevitably politics, and some very odd rankings emerged (at least in my field of education). Nonetheless, overall it seemed to me reasonable and the best of the approaches.

One of the most astonishing things about this ranking system was that it got abolished because the powers that be were surprised and annoyed that universities were putting pressure on academics to target the top tier journals.

Seriously? What did they expect? If you attach more value to some things over others (A* and A vs B and C), and then attach high stakes to that value (e.g. rankings being used in grant applications to show good track records), then of course people will behave in a way that seeks to maximise the value attached to their work and increase their chances of getting research funding, promotion etc.

So rankings have been abolished in Australia. But they live on as people still use them when choosing journals, reporting their achievements, selling themselves for promotion etc. In some institutions, A* and A rankings are still formally tied to performance measures, work allocations and so on.

These things matter whether we like them or not.

So what do I consider when I’m targeting a journal?

  1. Does the scope of the journal fit with what I want to say, methodologically, substantively, and theoretically? What’s the conversation going on in that journal to which I can contribute? Does the current editorial board want to take the journal in new directions that fit where I’m headed?
  2. My own sense of quality: do I read lots of papers in this journal? Do I think they are good papers, generally? Are the other people writing in this journal people who I respect and think do good work? In other words, will my paper benefit from being seen in this journal, alongside other papers (and therefore people)?
  3. Other measures of quality: I won’t deny I have my eye on my next grant proposal, promotion application etc. I’ve got to make sure I keep my performance up in ways that satisfy the bean counters and show up on the various measures, however flawed they are.
  4. Being part of a conversation that matters to me and my work. I sometimes target low impact or low-ranked journals because I know they’re read by people I care about, and by the people I want to be aware of my work. New journals brought out by gurus I admire, and highly specialist journals in small sub-fields, can score poorly on metrics but bring kudos in your international research community.
  5. Have I published in it before? Do I know who the editors are and what their take on things is?
  6. Word length, timescales for review, whether they publish online previews etc. All are taken into account.

And a final comment (one that will surely be repeated in future posts):

Remember, you can’t unpublish. Once it’s out there, it’s out there for good. For ever.

So rather than thinking about publishing early, or publishing lots, or publishing in top journals, think about publishing well. Publishing well has nothing to do with impact factors and journal rankings, and everything to do with the quality of your research and what you write. I have faith in my research community and peers to recognise (and cite) good research and good writing when they see it.