A guide to choosing journals for academic publication

The key is the match between your paper and the journal

Choosing a journal for your paper is a complex and nuanced process. Don’t expect to be able to ask anyone else off the cuff and get a sensible answer. Only people who know what you want to say and what you want to achieve in saying it can provide guidance, and even then it’s up to you to judge. In writing this I hope to make the process more transparent, and to help you be as informed as possible about your decisions. If you disagree, can add more things to consider, or know more measures of status, please leave a response at the bottom!

Chicken and egg

Which comes first: the paper or the choice of journal? Neither. Both. In my view you can’t write a good paper without a sense of the journal you are writing for. How you frame the argument / contribution, how long it is, which literature you locate it within, how much methodological detail, how much theoretical hand-holding is needed for readers, what kind of conclusions you want to present, what limitations you should acknowledge: ALL of these are shaped by the journal. But how do you know the answers to these questions? Usually by writing a draft! See the chicken-egg problem? My process is as follows:

  1. Come up with a rough idea for a paper – what data am I going to analyse, with what theoretical focus, presenting what new idea?
  2. Come up with a short list of potential journals (see below)
  3. Plan the paper down to paragraph level. This helps me think through the ideas and make good judgements about the fit between it and the journals on the short list.
  4. Choose a journal. If in doubt write the abstract and send it to the editor for initial comment: what’s the worst that could happen? She or he could ignore it!

An ongoing conversation

Most journal editors want to publish papers that join and extend a dialogue between authors that is already happening in their journal. This gives the journal a certain shape and develops its kudos in particular fields or lines of inquiry. If no one has even come close to mentioning your topic in a particular journal in the last 5 years, I’d think twice about targeting that outlet, unless you really are planning a major disruption and claiming woeful neglect of your topic (which says something about the editors…).

Check out the editors, and stated aims and scope

Editors have the ultimate say over whether or not to accept your paper. Check out who they are, and do some research. What are their interests? How long have they been on the editorial board? If it’s a new editorial board, are they signalling a broadening, narrowing, or change in scope perhaps? What special issues have come out?

Don’t be stupid

Don’t get the journal equivalent of ‘bright lights syndrome’ and choose somewhere just because it is uber-high status (like Nature). Don’t be a ‘sheep’ either and choose a journal just because someone you know has got their paper accepted in it. Don’t send a qualitative paper to a major stats / quantitative journal. Don’t send a piece of policy analysis from (insert your random country of choice here) to a major US journal (for example) when your paper has nothing to say to a US audience.

The devil is in the detail: yes – more homework

Check out things like word limits, and whether they include references. If the journal allows 3,000 words including references, and your argument takes 5,000 to develop, either change your argument or change the journal. Simples. Also check out the review process. Look under abstracts in published papers for indications of the timeline for review, and check if there are online preview or iFirst versions published (which massively reduces the time to publication). Don’t be caught out with a whopping fee for publication if your paper is accepted. And don’t be shocked when you read the copyright form and find it costs $3,000 for open access. Some journals publish their rejection rates: you’d be foolish to plough on not knowing that 90% of papers are rejected even before review (if that were the case).

Publish where the people you want to be visible to are reading

Think about who you want to read your paper. Forget dreams of people from actual real life reading academic journals. The only people who read them (except some health professionals) are, on the whole, other academics. This isn’t about getting to the masses: there are other, better venues for that. This is about becoming visible among your disciplinary colleagues. Where are the people you like and want to be known to in your field publishing? What journals do they cite in their papers?

Understand the status of the journal you are submitting to and its implications for your career

This is the biggie. So big I’ve written a whole section on how to do this below. But for now a few key points.

  1. It pays to know what will be counted by universities in terms of outputs, and what will have kudos on your CV. In Australia, for example, journals not on the ERA list are pretty much no-go. In some fields (particularly hard science and health), journals not indexed in Web of Science aren’t recognised as worth the paper (or pixels) they are printed on.
  2. Remember that status measures only measure what can be measured. A really prestigious journal in your field – with lots of top people publishing lots of great papers in it – might be lower (or not even register at all) in all the various indices and metrics.
  3. There is no single flawless measure of status. Take a multi-pronged approach to suss out where a particular journal lies between ‘utter crap that publishes anything’ and ‘number 1 journal in the world for Nobel Laureates only’.
  4. There are many good reasons for publishing deliberately in lower status journals. It may be they have the ‘soft’ status I mentioned above. Maybe that is where you can actually say what you want to say without having to kow-tow to ridiculous reviewers who don’t understand or accept your innovative approach (which they view as floppy, oddball etc.).

How journal status is measured and how to find this information out

A whole book could be written on this, so please forgive my omissions.

Impact Factor

This is the one everyone talks about. It is also the bane of many people’s lives outside the natural and health sciences. Impact Factor is the mean number of citations, in a given year, to articles a journal published over the previous two years. So an Impact Factor of 2.01 for Journal X means that articles published in X over the past two years were each cited a mean of 2.01 times that year across the indexed journals (five year figures are also used). The higher the Impact Factor, the higher the status, because it shows that the papers are not only read but cited lots too. (Note that citations from the ‘home’ journal do count towards the standard figure, though Journal Citation Reports monitors journals for excessive self-citation.) Why is this problematic? Where do I start?!
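To make the arithmetic concrete, here is a minimal sketch of the two-year calculation. The journal name and figures are invented for illustration, not real data.

```python
# A minimal sketch of the two-year Impact Factor calculation.
# Figures below are invented for illustration, not real journal data.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """e.g. 2021 IF = citations received in 2021 to items published in
    2019-2020, divided by the number of citable items in 2019-2020."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Journal X published 60 citable items in 2019 and 40 in 2020.
# Those items received 201 citations from indexed journals in 2021.
print(round(impact_factor(201, 60 + 40), 2))  # 2.01
```

Note that the denominator counts articles, not citations, which is why a handful of heavily cited papers can drag the whole journal’s figure up.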

  1. Not all citations are for the same reason but they all get counted the same. If you cite paper P as one of several that have investigated a topic, and paper Q as a hopeless study with flawed methods, and paper R as hugely influential and formative, shaping your whole approach, they all get counted the same. In theory, publishing a terrible paper that gets cited lots for being terrible can boost an Impact Factor.
  2. The key is in the reference to other indexed journals. The issue is: what gets to be indexed? There are strict rules governing this, and while it works okay in some fields, lots of important, robust journals in the social sciences and humanities aren’t indexed in the list used to calculate Impact Factor; at least that is my experience. This can deflate Impact Factor measures in these fields because lots of citations simply don’t get counted. The formal ‘Impact Factor’ (as in the one quoted on Taylor and Francis journal websites, for example) is based on Journal Citation Reports (Thomson Reuters), drawing on over 10,000 journals. Seems a lot? In my field, many journals are missed off this index.
  3. The time taken to be cited is often longer than two years (Google ‘citation half-life’ for more). Let’s say I read a paper today in the most recent online iFirst. I think it’s brilliant, and being a super-efficient writer, I weave it into my paper and submit it in a month’s time. It takes 9 months to get reviewed, and then another 3 months to get published online. Then someone reads it. The process starts again. Even if the world was full of people who read papers the day they came out, and submitted papers citing them almost immediately, the lag-time to publication in many fields would still prevent citations within the magic 2 year window. There are versions of Impact Factor that take five years into account to try to deal with this problem. This is better, but doesn’t benefit the journals that publish the really seminal texts that are still being cited 10, 15, 20 years later.
  4. Impact Factors are not comparable across disciplines. An Impact Factor of 1.367 could be very low in some sciences, but actually quite high in a field like Education. So don’t let people from other fields lead your decision making astray.
  5. Impact Factor may work very well to differentiate highly read and cited journals from less highly read and cited ones in some fields (where the value range is great, say from 0 to over 20), but in fields where the range for most journals is between 0 and 1.5, its utility for doing so is much reduced.
  6. Editors can manipulate Impact Factors to a degree (e.g. by publishing lots of review articles, which tend to get cited a lot). See Wikipedia’s page on impact factor for more.

How do you find out the Impact Factor for a journal? If you don’t know this you haven’t been using your initiative or looking at journal webpages closely enough: nearly all of them clearly state their Impact Factor somewhere on the home page. What can be more useful, though, is knowing the Impact Factors for journals across your field. In this case you need to go to Web of Science. I recommend downloading the data and importing it into Excel so you can really do some digging. If a journal’s figure isn’t obvious, try entering ‘Journal title ResearchGate’ into Google, e.g. ‘Studies in Higher Education ResearchGate’. The top result should be the journal’s ResearchGate page, with data on Impact Factor, 5 year Impact Factor and more (based on Thomson Reuters). Note this is not an official database and may be out of date at times.

Alternatives to Impact Factor: SJR

An alternative that may work better in some fields is the Scopus SCImago Journal Rank (SJR). This includes a range of metrics or measures, and I have found it covers more of the journals I’ve been reading and publishing in (in Education). The SJR indicator is calculated in a different way from Impact Factor (which I admit I don’t fully understand; see the Wikipedia explanation). It has a normalising function as part of the calculation which reduces some of the distortions of Impact Factor and can make it more sensitive within fields where there are close clusters. SJR also has its own version of impact, the ‘average citations per document in a 2-year period’. When I compare the SJR and Thomson Reuters measures for journals in my field, some are very similar and some are quite different, so it pays to do your homework. SJR data are also easily exportable to Excel, and you can then easily see where journals lie in a list from top to bottom by either of these measures (or others that SJR provides). The easiest way to find the SJR data for a particular journal is simple: type the journal name and SJR into Google, e.g. ‘Studies in Higher Education SJR’. Almost always the top result will be from SCImago Journal & Country Rank, something like http://www.scimagojr.com/journalsearch.php?q=20853&tip=sid . If you go there you’ll find a little graph on the left hand side showing the SJR and cites per doc tracking over 5 years, given to 2 decimal places. There is also a big graph, with a line for each of these two metrics. If you hover over the right hand end, you get the current figure to 3 decimal places. See the screen shot below.
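The homework step of comparing measures can be sketched simply: line the journals up under each metric and see whose position moves. The journal names and figures below are invented purely for illustration.

```python
# A sketch of comparing where journals sit under two different metrics.
# All names and figures are invented for illustration, not real data.

journals = {
    # name: (Impact Factor, SJR indicator)
    "Journal A": (1.85, 1.10),
    "Journal B": (1.54, 1.32),
    "Journal C": (0.90, 0.95),
    "Journal D": (0.45, 0.20),
}

def ranking(metric_index: int) -> list[str]:
    """Journal names ordered from highest to lowest on one metric."""
    return sorted(journals, key=lambda j: journals[j][metric_index],
                  reverse=True)

if_rank = ranking(0)   # order by Impact Factor
sjr_rank = ranking(1)  # order by SJR

# Journals whose position differs between the two lists:
movers = [j for j in journals if if_rank.index(j) != sjr_rank.index(j)]
print(movers)
```

In this toy example Journals A and B swap places depending on the metric, which is exactly the kind of discrepancy worth noticing before you choose where to submit.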

Scimago info

A screen shot from SJR showing the Indicator and cites per paper data

Alternatives to Impact Factor: Zombie Journal Rankings

In Australia, lots of journals were, at one time, ranked A*, A, B or C. This was done using a pool of metrics and also peer-based data, with groups of academics providing information based on their expertise. For various reasons (don’t get me started) these rankings have been abolished. However they are still a common reference point in many fields in Australia and New Zealand, and so I call them ‘zombie rankings’. Even if you’re not in Australasia, it might be useful to look up what the rank was, to see if it confirms what you’re finding from other measures. The quickest way is to go to the Deakin University hosted webpage and check under Historical Data, then Journal Ranking Lists, then 2010 (the rankings were alive in 2010, and abolished shortly afterwards). The direct URL is here: http://lamp.infosys.deakin.edu.au/era/?page=fnamesel10 . Type in the journal name, or a keyword, and ta-dah! If you just type in keywords you will get multiple results and may be able to see a range of options. I’ve put an image of what it looks like below. Pretty easy stuff.

Zombie Ranks

A screen shot from the Deakin website showing former ERA journal rankings

Alternatives to Impact Factor: ERA list

Now there are no rankings, ‘quality’ is indicated in a binary way as either included in the ERA list or not. We’ve just had a process in Australia of nominating new journals to be included in the list for 2015. But the current 2012 list is also available through Deakin. http://lamp.infosys.deakin.edu.au/era/?page=jnamesel12f .

Alternatives to Impact Factor: rejection rates

The more a journal rejects, the better it must be, right? Well, that is the (dubious, in my view) logic underpinning the celebration of high rejection rates in some journals. I’m more interested in what gets in and what difference that makes to scholarly discourse than in what is thrown out. But hey, if you can find this information out (and it’s not always easy to do), then it may be worth taking into consideration. More for your chances of survival than as a status indicator, perhaps.

Alternatives to Impact Factor: ask people who know!

While only you can judge the match between your paper and a journal, lots of people in your field can give you a sense of where is good to publish. This ‘sense’, in my view, is not to be dismissed because it cannot be expressed in a number or independently verified. It is to be valued because it draws (or should do) on knowledge of all the metrics, plus years of experience and reading.

Conclusions

Choosing journals is tricky. If you’re finding it quick and easy it’s probably because you’re not doing enough homework, and a bit more time making a really well informed decision will serve you well in the long run. As I said earlier this post is not exhaustive either in terms of things to consider in your choice, or status indicators. But I hope this is useful as a starting place.

25 thoughts on “A guide to choosing journals for academic publication”

  1. nickhopwood Post author

    I should add: another thing about Impact Factor (and any variants of it) is that it offers no guide whatsoever as to how much your own paper will be read or cited. That is a question of the quality and interest of your writing, drawing in readers with a good title and abstract, and the match between the paper and journal: a good paper with a good match and good ‘selling points’ is the closest you can get to securing lots of citations. Or write a review paper. Or coin a term. Or develop a methodological tool. Actually, I feel another blog coming on. Watch this space…

    1. Matt

      Good general point, but not quite technically true. Knowing the Impact Factor of a journal allows you to make predictions about how many times an article you publish in that journal will be cited in the same way that knowing the mean of some variable allows you to make an unbiased estimate for an individual member of the population. If you are looking at two journals with very different impact factors, it IS reasonable to make a best guess that you will get more citations by publishing in the journal with the higher impact factor. (Estimating the number of times your article will be *read* is a different story, admittedly.)

      The more subtle point is that there is a great deal of variability in the impact of articles *within* journals, and the distribution of citations is highly skewed across articles. (Most articles are never cited or only cited once or twice, while a small number of articles are cited an extremely large number of times). All of which raises the question of why it is that funding and career progression for researchers depends so much on the average impact of the journals they publish in, rather than the actual impact of their own articles…
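The skew described here can be made concrete with a quick sketch (all figures invented for illustration): the mean, which is what an Impact-Factor-style average reports, sits well above what a typical article actually receives.

```python
# Toy illustration (invented figures) of skewed within-journal citations:
# the mean is pulled up by one heavily cited article, while the median
# shows what a typical article gets.
from statistics import mean, median

# Citations to 10 articles in a hypothetical journal: most are barely
# cited, one is cited heavily.
citations = [0, 0, 0, 1, 1, 1, 2, 2, 3, 40]

print(mean(citations))    # 5.0 -> the Impact-Factor-style average
print(median(citations))  # 1.0 -> what a typical article gets
```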

      1. nickhopwood Post author

        Thank you very much for your comment, Matt. I take the point about Impact Factor as an ‘unbiased estimate’. However, say Journal X has an IF of 1.54, and Journal Y has an IF of 1.85. The unbiased estimate says publishing in Y will lead to more citations, but given the variability within the journals, this remains a poor guide: the match between the paper and the journal, and who reads the relevant journals, will surely be much stronger influences. I can’t really imagine a situation where the citations of the same paper in two different journals would be only (or even mainly?!) shaped by the relative IFs…

        I totally agree regarding the fallacies of using IF for progression and funding etc for all the reasons you say. That’s why we often give IF and actual citation data in our research applications and promotion!

        I appreciate your comment and the helpful clarification you offer.

  2. Brian

    Hi Nick, great post and ever so timely. I am in the process of developing some ideas for articles. I had an interesting experience recently, where I prepared a paper based on my conference presentation. The presentation was more of a philosophical plea. The paper was printed as a critical essay. I am assuming/hoping they meant critical, as in critical analysis. I know I should have asked, but as a newbie, and it being my first publication, I was happy with getting published.

    1. nickhopwood Post author

      Hi Brian. Thanks for your comment. I’m glad you found the post useful.
      Turning conference papers into journal papers is a great move, and congratulations on getting published! I think ‘critical’ would be being used in exactly the way you hope. Interestingly those sort of ‘plea’ papers can be quite popular (ie cited) provided what you’re pleading for chimes with what others see as a need/gap :-)

  3. Andy S

    Great post Nick! As a related (and only half serious) point, we’re starting to see semantic technology taking a role here, especially in sciences – facilities like: http://www.edanzediting.com/journal_advisor purport to be able to find you the best journal location (providing you’re ONLY interested in finding a journal that publishes similar stuff, but as your blog post suggests, realities are often a little more complex…)

    1. nickhopwood Post author

      Thanks Andy!
      I didn’t know about the semantic facility, but shouldn’t be surprised: the same is already on offer to analyse qualitative data, and even to grade essays, so why not find a match for a journal!
      And as a fundamentally lazy academic, if a computer can do (at least some of) the work for me, bring it on!

  4. Pingback: A guide to choosing journals for academic publication | Web 2.0 PT

  5. Sarina

    Thanks Nick, I’ve been to many workshops over the years on this, but glad to have the information all in one place for easy reference!

    I recently submitted to a Journal that asked me to provide the names of at least 3 potential reviewers for my paper…I was stumped & a bit unsure how to proceed…the hot shots? The people I agree/disagree with? The people I’ve cited lots….I know there is fair bit written about thesis examiners but you got any thoughts on article reviewers?

    1. nickhopwood Post author

      Hi Sarina.
      Thanks for your comment and your question. This is something I get asked about a lot. It’s still relatively new in qual / social science and I am not on very secure ground giving a response. My reading is that this is something that reflects the development of online submission templates aimed at natural/medical sciences, and that they are pretty meaningless in the social sciences still. My guess is the big publishers find it easier to have one system, and as most of the money comes from sciences, it is the system that suits them best that we have to work with.

      So I wouldn’t overthink this. Yes, put some names and I would say make sure they are people you think will be sympathetic at least to your topic and method or theory. Don’t feel you have to refer to the gurus. But don’t expect them to actually be selected as the reviewers either!

      This may be being used by journals as a way to increase their pool of reviewers, particularly as new methods, theories or questions are coming up. So it could actually be serving quite a useful function in that respect, and if so is worth some careful thought. Make sure you suggest people who have relevant expertise and are part of the same bigger discussion.

      Hope this helps

      Nick

  6. Jo VanEvery (@JoVanEvery)

    Great article. I particularly like that you foreground the “who do you want to read this” concerns before getting into the status questions.

    One additional point about Impact Factors, especially in relation to Social Science and Humanities fields is that they ONLY index journals and citations in journals. If you are an economist or a psychologist, they probably work pretty well. If you work in a discipline where a lot of people publish in books (either edited collections or monographs) then all those citations are NOT counted. You publish in a journal. 27 people cite your paper but THEY publish the thing that cites you in a book and it’s as if it never happened.

    As you mention in your last comment, the whole industry is dominated by practices in the sciences and health sciences where book publishing isn’t even on the radar.

    I also find that checking your own citations and then following up to see where your biggest fans are publishing and reading can help with your strategy. I helped a client with this recently and it really helped her see things about her publishing strategy that she hadn’t noticed before.

    1. nickhopwood Post author

      Hi (again), and thanks (again!)

      Your point about Impact Factors ignoring non-journal outputs is bang on. I’ve had conversations with scientists who poo-poo google scholar because it counts sloppy outputs like monographs and chapters in edited volumes! When, in some disciplines, it’s all about the monograph, and at least in my field, chapters are really important (for one thing they tend to develop fuller arguments as they can get more directly into the meat of issues).

      And your last point is something I do all the time: go on google scholar and find out who has been citing which papers, and why! (Hoping that I’m not being cited as an exemplar of crappy research!)

  7. Jo VanEvery (@JoVanEvery)

    I also have a suggestion for Sarina based on the advice I give to people applying for grants that ask for suggested reviewers:

    The people you suggest don’t give status to your paper. The journal is probably mostly looking to expand their database of experts. You know your field and thus are in a good position to suggest people who have relevant expertise.

    Hot shots are a bad suggestion because they are really busy and thus likely to say no. Very early career people may not be a good suggestion because they may not have enough experience with the process to do a good job and they are probably overwhelmed with a lot of stuff. But suggesting one early career person puts them in the data base and gives them an opportunity to start reviewing.

    You want to think of people who have relevant knowledge and might also have time to review or at least be familiar enough with the process (and well networked enough) that when they say no, they will suggest other good reviewers.

    You can also focus on recommending people who might not be found with obvious keyword searches. For example if most of the literature in the main area of your paper does not use your methodology, you could suggest someone knowledgeable about your methodology.

    1. nickhopwood Post author

      Hi Jo.
      Thanks for your reply. I think you’re right – it is probably a way of increasing names in a database of reviewers.

      I personally think that doctoral students and early career researchers often write the best reviews: they are normally the most up to date with the literature, have more time than busy professors, are more likely to say yes, and do the review most diligently (they’ve no career-long axe to grind)…

      But thanks again for your response – for me getting a conversation going after a post is just the best thing!

  8. Jo VanEvery (@JoVanEvery)

    Nick, I remember seeing a link to some research recently (saw it on Twitter, didn’t save, sorry I can’t be more precise) that showed that people really don’t cite work they think is shite. It’s kind of an urban myth about citation indices. Mostly there are just the “this is an example of this broad kind of approach” and the “this in particular has influenced how I’m working with this in these ways” citations.

    And I think you cover the bit about how things like Google Scholar, monographs, etc are perceived in your opening bit about being careful who you get advice from about publications. It is REALLY important to know how things work in ones own discipline and not to assume that all of academe is playing by the same rules.

    The biggest problem with edited volumes (vs. journals) is that it is much harder for people to find your work unless they are interested in the whole volume. It’s not indexed and abstracted as an individual chapter the way a journal article would be and thus won’t necessarily show up in data base searches.

    But that takes us into a whole different discussion of how people find the stuff they put in their “to read” pile (virtual or physical) and how stuff moves to the top of that pile.

    1. nickhopwood Post author

      Hi. This is proving a great discussion!

      I don’t think people cite work they think is shite, so much as cite it in order to define a gap – things get cited lots for what they haven’t done, rather than for what they have…
      At least google scholar does find chapters in edited volumes. From my experience a good edited volume with a good editor and the right collection of writers and content often has a wide presence and good shelf life. Yes it doesn’t come up in auto alerts and narrow search engines, but for me at least these are the least common ways I find things anyway!

      1. Jo VanEvery (@JoVanEvery)

        In a workshop I led a few years ago I asked people how they found things to read. The range of responses was quite interesting. From “do a data base search and read everything that comes up” to “check out articles by that guy I heard and liked at a conference” to “I read x journals regularly and sometimes look for other things by an author I am impressed with which sometimes leads to finding other journals”. Thinking about these things, and how they might vary by discipline, can help with the choosing of journals I suspect. Also helps see the connection between giving a conference paper and someone later reading your articles. I keep thinking it is probably worthwhile to let people who talked to you after a conference paper know when the article is published. Feels a bit weird but given how busy we all are, I bet they’d appreciate the heads up about a publication they might like to read, cite, or put in a course outline.

  9. Lisa Pautler

    I would love to get your feedback on our new site, JournalGuide, which helps you search for journals based on the content of your paper, and then sort and filter those results based on other criteria important to you. Of course it doesn’t make the decision for you, but our goal is to aggregate as much information about journals into one place (from indices, directly from journals, and from author experiences) to make that decision process easier. You can save searches, save journals, and even compare them side-by-side.

    Note that Impact Factor is temporarily missing from the site, as we’re working with Thomson Reuters to be sure it is presented properly. But we do have SNIP values, which cover more fields than IF. We’re continually adding more data to our site, and I’m happy to get any recommendations for additional datasets we should consider. For example, we will soon be including the vetted source publication list from Excellence in Research for Australia. Our next step is to create a “verified” status within the database to help authors know which journals are legitimate and avoid potentially predatory journals.

    Feel free to email me directly with any feedback or suggestions – lisa.pautler@journalguide.com

    1. nickhopwood Post author

      Hi Lisa

      Thanks for your message. I’m definitely interested in the site, and will have a look at it soon. Maybe other readers might also post their thoughts on this here?


Please join in and leave a reply!
