The key is the match between your paper and the journal
Choosing a journal for your paper is a complex and nuanced process. Don’t expect to be able to ask anyone else off the cuff and get a sensible answer. Only people who know what you want to say and what you want to achieve in saying it can provide guidance, and even then it’s up to you to judge. In writing this I hope to make this process more transparent, and to help you be as informed as possible about your decisions. If you disagree, or can add more things to consider, or more measures of status please leave a response at the bottom!
Chicken and egg
Which comes first, the paper or the choice of journal? Neither. Both. In my view you can’t write a good paper without a sense of the journal you are writing for. How you frame the argument / contribution, how long it is, which literature you locate it within, how much methodological detail, how much theoretical hand-holding is needed for readers, what kind of conclusions you want to present, what limitations you should acknowledge: ALL of these are shaped by the journal. But how do you know the answers to these questions? Usually by writing a draft! See the chicken-egg problem? My process is as follows:
- Come up with a rough idea for a paper – what data am I going to analyse, with what theoretical focus, presenting what new idea?
- Come up with a short list of potential journals (see below)
- Plan the paper down to paragraph level – this helps me think through the ideas and make good judgements about the fit between it and the journals on the short list.
- Choose a journal. If in doubt write the abstract and send it to the editor for initial comment: what’s the worst that could happen? She or he could ignore it!
An ongoing conversation
Most journal editors want to publish papers that join and extend a dialogue between authors that is already happening in their journal. This gives the journal a certain shape and develops its kudos in particular fields or lines of inquiry. If no-one has even come close to mentioning your topic in a particular journal in the last 5 years, I’d think twice about targeting that outlet. Unless you really are planning a major disruption and claiming woeful neglect of your topic (which says something about the editors…)
Check out the editors, and stated aims and scope
Editors have the ultimate say over whether or not to accept your paper. Check out who they are, and do some research. What are their interests? How long have they been on the editorial board? If it’s a new editorial board, are they signalling a broadening, narrowing, or change in scope perhaps? What special issues have come out?
Don’t be stupid
Don’t get the journal equivalent of ‘bright lights syndrome’ and choose somewhere just because it is uber-high status (like Nature). Don’t be a ‘sheep’ either and choose a journal just because someone you know has got their paper accepted in it. Don’t send a qualitative paper to a major stats / quantitative journal. Don’t send a piece of policy analysis from (insert your random country of choice here) to a major US journal (for example) when your paper has nothing to say to a US audience.
The devil is in the detail: yes – more homework
Check out things like word limits, and whether they include references. If the journal allows 3,000 words including references, and your argument takes 5,000 to develop, either change your argument or change the journal. Simples. Also check out the review process. Look under abstracts in published papers for indications as to the timeline for review, and check if there are online preview or iFirst versions published (which massively reduces the time to publication). Don’t be caught out with a whopping fee for publication if your paper is accepted. And don’t be shocked when you read the copyright form and find it costs $3,000 for open access. Some journals publish their rejection rates: you’d be foolish to plough on not knowing 90% of papers are rejected even before review (if this was the case).
Publish where the people you want to be visible to are reading
Think who you want to read your paper. Forget dreams of people from actual real life reading academic journals. The only people who read them (except some health professionals) are, on the whole, other academics. This isn’t about getting to the masses: there are other, better venues for that. This is about becoming visible among your disciplinary colleagues. Where are the people you like and want to be known to in your field publishing? What journals do they cite in their papers?
Understand the status of the journal you are submitting to and its implications for your career
This is the biggie. So big I’ve written a whole section on how to do this below. But for now a few key points.
- It pays to know what will be counted by universities in terms of outputs, and what will have kudos on your CV. In Australia, for example, journals not on the ERA list are pretty much no-go. In some fields (particularly hard science and health), journals not indexed in Web of Science aren’t recognised as worth the paper (or pixels) they are printed on.
- Remember that status measures only measure what can be measured. A really prestigious journal in your field – with lots of top people publishing lots of great papers in it – might be lower (or not even register at all) in all the various indices and metrics.
- There is no single flawless measure of status. Take a multi-pronged approach to suss out where a particular journal lies between ‘utter crap that publishes anything’ to ‘number 1 journal in the world for Nobel Laureates only’.
- There are many good reasons for publishing deliberately in lower status journals. It may be they have the ‘soft’ status I mentioned above. Maybe that is where you can actually say what you want to say without having to kow-tow to ridiculous reviewers who don’t understand or accept your innovative approach (which they view as floppy, oddball etc.).
How journal status is measured and how to find this information out
A whole book could be written on this, so please forgive my omissions.
Impact Factor
This is the one everyone talks about. It is also the bane of many people’s lives outside natural and health sciences. Impact Factor is a measure of the mean number of citations to recent articles published in a particular journal, excluding citations in other papers in the same journal. So an Impact Factor of 2.01 in Journal X means that each paper in X has been cited a mean of 2.01 times in all the other indexed journals, except X, over the past two years (five year figures are also used). The higher the impact factor, the higher the status, because it shows that the papers are not only read but they are cited lots too. Excluding the ‘home’ journal stops editors bumping up their own Impact Factor by forcing authors to cite papers in their journal. Why is this problematic? Where do I start?!
- Not all citations are for the same reason but they all get counted the same. If you cite paper P as one of several that have investigated a topic, and paper Q as a hopeless study with flawed methods, and paper R as hugely influential and formative, shaping your whole approach, they all get counted the same. In theory, publishing a terrible paper that gets cited lots for being terrible can boost an Impact Factor.
- The key is in the reference to other indexed journals. The issue is: what gets to be indexed? There are strict rules governing this, and while it works okay in some fields, lots of important, robust journals in social sciences and humanities aren’t indexed in the list used to calculate Impact Factor; at least that is my experience. This can deflate Impact Factor measures in these fields because lots of citations simply don’t get counted. The formal ‘Impact Factor’ (as in the one quoted on Taylor and Francis journal websites, for example) is based on Journal Citation Reports (Thomson Reuters), drawing on over 10,000 journals. Seems a lot? In my field, many journals are missed off this index.
- The time taken to be cited is often longer than two years (google ‘citation half-life’ for more). Let’s say I read a paper today in the most recent online iFirst. I think it’s brilliant, and being a super-efficient writer, I weave it into my paper and submit it in a month’s time. It takes 9 months to get reviewed, and then another 3 months to get published online. Then someone reads it. The process starts again. Even if the world were full of people who read papers the day they came out, and submitted papers citing them almost immediately, the lag-time to publication in many fields would still prevent citations within the magic 2 year window. There are versions of Impact Factor that take five years into account to try to deal with this problem. This is better, but doesn’t benefit the journals that publish the really seminal texts that are still being cited 10, 15, 20 years later.
- Impact Factors are not comparable across disciplines. An Impact Factor of 1.367 could be very low in some sciences, but actually quite high in a field like Education. So don’t let people from other fields lead your decision making astray.
- Impact Factor may work very well to differentiate highly read and cited journals from less highly read and cited ones in some fields (where the value range is great, say from 0 to over 20), but in fields where the range for most journals is between 0 and 1.5 its utility is much more limited.
- Editors can manipulate Impact Factors to a degree (eg by publishing lots of review articles, which tend to get cited lots). See Wikipedia’s page on impact factor for more.
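To make the arithmetic concrete, here is a toy sketch of the two-year calculation as described above (ie with citations from the ‘home’ journal excluded). All the journal names and numbers are invented for illustration – real figures come from the full Journal Citation Reports index, not a hand-built list like this.

```python
# Toy sketch of the two-year Impact Factor calculation described above.
# Journal names and citation counts are invented for illustration only.

def impact_factor(citations, articles_published, home_journal):
    """Mean citations per article, following the description in the text:
    citations received (from journals other than the home journal) to
    articles published in the previous two years, divided by the number
    of articles published in those two years.
    `citations` is a list of (citing_journal, cited_article) pairs."""
    counted = sum(
        1 for citing_journal, _ in citations if citing_journal != home_journal
    )
    return counted / articles_published

# Journal X published 40 articles over the two-year window, and received
# 85 citations to them: 5 from Journal X itself, 80 from other journals.
cites = [("Journal X", f"paper-{i}") for i in range(5)] + \
        [("Other Journal", f"paper-{i}") for i in range(80)]

print(impact_factor(cites, 40, "Journal X"))  # 80 / 40 = 2.0
```

Note how the 5 self-citations are simply dropped before averaging – which is why, on this description, an editor can’t bump the figure up by forcing authors to cite the journal’s own papers.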
How do you find out the Impact Factor for a journal? If you don’t know this you haven’t been using your initiative or looking at journal webpages closely enough. Nearly all of them clearly state their Impact Factor somewhere on the home page. What can be more useful, though, is knowing the Impact Factors for journals across your field. In this case you need to go to Web of Science. I recommend downloading the data and importing it into Excel so you can really do some digging. In some cases the figure may not be so obvious to find, in which case try entering ‘Journal title Research Gate’ into google eg ‘Studies in Higher Education Research Gate’. The top result should be the journal’s ResearchGate page. Immediately on clicking the link you will find data on Impact Factor, 5 year Impact Factor and more (based on Thomson Reuters). Note this is not an official database and may be out of date at times.
Alternatives to Impact Factor: SJR
An alternative that may work better in some fields is the Scopus Scimago Journal Rankings (SJR). This includes a range of metrics or measures, and I have found it includes more of the journals I’ve been reading and publishing in (in Education). The SJR indicator is calculated in a different way from Impact Factor (which I admit I don’t fully understand, see this Wikipedia explanation). It has a normalising function as part of the calculation which reduces some of the distortions of Impact Factor and can make it more sensitive within fields where there are close clusters. SJR also has its version of impact called the ‘average citations per document in a 2-year period’. When I compare the SJR and Thomson Reuters measures for journals in my field, some are very similar and some are quite different. So it pays to do your homework. SJR data are also easily exportable to Excel and you can then easily find where journals lie in a list from top to bottom by either of these measures (or others that SJR provide). The easiest way to find out the SJR data for a particular journal is simple: type the journal name and SJR into google eg ‘Studies in Higher Education SJR’. Almost always the top result will be from SCImago Journal & Country Rank, something like http://www.scimagojr.com/journalsearch.php?q=20853&tip=sid . If you go there you’ll find a little graph on the left hand side showing the SJR and cites per doc tracking over 5 years, given to 2 decimal places. There is also a big graph, with a line for each of these two metrics. If you hover over the right hand end, you get the current figure to 3 decimal places. See the screen shot below.
Alternatives to Impact Factor: Zombie Journal Rankings
In Australia, lots of journals were, at one time, ranked A*, A, B or C. This was done using a pool of metrics and also peer-based data with groups of academics providing information based on their expertise. For various reasons (don’t get me started) these have been abolished. However they are still a common reference point in many fields in Australia and New Zealand, and so I call them ‘zombie rankings’. Even if you’re not in Australasia, it might be useful to look up what the rank was, to see if it confirms what you’re finding from other measures. The quickest way is to go to the Deakin University hosted webpage and to check under Historical Data, then Journal Ranking Lists, then 2010 (the rankings were alive in 2010, and abolished shortly afterwards). The direct URL is here: http://lamp.infosys.deakin.edu.au/era/?page=fnamesel10 . Type in the journal name, or a keyword and ta-dah! If you just type in keywords you will get multiple results and may be able to see a range of options. I’ve put an image of what it looks like below. Pretty easy stuff.
Alternatives to Impact Factor: ERA list
Now there are no rankings, ‘quality’ is indicated in a binary way as either included in the ERA list or not. We’ve just had a process in Australia of nominating new journals to be included in the list for 2015. But the current 2012 list is also available through Deakin. http://lamp.infosys.deakin.edu.au/era/?page=jnamesel12f .
Alternatives to Impact Factor: rejection rates
The more a journal rejects, the better it must be, right? Well that is the (dubious, in my view) logic underpinning the celebration of high rejection rates in some journals. I’m more interested in what gets in and what difference that makes to scholarly discourse, than what is thrown out. But hey, if you can find this information out (and it’s not always easy to do), then it may be worth taking into consideration. More for your chances of survival than as a status indicator, perhaps.
Alternatives to Impact Factor: ask people who know!
While only you can judge the match between your paper and a journal, lots of people in your field can give you a sense of where is good to publish. This ‘sense’, in my view, is not to be dismissed because it cannot be expressed in a number or independently verified. It is to be valued because it draws (or should do) not only on knowledge of all the metrics, but on years of experience and reading.