Journal impact factors, rankings, and citations: why I do and don’t bother about them!

I was nudged into writing this blog sooner rather than later by a tweet from a doctoral student who had been encouraged to target high impact factor journals.

Why I care about impact factors

Impact factors are a measure of how many times articles in a particular journal get cited, within a particular group (not all) of journals, within a particular period of time. They are an average metric and give some useful information. A journal with a really low impact factor, it would seem, either isn’t read by many people, or is read but people don’t then cite its articles in their own work. A high impact factor suggests lots of readers, and that lots of them cite those articles when they themselves write papers.
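
To make that concrete, here is a rough sketch of the standard two-year calculation (the figures are invented purely for illustration):

$$
\text{Impact factor}_{2012} \;=\; \frac{\text{citations received in 2012 by items published in 2010–2011}}{\text{citable items published in 2010–2011}} \;=\; \frac{250}{100} \;=\; 2.5
$$

So a journal whose 2010–2011 articles picked up 250 citations in indexed journals during 2012, spread across 100 citable items, would report a 2012 impact factor of 2.5.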

It’s important to know the quality of the journals we are submitting to. Why?

  1. As part of our general professionalism and maintenance of standards, rigour and pride in our work
  2. Because we cannot escape being measured ourselves against performance indicators – it affects our institutional standing, chances of promotion, getting research grants etc
  3. Because it’s not only nice, but good to be publishing in the best venues alongside other scholars as brilliant as yourself – your work can receive a kind of status boost by virtue of its being in a good journal. Like moving to a posh suburb – people think you’re posh because you live there.

Why impact factors don’t make (much) sense

Impact factors are a very crude metric. They look at the bibliography part of a journal article, to see who has been cited. They do not take into account that we cite other works for very different reasons.

In a paper I’ve been working on, I use Theodore Schatzki’s practice theory as a central conceptual and methodological framework, and so cite his work (lots) in my paper. I also cite a few people who have looked at practice in my field, including noting the limitations of their work. Schatzki’s papers influenced my work profoundly; the others did not. But they get counted the same.

A journal could boost its impact factor by publishing a highly controversial paper that everyone disagrees with and so it gets cited lots.

Impact factors are often based only on citations in other journals. If a paper gets cited lots in books and book chapters, but less in journal articles, those citations simply don’t register and the impact factor stays low. (This reflects the science-based nature of these metrics: in the sciences, the proportion of publications appearing in journals is higher than in many other fields.)

In sciences, journal papers can be reviewed, accepted and published in very short times (weeks, even). In many social sciences, it can take 3-6 months to get a review, more weeks to make revisions, and then weeks or months before something is in print. Then someone has to read it, write a paper, and submit it so the whole process starts again. Realistically, from the moment you write something it could be years before it even has a chance to be cited.

And the architecture of knowledge isn’t the same: some sciences work on a cumulative model, where the latest trial result or findings are built on very quickly. Social sciences, in my experience, tend to develop more laterally, and patchily. And we tend to cite old things a lot too.

Impact factors have a finite temporal window that simply cannot attend to or reflect this complexity. Impact factors in social science will invariably be lower because it’s a slower field in terms of publishing timescales.

Impact factors are a form of bean counting. The problem is the ‘beans’ that get counted aren’t particularly meaningful.

Finally, impact factors don’t make (much) sense in my experience, because they don’t give a very valid or consistent measure of what I care about in terms of journal quality.

One of my most cited papers is in quite a low-impact journal (and it’s cited meaningfully: it presents a methodological framework that other people are actually using). Some of the papers in higher-impact journals aren’t being cited so much. So aggregate measures like impact factors don’t translate into personal victories, nor do they work as proxy measures for the quality of your article. They say something about the journal you’ve published in as it was at a time before you published in it. They say nothing about your writing or its impact.

[But don’t get me wrong: I admit to loving it when I check Google Scholar and see my citations or h-index have gone up; and it bothers me to see hard work not getting cited, too.]

So what about citation counts then?

At least these are directly tied to your own work. I look at Google Scholar a lot (because it includes publication forms that Scopus and other more science-based citation systems ignore, but which are important and valued in social science). So I can see which of my papers are most cited, and by whom. All very nice for selling myself and showing off how many times things have been cited.

But the same problem applies: this is still only looking at bibliographies and not how or why my work is cited. It could be that everyone is slating my work as rubbish, or that they think it’s amazing. The citation count would be the same.

Rejection rates?

Some journals (particularly those in the USA) champion their rejection rates: “we reject 75% of our papers, so we must be good!” I’d be worried about a journal with a very low rejection rate, because it would imply sloppy reviewing and/or low demand from authors to publish in it. And a high rejection rate might mean tough reviewing and therefore quality in the papers. But the really popular journals get loads of submissions, and my guess is many of those will be rubbish to start off with. So why conclude that quality reflects how much dross you turn away? Does not compute.

And journal rankings?

In Australia we used to have a system in which journals were ranked A*, A, B, C. This was in theory an improvement on impact factor because it reflected discussions held among experienced researchers in particular fields. Knowledge based on collective decades of working in a field led to classifications of some journals as better than others. Potentially a much richer and more valid set of evidence being drawn upon here. But there was inevitably politics, and some very odd rankings emerged (at least in my field of education). Nonetheless, overall it seemed to me reasonable and the best of the approaches.

One of the most astonishing things about this ranking system was that it got abolished because the powers that be were surprised and annoyed that universities were putting pressure on academics to target the top tier journals.

Seriously? What did they expect? If you attach more value to some things over others (A* and A vs B and C), and then attach high stakes to that value (e.g. rankings being used in grant applications to show good track records), then of course people will behave in ways that seek to maximise the value attached to their work and increase their chances of getting research funding, promotion etc.

So rankings have been abolished in Australia. But they live on as people still use them when choosing journals, reporting their achievements, selling themselves for promotion etc. In some institutions, A* and A rankings are still formally tied to performance measures, work allocations and so on.

These things matter whether we like them or not.

So what do I consider when I’m targeting a journal?

  1. Does the scope of the journal fit with what I want to say, methodologically, substantively, and theoretically? What’s the conversation going on in that journal to which I can contribute? Does the current editorial board want to take the journal in new directions that fit where I’m headed?
  2. My own sense of quality: do I read lots of papers in this journal? Do I think they are good papers, generally? Are the other people writing in this journal people who I respect and think do good work? In other words, will my paper benefit from being seen in this journal, alongside other papers (and therefore people)?
  3. Other measures of quality: I won’t deny I have my eye on my next grant proposal, promotion application etc. I’ve got to make sure I keep my performance up in ways that satisfy the bean counters and show up on the various measures, however flawed they are.
  4. Being part of a conversation that matters to me and my work. I sometimes target low-impact or low-ranked journals because I know they’re read by people I care about, and by the people I want to be aware of my work. New journals brought out by gurus I admire, or highly specialist journals in small sub-fields, can score poorly on metrics but bring kudos in your international research community.
  5. Have I published in it before? Do I know who the editors are and what their take on things is?
  6. Word length, timescales for review, whether they publish online previews etc. All are taken into account.

And a final comment (one that will surely be repeated in future posts):

Remember, you can’t unpublish. Once it’s out there, it’s out there for good. For ever.

So rather than thinking about publishing early, or publishing lots, or publishing in top journals, think about publishing well. Publishing well has nothing to do with impact factors and journal rankings, and everything to do with the quality of your research and what you write. I have faith in my research community and peers to recognise (and cite) good research and good writing when they see it.

1 thought on “Journal impact factors, rankings, and citations: why I do and don’t bother about them!”

  1. Ibrar Bhatt

    Thanks for the advice Nick. Really helpful.
    There are other ways of developing one’s academic identity in addition to traditional academic publishing, including blogging.
    I’m still doing my PhD, and have had a paper published and others sketched out. These issues are pertinent to me, as I also want to talk about my research to non-academic audiences.
    At a recent event I went to in London for ESRC scholars, the shining example of a research project that was presented to us was one that was completed in 16 weeks and had no journal articles; it was conducted with journalists in response to the UK government’s reaction to the riots across the country. Doubtless it churned out lots of newspaper articles (= impact?).

Please join in and leave a reply!