Category Archives: Academic writing

My Shadow CV

The idea of the shadow CV

I was inspired to write this blog post by Devoney Looser’s article in the Chronicle of Higher Education, in which she asks: what would my vita look like if it recorded not just the successes of my professional life, but also the many, many rejections? After doing some digging I realised this wasn’t the first instance – I found one going back to 2012 by Jeremy Fox, another by Bradley Voytek from 2013, and a piece by Jacqueline Gill from the same year in which she mooted the idea (but refrained from sharing the dirt, yet). There’s also this piece about the Princeton professor @JHausfoher, who shared his dirty career laundry in April 2016.

I have long been an advocate for more candid and open sharing of the often harsh realities of academic work. Here is my attempt to model the sort of warts-and-all honesty that I advocate and wish to see in others.

Aren’t I nervous about making this kind of stuff public?

Academia is a highly competitive and often insecure work environment. While I currently have the privilege of an ongoing, full-time contract, who knows what the future will bring. It seems reasonable to expect that someday, someone might be looking at my CV and doing some digging around my online scholarly identity, considering whether to appoint me to another job, or perhaps even just as part of a promotion panel.

Devoney wrote about the tendency for us to hide our rejections, arguing: “That’s a shame. It’s important for senior scholars to communicate to those just starting out that even successful professors face considerable rejection.”

All academics face considerable rejection. I’m not revealing anything that I wouldn’t expect to be broadly true of any colleagues competing with me for whatever job or promotion it might be.

More importantly, if a prospective employer thinks twice about offering me a job because of what they read below, then I probably don’t want to be working for or with that person.

The values I see reflected in presenting a public shadow CV are ones of honesty, openness, and trust. I never claimed to be a perfect academic. Success in academia is not about never failing, never being rejected. It is about not allowing rejections to take hold of you. If I preach this but don’t have the gall to match generalisations with concrete detail, I should just shut up. So here goes.

My career path

My CV has a lovely little paragraph talking about an internationally recognized research profile. It all seems wonderfully coherent, planned, deliberate.

My Shadow CV would say something more like this: Nick started education research doing an MSc and PhD focusing on young people’s learning about geography and sustainability. However, there were no jobs in this area when he graduated (see ESRC failure #1 below), so he had to look elsewhere. He got a job looking at doctoral education, and so there was then a period when this was his main focus. When that (4-year contract) job ended, again there were no jobs in that field (or none he could get in a place he was willing to live), so he applied for a postdoc at UTS. To be successful in that, he had to change fields again. In short: Nick’s research interests have gone where the jobs and money are. True, there are some consistent questions and approaches that I’ve been exploring and developing through these broad contexts. But a lot of it was to do with opportunity and constraint.

My employment history

My CV shows how I went from a funded postgrad scholarship to a full time job on a project at Oxford, to my UTS Chancellor’s Postdoctoral Research Fellowship, which was converted into an ongoing position at UTS.

My Shadow CV would mention:

  • ESRC failure #1 – I applied for an ESRC postdoc, but didn’t get it. I found that out 6 weeks before I was due to finish my PhD, and had no job lined up. Panic stations.

  • Not getting interviewed, twice: about 3 years into my postdoc job at Oxford, I applied for two jobs advertised at Lecturer/Senior Lecturer level. I felt I had a pretty good publication track record, and relevant teaching experience. I wasn’t even called for interview. I had no idea how small a fish I looked in such a big, competitive pond.

My funded research

My CV shows I have consistently been able to get funding for the research I want to do, starting with an ESRC 1+3 scholarship for my postgrad, international funding from the NSF in the USA, $371,000 from the Australian Research Council for my DECRA project, and some smaller grants more recently.

My Shadow CV would mention:

  • ESRC failure #2 – I was part of a team that applied for funding for a project on doctoral education. The reviews were pretty blunt. No cash registers ringing anywhere near me this time!
  • ESRC failure #3 – Part of a team applying for money to look at the education system in Bhutan. Mixed reviews. No funding.
  • ARC failures #1-5 – Australian Research Council funding is highly prestigious, and undoubtedly a tough nut to crack. I heard of success rates around 17%. If that is true, then I’m no better than average. I was involved in two Linkage submissions that were not funded, and two Discovery submissions that were not funded. I was also part of a proposal that started as a Linkage, fell over before it got submitted, came back to life as a Discovery, got submitted, and then was not funded.
  • Spencer Foundation – Particularly galling because I’d roped in some key international people to join in, and they put some time in… I feel it all falls on my shoulders. Interestingly, both the key people stuck by me and are now involved in my DECRA.
  • ANROWS – yup, you guessed it: another detailed proposal that took months to put together and resulted in $0.
  • Office of Learning & Teaching – didn’t get through to second round.
  • Norwegian Research Council – a project on innovation, but they didn’t think it was innovative enough.
  • STINT – application for funding to support research collaboration on simulation. $0.

Publishing rejections and other shadowy truths

My CV proudly shows off a number of book, journal article, and book chapter publications, alongside complimentary citation metrics.

My shadow CV would acknowledge that I still get plenty of papers rejected (one only weeks ago, which I did blog about). Off the top of my head I can say I’ve been (sometimes quite rightly!) rejected by British Educational Research Journal, Oxford Review of Education, Journal of Advanced Nursing, Qualitative Research, Vocations and Learning, Advances in Health Sciences Education, Journal of Curriculum Studies, Studies in Higher Education, and Australian Journal of Primary Health. (Some of these have subsequently accepted papers I’ve been involved with, proving a rejection doesn’t mean you’re marked for life as useless.)

My book proposals didn’t all sail through at the first attempt either. I would hope that my rejections these days tend to be for ‘good’ reasons (the foibles of peer review, the fact that I’m presenting complex, sometimes challenging arguments) rather than ‘bad’ reasons (failure to do my homework, Early Onset Satisfaction etc.). My shadow CV would also point to the many papers that haven’t been cited by many people, including those that have only been cited by me. My published work is clearly not of uniform or universal appeal or value in the eyes of others.

In conclusion

I could add sections about awards (the Shadow CV would mention those I applied or was nominated for and didn’t do so well in), about reviewing (the times I’ve said no, I’m too busy; the reviews where I have been harsher than was warranted), etc. etc.

Well, I doubt this post has achieved much except echoing Devoney’s brilliant piece. I’m just trying to say “Yes, she’s totally right! We need to do more of this kind of thing!”.

A PhD student receives a rejection from a journal. Here is how she and her supervisors responded

I was talking with a colleague recently who described an interaction with one of her students who had been rejected from a journal. The response of her supervisors sounded really interesting, so I asked if she’d mind forwarding the emails on to me for a blog post. Which she kindly did! There’s a lot here that is useful in thinking about how to respond when you get rejected. I should point out this is in a country where many students complete a PhD through publications, and in this case the article was written by the student, with all the supervisors helping her and named as authors.

First the student wrote to her supervisors

Dear supervisors,

At last I have got response from the journal regarding my second manuscript. Unfortunately they are not interested to publish it.

I’m very disappointed about that. I can agree with a lot of the comments, it is useful for me in the future process but it has taken over 6 months to deliver that answer and right now I don’t have so much positive energy to restart the work.

I think I can interpret their comments (at least from the first reviewer) as if I rewrite the manuscript I can try to resubmit it but I’m not really sure if that is their suggestion.

Then one supervisor replied, cc’ing the others

Thank you for your email. Yes that is somewhat disappointing, but from the comments, perhaps it is good that it isn’t published in its current form: because from what the reviewers saw, I don’t think the paper did full justice to your work and your thinking! Better to have a stronger paper published, even if it is later.

I have had similarly prickly experiences, particularly in this journal, with reviewers who really want accounts of research to feel as if the research was quantitative (a bit like reviewer 1 worrying about interpretation in ethnographic research etc).

On the plus side:

  1. Both reviewers appear to have read your paper in quite a bit of detail! (which is not always the case)
  2. Both reviewers have offered well-written comments that are quite easy to understand (which is not always the case)
  3. There is lots in the comments that will help to improve the paper.

I think both the reviewers offer largely helpful comments – they are not fighting the kind of story you want to tell, or questioning its importance. They do want to know more concrete detail about the study methods, want a clearer alignment between the question, theory, findings and discussion, and a very clear argument as to what is new and why it matters. These are all very achievable without having to go back and do more analysis!

I think the process now should be to wait a few days until you feel a bit less fed up, and then to start:

  1. Thinking of alternative journals (although R1 seemed to invite this, the journal is definitely not asking for a resubmission, as I interpret the email). XXX might be one possibility. Or YYY?
  2. Coming up with your own to-do list in terms of changes you think are worth making to the paper – and perhaps differentiating those that are small/easy, and those that require a bit more thought and work. You can also list those points the reviewers made that you’re not so bothered about and don’t want to make big changes for.

So, when you’re feeling you have the energy to take it up again, those are my suggestions 🙂

Then another supervisor added her voice

I understand that it feels a bit disappointing, particularly since they kept you waiting so long for the decision. But I can only echo what [Supervisor 1] is suggesting: once you have worked through the comments, your paper will be much stronger. I think you should let it sit while you are completing the paper on the [different analysis] – you are in a good flow with that one at the moment! And we should think of an alternative journal, I agree; we need to aim for one that is included in Web of Science.

And then a third supervisor added his voice

This is the kind of experience that doesn’t just happen sometimes – it is the rule rather than the exception. And just as S1 and S2 state, it will in the end improve the paper. But I do agree they could have given us this feedback at least half a year earlier….

I also think S2’s advice is right; go on with the paper on [different analysis] and let this paper rest (just like a wine; it will become better with time and maturation – ask your husband!).

So let this experience take its time and aim for a journal that is indexed in Web of Science, although the IF is not too important.

Then the student replied

Thanks for the support!

I totally agree with you all and as I said, the comments from the reviewers are very good for me in the future process and also for my paper regarding the [different analysis]. I struggle with the same issues here I guess; clear arguments for the study, evidence for my findings and how to discuss that much more clearly.

Brief comment from me

What I like here is:

  1. That we end up with the student being able to take the rejection letter as a way to identify some things that she needs to look out for in another paper
  2. That S3 normalises this kind of experience
  3. That S2 provides very concrete suggestions in terms of not getting distracted by the rejection when work is going well on another paper
  4. That S1 finds positive things to appreciate in the reviewers’ comments, even though it was a rejection
  5. That the student felt comfortable sharing this, and got such strong and immediate support.

New video on effective feedback / peer review in academic writing

Hi

I’ve just posted a short video on some principles and practices for giving effective feedback / peer review on academic writing.

It could be relevant to people working together in a writing group – perhaps providing a focus for discussion of your own principles and working rules for commenting on each other’s work.

It is also relevant to anyone who is involved in reviewing / refereeing for academic journals, where, let’s be honest, the feedback authors get isn’t always delivered in the most ethical, constructive and professional way :-0

I argue that effective feedback / reviewer comments can be understood as pedagogic in their effects – helping the writer develop the text and as a writer – rather than only judgemental.

To start I suggest thinking about the effects you want to have on the original author when you review someone’s work or give feedback on their writing. I then suggest some principles and practices that might help achieve certain effects. Important among these are:

  1. Mirroring first – just saying what you think it is that the person is trying to achieve and what they are arguing
  2. Being specific in praise and critique, and giving reasons to explain your judgements
  3. Being careful and sensitive with language, including avoiding unnecessary emphasis, and using more speculative phrasing where appropriate
  4. Critiquing the text rather than the person
  5. Being blatantly subjective – by which I mean writing in a way that acknowledges the review or feedback is the result of an interaction between the text and you (with your own values, history, knowledge, ignorance, privilege, preferences)… not an objective discovery of flaws in text, argument, research, or the scholar at the receiving end!

Current trends in academic publishing and where things might be heading

WARNING! This post may well be out of date already, and if not now, then quite possibly by the time you’ve finished reading it! Not because it’s long, but because things are changing very quickly!

This is my attempt to identify some of the big changes that are happening in academic publishing, and to point to where I think things are going. This is not based on extensive research or systematic reviews of literature, nor amazing insider-insights through industry contacts (my industry contact seems as uncertain as me about much of this)… it’s more a combo of gazing into a crystal ball and, well, not exactly wishful thinking, but perhaps my instinct to resist cynicism and hope for a palatable outcome.

Open access

What’s the change? There is more than a groundswell of opinion that academic research should not be locked away behind pay-walls, but freely available to everyone. A crude summation of the logics and values at play here goes something like this:

1. The view from the ‘outside’… Where taxpayers pay for research (through government grants etc) they shouldn’t pay to access the outcomes of that research. The person just diagnosed with cancer should be able to go online and read about treatments and the latest trials without being hit with a bill for doing so. After all she ‘paid’ for the research in the first place through her taxes.

2. The view from the ‘inside’… Hey! There’s heaps of money being made in academic publishing but none of it is coming to me, the poor academic who wrote all the stuff in the first place! So I’m going to thwart those greedy publishers by publishing in open access journals (even though I still make no money!)

In practice what this means is that some researchers or their institutions are now paying a fee to publishers to make their articles open access (no fee, no ‘free’ access for others). Or, some journals (often the more ‘indy’ types) ask authors to pay a fee up front (no fee, no publish).

Where do I see it heading?

Hard to call. Like most of the changes I discuss here, the current situation is pretty much a big mess, and difficult to predict. I’ll start with the most certain: the journals that are both free-to-publish and free-to-access will soon be extinct. Often hosted on university websites, it’s hard to see how these will survive the cut and thrust of contemporary higher education funding. Either these will end up charging to publish (as happened with one that I published in while it was still free, phew!), or they’ll get bought out by commercial publishers (when they are established enough that the publishers think people will pay to access content, or pay to have the content opened ‘freely’).

What about stopping people having to pay to read research when they paid for it through taxes, or have some other innate ‘right’ to access it? This argument has gone a fair way in the UK, such that now some funding bodies build in costs for paying the open access fee to publishers. The political winds may mean this catches on, with funding bodies basking in the warm glow of ‘everyone can read what our researchers publish’ feelings. But I don’t see this becoming the norm. Why? Several reasons.

  1. Because it doesn’t change the fact that people are still paying for access, they’re just paying as a collective one step further upstream.
  2. Who wants to read what’s in journal articles anyway? Are there really masses of people desperate to read academic papers? I very much doubt it (even in medical fields). Academic papers work to inform academic debate and are not our most effective or primary means of engaging wider non-academic audiences. (I expect you may disagree with me here). And anyway, will making all our papers open access actually improve things for the masses? I’ve been doing educational research for over a decade now and I still find many if not most papers pretty hard going. Hey, I struggle with understanding and motivation a lot of the time, and I’m paid to be interested in this stuff, and extensively trained to read it, with a masters degree, doctorate, years of practice and thousands of references in my EndNote. Why should I expect the proverbial woman or man on the street to be champing at the bit to read this stuff? And even if she or he is keen now, send them a few dozen papers and see if they’re as keen later on. My guess is Game of Thrones or re-reading Harry Potter will probably look more enticing. I’m not about denying people access to knowledge. I do doubt whether open access journal articles will result in masses of the masses relishing their newly-found right to roam the academic literature for free.
  3. Because universities paying for open access when they already pay to subscribe to a journal is a hard pill to swallow. Harder still when universities in many countries are facing unprecedented budget cuts, perceived threats from MOOCs (I think we’ve been unnecessarily spooked by MOOCs as a sector, but don’t get me started), and uncertain futures. There simply isn’t the proverbial money down the sofa for universities to start paying for open access or paying to publish in the first place. And academics aren’t going to do it out of their own pockets. At least, I’m not.
  4. And research funding bodies are often facing funding cuts, too. And why should they give out less money for research because they’re having to pay more to make it free? Is it better for cancer patients to read journal articles for free, or for that open access fee (which is often not inconsiderable) to have paid for more research to develop and trial treatments? I’m just saying…

The question is, who’s going to blink first? Universities aren’t universities if they’re not producing publications. Commercial publishers can’t exist without profits. And academics are, of course, greedy money-grabbing tight-arses, who refuse to pay a mere few hundred or thousand dollars for every paper so the plebs down below can read their inaccessible waffle. I haven’t blinked yet. Have you?

But there are other changes afoot, and more reasons why I think paid-for publishing or paid-for open access are not going to become the norm very soon.

  1. Institutional repositories: the content (i.e. the pre-proof version) of many papers can already be made freely available to anyone who can be bothered to read it, through institutional repositories. The cancer patient can read your paper, just without the fancy DOI numbers and typesetting etc., without paying anything. But institutional repositories are proving a bit slow to catch on, unless institutions mandate their staff to submit.
  2. Maybe the publishers have not quite blinked, but squinted. One BIG publisher has recently lifted its embargo on the pre-proof version of a paper (the one the academic typed and the editor accepted) – we’re now free to put these documents on our blogs and departmental websites. If you don’t know which publisher this is, do some digging!
  3. Heaps of stuff is already open access, although it shouldn’t be if you pay attention to the copyright. If you’re any good at ‘the internet’, it’s not hard to find free versions of papers you’re ‘supposed’ to pay for. Not every paper is freely available this way, but lots are, and the number isn’t getting smaller. I expect academics publish their papers this way out of ignorance of copyright or naivety, as a way to give the evil publishers the proverbial finger, or to enhance their citations and h-index. Or maybe because they lie awake at night worrying about all the people also lying awake because they found an article on the latest poststructural deconstruction of liminality, or a miraculous formula for predicting nearly-prime numbers, and they couldn’t afford the $30 fee to read it.

Vanity publishers, predatory publishers, and the in-between

Vanity publishers are nothing new – paying someone to publish your work (particularly in book form). What is new is the fact that the ‘publish or perish’ climate in academia is leading some researchers to secure their moment in the sun by flexing their credit cards rather than their intellectual muscles. Will this become the norm? Screw peer review. Screw the big commercial publishers, screw the fact it won’t end up on amazon and no-one will ever know it exists, I’ll pay this lovely boutique press to print 200 copies of my book. I think not.

Predatory publishers. “Dear Dr Dr Hopwood Nicholas. I recently read your paper entitled… and know you are an expert in this area. I invite you to submit a manuscript in this new international, peer reviewed journal, with this stellar international editorial board…” Click the url and something’s not quite right. Not only is the email clearly automated (“Dr Dr Hopwood Nicholas,” pah!) but this journal has a mysterious 10 volumes published in the last 2 years by academic celebrities you’ve never heard of who are citing works you’ve never read… Need I say more?

The in-between. I’m not going to name names. You know who they are. They’re the ones saying they’d like to publish your PhD as a book, before they’ve even read it, or who manage to conduct a ‘thorough’ review of your manuscript in about 8 seconds. An interesting business model for now. Is it the future? Put it this way, if I were playing the stockmarket, I’d be selling my shares in these companies quicksmart.


Peer review

Another trend, or perhaps a fad, is to claim that peer review is broken. Peer reviewers are getting it wrong, causing embarrassment for journal editors and their publishers, who have to retract papers, apologise to the public, and lick their wounds as their reputation takes a knock (forget the stupid authors who did dodgy research in the first place, they should have been caught earlier!).

Peer review is also showing symptoms of ill health, and the prevailing winds do not look favourable. Most reviewers aren’t paid, and the ‘rewards’ for doing reviews are slim. Our university employers want us to do more, better, faster, for less, and doing reviews isn’t counted very highly (or at all) in the grand scheme of things. So we feel we have less time to do reviews, meaning we may do fewer of them, and do them less well when we do say ‘yes’. Neither is good for our disciplines – the fewer people who do reviews, the narrower (and more tired and frustrated) the gatekeepers controlling and supporting the expression of new knowledge become.

Peer review has historically happened under a cloak of anonymity, often ‘double-blind’, where neither reviewer nor reviewee knows who the other is (as if it’s not often blatantly obvious, or we can’t take an educated guess or do a bit of digging on google)… this anonymity has well-rehearsed benefits, but also results in some otherwise decent and professional folk unleashing torrents of abuse at their peers.

In natural science fields now it is becoming increasingly common for reviews (and authors’ responses to them) to be published, and even for the reviewers to be named. This, it is argued, makes the whole process more transparent, enhances the quality of reviews (referees are more careful writing comments when they know they will be made public), and enables readers to see how the paper came to take the form it reached, and what doubts or criticisms were raised along the way.

Of all the trends I reckon this is the most likely to catch on. It doesn’t have huge cost implications, or many drawbacks as far as I can see (though I admit I’ve not looked hard enough into this and haven’t yet experienced it in my field, so I may well revise this view later!). I can see it spreading through the natural sciences pretty quickly, particularly in the current climate where retractions appear to be becoming more common, and there is seemingly a strong sense that, because some reviewers are getting it wrong, the peer review system can’t be trusted. Even if peer review isn’t ‘broken’ and therefore doesn’t need fixing, this is an interesting idea that seems to have legs. I can imagine the social sciences coming round to this (or perhaps not fighting when norms from natural sciences are inherited or imposed on us). Who will be the last ones standing on the island of opacity as the waves from the sea of transparency lick higher and tides of change push forward? Anyone? Bueller? Anyone? Humanities? Anyone? Anyone?

If peer review is broken, why not pay reviewers? Then they’d review heaps more papers, treat the process seriously, and do it all on time too. Brilliant idea! Except there’s no money. Even if there was some money to pay for this (which there isn’t), it would be like saying “Hey, you know that thing you used to get for free? Well screw you! You’re going to have to pay for it now!” (the fact that this is precisely what has happened in relation to undergraduate tuition fees in many countries is not lost on me, in case you were worried).

Let’s say we do find some extra cash down the back of the lecture seats (which we won’t; I looked, it had already been pillaged by the big publishers, greedy tenured academics, overpaid managers and busybody bureaucrats), I don’t think it would make any difference. In fact it might make things worse – if people were incentivised to do reviews for money, it could distort things quite significantly. And I like to believe that academics still do things for the good of their discipline or field rather than for money anyway. So even if it was a good idea (which it isn’t) and there was the money for it (which there isn’t), it wouldn’t catch on.

Democra-truth

This kind of brings together all the issues so far. The idea that universities should stop being so elitist in claiming their exclusive rights to knowledge. Forget the elbow-patched professors festering slowly amid their piles of self-citing, self-aggrandising and self-plagiarising books full of interminable critique and concluding that “everything is more complex than we thought, so there!”. Let’s storm the university and take knowledge back into our own hands! Vive la revolution!

Except, when made ‘democratic’ or left to the ‘market of the masses’ to sort out, it doesn’t always go so well. Do some searching about errors in a certain large internet encyclopaedia and you’ll see what I mean. Furthermore, the masses will tend to coalesce around the knowledge they want to hear, the knowledge they are comfortable with.

Do you really believe democra-truth wouldn’t end up being ‘media-mogul-truth’ instead? The media would have us believe there is a ‘debate’ about climate change, for example. If by ‘debate’ you mean overwhelming scientific consensus on a global scale, versus vocal and vociferous, cherry-picking dissent, then okay, you’ve got me. [If you’re one of those dissenters, you can still see the point I’m making, just choose any topic where the media holds palpable sway over public opinion]. But we often trust the public with other important things, like in judicial systems with juries, right? Yes, but see how that would work if the judges, clerks, and lawyers were all pulled off the street too. Oh.

I strongly believe there should be places preserved and reserved where we can ask the really awkward questions that no-one else wants to face up to (particularly governments and the general public), and present the arguments no matter how unpalatable they may be. We also need to cherish the pursuit of knowledge and discovery without necessarily knowing where it will take us. No, I’m not sold on democra-truth (but of course I’m biased; my job kind of depends on universities maintaining certain rights to generate and police what counts as knowledge).

So, there you have it. As the pilot says when a huge storm appears on the radar screen: “Please fasten your seat belts, it may get a little bumpy”.

Video about journal publishing basics

I’ve been preparing for some workshops on journal publishing for postgraduate research students and early career researchers. Following the idea of Flipped Learning, and the ‘Learning 2014’ strategy at UTS, my home university, I’ve been trying to minimise the time participants spend in the workshops sitting listening to me talk, and to create more time for group discussion and activities instead.

So I created a 30 minute video covering some basic points – many of which I’ve written about in other posts. Although readers of this blog won’t by default be able to come to the workshops I’m running, I thought I’d share the video anyway in the hope it might still be useful. One day I might even put my face in front of the camera!

If you’re interested, the workshops will then go on to look at: why papers get rejected, what reviews look like and how to respond to nasty ones (which are a sad inevitability in academic life), how to frame a response letter when you’re asked to revise and resubmit, and the ethics of peer review.

The main video can be viewed here

https://www.youtube.com/watch?v=1wGIieGeQ9U&feature=youtu.be

There are two supplementary videos

1. How to find out the ‘zombie’ rank of a journal. https://www.youtube.com/watch?v=19b1z50E5Js

2. A bit more about researching the relative rather than absolute impact factor (or other status measure) of a journal. http://youtu.be/z3HhUtfXxUQ

The second one gets a bit more into the technical side of using Excel once you’ve imported relevant journal metrics data from an external source such as Scopus or SCImago SJR.

Please do add feedback and comments below! Are the videos useful? Do you disagree? Do you choose journals in a different way? Do you assess journal status differently? Am I out of date about copyright issues?

On this last point, a big BUYER BEWARE warning: copyright things are changing very fast. Only this week Taylor and Francis announced AAMs (author accepted manuscripts) can be put on personal or departmental websites, free of embargo (this doesn’t mean you can make the final paper PDF freely available – only the pre-proof Word version)… so some of my comments will get out of date quite quickly if things keep changing!


A guide to choosing journals for academic publication

The key is the match between your paper and the journal

Choosing a journal for your paper is a complex and nuanced process. Don’t expect to be able to ask anyone else off the cuff and get a sensible answer. Only people who know what you want to say and what you want to achieve in saying it can provide guidance, and even then it’s up to you to judge. In writing this I hope to make the process more transparent, and to help you be as informed as possible about your decisions. If you disagree, can add more things to consider, or know more measures of status, please leave a response at the bottom!

Chicken and egg

Which comes first, the paper or the choice of journal? Neither. Both. In my view you can’t write a good paper without a sense of the journal you are writing for. How you frame the argument / contribution, how long it is, which literature you locate it within, how much methodological detail, how much theoretical hand-holding is needed for readers, what kind of conclusions you want to present, what limitations you should acknowledge: ALL of these are shaped by the journal. But how do you know the answers to these questions? Usually by writing a draft! See the chicken-egg problem? My process is as follows:

  1. Come up with a rough idea for a paper – what data am I going to analyse, with what theoretical focus, presenting what new idea?
  2. Come up with a short list of potential journals (see below)
  3. Plan the paper down to paragraph level – this helps me think through the ideas and make good judgements about the fit between it and journals in the short list.
  4. Choose a journal. If in doubt write the abstract and send it to the editor for initial comment: what’s the worst that could happen? She or he could ignore it!

An ongoing conversation

Most journal editors want to publish papers that join and extend a dialogue between authors that is already happening in their journal. This gives the journal a certain shape and develops its kudos in particular fields or lines of inquiry. If no-one has even come close to mentioning your topic in a particular journal in the last 5 years, I’d think twice about targeting that outlet. Unless you really are planning a major disruption and claiming woeful neglect of your topic (which says something about the editors…)

Check out the editors, and stated aims and scope

Editors have the ultimate say over whether or not to accept your paper. Check out who they are, and do some research. What are their interests? How long have they been on the editorial board? If it’s a new editorial board, are they signalling a broadening, narrowing, or change in scope perhaps? What special issues have come out?

Don’t be stupid

Don’t get the journal equivalent of ‘bright lights syndrome’ and choose somewhere just because it is uber-high status (like Nature). Don’t be a ‘sheep’ either and choose a journal just because someone you know has got their paper accepted in it. Don’t send a qualitative paper to a major stats / quantitative journal. Don’t send a piece of policy analysis from (insert your random country of choice here) to a major US journal (for example) when your paper has nothing to say to a US audience.

The devil is in the detail: yes – more homework

Check out things like word limits, and whether they include references. If the journal allows 3,000 words including references, and your argument takes 5,000 to develop, either change your argument or change the journal. Simples. Also check out the review process. Look under abstracts in published papers for indications as to the timeline for review, and check if there are online preview or iFirst versions published (which massively reduces the time to publication). Don’t be caught out with a whopping fee for publication if your paper is accepted. And don’t be shocked when you read the copyright form and find it costs $3,000 for open access. Some journals publish their rejection rates: you’d be foolish to plough on not knowing 90% of papers are rejected even before review (if this was the case).

Publish where the people you want to be visible to are reading

Think who you want to read your paper. Forget dreams of people from actual real life reading academic journals. The only people who read them (except some health professionals) are, on the whole, other academics. This isn’t about getting to the masses: there are other, better venues for that. This is about becoming visible among your disciplinary colleagues. Where are the people you like and want to be known to in your field publishing? What journals do they cite in their papers?

Understand the status of the journal you are submitting to and its implications for your career

This is the biggie. So big I’ve written a whole section on how to do this below. But for now a few key points.

  1. It pays to know what will be counted by universities in terms of outputs, and what will have kudos on your CV. In Australia, for example, journals not on the ERA list are pretty much no-go. In some fields (particularly hard science and health), journals not indexed in Web of Science aren’t recognised as worth the paper (or pixels) they are printed on.
  2. Remember that status measures only measure what can be measured. A really prestigious journal in your field – with lots of top people publishing lots of great papers in it – might be lower (or not even register at all) in all the various indices and metrics.
  3. There is no single flawless measure of status. Take a multi-pronged approach to suss out where a particular journal lies between ‘utter crap that publishes anything’ to ‘number 1 journal in the world for Nobel Laureates only’.
  4. There are many good reasons for publishing deliberately in lower status journals. It may be they have the ‘soft’ status I mentioned above. Maybe that is where you can actually say what you want to say without having to kow-tow to ridiculous reviewers who don’t understand or accept your innovative approach (which they view as floppy, oddball etc.).

How journal status is measured and how to find this information out

A whole book could be written on this, so please forgive my omissions.

Impact Factor

This is the one everyone talks about. It is also the bane of many people’s lives outside natural and health sciences. Impact Factor is a measure of the mean number of citations to recent articles published in a particular journal: citations received in a given year to articles published in the previous two years, divided by the number of articles published in those two years (five-year figures are also used). So an Impact Factor of 2.01 for Journal X means that articles published in X over the past two years have been cited a mean of 2.01 times this year across the indexed journals. The higher the Impact Factor, the higher the status, because it shows that the papers are not only read but cited lots too. Some versions exclude citations from the ‘home’ journal, which stops editors bumping up their own Impact Factor by forcing authors to cite papers in their journal. Why is this problematic? Where do I start?! (A toy sketch of the calculation itself follows the list below.)

  1. Not all citations are for the same reason but they all get counted the same. If you cite paper P as one of several that have investigated a topic, and paper Q as a hopeless study with flawed methods, and paper R as hugely influential and formative, shaping your whole approach, they all get counted the same. In theory, publishing a terrible paper that gets cited lots for being terrible can boost an Impact Factor.
  2. The key is in the reference to other indexed journals. The issue is: what gets to be indexed? There are strict rules governing this, and while it works okay in some fields, lots of important, robust journals in social sciences and humanities aren’t indexed in the list used to calculate Impact Factor; at least that is my experience. This can deflate Impact Factor measures in these fields because lots of citations simply don’t get counted. The formal ‘Impact Factor’ (as in the one quoted on Taylor and Francis journal websites, for example) is based on Journal Citation Reports (Thomson Reuters), drawing on over 10,000 journals. Seems a lot? In my field, many journals are missed off this index.
  3. The time taken to be cited is often longer than two years (google ‘citation half-life’ for more). Let’s say I read a paper today in the most recent online iFirst. I think it’s brilliant, and being a super-efficient writer, I weave it into my paper and submit it in a month’s time. It takes 9 months to get reviewed, and then another 3 months to get published online. Then someone reads it. The process starts again. If the world was full of people who read papers the day they came out, and submitted papers citing them almost immediately, still the lag-time to publication in many fields prevents citations within the magic 2-year window. There are versions of Impact Factor that take five years into account to try to deal with this problem. This is better, but doesn’t benefit the journals that publish the really seminal texts that are still being cited 10, 15, 20 years later.
  4. Impact Factors are not comparable across disciplines. An Impact Factor of 1.367 could be very low in some sciences, but actually quite high in a field like Education. So don’t let people from other fields lead your decision making astray.
  5. Impact Factor may work very well to differentiate highly read and cited journals from less highly read and cited ones in some fields (where the value range is great, say from 0 to over 20), but in fields where the range for most journals is between 0 and 1.5 its utility for doing so is less good.
  6. Editors can manipulate Impact Factors to a degree (e.g. by publishing lots of review articles, which tend to get cited lots). See Wikipedia’s page on impact factor for more.
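Before moving on: to make the arithmetic behind all this concrete, here is a toy sketch of the two-year calculation in Python. Every figure is invented purely for illustration; real numbers come from Journal Citation Reports.

```python
# Toy illustration of a two-year Impact Factor calculation.
# All numbers are made up for the example.

articles_2013 = 40   # articles Journal X published in 2013
articles_2014 = 50   # articles Journal X published in 2014

# Citations received during 2015, across indexed journals,
# to the articles X published in 2013 and 2014
citations_in_2015 = 181

impact_factor_2015 = citations_in_2015 / (articles_2013 + articles_2014)
print(f"2015 Impact Factor: {impact_factor_2015:.2f}")  # -> 2.01
```

Notice how everything hangs on which citations get counted – only those made in indexed journals, and only within the two-year window – which is exactly where the problems above come from.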

How do you find out the Impact Factor for a journal? If you don’t know this you haven’t been using your initiative or looking at journal webpages closely enough. Nearly all of them clearly state their Impact Factor somewhere on the home page. What can be more useful, though, is knowing the Impact Factors for journals across your field. In this case you need to go to Web of Science. I recommend downloading the data and importing it into Excel so you can really do some digging. In some cases it may not be so obvious to find, in which case try entering ‘Journal title Research Gate’ into google, e.g. ‘Studies in Higher Education Research Gate’. The top result should give the journal title and Research Gate, and a url like this: http://lamp.infosys.deakin.edu.au/era/?page=jnamesel12f . Immediately on clicking the link you will find data on Impact Factor, 5 year Impact Factor and more (based on Thomson Reuters). Note this is not an official database and may be out of date at times.
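If you do export the data, a few lines of code can do the digging for you. Here’s a minimal sketch using Python and pandas; the filename and the column names (‘Title’, ‘Impact Factor’) are hypothetical, so adjust them to match whatever your actual export contains:

```python
import pandas as pd

# Load an exported journal metrics file. The filename and column
# names are placeholders -- match them to your actual download.
journals = pd.read_csv("journal_metrics.csv")

# Rank the field from highest to lowest Impact Factor
ranked = journals.sort_values("Impact Factor", ascending=False).reset_index(drop=True)

# Where does a particular journal sit in that list?
target = "Studies in Higher Education"
matches = ranked.index[ranked["Title"] == target]
if len(matches) > 0:
    print(f"{target}: ranked {matches[0] + 1} of {len(ranked)}")
else:
    print(f"{target} not found in this export")
```

The point is less the code than the habit: a journal’s position relative to its whole field tells you far more than the raw Impact Factor on its own.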

Alternatives to Impact Factor: SJR

An alternative that may work better in some fields is the SCImago Journal Rank (SJR), based on Scopus data. This includes a range of metrics or measures, and I have found it includes more of the journals I’ve been reading and publishing in (in Education). The SJR indicator is calculated in a different way from Impact Factor (which I admit I don’t fully understand; see the Wikipedia explanation). It has a normalising function as part of the calculation which reduces some of the distortions of Impact Factor and can make it more sensitive within fields where there are close clusters. SJR also has its own version of impact, called the ‘average citations per document in a 2-year period’.

When I compare the SJR and Thomson Reuters measures for journals in my field, some are very similar and some are quite different. So it pays to do your homework. SJR data are also easily exportable to Excel, and you can then easily find where journals lie in a list from top to bottom by either of these measures (or others that SJR provides).

The easiest way to find the SJR data for a particular journal is to type the journal name and SJR into google, e.g. ‘Studies in Higher Education SJR’. Almost always the top result will be from SCImago Journal & Country Rank, something like http://www.scimagojr.com/journalsearch.php?q=20853&tip=sid . If you go there you’ll find a little graph on the left hand side showing the SJR and cites per doc tracking over 5 years, given to 2 decimal places. There is also a big graph, with a line for each of these two metrics. If you hover over the right hand end, you get the current figure to 3 decimal places. See the screen shot below.

A screen shot from SJR showing the Indicator and cites per paper data
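Rather than hovering over graphs journal by journal, you can also rank a whole field at once from the downloaded data. Here’s a sketch under some loudly-flagged assumptions: in my experience SJR exports are semicolon-delimited with comma decimals, and the column names below (‘Title’, ‘SJR’, ‘Cites / Doc. (2years)’) may not match your download exactly, so check and adjust:

```python
import pandas as pd

# SJR exports are semicolon-delimited with comma decimals (an
# assumption -- check your file); column names are assumptions too.
sjr = pd.read_csv("scimagojr.csv", sep=";", decimal=",")

# Express each journal's standing as a percentile on both measures,
# so the two metrics can be compared on a common scale
for metric in ["SJR", "Cites / Doc. (2years)"]:
    sjr[metric + " percentile"] = sjr[metric].rank(pct=True) * 100

# Show the percentile columns side by side for the top journals
top = sjr.sort_values("SJR", ascending=False).head(10)
print(top[["Title", "SJR percentile", "Cites / Doc. (2years) percentile"]])
```

Putting the two percentile columns side by side quickly shows where the SJR indicator and the raw citation measure diverge for a given journal – which is exactly the homework I’m suggesting you do.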

Alternatives to Impact Factor: Zombie Journal Rankings

In Australia, lots of journals were, at one time, ranked A*, A, B or C. This was done using a pool of metrics and also peer-based data, with groups of academics providing information based on their expertise. For various reasons (don’t get me started) these have been abolished. However, they are still a common reference point in many fields in Australia and New Zealand, and so I call them ‘zombie rankings’. Even if you’re not in Australasia, it might be useful to look up what the rank was, to see if it confirms what you’re finding from other measures. The quickest way is to go to the Deakin University hosted webpage and check under Historical Data, then Journal Ranking Lists, then 2010 (the rankings were alive in 2010, and abolished shortly afterwards). The direct URL is here: http://lamp.infosys.deakin.edu.au/era/?page=fnamesel10 . Type in the journal name, or a keyword, and ta-dah! If you just type in keywords you will get multiple results and may be able to see a range of options. I’ve put an image of what it looks like below. Pretty easy stuff.

A screen shot from the Deakin website showing former ERA journal rankings

Alternatives to Impact Factor: ERA list

Now that there are no rankings, ‘quality’ is indicated in a binary way: either a journal is included in the ERA list or it is not. We’ve just had a process in Australia of nominating new journals to be included in the list for 2015. But the current 2012 list is also available through Deakin: http://lamp.infosys.deakin.edu.au/era/?page=jnamesel12f .

Alternatives to Impact Factor: rejection rates

The more a journal rejects, the better it must be, right? Well that is the (dubious, in my view) logic underpinning the celebration of high rejection rates in some journals. I’m more interested in what gets in and what difference that makes to scholarly discourse, than what is thrown out. But hey, if you can find this information out (and it’s not always easy to do), then it may be worth taking into consideration. More for your chances of survival than as a status indicator, perhaps.

Alternatives to Impact Factor: ask people who know!

While only you can judge the match between your paper and a journal, lots of people in your field can give you a sense of where is good to publish. This ‘sense’, in my view, is not to be dismissed because it cannot be expressed in a number or independently verified. It is to be valued because it draws (or should do) not only on knowledge of all the metrics, but on years of experience and reading.

Conclusions

Choosing journals is tricky. If you’re finding it quick and easy it’s probably because you’re not doing enough homework, and a bit more time making a really well informed decision will serve you well in the long run. As I said earlier this post is not exhaustive either in terms of things to consider in your choice, or status indicators. But I hope this is useful as a starting place.

Do you have quotitis? How to diagnose, treat, and prevent!

What is quotitis?

Quotitis is a common disease among qualitative researchers. It’s a name I have started using to refer to the tendency for people writing about qualitative data to over-rely on raw quotes from interviews, fieldnotes, documents etc.


Why is this a problem?

I used the term ‘over-rely’ deliberately, implying not only more than is necessary, but so much that it becomes counter-productive by virtue of its excess.

The basic point is this: whether in a journal article, thesis or other scholarly publication, people are giving their time (and quite often paying money, too) to read what you have to say, not what others have said. The value add in your work comes from expressing your thoughts, interpretations, arguments, and ideas.


How do I know I have quotitis?

Quotitis can be diagnosed both through its manifestations in writing and through reflective questioning of the (often tacitly held) assumptions underpinning your writing.

Symptoms to spot in writing

Look at your findings / discussion section. How much is indented as quotes from raw data? How much is “quoting the delicious phrases of your participants” within a sentence? It would be daft of me to give a fixed proportion to limit this, so I’m not going to. Do you give multiple exemplars to illustrate the same theme? Look at the text around the quotes. Have you given yourself (word) space to introduce quotes appropriately, and to comment on them in detail?

Underlying causes (assumptions)

A full diagnosis requires you to consider what frames your approach to writing up qualitative research. Any of the following assumptions might well give the writing doctor cause for concern:

  1. No-one will trust or accept your claims unless you ‘prove’ each one with evidence in the form of quotes from raw data
  2. Participants express themselves perfectly, and your own words are never as good, and lack authenticity
  3. Not to quote participants directly is to deny them appropriate ‘voice’
  4. Raw data is so amazingly powerful it can ‘speak for itself’.

All of these assumptions are false. Perhaps at times, in certain kinds of research that place high emphasis on sharing knowledge production with participants, you may take issue with point 3. But still, I would suggest that an academic text will be more valuable by virtue of you developing ideas around data rather than just reproducing it.

Of course, the really uncomfortable truths around some cases of quotitis are as follows:

  1. You may have a fear of your own voice and words (whether self-doubt, uncertainty, insecurity), and prefer to rest in the safety of the words of others
  2. Simple laziness, for example using quotes to pad out a text and increase the number of words.
  3. Lack of analytic insight. Lots of cases of quotitis seem to reflect the fact that the researcher hasn’t gone much further than coding her or his data, coming up with a bunch of themes, and wishing to illustrate them with quotes from data in the text. Coding is sometimes useful as a starting point. It is rarely an outcome of analysis.

Prevention rather than treatment or cure

It is better to address underlying causes than to treat surface symptoms, so I’ll deal with this first, before presenting some tips for treatment/cure for an existing text.

Let’s challenge those underlying assumptions.

Raw data are needed to convince readers to believe your claims

This is about the ‘evidential burden’ placed on quotes from raw data. Think about it. Does a sentence or two from an interview really prove anything (or establish credibility) by itself? Surely we have to think about where the quote came from, how it was treated as part of a sophisticated analytic process, how it relates to other features of the data, and what features of it readers are supposed to notice and interpret in particular ways.

Moreover, placing the burden of proof on quotes may be utterly illogical, and may force highly reductive analyses (or be a symptom of them). I doubt very much that many of the most interesting analytical insights into qualitative datasets can be accurately conveyed in someone else’s words (in the case of an interview), or in your own field notes (in the case of observation). In my experience the real value-add ideas can’t be pinpointed to one bit of data or another. They come by looking across codes, themes, excerpts etc.

A case in point: I wrote a paper based on analysis of interviews with doctoral students. It was about the relationships they have with other people and the impact of those relationships on learning and experience. The paper does not contain one single quote from raw data. Admittedly one of the reviewers found this odd, but I argued my case to the editor and the paper stands with no raw data quoted whatsoever. Don’t believe me? Check it out here at the publisher’s website, or here (full text free) from ANU.

The justification was this: I did my analysis by identifying all the relationships between each participant and others around them (supervisors, students, family etc). I then went through and looked for all the data relating to that relationship. After several readings, I was able to write a synoptic text, summarising everything I knew about that relationship, its origins, importance and so on. This drew on all available data, and was shaped by a holistic and synthetic reading of the data. There was no one line or even paragraph from an interview that could demonstrate, illustrate, or even support what I had to say. Because what I had to say was at a different level from what students told me directly.

This is an extreme example, and I’ve written plenty of other papers where I use quotes from raw data. But I use them sparingly and I don’t operate from misplaced assumptions about evidential burden. The problem is, many referees do apply these unfortunate ideas, so be ready to defend yourself when they do!

Participants express themselves perfectly, your words are worse

Do people really speak in the most considered, informed and evocative ways? Sure, sometimes the odd gem of a quote comes out. But I’d suggest that the craft we can put into our written text, playing around with word order, phrasing, vocabulary, emphasis and so on, means we can reach much tighter, more considered words than the on-the-spot responses in interviews, or madly rushed field notes.

What are raw data ‘authentic’ expressions of, that your words in the paper are not? They may authentically capture what someone said or what you wrote in the field. But is that really what your paper is about? Is it not about reading into what people say, constructing a new argument out of those comments? In which case, authenticity lies at a different level: what is authentic to your argument or contribution may not be what is authentic to a participant. Unless your contribution rests solely on reproducing what others say or feel about something, for example.

Not to quote is a denial of participant voice

I never promise participants they will be ventriloquized in my writing about them (though I know in some qualitative approaches this can be important). And anyway, I would never get the chance to quote from all participants equally, so there would always be some who are denied more than others. Why should those who happen to say something in a particular way (the ‘real gem’ quotes) be given voice, while those who are less articulate are silenced? Not a useful or valid basis for my writing. Neither is giving everyone the same blanket ‘voice’, because that doesn’t seem likely to be a sound foundation for a balanced, well structured text either.

What’s more, as I’ve hinted above, there’s another denial going on when you over-quote from raw data: denying readers access to your opinions and insights. You’re the author of the paper: it’s your interpretations and arguments I’m interested in. Don’t deny me, the reader, the chance to benefit from your thoughts by hiding behind the words of others.

Raw data speaks for itself

No it doesn’t. Or at best, this is rarely the case. This is a continuation of the point above. If raw data really were that powerful and self-evident, we would simply present interview transcripts as papers and leave it at that. But we don’t. Why? Because readers need help and guidance in making sense of those data. You need to hold my hand, shine the light on relevant features, make links, show connections, read between the lines, and provide contextual information that is not contained in the quote itself.

So the way you introduce quotes is important – is this ‘typical’, ‘illustrative’, or chosen for some other reason? How does it relate to other quotes you could have chosen?

And you need to provide a commentary on each quote. What work is it doing in the development of your argument? What do you want readers to take from it? Why is it important?

Raw data speaks most powerfully when you speak on its behalf.


Treatment and cure of quotitis

Maybe you’re working on a text and you can diagnose a likely case of quotitis: the symptoms are there in the text itself, and your assumptions are in need of some serious questioning. What can you do? Here are some tips:

Ask yourself some really difficult questions, and be ready for answers you don’t want to hear: Are you over-reliant on quotes because your analysis is half-baked? Are you presenting a list of themes or categories but not doing much with them? Are you hiding behind your data because you aren’t clear about what you actually have to say or want to add to them?

Challenge yourself to sort the wheat from the chaff: are any of your quotes absolutely essential? I promise you, not all of them will be. So bin the ones that aren’t, and start adding better introductions and commentaries to those that are most crucial. A good way to start the sorting process is by asking: am I giving three (or more) quotes when one would do? You don’t have to prove that three (or more) people said something relating to a theme by presenting three (or more) quotes. You can quote once and say something about the occurrence of the theme across your dataset.

Ask yourself ‘what is going on here?’ when you read a bunch of quotes. I mean this in the sense of: what do these quotes collectively say about a particular phenomenon or idea? How can you read between the lines, analyse, synthesise, interpret them together? Perhaps you can swap heaps of raw data for paraphrasing and making a higher-level argument.

Address your anxiety about evidential burden by being really clear in your methods section why readers should trust in your evidence (because your methods of data generation were appropriate and high quality) and what you have to say about it (because your methods of analysis are clearly explained so people have a sense of how you arrived at the claims you make without having to have everything ‘proved’ with a quote).


In conclusion

Quotitis can be painful, especially for readers. Left undiagnosed and untreated, it can be deadly (for your publications, scholarly reputation etc). Fortunately it is easy to spot, treatable, and its underlying causes can be addressed with some critical and honest reflection. Over to you…