The post by @DeevyBee on 21st March is really worth a read.
A nice example of how a scholar reads a paper critically, identifies limitations and problems, and invites the authors to respond (the original critique is here). In this case the original authors did respond, and this post is the blogger’s response to their response! The authors defended their methodology (on the question of whether their small sample left the study ‘underpowered’), and questioned whether a blog was an appropriate forum for criticising a peer-reviewed paper.
Interestingly, the paper’s authors invoke the journal’s impact factor as if it were a direct guarantee of quality, almost a force field against critique. I’ve written elsewhere on the meaningfulness, or lack thereof, of impact factors.
The blog author, @DeevyBee, raises a number of interesting points that are instructive and illustrative of many important issues in scholarly research:
Small samples in RCTs
1. “The authors reply with an argument ad populum, i.e. many other studies have used equally small samples. This is undoubtedly true, but it doesn’t make it right”. I agree – precedent (no matter how prestigious the journal) doesn’t make for a sufficient argument. Prestigious journals have published studies in the past that would now be deemed unethical – does precedent override contemporary ethical considerations? How far back in history can we go in relying on such precedent? Statistics moves very fast, as computing power opens up new modelling possibilities.
2. “In the field of clinical trials, the non-replicability of large initial effects from small trials has been demonstrated on numerous occasions, using empirical data – see in particular the work of Ioannidis, referenced below. The reasons for this ‘winner’s curse’ have been much discussed, but its reality is not in doubt. This is why I maintain that the paper would not have been published if it had been reviewed by scientists who had expertise in clinical trials methodology.” Now, I’m not sure I would be as bold as the blogger in assuming what the expertise of the reviewers was, but from my editorial experience I do know that we editors have to make lots of compromises in finding reviewers who are available and willing to do a review – which is not necessarily the same as finding the ideal reviewers with the closest areas of expertise. I think the point that large effects from small trials are hard to replicate is really important. It’s not only a question of empirical generalisability in terms of sample:population relationships, but that the degree to which something appears to ‘work’ can be inflated.
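The inflation described in point 2 is easy to demonstrate. The following is a sketch of my own (not from the post, nor from Ioannidis’s analyses): it simulates many small two-arm trials of a modest true effect and compares the average effect observed across all trials with the average observed in just the trials that happened to reach ‘significance’. The sample size, effect size, and z-threshold approximation are all illustrative assumptions.

```python
import random
import statistics

# Illustrative simulation: a true standardised effect of 0.2 is estimated
# in many small two-arm trials (n = 15 per arm). Among the trials that
# happen to reach "significance", the average observed effect is inflated
# well beyond 0.2 -- the winner's curse.
random.seed(42)
TRUE_EFFECT = 0.2
N_PER_ARM = 15
N_TRIALS = 5000

all_effects = []          # observed mean difference in every trial
significant_effects = []  # |difference| in trials passing the threshold
for _ in range(N_TRIALS):
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # Standard error of the difference; a z threshold of 1.96
    # approximates the two-sample t-test for illustration.
    se = ((statistics.variance(control)
           + statistics.variance(treated)) / N_PER_ARM) ** 0.5
    all_effects.append(diff)
    if abs(diff) / se > 1.96:
        significant_effects.append(abs(diff))

print(f"Mean effect across all trials:         "
      f"{statistics.mean(all_effects):.2f}")
print(f"Mean |effect| in 'significant' trials: "
      f"{statistics.mean(significant_effects):.2f}")
print(f"Trials reaching significance: {len(significant_effects)}"
      f" of {N_TRIALS}")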
The blogosphere as home to scholarly critique
The authors of the original paper took issue with the blogger’s decision to air criticisms in public. The blogger replies “I don’t enjoy criticising colleagues, but I feel that it is entirely proper for me to put my opinion out in the public domain, so that this broader readership can hear a different perspective from those put out in the press releases. And the value of blogging is that it does allow for immediate reaction, both positive and negative. I don’t censor comments, provided they are polite and on-topic, so my readers have the opportunity to read the reaction of Facoetti and Gori.”
In my experience it is relatively rare for journals to publish critical responses and rejoinders (this does happen, and when it does it is really good: Educational Researcher, a journal published by the AERA, does this quite a bit and it’s fantastic). First of all, it shows that peer review does not settle all matters of dispute. It also shows how criticism, whether in the shadows of peer review or in plainer daylight, is part and parcel of scholarship.
Of course critiques are levied within subsequent papers all the time, but these are not the same as the more explicitly focused and engaged critiques of the sort that @DeevyBee offered, and that you see in published responses/rejoinders. So, given the rarity of the latter, I’d encourage blog-based critiques. @DeevyBee published the authors’ responses, suggesting to me that the point was not to have the final word, but to open up a debate and to enable scholars and practitioners to engage with the findings (which were promising and potentially seductive) in a critical way. Indeed, @DeevyBee says she is ready to revise her opinion if persuasive arguments come forward.
The responses to both the original critique and the blogger’s response are well worth reading. Perhaps one of the more direct comments says: “If you don’t want people to critique your work, then don’t publish your work”. Fair enough – being subject to critique is part of scholarship, and we cannot confine it to the private (and, I would say, often cloak-and-dagger) realm of blind peer review. We stand up at conferences and have to be ready for strong criticism. I may come to regret writing this, but I cannot for now think of good reasons why scholars should refrain from offering critiques on blogs, provided they are specific, justified, and offered in the spirit of advancing scholarly knowledge and conversation (all of which apply to @DeevyBee’s critique in my view).
These kinds of exchanges are not, perhaps, about resolution (particularly in the qualitative social sciences, where hard and fast rules are rarer), but about enriching our understanding and building our collective capacity to engage with research, rather than accepting peer review as a guarantor of truth or quality. Peer review is important, but messy and not free of political, personal, and practical influences. A topic for a future post, methinks…