Thursday, December 23, 2010

Can Social Media Replace Pre-Publication Peer-Review?

Richard Smith (former editor of the BMJ) commented on the case-control study that implicated XMRV (xenotropic murine leukemia virus-related virus) as a cause of Chronic Fatigue Syndrome. The study was published in Science, and he noted that there were several problems with it; people have called for a better peer-review process to avoid such problems in the future.

[Added 12/24/2010: There have been several comments on this post, highlighting some of the controversies regarding this topic. "CFS" has been the subject of a lot of recent research, with very conflicting results. The comment by Richard Smith mentioned above was made in March 2010, and a lot of research has been reported since then. Hopefully we will soon find out the truth and get closer to providing a cure for our patients. This post is NOT about appraising the evidence in the "CFS" literature, and thus it is NOT a commentary on the Science study mentioned above. It is about the problems with the peer-review process in general, as identified by a former editor of a major journal, a tentative exploration of an alternative model, and the barriers to such a model. The statements in the paragraph above referring to CFS and XMRV are there just to provide context. For the purpose of this post, it could well have been another condition and a different study.]

Richard Smith points out the problems with the current peer-review process:

  • Faith based not evidence based
  • Slow
  • Expensive
  • Largely a lottery
  • Poor at detecting errors and fraud
  • Stifles innovation
  • Biased

He suggests that we move away from our bias for top journals, abandon the traditional peer-review process, and adopt a "publish and then filter" model.

This got me thinking about how this could work.

  1. A central resource for online hosting of all research articles in each area of biomedical science. We would not have multiple journals competing for and catering to the same audience.
  2. There would be some kind of simple review process to filter out "junk" and "spam" publications
  3. The articles would need to include all the necessary raw data so anyone could rerun the statistical tests and verify the results (a rough sketch of what such an article record might look like follows this list).
  4. There would be a robust authentication scheme for authors.
  5. Each article would have a place for commenting, much like a blog, but you would need to be authenticated before submitting your comments. There would be no anonymous comments.
  6. Readers, after logging in, could rate each article on various criteria, e.g. study design, practical value, etc.
  7. The comments could also be rated up or down.
  8. It would be possible to track how many times the article was cited, tweeted, and posted on Facebook; how many times it was downloaded, favorited, etc.
  9. Other studies on the same topic would also be linked from the article making it easy to find all the studies in one place.
  10. Part of the publication process would be to search for all the previously published related articles in this central repository and provide links to all of these.
  11. Viewers could see a timeline of the development of the literature on a specific topic.
  12. Over time, some studies, authors, and commentators would rise to the top.
  13. There would be a robust search and tagging system.
  14. Some articles could be accompanied by "editorials".
  15. Every time the IRB at an institution approved a protocol, it would create an entry in this central repository. Investigators would have to provide their data and a short summary at the end of the study, even if they did not write it up fully. This would remove the problem of publication bias toward positive studies and make meta-analyses more complete. If they did not provide this information, their ratings would go down.
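
To make this more concrete, here is a minimal sketch, in Python, of the kind of record such a central repository might keep for each article. All the names here (Article, Comment, mean_rating, the example URL, etc.) are hypothetical illustrations of the ideas in the list above, not an existing system or API.

```python
# Hypothetical sketch of one repository record for the "publish and then filter" model above.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Comment:
    author_id: str   # authenticated account; no anonymous comments (item 5)
    text: str
    votes: int = 0   # comments can be rated up or down (item 7)

@dataclass
class Article:
    article_id: str
    title: str
    authors: List[str]       # verified author identities (item 4)
    raw_data_url: str        # raw data so anyone can rerun the analyses (item 3)
    irb_protocol_id: str     # entry created when the IRB approves the protocol (item 15)
    ratings: Dict[str, List[int]] = field(default_factory=dict)   # criterion -> scores (item 6)
    comments: List[Comment] = field(default_factory=list)
    related_articles: List[str] = field(default_factory=list)     # links to prior studies (items 9-10)
    metrics: Dict[str, int] = field(default_factory=dict)         # citations, downloads, tweets (item 8)

def mean_rating(article: Article, criterion: str) -> float:
    """Average reader score for one criterion, e.g. 'study design'."""
    scores = article.ratings.get(criterion, [])
    return sum(scores) / len(scores) if scores else 0.0

# Example: an authenticated reader rates the study design, and the average is recomputed.
a = Article("art-001", "Example study", ["A. Author"],
            "https://example.org/raw-data.csv", "IRB-2010-123")
a.ratings.setdefault("study design", []).append(4)
print(mean_rating(a, "study design"))  # -> 4.0
```

Something this simple would not capture the review workflow or the authentication scheme, but it suggests how little new machinery the core record would actually require.
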
Most of this functionality already exists - just look at YouTube, eBay, Amazon, etc. It would not take a lot to get this working. The problem is breaking down the traditions and existing norms. How can you replace the thrill and ego boost that authors get from having their article accepted by a "top-tier" journal? Would the really big multi-center, randomized, double-blinded trials with positive results get submitted to this central resource instead of to a top-tier journal? Would universities change their criteria for promotion and tenure?

We need to break down the walled gardens of some of our "top" journals and level the playing field, but it will be an uphill battle.

[Added 12/24/2010 - Looking at some of the comments on this post, there is clearly a lot of energy surrounding the research on "CFS". Would it not be easier for folks looking to study this condition if all the studies on "CFS" and its possible connection to XMRV were published in the same repository? They would not have to go to multiple journals and databases to find this information, all the raw data would be available, the pros and cons of each study would be transparently viewable, and authenticated users could post comments in unmoderated fashion (as on this blog post) to add to the richness of the discussion. Why do we need to have so many barriers to collaboratively finding solutions to such vexing problems?]

9 comments:

  1. You said: "The study was published in Science but there were several problems and people called for a better peer-review process to prevent such studies from getting published."

    I believe you may be wrong in believing the Science study is skewed. The author of this paper may just earn a Nobel prize in the future. She has proven through many experiments that there is a brand new, infectious retrovirus present in ME/CFS patients AND in healthy people. She has proven infectivity and shown the presence of antibodies to that agent.

    This study has also been replicated by Dr. Alter and Shih-Ching Lo in PNAS, another very reputable journal, I may point out to you, doctor, while our friends in the UK and some in the USA are screaming contamination.

    ME/CFS is very controversial and political and most people are putting political and financial agendas ahead of the health of the general population.

    I will point out to you that MS patients, epileptics, and stomach ulcer sufferers were once believed to have illnesses of psychogenic origin, and some of them were institutionalized as such. This is very shameful.

    I hope you get a better education.

  2. What part of the Lombardi et al. Science 2009 paper leads you to conclude that the peer review was faulty and that it was unsuitable to have been published?

    The study spent over 6 months in peer review and was closely scrutinized, and scrutinized again, by the peer reviewers. John Coffin said that it was as good as it gets for a first paper. Harvey Alter said his study (Lo et al., PNAS, 2010) was highly confirmatory of the Science paper.

    I think you may have accidentally swapped your conclusions with those of the 0/0 XMRV findings and the contamination papers, such as the one that concluded XMRV is not a real human pathogen. Those are the faulty papers that should not have passed peer review.

  3. Could you research and share your opinion with us about the patent by Ila Singh, Ph.D. (U. of Utah), which found XMRV not only in ME/CFS but also in prostate cancer, breast cancer, and lymphoma?

    If your idea is implemented, I agree that it would allow for a 'trail' on research, which would be very valuable; however, tweets and FB pages do not inform me as to scientific matters. They may lead me to them, but they do not define what science is: a search for the truth.

    The WPI, NCI and CC study was scrutinized like no other; that's because of the politics being played to the detriment of science. Obama's instructions today, Dec. 24, 2010 may stop this travesty.

  4. All the claims about Lombardi et al. were settled in the response to comments published in 'Science'. There were no issues with the paper, and this is why it was published in such a highly respected journal.

    Those who are trying to claim that the standing of 'Science' has somehow magically changed overnight tend to be those whom the research is most likely going to impact, i.e. embarrass, destroy their theories, or result in prison sentences.

  5. Kathryn and Katie,
    Thanks for your comments. Please read my addenda within the post. If you do visit this page again, I would love to hear your opinions on the proposed model and what it would mean for you.

    @kathryn,
    I agree about the tweets, etc. The reason for having that in the model is to see what is "popular"; thus, for example, a physician trying to stay up to date with the literature would know what questions their patients might ask her/him the next day.

  6. As an ME/CFS patient who has seen FAR too much bias and political manipulation of research into this disease, I fully support the idea that there should be a more open publication and review process. On one condition:

    Patients suffering from the diseases being discussed MUST be able to comment on whether or not the research accurately reflects the disease it is supposed to be studying. FAR too much research pretending to be about ME/CFS has been published that is not, in fact, about this disease at all. They are trying to imply that research done on patients who DO NOT have an inflammatory neurological disease (such patients are actually excluded from the studies) is somehow relevant to those who do. There has been research fraud on a massive scale in this disease for 25 years or more, which patients have been unable to stop. It has done irreparable harm. Unfortunately, the BMJ has been at the forefront in publishing these fraudulent papers.

  7. Hello,

    I have not seen a response to this, but wanted to add this as a supplement to what I have said above.

    (quoted from http://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind1012e&L=co-cure&T=0&F=&S=&P=2781)

    Firstly, from the outset the psychosocial PACE Trial was viewed as highly controversial, had significant difficulty recruiting patients, and was plagued with an extraordinarily high drop-out rate: just some of the reasons that the PACE results are overdue by years.

    Secondly, the designers and Principal Investigators of PACE have long-standing links with the health insurance industry and have clear conflicts of interest, in that insurers stand to gain if patients are viewed as suffering a psychosocial illness that is treatable by CBT/GET.

    Thirdly, one of the main financial sponsors of PACE is the UK Government's Department for Work and Pensions (DWP). The DWP also has a clear conflict of interest in the potential welfare benefit savings if patients are viewed as suffering a psychosocial illness that is treatable by CBT/GET.

    Fourthly, PACE unscientifically conflates patients suffering from the physical neuroendocrine disease known as Myalgic Encephalomyelitis (categorised by the WHO in ICD-10 section G93.3) with those suffering from psychiatric/idiopathic fatigue syndromes (categorised separately by the WHO in ICD-10 section F48). This is in defiance of good scientific practice and contrary to the World Health Organisation's medical health taxonomy, which the NHS and NICE are legally obligated to adhere to.

    Fifthly, the PACE "Oxford" patient selection criteria are unscientific in that they rule out patients presenting with cardinal symptoms of ME and broadly include those with psychiatric symptoms. These criteria are far from widely accepted in the medical profession and were part-funded by PACE Principal Investigator Professor Peter White, who also has a long professional association with the medical insurance industry[3].

    Sixthly, the internal PACE Trial manuals obtained by the ME community under the Freedom of Information Act clearly and unequivocally show that PACE Trial recruiters and operators were inappropriately selecting and filleting patients in a manner that is far from the good practice expected of genuine Randomised Controlled Trials (RCTs) - see extracts and links below[4].

    The concerns above have been expressed repeatedly for years, but rather than being heard, the abuse of science and of patients as a consequence has continued and been institutionalized.

  8. @goneawol
    Thanks for your comments. You raise an important point regarding the applicability of study results to specific patients.
    The practice of evidence-based medicine requires practitioners to critically appraise study results and ask whether their patient is similar to or different from the ones described in the study. This exercise requires that sufficiently detailed information be provided to the reader of the study to make this judgment.
    A model like the one I described above would hopefully support this.
    Involving patients in decision making and helping them take ownership of their own chronic health issues is one of the goals of chronic disease management. Having this information be easily and transparently available to patients and their health care providers would be a huge step forward.
