WebMedCentral is a “post publication peer review” publication. The purpose of the site is the fast, open, transparent and free dissemination of biomedical data. The process is to publish your research first; the review happens after publication. You can see more about this in their site’s peer review policy. I can see the value in this type of model, but I also see the serious issues involved. Two that come to mind (and are mentioned in that previous link) are review quality and author response. Does worthy research get the review quality it requires, and do the authors respond adequately to review criticism? Looking around at the site, my first answer might be “no, not really” for most of the articles. Much like wikis and other community-sourced sites, the quality relies heavily on the number and expertise of the viewership. Not impossible, but a huge hurdle.
That said, there is some pretty good data there as evidenced by “RNA Structures Affected by Single Nucleotide Polymorphisms in transcribed regions of the Genome” by Andrew Johnson, Heather Trumbower and Wolfgang Sadee *.
As of this post, there have not been any reviews of this article, which is indicative of many possibly useful articles. I’ve read the article and found it interesting. When I get a moment, I plan to take a closer look and possibly review it. But that’s part of the crux of the matter: incentive, both for review and for author revision. The incentive is there for pre-publication review, but what about post-publication? My incentive is that I know and trust the authors and find the research interesting, but beyond that…? Or is that enough?
I was also pointed to these two reviews of the site and post-publication review: What is WebMedCentral? by Journalology and Wiki-Science and Moliere’s Beast from FASEB Journal. The latter is much more critical (to say the least) of the possibility of WebMedCentral and “wiki-publication.” I have to say, I’m not sold that this model will work. Though I found the latter editorial pushed the point a bit much with this:
WebmedCentral promises new discoveries in biomedical science; and its venture into Wiki publication fulfills that promise. One finds on its site that smelling one’s feet can prevent epileptiform seizures (9), that vehicular accidents might induce fibromyalgia (10) and that beachgoers on Cancun have “a very high percentage of sunscreen use” (11). One can also learn about “Uner Tan syndrome” (quadripedal gait) from the evolutionary biologist who modestly named the syndrome by his own name: Uner Tan (12).
As mentioned and linked above, there is relevant and scientifically valid data to be found there. As expected with any ‘open’ system, there is also detritus. I believe our current pre-publication review system is the best system among a bunch of bad ones, but the FASEB editorial does put forward one criticism of the system that I have yet to find an answer for:
The most thorough argument for such a sea change appeared in a PLoS article by Young et al. entitled: “Why current publication practices may distort science” (18). They correctly describe the “extreme imbalance,” between the availability of excess supply (the growing global output of biomedical science and clinical studies) and the limited demand for that output by a coterie of reputable scientific journals. The result is that only a small proportion of all research results are eventually chosen for publication, and these results do not truly represent the larger body of results obtained by scientists world-wide. They argue that
… the more extreme, spectacular results (the largest treatment effects, the strongest associations, or the most unusually novel and exciting biological stories) may be preferentially published. Journals serve as intermediaries and may suffer minimal immediate consequences for errors of over- or mis-estimation (18)
This situation results in what economists who study auction behavior call “The Winner’s Curse.”
My colleague and co-blogger has written about this from a different angle, but it highlights the problem, “The data is not in the papers any more, you know.” As she states:
I was also recently using the International Cancer Genome Consortium site’s new BioMart interface at their Data Coordination Center. With their recent update they added some new features, I was using the new view of “Affected Genes” on that page. I picked a cancer type, I loaded up the Protein Coding genes, and there I was looking at the genes that had been repeatedly found to be affected in patient after patient. Some of the genes were not a surprise, certainly. But I sat there looking at data that a lot of people don’t know about–because it’s not in the papers yet. And it may not be for a long time.
Or ever. I find myself coming across data that might be interesting, conclusions that are useful, and possible analyses that would add to the general understanding (if ever so slightly).
There is a deluge of data, even a deluge of analysis, but only a limited number of reviewers and a bottleneck of review.
I’m not sure WebMedCentral or like publications, sites or wikis are the answer, but there needs to be one.
*Disclosure: Mr. Johnson has written for us on this blog before, and we know Heather Trumbower; this is how I knew of the article. And if you have a chance, go review the article :D.