Tag Archives: peer review

Friday SNPpets

Welcome to our Friday feature link collection: SNPpets. During the week we come across a lot of links and reads that we think are interesting, but don’t make it to a blog post. Here they are for your enjoyment…

WebMedCentral revisit

I posted about WebMedCentral and post-publication review last week. I couched it in terms of the problem with ‘big data’ and traditional publishing: the amount of data being produced (the supply) far exceeds what the limited number of traditional and open-access, pre-publication-review journals can absorb (the demand). This, along with other factors, is why Mary keeps saying “The data isn’t in the papers any more.”

Even as we are finding that the data isn’t in the papers, neither is much of the analysis. WebMedCentral and post-publication review is a possible partial solution. But Andrew Johnson revisited this with me (he has a paper at WMC I mentioned … along with traditionally published research), and his comments (edited and used with permission :) are interesting and speak to the problems with this approach:

Here is my further experience – the journal sent me an automated notice that the article hadn’t been reviewed. Apparently this gets sent every 2 weeks until 2 reviews are completed. So I spent about an hour scouring the literature looking for people I thought would be qualified and/or interested reviewers and then mass mailing about 50 of them. Net result: 1 review…

This reviewer model definitely takes burden away from the journal and puts it on the authors. How many authors have the time to invite 50 (+15 for the initial set of invitees) individuals to review? And how unbiased can the reviews be? …

I had some replies from trusted colleagues who expressed interest in the topic but indicated they don’t have time. I completely understand that….

On a positive note, I had a message from someone who was interested in discussions based on their current lab work and direction. Of course this type of thing also happens via contact to corresponding authors on traditional papers.

So far it seems to me this WebMedCentral type model is not likely to be highly successful. It may provide a place for negative results and some interesting, worthwhile science but unless PubMed or others index these types of forums I think they will largely be ignored. I think it may provide some remedy for scientists who can’t afford page charges though many journals will give waivers if needed.

The jury is still out on this type of publication; we’ll have to revisit it later.

Preprints (a different animal from the same solution zoo) have had some success, notably in physics, and even in biology in the form of Nature Precedings (though with fewer than 2,000 ‘preprints’, that success is limited).

data is not in the papers (nor is analysis): WebMedCentral & post-publication review

WebMedCentral is a “post-publication peer review” publication. The purpose of the site is the fast, open, transparent and free dissemination of biomedical data. The process is to publish your research first; the review happens after publication. You can see more about this in their site’s peer review policy. I can see the value in this type of model, but I also see the serious issues involved. Two that come to mind (both mentioned in that previous link) are review quality and author response. Does worthy research get the review quality it requires, and do the authors respond adequately to review criticism? Looking around the site, my first answer might be “no, not really” for most of the articles. Much like wikis and other community-sourced sites, the quality relies heavily on the number and expertise of the viewership. Not impossible, but a huge hurdle.

That said, there is some pretty good data there as evidenced by “RNA Structures Affected by Single Nucleotide Polymorphisms in transcribed regions of the Genome” by Andrew Johnson, Heather Trumbower and Wolfgang Sadee *.

As of this post, there have not been any reviews of this article, which is the case for many possibly useful articles. I’ve read the article and found it interesting. When I get a moment, I plan to take a closer look and possibly write a review. But that’s part of the crux of the matter: the incentive for review and for author revision. It’s there for pre-publication review, but what is it for post-publication review? My incentive is that I know and trust the authors and find the research interesting, but beyond that…? Or is that enough?

I was also pointed to these two reviews of the site and of post-publication review: What is WebMedCentral? by Journalology and Wiki-Science and Moliere’s Beast from the FASEB Journal. The latter is much more critical (to say the least) of the prospects of WebMedCentral and “wiki-publication.” I have to say, I’m not sold that this model will work, though I thought the FASEB editorial pushed the point a bit far with this:

WebmedCentral promises new discoveries in biomedical science; and its venture into Wiki publication fulfills that promise. One finds on its site that smelling one’s feet can prevent epileptiform seizures (9), that vehicular accidents might induce fibromyalgia (10) and that beachgoers on Cancun have “a very high percentage of sunscreen use” (11). One can also learn about “Uner Tan syndrome” (quadripedal gait) from the evolutionary biologist who modestly named the syndrome by his own name: Uner Tan (12).

As mentioned and linked above, there is relevant and scientifically valid data to be found there. As expected with any ‘open’ system, there is also detritus. I believe our current pre-publication review system is the best system among a bunch of bad ones, but the FASEB editorial does put forward one criticism of that system that I have yet to find an answer for:

The most thorough argument for such a sea change appeared in a PLoS article by Young et al. entitled: “Why current publication practices may distort science” (18). They correctly describe the “extreme imbalance,” between the availability of excess supply (the growing global output of biomedical science and clinical studies) and the limited demand for that output by a coterie of reputable scientific journals. The result is that only a small proportion of all research results are eventually chosen for publication, and these results do not truly represent the larger body of results obtained by scientists world-wide. They argue that

… the more extreme, spectacular results (the largest treatment effects, the strongest associations, or the most unusually novel and exciting biological stories) may be preferentially published. Journals serve as intermediaries and may suffer minimal immediate consequences for errors of over- or mis-estimation (18)

This situation results in what economists who study auction behavior call “The Winner’s Curse.”

My colleague and co-blogger has written about this from a different angle, but it highlights the same problem: “The data is not in the papers any more, you know.” As she states:

I was also recently using the International Cancer Genome Consortium site’s new BioMart interface at their Data Coordination Center.  With their recent update they added some new features, I was using the new view of “Affected Genes” on that page. I picked a cancer type, I loaded up the Protein Coding genes, and there I was looking at the genes that had been repeatedly found to be affected in patient after patient. Some of the genes were not a surprise, certainly. But I sat there looking at data that a lot of people don’t know about–because it’s not in the papers yet. And it may not be for a long time.

Or ever. I find myself coming across data that might be interesting, conclusions that are useful, and possible analyses that would add to the general understanding (if ever so slightly).
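As an aside: BioMart-style interfaces generally also expose a simple XML-over-HTTP query service, so a list like that “Affected Genes” view can, in principle, be pulled down with a short script rather than by clicking through the web viewer. Here is a minimal sketch in Python; the endpoint URL, dataset name, filters and attributes below are hypothetical placeholders rather than the actual ICGC DCC identifiers, so treat it as an illustration of the query format, not a working recipe.

```python
import urllib.parse
import urllib.request

# Hypothetical BioMart "martservice" endpoint -- the real ICGC DCC URL will differ.
MART_URL = "http://example-dcc.icgc.org/biomart/martservice"

# A standard BioMart XML query; the dataset, filter and attribute names below
# are placeholders for illustration, not actual ICGC DCC identifiers.
QUERY_XML = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Query>
<Query virtualSchemaName="default" formatter="TSV" header="1" uniqueRows="1">
  <Dataset name="icgc_affected_genes" interface="default">
    <Filter name="cancer_project" value="pancreatic_cancer"/>
    <Filter name="gene_biotype" value="protein_coding"/>
    <Attribute name="gene_symbol"/>
    <Attribute name="affected_donor_count"/>
  </Dataset>
</Query>"""


def fetch_affected_genes():
    # martservice expects the XML query as a form-encoded "query" parameter.
    data = urllib.parse.urlencode({"query": QUERY_XML}).encode("utf-8")
    with urllib.request.urlopen(MART_URL, data=data) as response:
        return response.read().decode("utf-8")


if __name__ == "__main__":
    # Prints a tab-separated table: one row per gene with its donor count.
    print(fetch_affected_genes())
```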

There is a deluge of data, even a deluge of analysis, and only a narrow bottleneck of review.

I’m not sure WebMedCentral or similar publications, sites or wikis are the answer, but there needs to be one.

*Disclosure: Mr. Johnson has written for us on this blog before, and we know Heather Trumbower; that is how I knew of the article. And if you have a chance, go review the article :D.


Peer review again…

Just a quick comment gathered from a link Mary showed me. I have had my run-ins with reviewers of papers and grants. Some reviewers definitely have an agenda and, reviewers being human, all reviews carry subjective biases. I could tell you stories. But for the most part, even given a few huge frustrations, the review process has made my research and papers better and tighter science.

Peer review is not without its problems; there are entire blogs (like Peer-to-Peer) and communities discussing them. And of course, a lot of substandard research is published, as possibly evidenced by the recent discussion of the arsenic-eating bacteria paper. I’ve read my quota of really bad research and spurious conclusions in peer-reviewed journals. But again, it’s the sum total of the peer review system (and the subsequent discussion, further research and rebuttal publications) that has produced excellent science and advanced our understanding of biology to date.

Nature Precedings is not an alternative to peer review; it’s a place to put research before peer-reviewed publication to invite discussion, spur further research and claim priority. But I’ve seen it pointed to as part of an alternative. Yet it’s papers like this (and two others by the same author) that make me realize that peer review is a necessary purgatory. I won’t spend the time eviscerating the issues with this research; they are legion and it’s not worth my time. I can imagine the casual reader of Nature Precedings might come across it, see all the biolingo and think it’s legitimate research, but really that doesn’t concern me. What does concern me is that the author of these articles uses them to lend legitimacy to his main thesis: “genome data proves false the theory of evolution.” He does this in a press release (http: //www.prweb.com/releases/theory/genome/prweb4896744.htm; I won’t link it so as not to give it any more web legitimacy, but you can take the space out if you want to see it) where he links to all three “publications”:

Using modern genomics, Dr. Senapathy and his team’s work, showed how the abundance and diversity of life on earth originated directly in the prebiotic environment. They have presented the results in three scientific publications in Nature Precedings: publication 1, publication 2, publication 3.

Research shows that modern genome data completely uproots the evolution model.

He uses the well-deserved respect for the “Nature” brand, and the sleight of hand of calling them “publications” (when all they are is preprints with no peer review of the science), to lend legitimacy to a counterfeit conclusion.

With all the ‘woo’ in this world, I suspect that peer review, or some other rigorous solution (which I haven’t yet seen), is more necessary than ever to move science forward.

Edit by Mary: I was just watching some journalists discover this story on MuckRack (http://muckrack.com/sci) and here’s what they said:
