This week’s Tip of the Week is a bit different. The database resource that’s the focus of this piece doesn’t exist yet. Parts of it do, but there’s a ways to go before we actually have a Centralized Model Organism Database (CMOD).
The ideas that Andrew Su offers for CMOD in this talk are ones we need to start moving toward. There has to be a way to capture more of the annotation information that scientists have, and that others need, for all of the sequencing projects flowing in daily at this point.
Using the current GMOD (Generic Model Organism Database) infrastructure and the large community of users of resources like the numerous GBrowse installations already out there, we have access to a lot of organism-specific, community-based information (even yak, butterflies, and trees among them; will these all continue to be supported individually?). But some of these newer species lack the community size or resources that the big ones have. And the way we are doing things now just doesn't scale.
I know there have been multiple attempts to capture the Wikipedia-style model of community curation, with varying success. Personally, I still want a group of professional curators involved, but if we can supplement their work with additional information from the wider community, that would be great. And if the professionals can help seed and maintain information with new tools and strategies, it would help encourage volunteer curation too. So in this talk you'll hear more about these issues and how Su's collaborators have approached them so far, along with the reasons for the directions they chose and their experiences with Gene Wiki curation.
The video misses a bit of the intro and the questions at the end, but there's still plenty to chew on. You can follow along with the slide deck too (I've put it in below, but you can also go directly there).
The missing piece, though, is something highlighted in the article about biocuration that Andrew cited in the talk:
To date, not much of the research community is rolling up its sleeves to annotate. What will be the tipping point? The main limitation in community annotation is the perceived lack of incentive.
I think some of the altmetrics strategies could help address this part of the problem, but I haven't yet seen a real answer to this barrier.