In the latest update of Ensembl, the developers added the ability to save configurations. This allows you to set your track views and analysis to a specific configuration and load that configuration at a later time. The blog post linked previously (or here) explains the steps for creating your own configurations that you can save and return to. In the future they will be adding the ability to share your configurations with colleagues and other researchers.
Here at OpenHelix we think a lot about the differences between nominally similar software that will accomplish some given task. For example, in our workshops we are often asked about the differences between genome browsers. Although UCSC sponsors our workshops and training materials on their browser, we know they aren’t the only genome browser out there and we can talk about them all–in fact, that’s one of the coolest things about being separate from UCSC or a specific software tool provider/grant–we can talk about everyone! And our answer is usually something like this:
The basic foundation of the “official reference sequence” is usually the same in all the main browsers. However, the way they choose to organize the display, the tools for showing/hiding annotation data, and the custom query and display options vary. But they generally all have some mechanism for this. For me, the choice usually comes down to what data I need to look at–and how a given software tool shows me that and lets me interact with it.
I know that’s largely an end-user perspective, but that’s who is attending our workshops. I can remember talking to one guy at our conference booth who only wanted to use a genome browser with the reference sequence display organized vertically. I gave him Map Viewer. Some people need a specific species–and no matter how good the software is, if your research species isn’t in there, it just doesn’t matter…. I’ve seen super-users on Twitter complain about the look of the background at one browser or another. That doesn’t have much bearing on my choice–but I do have to say I really hate “hidden” menus and features you have to hover and dig to find, in general. What you don’t see is just impossible to know as an end-user.
But quite frankly when I’m looking for some details in a given region for a research use, I often explore all the browsers I know because of their differences in display and available data to show–to make sure I’m not missing anything. It doesn’t take that long to use them all (if you know your way around, and I think I do…).
But one group has tried to quantify the differences between software tools in a standardized way with specific metrics. A group from INRA has collected and assessed various characteristics of genome browsers, and has developed a database where you can look at what they have curated. It’s called CompaGB.
At this time you can assess the features under one of three profiles: biologist, computational biologist, or computer scientist. In this tip of the week I explore the CompaGB interface from an end-user biologist perspective. I’ll choose a couple of browsers to compare, and we can look at the type of things that the CompaGB team scores to give you a sense of what you can find. Developers will see there are different metrics for their profile, and should go back and explore those as well.
In their paper they describe their inspiration for this project–which is QSOS. The Qualification and Selection of Open Source Software project provides a model and framework to standardize descriptions of available software features. The QSOS framework is illustrated in this graphic on their Welcome page:
In short, they have 4 steps: defining frames of reference appropriate for the software tool; assessing the features; qualifying the features with a weighting mechanism; and selecting the appropriate tool.
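To make the weighting step concrete, here is a minimal sketch of how a QSOS-style comparison works in practice. Everything here is illustrative: the browser names, feature scores, and profile weights are made up for the example, not taken from CompaGB’s actual curated data.

```python
# Illustrative QSOS-style weighted comparison (all numbers are hypothetical).
# QSOS scores features on a 0-2 scale: 0 = poor, 1 = limited, 2 = full.
scores = {
    "BrowserA": {"technical": 2, "data_content": 1, "gui": 2, "editing": 0},
    "BrowserB": {"technical": 1, "data_content": 2, "gui": 1, "editing": 2},
}

# A "biologist" profile might weight data content and GUI more heavily
# than technical features; a "computer scientist" profile would differ.
weights = {"technical": 0.5, "data_content": 2.0, "gui": 1.5, "editing": 1.0}

def weighted_total(feature_scores, weights):
    """Combine raw feature scores with profile weights into one number."""
    return sum(weights[feature] * score
               for feature, score in feature_scores.items())

totals = {name: weighted_total(s, weights) for name, s in scores.items()}
best = max(totals, key=totals.get)

print(totals)  # {'BrowserA': 6.0, 'BrowserB': 8.0}
print(best)    # BrowserB
```

The point of the weighting step is exactly this: the same raw scores can favor different tools once a profile’s priorities are applied, which is why CompaGB offers separate biologist and computer-scientist views.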
You can easily see how the CompaGB team integrated these ideas in their database of genome browser comparisons. They let you choose criteria you are interested in, and offer a radar plot display as well as a tabular representation of the scores so you can consider the overall view or the details.
There are scores for “full, limited/medium, and poor” but not a lot of detail on that. They assessed the tools in 4 sections: (1) technical features, (2) data content, (3) GUI, and (4) annotation editing and creation. There is apparently no swimsuit competition. Alas.
The paper says that 4 different evaluators examined the tools (at this time, 7 different genome browsers: MuGeN, GBrowse, UCSC, Ensembl, Artemis, JBrowse, Dalliance). They have version numbers–for example, you can compare the 2 widely used GBrowse versions right now. How often these will get re-evaluated I don’t know. And how to compare different installations of GBrowse at different sites is not really clear to me–they can vary a lot by what the project team wants and needs to implement.
One evaluator did each tool in most cases. And reportedly the results were sent to the database providers for checking. I have no idea what was sent to UCSC on the training issue… [*cough* I have issues with the UCSC training score details, for example...Yeah, we do workshops and so does UCSC. Lots of workshops around the world, we have slides and exercises...I'll show you in the tip where I saw it.] They do encourage users to comment or offer suggestions on their web site if you have supplementary information–I may want to add some details later ;) It also appears possible to create new items and curate them, but I haven’t tried this. They also say they are re-vamping the evaluation process going forward to simplify it.
But…this statement in their paper surprised me:
The UCSC browser natively displays a broad range of human annotations, including cross-species comparisons. UCSC browser’s underlying strategy focuses upon centralizing data on UCSC servers and, as far as we know, no external lab has installed it locally for the purpose of storing and browsing their own data.
Ummm–no. We talk to people all over the place who maintain local installations of UCSC. Quite often in hospital situations, where patient data privacy is a major issue, there are local installs; certain companies have them. But there are others as well–among them a bunch of mirrors around the world. There’s a whole separate mailing list where people discuss their issues with their own installations. But we’ve also seen the UCSC infrastructure used for species that UCSC doesn’t support, such as HIV, malaria, phage browsers, and more. The unusual setup of the UCSC software at the Epigenetics Visualization Hub is another one worth knowing about. And we know the UCSC team consults with groups and helps them to do it. And by the way–we mention in our tutorials and in the workshops that we’ve done around the world that other installations are possible and available. And we know that the materials we provide are used in many countries to do local trainings as well.
So it was an interesting attempt to measure software features, and I understand why they attempted it, but it seems challenging to scale and maintain. And the curation strategy will have to be considered when evaluating the data. These are fixable if the project proceeds beyond this early set of browsers and branches out to other types of open source software. It really is hard to know what’s worth spending your time on, I admit. And that’s why we hope end-users have a look at our training materials to get introduced to a specific site and see if it suits their needs, and they can kick the tires with the exercises.
There are a number of genome browsers out there–we’ve covered that a number of times. And there are always new ones coming along. With the onslaught of sequence data we’re about to get from high-throughput sequencing, more and more research groups, communities, and individuals are going to need to choose a genome browser to use to display their data.
One time I stumbled across the survey results for a group that was choosing a new platform to display their community’s data: MaizeGDB. I wrote about it then because I thought it was interesting, and because I know people are facing this pretty regularly now. We get asked. Since that time they have made their choice, implemented it, and written up their experience. It’s now been published in Database.
It’s a pretty straightforward paper. They describe their needs and their assessment of the resources their community had and used. They surveyed likely users to see what they wanted, and how they felt about the pieces that already existed. One piece they specifically noted–when asked, many users did not say they used Ensembl, but the Ensembl software was the foundation of one of the items they did say they used. MaizeGDB writes:
This result shows that users may not be aware of the underlying browser software that the various web sites use.
Ah, yeah. Here’s another thing this shows: database end users are definitely not thinking about browser software the same way database developers are. And I do not mean end users are stupid. They just do not think about this stuff the way software providers think they do. We keep trying to tell providers this. It’s not always well received.
So anyway, they move on to assess the candidates for their new implementation. They focus on Ensembl, GBrowse, Map Viewer, UCSC Genome Browser, and xGDB. They describe the framework, possibilities, and limitations of each for their purposes. I think this is a nice look at the various options that lots of people considering the issue should find useful. They also acknowledge that there are other browsers that have since come along, or may yet come along, that could be considered, but at the time these were the focus.
They go on to describe their implementation experience. They seem pleased with it. And they highlight one of their favorite pieces, a Locus Lookup tool, that they have added as well. It sounds like it’s serving their community really nicely.
This is a highly useful paper for people in the market for a genome browser. It’s not for everyone, for sure. Well, at least not yet. But your day is coming. You’ll need a browser eventually….
References: Sen, T., Harper, L., Schaeffer, M., Andorf, C., Seigfried, T., Campbell, D., & Lawrence, C. (2010). Choosing a genome browser for a Model Organism Database: surveying the Maize community. Database, 2010. DOI: 10.1093/database/baq007
Andorf, C., Lawrence, C., Harper, L., Schaeffer, M., Campbell, D., & Sen, T. (2010). The Locus Lookup tool at MaizeGDB: identification of genomic regions in maize by integrating sequence information with physical and genetic maps. Bioinformatics, 26 (3), 434-436. DOI: 10.1093/bioinformatics/btp556
EDIT: added links to a couple of older blog posts, should have had them in before….