Tip of the week: JBrowse, a game changer?

In most software and database development, the changes that come along all the time seem to be tweaks and polishes on existing strategies. Every so often, though, there's a big shift in strategy or mechanism. This week the JBrowse paper I read made me realize that such a shift is now firmly underway. Today's tip of the week will introduce JBrowse, and here I'll describe some of the reasons it is a game changer.

JBrowse is a JavaScript-based genome browser that has been under development for some time, but the paper in Genome Research this month marked its real debut. JBrowse is different from most of the browsers you are used to because it divides up the computational work differently. With your current browsers, most of the work is done on the resource provider's servers (generally a decent-sized server farm in a room somewhere). You ask for something, the server builds the pages, and they get delivered to your computer for you to look at. JBrowse turns this around. It still has to fetch the data from the provider, but it leaves most of the work of building the pages and images to your computer. Because of this strategy, it can draw things really fast, you can move around a genomic region much more easily, and you have a really smooth experience. (I should note that the JBrowse team acknowledges this isn't the first use of this strategy: XMap and Genome Projector are already out there piloting it in different ways, and a few other tools have developed related methods, such as new versions of GBrowse and the NCBI Sequence Viewer.)
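To make the division of labor concrete, here is a minimal sketch of the client-side idea: the server hands over raw feature data as JSON once, and your own browser computes where each glyph lands on screen for whatever region is in view. The data, function names, and coordinates here are illustrative assumptions for the sketch, not JBrowse's actual API.

```javascript
// Feature data as the server might deliver it: base-pair coordinates only.
// (Illustrative records, not real annotations.)
const features = [
  { name: "geneA", start: 1000, end: 1800 },
  { name: "geneB", start: 2500, end: 4000 },
];

// Map base-pair intervals onto screen pixels for the current viewport.
// Panning or zooming only changes the viewport; no new server request is
// needed until the user scrolls past the data already loaded.
function layout(features, viewport, canvasWidth) {
  const bpPerPixel = (viewport.end - viewport.start) / canvasWidth;
  return features
    .filter(f => f.end > viewport.start && f.start < viewport.end)
    .map(f => ({
      name: f.name,
      x: Math.round((f.start - viewport.start) / bpPerPixel),
      width: Math.max(1, Math.round((f.end - f.start) / bpPerPixel)),
    }));
}

// View a 5 kb region on a 1000-pixel-wide canvas (5 bp per pixel):
const glyphs = layout(features, { start: 0, end: 5000 }, 1000);
console.log(glyphs);
// geneA draws from pixel 200 with width 160; geneB from pixel 500 with width 300
```

Because only the viewport changes as you drag, this kind of layout runs entirely in the browser, which is what makes the scrolling feel so smooth.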

JBrowse is a really nice piece of software. It is very easy to move around. It is simple to drag in the tracks you want to see, and drag them back out again to remove them. There's also a corresponding wiki editing piece that could make wiki editing for custom annotations really easy. It has many appealing features.

This shifting of the overhead is good news and bad news, I think. The good news is that you may really like interacting with genome maps the way you do with Google Maps now. There are certainly advantages if you are looking around at chromosomal regions in bulk, or zooming in to a single gene and wanting to scrutinize it without reloading page after page. Reduced burdens on the provider's servers will mean more people get quick responses, and save resources there. Also, we are moving to bigger and bigger data: more detailed data points in the genome, more whole individual genomes, more species to compare. This "big data" is going to put more burden on the keepers of the data to serve it up for everyone, so this strategy could help with that problem.

The bad news is that your computer will have to do more of the work. For many of us, that won't be a huge crisis: if you have a current computer, you probably have sufficient processing power and up-to-date software to run what you need. But not everyone has current equipment. And as someone with extensive experience with various hardware and software configurations in training rooms, I think this may be a problem. It won't be insurmountable, of course, but it will be an issue.

Here's the thing: we see a lot of dated computers in training rooms. We also see a lot that are locked down against updates, or wiped clean of their settings each night. Some corporate systems are standardized on software that could be called antique. We have seen some thin clients that are very lightweight (physically and otherwise), and, increasingly, netbooks. Sometimes people bring their own computers, which means extensive variation in browsers and operating systems. Ensuring that everyone has sufficient processing power and the correct software (or even the capability to run the correct software) is not a new problem. It has always been an issue in software testing to make sure the major browsers can handle a provider's software (I know, I've been on the testing teams). But when the provider had more control over this it was somewhat easier, and things stayed stable for longer periods.

Also, since the development cycles of various projects vary widely, if one group builds its browser against a certain JavaScript and browser combination that works, and another species project you need is four versions behind, what do you do? And someone else is two versions off from that? I know some of this isn't entirely new; it exists today across various browsers and platforms. But I think this amplifies the issue as more small groups develop their own tools. There's no way they can anticipate every possible combination of OS, browser, plugin, and supporting software versions. That affects the user support that has to be in place, and user support is not always something that gets high priority from funding agencies; sometimes developers don't have the bandwidth for it. Making things even more variable by end-user platform could be rough.
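One partial mitigation for this combinatorial problem is defensive feature detection: rather than assuming a particular browser version, a client-side tool can probe for the specific capabilities it needs and warn the user up front. The sketch below is a hypothetical illustration of the pattern; the capability names are assumptions for the example, not anything a real browser project specifies.

```javascript
// Hypothetical feature-detection sketch: given a description of the
// environment, return the list of required capabilities that are missing,
// so the tool can warn the user instead of failing silently.
function missingCapabilities(env) {
  const required = ["canvas", "json", "xhr"]; // illustrative capability names
  return required.filter(cap => !env[cap]);
}

// A current browser environment (everything present):
console.log(missingCapabilities({ canvas: true, json: true, xhr: true }));
// []

// An older, locked-down training-room machine:
console.log(missingCapabilities({ canvas: false, json: true, xhr: true }));
// ["canvas"]
```

Checks like this don't solve the version-skew problem, but they at least turn a mysterious blank page into an actionable message, which lightens the user-support load.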

None of these problems are insurmountable, I know. And they are no reason not to move forward on the idea of shifting the processing to the user. But it has downstream implications that we have to be thinking about.

But go over and try the JBrowse sample setup to get a sense of what this means. It is really easy to use, and browsing is really nice. Start thinking about how this affects the way we do this in the future.

Visit and use JBrowse:

Publication: Skinner, M., Uzilov, A., Stein, L., Mungall, C., & Holmes, I. (2009). JBrowse: A next-generation genome browser. Genome Research, 19(9), 1630–1638. DOI: 10.1101/gr.094607.109
