The terrific folks at NCBI have been increasing their outreach with a series of webinars recently. I talked about one of them not too long ago, and I mentioned that when I found the whole webinar I’d highlight that. This recording is now available, and if you are interested in using these medical genetics resources, you should check this out.
I was reminded of this webinar by a detailed post over at the NCBI Insights blog: NCBI’s 3 Newest Medical Genetics Resources: GTR, MedGen & ClinVar. There’s no reason for me to repeat all of that–I’ll conserve the electrons and direct you over there for more details about the features of these various tools. There is a lot of information in these resources, and the webinar touches on these features and also describes the relationships and differences among them.
Acland A., R. Agarwala, T. Barrett, J. Beck, D. A. Benson, C. Bollin, E. Bolton, S. H. Bryant, K. Canese, D. M. Church & K. Clark (2013). Database resources of the National Center for Biotechnology Information, Nucleic Acids Research, 42 (D1) D7-D17. DOI: http://dx.doi.org/10.1093/nar/gkt1146
Although I had other tips in my queue already, over the last week I’ve seen a lot of talk about the new Ebola virus portal from the UCSC Genome Browser team. And it struck me that researchers who have worked primarily on viral sequences may not be as familiar with the functions of the UCSC tools. So I wanted to do a tip with an overview for new folks who may not have used the browser much before.
There is great urgency in understanding the Ebola virus, examining different isolates, and developing possible interventions to help tackle this killer. Jim Kent was made aware of the CDC’s concerns from his sister–who edits the CDC’s “Morbidity and Mortality Weekly Report”, according to this story:
“It wasn’t until talking to Charlotte that I realized this one was special,” Jim Kent said. “It had broken out of the containments that had worked previously, and really, if a good response wasn’t made, the entire developing world was at risk.”
Jim Kent redirected his team of 30 genome analysts to devote all resources to the Ebola genome browser. They worked through the night for a week to develop a map that other scientists can use to determine where on the virus to target treatments.
So the folks at UCSC have created a portal where you can explore the sequence information and variations among different isolated strains, annotations about the features of the genes and proteins, and they even added a track for the Immune Epitope Database (IEDB, which happened to be a video tip not long ago)–where antibodies have been shown to bind Ebola protein sequences. The portal also provides links to publications and further research related to these efforts.
The reference sequence that forms the framework for the browser is a sample from Sierra Leone: http://www.ncbi.nlm.nih.gov/nuccore/KM034562.1 It was isolated from a patient this past May, and I don’t see a publication attached to it–the submission is from the Broad’s Viral Hemorrhagic Fever Consortium. You can read more details, and thanks to the Pardis Sabeti lab for the sequence, in the announcement email. So, as we keep seeing, we need to have access to the data long before publications become available. The work happens in the databases now; we can’t wait for traditional publishing.
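If you want to pull that reference sequence down yourself, NCBI’s E-utilities make it straightforward. Here’s a minimal Python sketch that just constructs the efetch request URL for the accession (parameter names per the public E-utilities interface; no network call is made here):

```python
from urllib.parse import urlencode

# Build an NCBI E-utilities efetch URL for a nucleotide accession.
# (This only constructs the request; nothing is fetched here.)
def efetch_url(accession, db="nuccore", rettype="fasta"):
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
    params = urlencode({"db": db, "id": accession,
                        "rettype": rettype, "retmode": "text"})
    return f"{base}?{params}"

# The Ebola reference sequence used by the UCSC portal:
url = efetch_url("KM034562.1")
```

Pasting the resulting URL into a browser should return the FASTA record for the isolate.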
In a side note, I also just learned that the NLM (National Library of Medicine) has a disaster response function, and they have a special Ebola section now because of the needs: Ebola Outbreak 2014: Information Resources. And for more of Jim Kent’s thoughts on Ebola, check out the blog that the UCSC folks have just started: 2014 Ebola Epidemic.
The goal of this tip was to provide an overview of the layout and features for folks who might be new to the UCSC software ecosystem. If you already know how to use it, it won’t be new to you. But if you are interested in getting the most out of the UCSC tools, you can also explore our longer training videos. UCSC has sponsored us to provide free online training materials on the existing tools, and this portal is based on the same underlying software. So you can go further, including using the Table Browser for queries beyond just browsing, if you learn the basics that we cover in the longer suites.
Karolchik D., G. P. Barber, J. Casper, H. Clawson, M. S. Cline, M. Diekhans, T. R. Dreszer, P. A. Fujita, L. Guruvadoo, M. Haeussler & R. A. Harte (2013). The UCSC Genome Browser database: 2014 update, Nucleic Acids Research, 42 (D1) D764-D770. DOI: http://dx.doi.org/10.1093/nar/gkt1168
This week’s tip of the week highlights the MEGA tools–MEGA is a collection of tools that perform Molecular Evolutionary Genetics Analysis. MEGA tools are not new–they’ve been developed and supported over many years. In fact, on their landing page you can see the first reference to MEGA was in 1994. How much computing were you doing in 1994, and what kind of computer was that on?
As they describe their tools on their homepage–here’s a summary:
MEGA is an integrated tool for conducting sequence alignment, inferring phylogenetic trees, estimating divergence times, mining online databases, estimating rates of molecular evolution, inferring ancestral sequences, and testing evolutionary hypotheses.
But you can see they’ve progressed regularly and deeply since 1994, continuing to add new features and tools, and the current version is MEGA6. Although we usually focus on web-based interfaces, some tools run as desktop installations instead. So you will have to download and install MEGA to try it out, but the number of things you can do with it makes it worth your time.
The first video illustrates file conversion and preparation–getting your data into the right format for MEGA. I won’t embed that here, but when you are ready to kick the tires yourself you should have a look. I’ll jump right to the second video, which includes a bit more action about the things you can do with MEGA. This covers generating a neighbor-joining tree, and several subsequent options for modifying and saving it.
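For the curious, the criterion behind the neighbor-joining method that MEGA offers can be sketched in a few lines. This toy Python example is not MEGA’s implementation–just an illustration of the Q-matrix step, on distances from a made-up additive tree–showing how the first pair of taxa gets picked for joining:

```python
import itertools

# Toy illustration of the neighbor-joining join criterion (not MEGA's
# implementation). Distances below come from a made-up additive tree.
def nj_pick_pair(dist, taxa):
    """Pick the pair of taxa minimizing the NJ Q criterion."""
    n = len(taxa)
    row_sum = {t: sum(dist[frozenset((t, u))] for u in taxa if u != t)
               for t in taxa}
    def q(i, j):
        return (n - 2) * dist[frozenset((i, j))] - row_sum[i] - row_sum[j]
    return min(itertools.combinations(taxa, 2), key=lambda p: q(*p))

taxa = ["a", "b", "c", "d"]
dist = {frozenset(k): v for k, v in {
    ("a", "b"): 3, ("a", "c"): 5, ("a", "d"): 6,
    ("b", "c"): 6, ("b", "d"): 7, ("c", "d"): 7}.items()}
pair = nj_pick_pair(dist, taxa)  # the closest neighbors get joined first
```

The full algorithm then collapses the chosen pair into a new node, recomputes distances, and repeats–MEGA handles all of that (plus branch lengths) for you.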
But this is just one aspect of what you can do with the MEGA tools. Be sure to explore the range of things you can do. Their documentation contains a section aimed at the “first time user” that can help you to understand various options you have. They also have sample data for you to try out the tools.
References: Tamura K., N. Peterson, G. Stecher, M. Nei & S. Kumar (2011). MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods, Molecular Biology and Evolution, 28 (10) 2731-2739. DOI: http://dx.doi.org/10.1093/molbev/msr121
Tamura K., G. Stecher, D. Peterson, A. Filipski & S. Kumar (2013). MEGA6: Molecular Evolutionary Genetics Analysis Version 6.0, Molecular Biology and Evolution, 30 (12) 2725-2729. DOI: http://dx.doi.org/10.1093/molbev/mst197
The Caleydo team and the tools they develop have been on my short list of favorites for a long time. I’ve been talking about their clever visualizations for years now. My first post on their work was in 2010, with the tip I did on their Caleydo tool that combined gene expression and pathway visualization. They’ve continued to refine their visualizations and enable new data types to be brought into the analysis, and earlier this year we featured Entourage, enRoute, LineUp, and also StratomeX. They have lots of options for wrangling “big data”. But recently they published a paper on StratomeX and a nice video overview, so I wanted to bring it to your attention again now that the paper is out.
The emphasis in this paper is cancer subtype analysis, using some data from The Cancer Genome Atlas (TCGA). But it’s certainly not limited to cancer analysis–any research area that’s currently flooded with multiple types of data and outcomes could be run through this stratification and visualization software. I find the weighting of the lines and connections among the subsets really effective when thinking about relationships among the data types. That recent schizophrenia work that used this sort of stratification and clustering to suss out the relationships among different sub-types is exactly the kind of analysis that’s going to be really useful (though I don’t know what software they used, because paywall…). And I expect that strategy to become increasingly important for a lot of conditions.
So have a look at this new paper (below), and their well-crafted video with examples.
If you are going to start working with StratomeX, be sure to also see their documentation pages. There are some features and options there that aren’t covered in the intro video and that you’ll want to know about.
The team is a cross-institutional and international bunch: this is a joint project between a lab at Harvard, led by Hanspeter Pfister, Peter Park’s lab at the Center for Biomedical Informatics at Harvard Medical School, and collaborators at Johannes Kepler University in Linz and the Graz University of Technology (both in Austria). And look for upcoming tools from them as well–there’s new stuff over at their site. They keep developing useful items, and I expect to be highlighting those in future Tips of the Week.
Marc Streit, Alexander Lex, Samuel Gratzl, Christian Partl, Dieter Schmalstieg, Hanspeter Pfister, Peter J Park & Nils Gehlenborg (2014). Guided visual exploration of genomic stratifications in cancer, Nature Methods, 11 (9) 884-885. DOI: http://dx.doi.org/10.1038/nmeth.3088
Yes, I know some people suffer from YAGS-malaise (Yet Another Genome Syndrome), but I don’t. I continue to be psyched for every genome I hear about. I even liked the salmon lice one. And Yaks. The crowd-funded Puerto Rican parrot project was so very neat. These genomes may not matter much for your everyday life, and may not exactly be celebrities among species. But we’ll learn something new and interesting from every one of them. It’s also very cool that it’s bringing new researchers, trainees, and citizens into the field.
The good news is there is opportunity still for many, many more species. And decreasing costs will make it possible for more research teams to do locally-important species. But–it would be a shame if we wasted resources by doing 30 versions of something cute, rather than tackling new problems. A central registry for sequencing projects may help to manage this. The Genomes OnLine Database (GOLD) has been cataloging projects for years, and it would be great if folks would register their research there.
I was reminded of this by a tweet I saw come through my #bioinformatics column. This is what I saw flying by:
As much as I enjoy Twitter and think that science nerds are pretty good at it, it’s hard to know if the right people will see a tweet. Anyway, I suggested that this researcher check out GOLD and BioProject to see if anyone had registered anything.
I realized that although we have talked about GOLD in the past, it hadn’t been highlighted in our Tips of the Week before. So here I will include a video from a talk about GOLD. Ioanna Pagani gives an overview of GOLD, the foundations and the purpose. And then she goes on to demonstrate how to enter project metadata into their registry (~12min). Watching this will help you to understand the usefulness of GOLD, and what you can expect to find there. She describes both single-species project entry, and another option for entering metagenome data projects (~25min).
In the News at GOLD, they mention that their update this summer resulted in some changes to the interface–so the specifics might be a bit different from the video. But the basic structural features are still going to be useful to understand the goals and strategies. It may also help to convey the importance of appropriate metadata for genome projects. If you are involved with these projects, checking out the team’s paper on the structure and use of metadata is certainly worthwhile.
With all this sequencing capacity, people are going to start looking for new organisms to cover. Of course, some people will want to look at another strain, isolate, or geographical sample for good reasons–but keeping a lot of unnecessary duplication from happening would be nice too. And it would be great if submitters also conformed to the standards for genome metadata–the ‘Minimum Information about a Genome Sequence’ (MIGS, now in the broader collection of standard checklists in the MIxS project) standards being developed by the Genomic Standards Consortium. (You can see how GOLD conformed to this in their other paper below.) Let’s spread the resources around to get new knowledge when we can. I would like to see a more formal mechanism that connects people who have some genome of interest with researchers who might have the bandwidth to do it, as well. Social sequencing?
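To make the metadata point concrete, here’s a small Python sketch of the kind of pre-submission completeness check a registry could run. The required field names below are illustrative stand-ins–consult the actual MIGS/MIxS checklists from the Genomic Standards Consortium for the real terms:

```python
# Sketch of a pre-submission completeness check for genome-project
# metadata. NOTE: these field names are illustrative stand-ins, not
# the official MIGS/MIxS checklist terms.
REQUIRED = {"project_name", "geo_loc_name", "collection_date",
            "env_biome", "seq_meth"}

def missing_fields(record):
    """Return the required fields a metadata record still lacks."""
    return sorted(REQUIRED - record.keys())

record = {"project_name": "Example fish genome",
          "collection_date": "2014-08-01",
          "seq_meth": "Illumina HiSeq"}
gaps = missing_fields(record)  # fields still needed before registering
```

A registry that rejects (or at least flags) incomplete records like this one is a big part of what makes the metadata usable downstream.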
References: Pagani I., J. Jansson, I.-M. A. Chen, T. Smirnova, B. Nosrat, V. M. Markowitz & N. C. Kyrpides (2011). The Genomes OnLine Database (GOLD) v.4: status of genomic and metagenomic projects and their associated metadata, Nucleic Acids Research, 40 (D1) D571-D579. DOI: http://dx.doi.org/10.1093/nar/gkr1100
Liolios K., Lynette Hirschman, Ioanna Pagani, Bahador Nosrat, Peter Sterk, Owen White, Philippe Rocca-Serra, Susanna-Assunta Sansone, Chris Taylor & Nikos C. Kyrpides (2012). The Metadata Coverage Index (MCI): A standardized metric for quantifying database metadata richness, Standards in Genomic Sciences, 6 (3) 444-453. DOI: http://dx.doi.org/10.4056/sigs.2675953
Field D., Tanya Gray, Norman Morrison, Jeremy Selengut, Peter Sterk, Tatiana Tatusova, Nicholas Thomson, Michael J Allen, Samuel V Angiuoli & Michael Ashburner (2008). The minimum information about a genome sequence (MIGS) specification, Nature Biotechnology, 26 (5) 541-547. DOI: http://dx.doi.org/10.1038/nbt1360
Breaking into the zeitgeist recently, Docker popped into my sphere from several disparate sources. Seems to me that this is a potential problem-solver for some of the reproducibility and sharing dramas that we have been wrestling with in genomics. Sharing of data sets and versions of analysis software is being tackled in a number of ways. FigShare, Github, and some publishers have been making strides among the genoscenti. We’ve seen virtual machines offered as a way to get access to some data and tool collections*. But Docker offers a lighter-weight way to package and deliver these types of things in a quicker and straightforward manner.
To get a better handle on the utility of Docker, I went looking for some videos, and these are now the video tip of the week. This is different from our usual topics, but because users might find themselves on the receiving end of these containers at some point, it seemed relevant for our readers.
The first one I’ll mention gave me a good overview of the concept. The CTO of Docker, Solomon Hykes, talks at Twitter University about the basis and benefits of their software (Introduction to Docker). He describes Docker as being like the innovation of shipping containers–which doesn’t really sound particularly remarkable to most of us, but in fact the case has been made that they changed the global economy completely. I read that book that Bill Gates recommended last year, The Box, and it was quite astonishing to see how metal boxes changed everything. This brought standardization and efficiencies that were previously unavailable. And those are two things we really need in genomics data and software.
Hykes explains that the problem of shipping stuff–coffee beans, or whatever–had to be solved at each place the goods might end up. This is a good analogy–as explained in the shipping container book. How to handle an item, appropriate infrastructure, local expertise, etc, were real barriers to sharing goods. And this happens with bioinformatics tools and data right now. But with containerization, everyone could agree on the size of the piece, the locks, the label position and contents, and everything standardized on that system. This brought efficiency, automation, and really changed the world economy. As Hykes concisely describes [~8min in]:
“So the goal really is to try and do the same thing for software, right? Because I think it’s embarrassing, personally, that on average, it’ll take more time and energy to get…a collection of software to move from one data center to the next, than it is to ship physical goods from one side of the planet to the other. I think we can do better than that….”
This high-level overview of the concept in less than 10min is really effective. He then takes a question about Docker vs a VM (virtual machine). I think this is the essential take-away: containerizing the necessary items [~18min]:
“…Which means we can now define a new unit of software delivery, that’s more lightweight than a VM [virtual machine], but can ship more than just the application-specific piece…”
After this point there’s a live demo of Docker to cover some of the features. But if you really do want to get started with Docker, I’d recommend a second video from the Docker team. They have a Docker 101 explanation that covers things starting from installation, to poking around, destroying stuff in the container to show how that works, demoing some of the other nuts and bolts, and the ease of sharing a container.
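To give a flavor of what a container recipe looks like, here’s a hypothetical Dockerfile for packaging a small analysis script. The base image, packages, and script name are placeholders–the point is that the whole environment is declared, reproducible, and travels with the analysis:

```dockerfile
# Hypothetical example only: base image, packages, and script name are
# placeholders. The environment is declared once and travels with the
# analysis, instead of being rebuilt by hand on every machine.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python samtools
COPY analyze.py /opt/analyze.py
ENTRYPOINT ["python", "/opt/analyze.py"]
```

Anyone with Docker installed could then build and run the same environment, without the per-machine setup barriers Hykes describes.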
So this is making waves among the genomics folks. This also drifted through my feed:
Fantastic demo from http://t.co/MwCQNKy2nD in the office today. Docker for better collaboration between scientists. More of that please.
Check it out–there seem to be some really nice features of Docker that can impact this field. It doesn’t solve everything–and it shouldn’t be used as an escape mechanism to not put your data into standard formats. And Melissa addresses a number of unmet challenges too. But it does seem that it can be a contributor to reproducibility and access to data issues that are currently hurdles (or, plagues) in this field. Docker is also under active development and they appear to want to make it better. But sharing our stuff: it’s not trivial–there are real consequences to public health from inaccessible data and tools (1). But there are broader applications beyond bioinformatics, of course. And wide appeal and adoption seems to be a good thing for ongoing development and support. More chatter on the larger picture of Docker:
The other day I was joking about how I was 3D-printing a baby sweater–the old way, with yarn and knitting needles. And I also mentioned that I assumed my niece-in-law was 3D-printing the baby separately. I’ve been musing (and reading) about 3D printing a lot lately–sometimes the plastic model part, sometimes the bioprinting of tissues part. So when I came across this new NIH 3D Print Exchange information, it seemed worthy of highlighting.
Although I haven’t had access to a 3D printer setup yet (though I’m planning to take a course soon at the local Artisan Asylum), I’ve been seeing quite a bit of chatter about it. Some folks are designing gel combs (rather than paying ridiculous catalog prices). Some folks print skulls and other bones. There is so much opportunity for a wide range of helpful scientific applications across many fields that an introduction to this topic would be wise for a lot of folks.
So when someone pointed me to the 3D printing initiative at NIH, I was hooked. The public announcement and site launch was in mid-June, according to their blog and press release. I was catching up by reading other items on their site, including some press coverage that provides context for this and other government initiatives on 3D printing. Make Magazine’s piece “The Scramble To Build Thingiverse.gov is On!” notes that the Smithsonian and NASA also have projects underway. But for me, molecules in 3D are what I’m most interested in, so I’ll focus on this NIH version below.
An intro video provides an overview of the kinds of things that will be available on their site. But there’s also a YouTube channel with more.
At the site now you will find a number of ways to get started. In the “Share” navigation area there is already a section for custom lab gear, anatomical items, and biological structures–even some organisms. So if you have models to share, you can load ‘em up. With the “Create” space you can quickly generate some items with a handy quick start feature. Because I’m fascinated with the beautiful structures of hemolysins (have you seen these things?) I picked one out, entered a PDB ID, and within a half hour I was notified that the printable model was available to me–and you can see it here. But you can build your own from scratch as well, of course. There are other tutorials that will help you get some foundations in place.
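As a side note on what “printable” means: physical scale matters when a structure goes to a printer. Here’s a small Python sketch that reads fixed-column PDB-format coordinates and reports the bounding box of a structure–the two ATOM lines are fabricated, just to show the parsing:

```python
# Compute the x/y/z extent of a structure from PDB-format ATOM records
# (fixed-column format: x in columns 31-38, y in 39-46, z in 47-54).
# The two atoms below are fabricated; real files have thousands.
def bounding_box(pdb_text):
    coords = [(float(line[30:38]), float(line[38:46]), float(line[46:54]))
              for line in pdb_text.splitlines()
              if line.startswith(("ATOM", "HETATM"))]
    mins = [min(c[i] for c in coords) for i in range(3)]
    maxs = [max(c[i] for c in coords) for i in range(3)]
    return [hi - lo for lo, hi in zip(mins, maxs)]

sample = (
    "ATOM      1  N   MET A   1      11.000   2.500   0.000  1.00  0.00\n"
    "ATOM      2  CA  MET A   1      12.500   4.500   3.000  1.00  0.00\n"
)
dims = bounding_box(sample)  # extent along x, y, z in angstroms
```

A check like this is the kind of thing a service has to do behind the scenes before it can scale a molecule to a printable model.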
Or you can look around–from the “Discover” page you can browse or search for examples of models people have done. At this time, there are 347 (including the one I just did yesterday). But there will be more. I want to get mine printed up, and then see some other proteins too.
Ok, so it’s not like I made a kidney or something (although we know that day is coming). Being able to think about the 3D printing process, file types, and various options are probably worth noodling on. Getting your feet wet with a little protein structure or organelle might be a good way to get started. Check it out, and start thinking in other dimensions.
Development of the skeleton is a good example of a process that is highly regulated, requires a lot of precision, has conserved and important relationships across species, and is fairly easy to detect when it’s gone awry. I mean–it’s hard to know at a glance if all the neurons in an organism got to the right place at the right time, or if all the liver cells are still in the right place. But skeletal morphology–length, shape, location, abnormalities–can be apparent and amenable to straightforward observations and measurements. Some of these have been collected for decades by fish researchers. This makes them a good model for creating a searchable, stored, phenotype collection.
The team at Phenoscape is trying to wrangle this sort of phenotype information. I completely agree with this statement of the need:
Although the emphasis has been on genomic data (Pennisi, 2011), there is growing recognition that a corresponding sea of phenomic data must also be organized and made computable in relation to genomic data.
They have over half a million phenotype observations cataloged, including observations in thousands of fish taxa. To facilitate this, they created and used an annotation tool suite called Phenex. As they describe it:
Annotation of phenotypic data using ontologies and globally unique taxonomic identifiers will allow biologists to integrate phenotypic data from different organisms and studies, leveraging decades of work in systematics and comparative morphology.
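The Entity-Quality (EQ) model behind this kind of annotation is easy to picture as structured data. Here’s a Python sketch of roughly what one annotation record captures–the ontology IDs and PMID below are placeholders, not verified terms:

```python
# Sketch of an Entity-Quality (EQ) style phenotype annotation record.
# NOTE: the ontology IDs and PMID are placeholders, not real terms.
annotation = {
    "taxon": "Danio rerio",
    "entity": {"label": "dorsal fin", "id": "UBERON:XXXXXXX"},
    "quality": {"label": "absent", "id": "PATO:XXXXXXX"},
    "publication": "PMID:XXXXXXXX",
}

def eq_summary(a):
    """Render an EQ annotation as a readable phrase."""
    return f'{a["taxon"]}: {a["entity"]["label"]} {a["quality"]["label"]}'
```

Because the entity and quality are ontology terms rather than free text, records like this can be compared and aggregated across species and studies–which is the whole point.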
That’s great data to capture to provide important context for all the sequencing data we are now able to obtain. I think this is a nice example of combining important physical observations, mutant studies, and more, with genomics to begin to get at questions about evolutionary relationships among genes and regulatory regions that aren’t obvious only from the sequence data. You may not be personally interested in fish skeletons–but as an informative way to think about structuring these data types across species to make them useful for hypothesis generation–this is a useful example.
Here’s an intro video provided by the Phenoscape team that walks you through a search starting with a gene of interest, showing the kinds of things you can find.
So have a look around Phenoscape to see a way to go from the physical observations of phenotype to gene details, or vice versa.
References: Mabee B.P., Balhoff J.P., Dahdul W.M., Lapp H., Midford P.E., Vision T.J. & Westerfield M. (2012). 500,000 fish phenotypes: The new informatics landscape for evolutionary and developmental biology of the vertebrate skeleton, Journal of Applied Ichthyology, PMID: http://www.ncbi.nlm.nih.gov/pubmed/22736877
Balhoff J.P., Cartik R. Kothari, Hilmar Lapp, John G. Lundberg, Paula Mabee, Peter E. Midford, Monte Westerfield & Todd J. Vision (2010). Phenex: Ontological Annotation of Phenotypic Diversity, PLoS ONE, 5 (5) e10500. DOI: http://dx.doi.org/10.1371/journal.pone.0010500
Much of the meeting was live-streamed, which was really great. You can see the video segments and sometimes the slides are available on the workshop page. One of the great things about this meeting was that there’s so much excitement about what scientists want to do, and all the terrific ideas that are out there. One of my personal favorites was the Human Cell Atlas presented by Aviv Regev. I’d love to work on that. I loved working on the Adult Mouse Anatomical Dictionary and Gene Expression Database at Jax.
But for today’s focus, I’ll turn to a totally different aspect of genomics research that intrigues me–the immune system. As an undergraduate in microbiology and immunology, the fact that microbes and their teeny genomes could wreak havoc on large mammals fascinated me (Ebola–I mean, seriously, it’s not that big). And that the hosts have developed the mix-and-match adaptable response and antibody system to do battle–clever stuff, as long as it doesn’t turn into an autoimmune situation…. But this could also be turned to good use if you want to battle cancer cells with immunotherapies. So when David Haussler’s talk brought that back around–the complexity of immune response genomics, which is not well characterized yet–I connected with that idea as well. And it struck me that I had never featured the Immune Epitope Database before, which Haussler had mentioned in his talk. It was also noted that this is an interesting system because it requires wrangling a hybrid of proteomics and genomics information. And if this is a direction that NHGRI will emphasize, it’s important to know what’s out there, and to think about the ways to go forward.
So here’s Haussler’s talk to set the foundation, but there’s another video about the database I’ll point to below.
In this talk he mentioned NetMHC for peptide binding prediction as well, and ImmPort at NIAID. There was a quick mention of an unfunded prototype UCSC immunobrowser to keep an eye out for. For the most part these resources aren’t new–you can find a number of publications that go back and describe their foundations and development over the years. It seems to be a good solid foundation that, with appropriate support, can keep this important information coming.
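By way of illustration of the kind of input these predictors work over: class I MHC binding tools like NetMHC typically score short peptide windows (often 9-mers) tiled across a protein. A trivial Python sketch, with an arbitrary example sequence:

```python
# Enumerate the 9-mer peptide windows a class I MHC binding predictor
# (like NetMHC) would score. The protein sequence here is arbitrary,
# for illustration only; the scoring itself is not shown.
def peptide_windows(seq, k=9):
    """Return all length-k substrings of seq, in order."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

protein = "MSLLTEVETPIRNEWGCR"   # 18 residues, illustrative
nonamers = peptide_windows(protein)
```

Each window then gets a predicted binding affinity for a given MHC allele–which is where the trained predictors earn their keep.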
To learn more about IEDB, you can access their documentation, which includes a whole list of video tutorials. Here I’ll highlight the intro/overview one–but there are others that offer specific guidance on other tasks. I can’t embed this one, so the link will take you over to the video at their site.
So have a look at the IEDB resources, and think about the future directions of this important aspect of genomics.
Vita R., J. A. Greenbaum, H. Emami, I. Hoof, N. Salimi, R. Damle, A. Sette & B. Peters (2010). The Immune Epitope Database 2.0, Nucleic Acids Research, 38 (Database) D854-D862. DOI: http://dx.doi.org/10.1093/nar/gkp1004
Kim Y., Z. Zhu, D. Tamang, P. Wang, J. Greenbaum, C. Lundegaard, A. Sette, O. Lund, P. E. Bourne & M. Nielsen (2012). Immune epitope database analysis resource, Nucleic Acids Research, 40 (W1) W525-W530. DOI: http://dx.doi.org/10.1093/nar/gks438
Lundegaard C. & M. Nielsen (2008). Accurate approximation method for prediction of class I MHC affinities for peptides of length 8, 10 and 11 using prediction tools trained on 9mers, Bioinformatics, 24 (11) 1397-1398. DOI: http://dx.doi.org/10.1093/bioinformatics/btn128
Bhattacharya S., Linda Gomes, Patrick Dunn, Henry Schaefer, Joan Pontius, Patty Berger, Vince Desborough, Tom Smith, John Campbell & Elizabeth Thomson (2014). ImmPort: disseminating data to the public for the future of immunology, Immunologic Research, 58 (2-3) 234-239. DOI: http://dx.doi.org/10.1007/s12026-014-8516-1
This is the browser I’ve been waiting for. Stop what you are doing right now and look at EpiViz. I’ll wait.
I spend a lot of time looking at visualizations of various types of -omics data, from a number of different sources. I’ve never believed in the “one browser to rule them all” sort of thing–I think it’s important for groups to focus on special areas of data collection, curation, and visualization. Although some parts can be reused and shared, of course, some stuff just should be viewed with certain species or strategies that don’t always end up nicely in a “track” of data that you can slap on some browser.
My dreams of this began in earnest with the Caleydo tools I’ve been talking about for a long time. Years ago I began imagining genome browser data in one panel, pathway maps in a nearby one, TF motifs, an OMIM page loaded up, and other stuff that was all part of my train-of-thought on some issue. The Caleydo team has continued on this path, and their EnRoute and Entourage tools get part of that way too. You can do some of that with the nifty BioGPS layouts. I also love the idea of looking at multiple genomic regions at the same time, in the manner that the Multi-Image Genome viewer (MIG) enables.
So we are getting closer and closer. And this EpiViz tool is an excellent demonstration of how to combine necessary genome track data visualizations and other analysis strategies into one viewer. It also allows other types of data to come in, with the Data-Driven Documents (D3) tools. You should read the paper, try out their software, and have a look at this overview video the EpiViz team has provided to get started.