## Tuesday, April 21, 2015

### R: single plot with two different y-axes

I forgot where I originally found the code to do this, but I recently had to dig it out again to remind myself how to draw two different y-axes on the same plot, each showing the values of a different feature of the data. This is distinct from the typical use case of aesthetic mappings in ggplot2, where I want different lines/points/colors for the same feature across multiple subsets of the data.
For example, I was recently examining enrichment of a particular set of genes using a hypergeometric test while fiddling with other parameters that included more genes in the selection (i.e., in the classic example, the number of balls drawn from some hypothetical urn). I wanted to show the -log10(p-value) on one axis and some other value (e.g., "n") on the same plot, using a second axis on the right side of the plot.
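As a quick aside, an enrichment p-value like this can be computed with `phyper()`. The numbers below are made up purely for illustration and aren't from the analysis described above:

```r
# Hypothetical example: 50 "special" genes in a genome of 20,000;
# we select n = 100 genes and observe k = 5 of the special ones.
k <- 5      # special genes observed in the selection
m <- 50     # special genes in the genome
N <- 20000  # total genes
n <- 100    # genes selected

# P(X >= k): use the upper tail; q = k - 1 because
# lower.tail = FALSE gives P(X > q)
p <- phyper(k - 1, m, N - m, n, lower.tail = FALSE)
-log10(p)
```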
Here’s how to do it. First, generate some data:
```r
set.seed(2015-04-13)

d = data.frame(x = seq(1, 10),
               n = c(0, 0, 1, 2, 3, 4, 4, 5, 6, 6),
               logp = signif(-log10(runif(10)), 2))
```
```
    x n  logp
1   1 0 1.400
2   2 0 0.590
3   3 1 1.200
4   4 2 1.500
5   5 3 0.028
6   6 4 0.380
7   7 4 2.500
8   8 5 0.067
9   9 6 0.041
10 10 6 0.360
```
The strategy here is to first draw one of the plots, then draw another plot on top of the first one, and manually add in an axis. So let’s draw the first plot, but leave some room on the right hand side to draw an axis later on. I’m drawing a red line plot showing the p-value as it changes over values of x.
```r
par(mar = c(5, 5, 2, 5))
with(d, plot(x, logp, type = "l", col = "red3",
             ylab = expression(-log[10](italic(p))),
             ylim = c(0, 3)))
```
Now, draw the second plot on top of the first using the `par(new=T)` call. Draw the plot without axes or labels, then add an axis on the right side (`axis(side = 4)`) and label it in the margin (`mtext(...)`). Finally, add a legend.
```r
par(new = T)
with(d, plot(x, n, pch = 16, axes = F, xlab = NA, ylab = NA, cex = 1.2))
axis(side = 4)
mtext(side = 4, line = 3, 'Number genes selected')
legend("topleft",
       legend = c(expression(-log[10](italic(p))), "N genes"),
       lty = c(1, 0), pch = c(NA, 16), col = c("red3", "black"))
```

## Friday, April 10, 2015

### Translational Bioinformatics Year In Review

Per tradition, Russ Altman gave his "Translational Bioinformatics: The Year in Review" presentation at the close of the AMIA Joint Summit on Translational Bioinformatics in San Francisco on March 26th. This year, papers came from six key areas (plus a final Odds and Ends category). His full slide deck is available here.

I always enjoy this talk because it routinely points me to new collections of data and new software tools that are useful for a variety of analyses; as such, I thought I would highlight these resources from his talk this year.

GRASP: analysis of genotype-phenotype results from 1390 genome-wide association studies and corresponding open access database
Some of you may have accessed the Johnson and O'Donnell catalog of GWAS results published in 2009.  This data set was a more extensive collection of GWAS findings than the popular NHGRI GWAS catalog, as it did not impose a genome-wide significance threshold for reported associations.  The GRASP database is a similar effort, reporting numerous attributes of each study.
A zip archive of the full data set (a flat file) is available here.

Effective diagnosis of genetic disease by computational phenotype analysis of the disease associated genome
This paper tackles the enormously complex task of diagnosing rare genetic diseases using a combination of genetic variants (from a VCF file), a list of phenotype characteristics (fed from the Human Phenotype Ontology), and a few other aspects of the disease.
The online tool called PhenIX is available here.

A network based method for analysis of lncRNA disease associations and prediction of lncRNAs implicated in diseases
Here, Yang et al. examine relationships between known long non-coding RNAs and disease using graph propagation.  Their underlying database, however, was generated using PubMed mining along with some manual curation.
Their lncRNA-Disease database is available here.

SNPsea: an algorithm to identify cell types, tissues and pathways affected by risk loci
This tool is a type of SNP set enrichment, designed to specifically look at functional enrichment in the context of specific tissues and cell types.  The tool is a C++ executable, available for download here.
The data sources underlying the SNPsea algorithm are available here.

Human symptoms-disease network
Here, Zhou et al. systematically extract a symptom-disease network by exploiting MeSH annotations.  They compiled a list of 322 symptoms and 4,442 diseases from the MeSH vocabulary and documented their co-occurrence within PubMed.  Using this disease-symptom network, the authors explore the biological underpinnings of certain symptoms by looking at shared genomic elements between diseases with similar symptoms.
The full list of ~130,000 edges in their disease-symptom network is available here.

A circadian gene expression atlas in mammals: implications for biology and medicine
This fascinating paper explores the temporal impact on gene expression traits from 12 mouse organs.  By systematically collecting transcriptome data from these tissues at two hour intervals, the authors construct a temporal atlas of gene expression, and show that 43% of proteins have a circadian expression profile.
The accompanying CircaDB database is available online here.

dRiskKB: a large-scale disease-disease risk relationship knowledge base constructed from biomedical text
The authors of dRiskKB use text mining across MEDLINE citations using a controlled disease vocabulary, in this case the Human Disease Ontology, to generate pairs of diseases that co-occur with specific patterns in abstract text. These pairs are ranked with a scoring algorithm and provide a new resource for disease co-morbidity relationships.
The flat file data driving dRiskKB can be found online here.

A tissue-based map of the human proteome
In this major effort, a group of investigators published the most detailed atlas of human protein expression to date.  The transcriptome has been extensively studied across human tissues, but it remains unclear to what extent transcriptional activity reflects translation into protein, and this atlas begins to address that question.  Most importantly, the data are searchable via a beautiful website.
The underlying data from the Human Protein Atlas is available here.

### R User Group Recap: Heatmaps and Using the caret Package

At our most recent R user group meeting we were delighted to have presentations from Mark Lawson and Steve Hoang, both bioinformaticians at Hemoshear. All of the code used in both demos is in our Meetup’s GitHub repo.

### Making heatmaps in R

Steve started with an overview of making heatmaps in R. Using the iris dataset, Steve demonstrated making heatmaps of the continuous iris data using the `heatmap.2` function from the gplots package, the `aheatmap` function from NMF, and the hard way using ggplot2. The “best in class” method used `aheatmap` to draw an annotated heatmap plotting z-scores of columns and annotated rows instead of raw values, using the Pearson correlation instead of Euclidean distance as the distance metric.
```r
library(dplyr)
library(NMF)
library(RColorBrewer)

# Prep iris data for plotting
iris2 = iris
rownames(iris2) = make.names(iris2$Species, unique = T)
iris2 = iris2 %>% select(-Species) %>% as.matrix()

aheatmap(iris2, color = "-RdBu:50", scale = "col", breaks = 0,
         annRow = iris["Species"], annColors = "Set2",
         distfun = "pearson", treeheight = c(200, 50),
         fontsize = 13, cexCol = .7,
         filename = "heatmap.png", width = 8, height = 16)
```
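For comparison, a minimal `heatmap.2` version of the same idea (one of the other approaches Steve covered) might look like the sketch below; the exact options used in the demo may have differed:

```r
library(gplots)
library(RColorBrewer)

# Same data prep as above
iris2 <- as.matrix(iris[, -5])
rownames(iris2) <- make.names(iris$Species, unique = TRUE)

heatmap.2(iris2,
          scale = "column",                              # z-score each column
          col = rev(colorRampPalette(brewer.pal(11, "RdBu"))(50)),
          distfun = function(x) as.dist(1 - cor(t(x))),  # Pearson-based distance
          trace = "none",
          RowSideColors = brewer.pal(3, "Set2")[iris$Species],
          margins = c(8, 8))
```

Note that `heatmap.2` needs a custom `distfun` to use correlation-based distance, whereas `aheatmap` accepts `distfun = "pearson"` directly.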

### Classification and regression using caret

Mark wrapped up with a gentle introduction to the caret package for classification and regression training. This demonstration used the caret package to split data into training and testing sets, and run repeated cross-validation to train random forest and penalized logistic regression models for classifying Fisher’s iris data.
First, get a look at the data with the `featurePlot` function in the caret package:
```r
library(caret)
set.seed(42)
data(iris)
featurePlot(x = iris[, 1:4],
            y = iris$Species,
            plot = "pairs",
            auto.key = list(columns = 3))
```

Next, Mark split the data into training and testing sets and used caret to automate training and testing of both random forest and partial least squares models with repeated 10-fold cross-validation (see the code). Random forest outperforms PLS in this case, and performs fairly well overall:
```
                     setosa versicolor virginica
Sensitivity            1.00       1.00      0.00
Specificity            1.00       0.50      1.00
Pos Pred Value         1.00       0.50       NaN
Neg Pred Value         1.00       1.00      0.67
Prevalence             0.33       0.33      0.33
Detection Rate         0.33       0.33      0.00
Detection Prevalence   0.33       0.67      0.00
Balanced Accuracy      1.00       0.75      0.50
```
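The training step itself lives in the Meetup repo, but a minimal sketch of the workflow described above looks something like this (the split proportion, repeat count, and object names here are my own assumptions, not necessarily what Mark used):

```r
library(caret)
set.seed(42)

# Hold out 20% of the data for testing, stratified by class
inTrain  <- createDataPartition(iris$Species, p = 0.8, list = FALSE)
training <- iris[inTrain, ]
testing  <- iris[-inTrain, ]

# Repeated 10-fold cross-validation
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 5)

# Train random forest and partial least squares classifiers
rf_fit  <- train(Species ~ ., data = training, method = "rf",  trControl = ctrl)
pls_fit <- train(Species ~ ., data = training, method = "pls", trControl = ctrl)

# Evaluate on the held-out test set
confusionMatrix(predict(rf_fit, testing), testing$Species)
```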
A big thanks to Mark and Steve at Hemoshear for putting this together!