Thursday, December 27, 2012

The Reproducible Research Guilt Trip May Finally Be Paying Off

We might be closer to killing off the "Just take my word for it - I'm pretty sure I did this right" methods section

There is no shortage of well-reasoned articles filled with persuasive arguments about the need for higher reproducible research standards in the scientific literature. However many good posts extol the virtues of reproducible research, they all boil down to one overarching concept:

write shit down

Why is this even an issue? Biologists in particular seem to be collectively and subconsciously reacting to those awful General Chemistry labs where they had you copy down pages of instructions verbatim into your lab notebook. It should come as no surprise that bioinformatics is ground zero for reproducibility activism.

It is unfortunate that reproducible research is tied up with all sorts of other holier-than-thou practices: open access, open source, open data, literate programming, blogging, functional programming. This all-encompassing evangelism tends to polarize people. While wonky über-programmers like C. Titus Brown lay out fundamental practices for reproducibility, most PIs have been publicly paying lip service to the idea of reproducible research while belying an "I don't wanna eat my vegetables"-type disdain. There are now "consortia" and an "initiative" to compel scientists to actually write their shit down, preferably with door prizes. If you think this has a "posture pals" (video) feel to it, you're not alone. Though the number of pro-RR articles has steadily increased, few seem to take them to heart.

This head-against-wall bashing has been the pattern for many years: better tools are now available (RStudio, knitr, Galaxy, cloud computing, figshare, GitHub, Bitbucket) and there is more rah-rah from the blogosphere, but little enforcement from major journals. Now a recent development has raised my hopes, because it indicates editors have been tightening the screws enough to cause discomfort:

People have actually started to argue against reproducible research!

Hearts and Minds

The founder of the irreplicability movement is Christopher Drummond, author of “Reproducible Research: a Dissenting Opinion”. I will attempt to paraphrase his arguments here:
  1. Richard Feynman never had a Github account.
  2. No one is really going to read your damn code anyway.
  3. Writing shit down == A big drag, man.
  4. The Anil Potti incident proves liars always lie about their Rhodes Scholarships first. We should crack down on curricula vitae, not veritas curat.
Drummond's points are challenged here by statistician and Coursera favorite Roger Peng.

A precursor to the dissenting opinion article is Drummond's "Replicability is not Reproducibility: Nor is it Good Science". A distinction is drawn between replicability and reproducibility: the former, rerunning an author's submitted data and code, is what the RR movement actually demands, while the latter is the more generalizable, scientifically meaningful standard. The idea that we require researchers to submit their data and code - replicable research - is, in Drummond's view, a narrow concept really only useful for ferreting out scientific misconduct.

Black-footed Ferret
I would argue that ignorance of biological sequence analysis, and even more so statistics, is a bigger threat than the outright fraud seen in the Duke case. Most bioinformatics manuscripts feature analysis that is not replicable, which is frightening to consider when GWAS and exome NGS variant papers implicate so many genes in disease, many of them residing along a razor-thin p-value threshold tweaked by several incomprehensible, cherry-picked program parameters.

It is not clear science can efficiently self-correct. So while replicability is not reproducibility, reproducibility is too slow to substitute for replicability. A manuscript that describes real reproducible biological phenomena is essentially conjecture until it can be repeated. The greatest ferret-legger the world has ever known will live in obscurity until they buy a ferret. We have a culture of scientists who refuse to buy a ferret.

Accounting for Tastes

The other dissenting opinion (here) is from UCSC's Kevin Karplus, who replies to Iddo Friedberg's post recommending a panel of white coat mechanics to help biologists get their code ready for publication. Karplus raises two points:
  • It is difficult to make polished software for others to use and that is not the point of research.
  • Replicability is not reproducibility.
Regardless of Friedberg's proposal, railing against "polished software" is simply a straw man argument. Reproducible research in 2013 does not mean robust, extensible, or even well-documented code. Most sequence analysis papers feature very little compiled code; they rely on a series of executable programs glued together with scripting languages, producing intermediate data that is then digested into a report, often written in R.

Getting these sequence analysis workflows to be reproducible will not require a highly skilled platoon of developers. Any willing researcher can submit a shell script or a build script of commands provided they avoid these common pitfalls:
  • Using bioinformatics web applications with no web service capability
  • Using desktop bioinformatics software with no logging capability
  • Relying on proprietary institutional databases, perhaps with stored procedures that prove too unwieldy to dump
  • Using command line programs without a directory-based bash history
  • Using Excel to manually manipulate data
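In that spirit, here is a minimal sketch of what "writing shit down" can look like: a driver script in which every step is a command in a version-controllable file, with provenance logged alongside the results. All file names and the toy FASTA step are hypothetical placeholders, not any particular published pipeline:

```shell
#!/usr/bin/env bash
# run_analysis.sh -- a minimal sketch, not a real pipeline; all file names are made up
set -euo pipefail                      # die loudly on errors and unset variables

OUT=results
mkdir -p "$OUT"

# Write down when and where this ran
{ date; uname -a; } > "$OUT/provenance.txt"

# Stand-in "analysis" step: tabulate base composition of a toy FASTA file.
# In a real workflow this is where aligners, callers, etc. would be invoked,
# each with its parameters recorded here rather than in someone's bash history.
printf '>seq1\nACGTACGT\n' > "$OUT/example.fa"
grep -v '^>' "$OUT/example.fa" | fold -w1 | sort | uniq -c > "$OUT/base_counts.txt"
cat "$OUT/base_counts.txt"
```

Nothing here is clever, and that is the point: rerunning the whole analysis is one `bash run_analysis.sh`, and the exact commands can travel with the paper.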
As our toolset and research community mature, these obstacles will eventually disappear. But there is one scenario which will always be true in some of the more competitive arenas of bioinformatics programming (e.g. structure prediction, de novo assembly):
  • The researcher was perfectly capable of submitting code but decided to retain a competitive advantage.
"Over-CASPed" researchers who are unwilling to divulge their secret sauce should be relegated to appropriate sandboxes.

Replication does not prove a biological truth but we often don't even have the fleeting proof that a scientist did what they said they did.

Which brings us back to those damn chemistry labs. While many public access talk shows find chemists willing to argue against evolution, you would be hard-pressed to find one who would argue against writing shit down.

In other words: Not writing shit down is an even worse idea than creationism.


There, I blogged in 2012.

Monday, February 20, 2012

AGBT: digesting disposable MinIONs in diaspora

Despite my current ranking of 15th in Biostar, myriad page views of my BAS™ post (albeit mostly misdirected perverts), and the positive response to my celebrated campaign against more microarray papers, for some reason I was not "comped" an all-expenses-paid trip as honorary blog journalist to this year's Advances in Genome Biology and Technology, which is kind of like CES for sequencing people, except AGBT is still worth attending. Normally the oversight would not bother me, as bioinformatics itself is not the focus of this meeting, but the flood of #AGBT tweets would not let me forget it, and I was forced to stew and blog in envy.

The first game changing disruptive revolutionary thing from England since 1964

Even from my distant perch it was obvious all the scientific presentations at AGBT were overshadowed by a 17-minute showstopping demo from Clive Brown of Oxford Nanopore, a company that by all appearances would either die, focus on some minor stuff, or bring it. They chose the third option, and in so doing boosted the "Clive index" to unprecedented levels. OxN's recent decision to enlist famed geneticist and serial startup advisor George Church struck me as a huge gamble, as the string of Route 128 flameouts touting his name led me to assume long ago that Church had stowed away some cursed Tiki idol in his luggage like Bobby in that episode of the Brady Bunch. However, after reading up on OxN, I had to admit I was just bitter about Dr. Church's refusal to invest in my chain of Polonator-based paternity testing clinics, Yo Po'lonatizz!™

Two new sequencer platforms were announced:
A MinION. Forget to hit eject before removing this and you will instantly lose $900.
  • The MinION, a $900 "disposable" USB drive which detects minute changes in voltage incurred by the passage of DNA through a robust and delicious lipid bilayer. Finally a device capable of sequencing filthy rabbit blood right on the spot!
  • The GridION system, a scalable rackmounted sequencer, which despite some lack of pricing clarity, should produce an actual $1000 15-minute human genome by 2013.
These exotic machines must be truly game-changing because they made properly expanding Albert Vilella's NGS sequencer spreadsheet quite difficult. The MinION, in particular, could be viewed as a free device with $900 of consumables. This effectively lowers the bar to getting high-throughput sequence in the doctor's office to a 100% unamortized billable transaction. These things also claim fucking unlimited read lengths.

Expression microarrays, SAGE, 454, ABI SOLiD, and now Pacific Biosciences have all left bad tastes of uncertainty and dissatisfaction in the mouths of scientists. It is easy to disappoint people on a grand scale with a $700,000 machine, but $900 worth of chemicals in a USB drive is a different animal, and it seems likely this invention will find a following if it even delivers on a fraction of what it promises.

The GridION - put it in a rack or right on the floor.
Good information on this sequencer-on-a-stick is to be found at Nick Loman's blog, Genomes Unzipped, and official press releases. An excellent discussion of the nanopores themselves can be found at Omically Speaking.

More cringeworthy marketing from the West coast

The Oxford Nanopore machines are so jaw-dropping, in fact, that Jonathan Rothberg is already crying vaporware. His complaints do seem warranted, given the disappointments from past years' announcements and the lack of publicly available sequence from these devices.

Unfortunately Ion Torrent has spent all of its goodwill on an inane and ham-fisted advertising war against Illumina's MiSeq, an intentionally crippled opponent. Seemingly orchestrated by castoffs from Celebrity Apprentice, this assault began with cringe-inducing derivations of Apple commercials and has expanded to include a sort of "feature combover": through some convoluted logic involving consensus, a professional whiteboard artist attempts to convince the public that the homopolymer error rate is actually lower on the Ion Torrent PGM than on the MiSeq. This is the sequencing equivalent of having your mom try to convince you that two apples are better than one Devil Dog, or some such utter nonsense.

My response was predictably measured and cerebral.

This is not the first time I have tweet-confronted Ion Torrent over its odious approach. All this is rather unnecessary because overall, and despite the homopolymer issues, the utility of the PGM has been more or less within expectations. The MiSeq is also exactly within expectations, since it is basically a transparent, measly 1/50th slice of a HiSeq. The same cannot really be said for the RS, whose error rate is clearly far above what was expected at the outset. So if anyone requires an aggressive smokescreen-type marketing campaign (or a new machine) it is Pacific Biosciences.

Wednesday, January 18, 2012

When can we expect the last damn microarray paper?

With bonus R code

It came as a shock to learn from PubMed that almost 900 papers were published with the word "microarray" in their titles last year alone, just 12 shy of the 2010 count. More alarming, many of these papers were not of the innocuous "Microarray study of gene expression in dog scrotal tissue" variety, but dry rehashings along the lines of "Statistical approaches to normalizing microarrays to the reference brightness of Ursa Minor".

It's an ugly truth we must face: people aren't just using microarrays, they're still writing about them.

See for yourself:

#Data Mashups in R, pg 17 -- per-year PubMed counts. The book's query helper is not
#shown here, so this is rewritten with the RISmed package (an assumption, not the original code)
library(RISmed)
years <- 1995:2011
ngs <- sapply(years, function(x) QueryCount(EUtilsSummary(
  '"next generation sequencing"[title] OR "high-throughput sequencing"[title]', mindate=x, maxdate=x)))
mic <- sapply(years, function(x) QueryCount(EUtilsSummary('"microarray"[title]', mindate=x, maxdate=x)))
df <- data.frame(year=years, mic=mic, ngs=ngs)
#papers with "microarray" in title
> df[,c("year","mic")]
   year  mic
1  1995    2
2  1996    4
3  1997    0
4  1998    7
5  1999   28
6  2000  108
7  2001  273
8  2002  553
9  2003  770
10 2004 1032
11 2005 1135
12 2006 1216
13 2007 1107
14 2008 1055
15 2009  981
16 2010  909
17 2011  897
Reading another treatise on microarray normalization in 2012 would be just tragic. Who still reads these? Who still writes these papers? Can we stop them? If not, when can we expect NGS to wipe them off the map?

library(ggplot2); library(reshape2)
mdf <- melt(df, id.vars="year", variable.name="citation")  #long form for plotting
p <- ggplot(mdf, aes(x=year)) + geom_point(aes(y=value, color=citation)) + ylab("papers") +
  stat_smooth(aes(y=value, color=citation), data=subset(mdf, citation=="mic"), method="loess")
p
Here I plot both microarray and next-generation sequencing papers (in title). We see kurtosis is working in our favor, and LOESS seems to agree!
But when will the pain end? Let us extrapolate, wildly.
noNeg <- function(x) ifelse(x < 0, 0, x)  #Return 0 for negative elements
# noNeg(c(3,2,1,0,-1,-2,2))
# [1] 3 2 1 0 0 0 2

firstZero <- function(x) which(noNeg(x) == 0)[1]  #return index of first zero

toZeroNoNeg <- function(x) noNeg(x)[1:firstZero(x)]  #up to the first negative/zero element inclusive
# toZeroNoNeg(c(3,2,1,0,-1,-2,2))
# [1] 3 2 1 0

#let's peer into the future (surface="direct" allows extrapolation beyond the data)
df.lo.mic <- loess(mic ~ year, df, control=loess.control(surface="direct"))
future <- 2012:2040
mic_predict <- predict(df.lo.mic, data.frame(year=future))

#when will it stop?
zero_year <- future[firstZero(mic_predict)]
cat(paste0("LOESS projects ", round(sum(toZeroNoNeg(mic_predict))), " more microarray papers.\n"))
cat(paste0("The last damn microarray paper is projected to be in ", zero_year - 1, ".\n"))

#predict ngs growth the same way
df.lo.ngs <- loess(ngs ~ year, df, control=loess.control(surface="direct"))
ngs_predict <- noNeg(predict(df.lo.ngs, data.frame(year=future)))



#plot observed and projected counts together; mdf2 is the long-form merge of df and the
#projections with a "type" column (observed vs projected) -- its construction was not shown
p2 <- ggplot(mdf2, aes(x=year)) +
  geom_point(aes(y=value, color=citation, shape=type), size=3) +
  ylab("papers")
p2

LOESS projects 2038 more microarray papers.
The last damn microarray paper is projected to be published in 2016.

Yeah, right...

Full R code here: