- A new paper publishes every 20 seconds…but retractions are plentiful
- Publishing (and retracting) at an unprecedented pace
- How to cope with the never-ending flood of information?
- Some mind-numbing stats for nucleic acid-related publications in 2013
A recent series of perspectives in the October 4th, 2013 issue of Science dealt with the accelerating volume of scientific literature due in part to the advent of the web and proliferation of journals—some good, some not so good—and the trend toward free (open-access) journals online, which in 2011 reached the “tipping point” of accounting for 50% of new research. It also stated that a new paper is now published about every 20 seconds, which equates to 1,576,800 papers each year—yikes!
Ok, I know, these include many subjects that don’t directly impact your area of expertise; however, as you’ll see at the end of this posting, even narrowing these subjects down to nucleic acid-related terms involves many more publications in 2013 than you probably would guess.
So, with this year coming to an end, and the next one just around the corner, I began to reflect on the sheer volume of literature that I’ve “read”—mostly scanning titles, occasionally abstracts, but rarely reading in detail—to select and cobble into these postings. Among the latter articles was one of the monthly opinion-pieces by Derek Lowe in Chemistry World entitled The never-ending story—keeping up with the literature is impossible, which strongly resonated with my volume-driven ruminations. He usually communicates his thoughts with clever humor, and this piece was no exception, so it’s best to quote him verbatim rather than have that humor “lost in translation”, so to speak, by paraphrasing:
“You should be keeping up with the literature, you know. And you should be flossing your teeth. And checking the air pressure on your tyres [sic]. There’s probably some insurance or tax paperwork that you’ve been putting off, too, so you might as well get on with all of these at once. You’ll feel better about yourself, honestly.”
“The literature situation isn’t quite that bad, but if you get chemists [actually, any scientist] in a confessional frame of mind, they’ll probably tell you that they really don’t read the current journals as well as they ought to. In fact, I don’t think I’ve ever met anyone who thought that they were keeping up well enough. One of the problems, of course, is that the literature itself is a geyser, a phalanx of firehoses [sic] and it never stops gushing out information. There are more journals than ever before, publishing more papers, and they’re coming from all directions at once.”
“But most of this is junk. If you find that too strong a term, then lower the size of the junk pile and reassign some of it to the ‘doesn’t have to get read’ pile. That’s surely the largest one; it’s where all the reference-data papers go, the ones that no one looks at until their own research bumps into the same compound or topic.”
Derek Lowe goes on to recommend “filtering and prioritizing” as the key to coping, which I also recommend and, in fact, do on a daily basis. While he continues by praising the utility of the now familiar web-feed icon for Really Simple Syndication (RSS) technology, I suggest—in addition or alternatively—taking advantage of equally simple feeds “freely” available via PubMed (paid for by U.S. taxes) and Google (paid for by advertisers, etc.).
The NIH provides a short instructional video (Quick Tour) on how to easily search and register with NCBI to be provided with über-convenient email links to “What’s new for ‘[your search item]’ in PubMed” on a daily, or less frequent, basis as you wish. Alternatively, search in Google Scholar and then click the Create Alert icon.
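For those who prefer scripting over e-mail alerts, the same PubMed searches can be queried programmatically through NCBI’s public E-utilities interface. Here’s a minimal sketch (the search term and look-back window are illustrative placeholders you’d tailor to your own interests):

```python
# Minimal sketch: build an NCBI E-utilities "esearch" query that lists
# PubMed IDs for papers matching a search term published in the last N days.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_alert_url(term: str, days_back: int = 7) -> str:
    """Return an esearch URL for `term`, restricted by publication date
    (datetype=pdat) to the last `days_back` days, with JSON output."""
    params = urlencode({
        "db": "pubmed",
        "term": term,
        "reldate": days_back,
        "datetype": "pdat",
        "retmode": "json",
        "retmax": 100,
    })
    return f"{EUTILS}?{params}"

if __name__ == "__main__":
    # Fetching this URL (e.g. with urllib.request.urlopen) returns JSON whose
    # esearchresult["idlist"] holds the new PMIDs for your search term.
    print(pubmed_alert_url("oligonucleotide therapeutics"))
```

Running such a script weekly (via cron or a task scheduler) gives roughly the same effect as the “What’s new” e-mails, but with the results in a form you can filter and prioritize yourself.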
Keep in mind that PubMed excludes patents, while Google Scholar will include them (if you wish) as well as find various types of websites. Another benefit of Google Scholar is that, for any publication, it provides the number of citations and links thereto.
Dark Side of Open-Access Journals
Some 8,250 open-access scientific journals are now listed in a directory supported by publishers. Unlike traditional science journals, which charge subscription or access fees to those who wish to read their contents, open-access journals make research studies free to the public. In return, study authors pay up-front publishing costs if the paper is accepted for publication.
“From humble and idealistic beginnings a decade ago, open-access scientific journals have mushroomed into a global industry, driven by author publication fees,” says journalist John Bohannon, writing in an October 4th 2013 Science report entitled Who’s Afraid of Peer Review? Basically, a “spoof paper” concocted by Science, which claimed that a cancer drug discovered in a humble lichen was ready for testing in patients, was used to reveal little or no scrutiny at many open-access journals.
“The goal was to create a credible but mundane scientific paper, one with such grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable,” Bohannon says. He submitted versions of his study to 304 open-access journals; of 255 open-access journals that said they would review his study, 157 accepted the fake study for publication. “Acceptance was the norm, not the exception,” he writes. While spoof papers are not new, the Bohannon study represents a first systematic test of review practices, or their absence, across many journals at once.
The spoof study had at least three problems which should have been caught by reviewers:
- The study drug killed cancer cells with increasing doses, even though its data didn’t show any such effect.
- The drug killed cancer cells exposed to medical radiation with increasing effect, even though the study showed the cells weren’t exposed to radiation.
- The study author concluded the paper by promising to start treating people with the drug immediately, without further safety testing.
“If the scientific errors aren’t motivation enough to reject the paper, its apparent advocacy of bypassing clinical trials certainly should be,” Bohannon writes. But in many cases, it appears the study wasn’t peer-reviewed at all by the journals that responded to the spoof submission. Many of the reviews were just requests to format the study for publication.
This raises the question of how many legitimate but fundamentally flawed manuscripts get through the reviewing system for either traditional or open-access journals due to poor reviewing. In the end, scientists realize that publications are “self-correcting” in that erroneous results—and obviously phony data—are not reproducible, and eventually will be revealed as such.
In the next section, you’ll see the impact of these flawed manuscripts—it’s both sad and scary. There are so many publications being retracted that a daily blog tracks these, and provides back-stories ranging from “honest mistakes” to intentional fraud.
Retractions Run Rampant
You might think that retractions are relatively rare events, but there are now so many that a website, Retraction Watch, chronicles them daily. Adam Marcus and Ivan Oransky have provided a great service to the scientific community by starting Retraction Watch, and they deserve kudos for what must be lots of time and thought put into carefully researching and writing this blog. It is well worth visiting, and if you’re so inclined, subscribe to it as I did for daily email postings that oftentimes prompt numerous reader comments. As of November 20th there were 5,751 subscriber-followers, which indicates quite widespread interest. Some retractions involve articles that had been cited hundreds of times—the current “mega-correction” record is 319—while other retractions involve bizarre—if not sad—circumstances.
Why write a blog about retraction? This FAQ at Retraction Watch is answered by referring to the first post in 2010, which in part reads as follows:
“First, science takes justifiable pride in the fact that it is self-correcting — most of the time. Usually, that just means more or better data, not fraud or mistakes that would require a retraction.”
“Second, retractions are not often well-publicized. Sure, there are the high-profile…[b]ut most retractions live in obscurity in Medline and other databases. That means those who funded the retracted research — often taxpayers — aren’t particularly likely to find out about them. Nor are investors always likely to hear about retractions on basic science papers whose findings may have formed the basis for companies into which they pour dollars.”
“Third, they’re often the clues to great stories about fraud or other malfeasance….”
“Finally, we’re interested in whether journals are consistent. How long do they wait before printing a retraction? What requires one? How much of a public announcement, if any, do they make? Does a journal with a low rate of retractions have a better peer review and editing process, or is it just sweeping more mistakes under the rug?”
Another FAQ is “Why are so many of the retractions you cover from the life sciences?”
The answer given is that “[t]here are a number of reasons for this. The two most important are that 1) we’re both medical reporters in our day jobs, so our sources and knowledge base are both deeper in the life sciences and 2) there are more papers published in the life sciences than in other areas.”
Also, is there a reliable database of retractions? The reply is “[n]o. There are ways to search Medline and the Web of Science for retractions, but there’s no single database”.
So what are people saying about Retraction Watch? Here’s a sampling of what’s been posted about this:
- Columbia Journalism Review Regret the Error columnist Craig Silverman calls Retraction Watch “a new blog that should be required reading for anyone interested in scientific journalism or the issue of accuracy.”
- Retraction Watch is a “somewhat addictive” blog, writes radiation oncology journal editor-in-chief Anthony Zeitman.
- A “fascinating and worthwhile blog,” writes Andrew Revkin in Dot Earth, the New York Times’ environmental blog. Revkin has also called Retraction Watch “invaluable.”
Well, enough said about this topic, so let’s switch to some stats I collected.
Mind-numbing stats for nucleic acid-related publications in 2013
As I mentioned at the outset, the fact that a new paper is published about every 20 seconds—or 1,576,800 papers per year—prompted me to look into some stats for nucleic acid-related publications during 2013. Because practical applications of science are reflected in patents, I decided to use SciFinder, which covers academic publications of all sorts as well as patents. The search statistics obtained on October 30th were multiplied by 12 and divided by 10 to arrive at an estimated total publication number for 2013. These totals are listed below in decreasing order:
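For the record, the arithmetic behind both estimates is simple enough to jot down in a few lines:

```python
# One new paper every 20 seconds, annualized:
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000 seconds
papers_per_year = SECONDS_PER_YEAR // 20
print(papers_per_year)                   # 1,576,800 papers per year

# Annualizing a count gathered through October 30th, i.e. roughly
# 10 of 12 months of data, as done for the SciFinder totals:
def annualize(count_through_october: int) -> int:
    """Scale a ~10-month count up to a full-year estimate."""
    return count_through_october * 12 // 10
```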
So, if you think you can keep up with all of the gene expression, or PCR, or any of this literature, FORGET ABOUT IT!! There will be approximately 191,200 nucleic acid-related publications in 2013.
Although it’s admittedly oversimplified, this rank ordering kind of made sense to me inasmuch as gene expression tools and technology have been available longer, utilize PCR, and are less expensive than those for sequencing. And sequencing—especially new high-throughput methods—is used for gene expression analysis rather than genomics, which also employs PCR. Similarly, hybridization has been used for quite some time, giving way to high-throughput microarrays for both gene expression and single nucleotide polymorphism (SNP) analyses, also employing PCR. Finally, my rationalization of why primers and oligonucleotides are low on this list is that, while integral to the other topics, they are often unnamed components of kits or listed as sequence IDs.
Oligonucleotides…let me count the ways:
Anyway, my oligonucleotide background led me to take a “deeper dive” into this segment, and here’s what I found. Through October 30th 2013, there were actually 7,129 items comprising 1,365 patents and 5,773 non-patents, which included 230 reviews.
Analysis by author nationality—my guess from surnames rather than stated nationality—gave the following “top-10” rank order:
(The authors tied for 10th place had 22 publications each.)
Aside from the somewhat surprising—at least to me—prevalence of Korean authors, I was pleased to personally know one of the authors tied for 10th, namely, Jesper Wengel, whose unlocked nucleic acid (UNA)-modified oligonucleotides are offered by TriLink as flexible RNA mimics that enable modulation of affinity and specificity. Also through the amazing “small world of science”, the U.S. author tied for 10th is Eric Swayze at Isis Pharmaceuticals in Carlsbad, California near TriLink.
Analysis by company or organization was likewise surprising—at least to me—in that two Chinese entities ranked highest in this list, followed by Isis, then a Korean entity, and then various U.S. entities I culled out.
- People’s Republic of China
- Chinese Academy of Sciences
- Isis Pharmaceuticals
- Korean Inst. Biosci. & Biotechnol.
- University of California
- Ohio State University
- National Institutes of Health
- Yale University School of Med.
- University of Utah
The middle of this list provides yet another example of the sometimes scarily “small world of science”. During my professional career I first did a postdoc at Ohio State University, then worked at NIH, and eventually was with Life Technologies, before joining TriLink.
Quality Not Quantity
While the aforementioned has dealt with quantity, it’s perhaps more important to ask about quality. How to meaningfully measure quality or impact of scientific publications has been a topic of discussion and debate for a long time, certainly at institutions dealing with promotions and tenure.
One well-known metric is the h-index, which attempts to measure both the productivity and impact of the published work of a scientist or scholar based on the set of the scientist’s most cited papers and the number of citations that they have received in other publications. The index can also be applied to the productivity and impact of a group of scientists, such as a department, university or country, as well as a scholarly journal. The index was suggested by Jorge E. Hirsch, a physicist at UCSD, as a tool for determining theoretical physicists’ relative quality and is sometimes called the Hirsch index or Hirsch number.
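As a concrete illustration, computing an h-index from a list of per-paper citation counts takes only a few lines (a minimal sketch, not tied to any particular citation database):

```python
# The h-index: the largest h such that the author has at least h papers
# each cited at least h times.
def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts."""
    # Sort citation counts in descending order, then find the deepest
    # position where the count still meets or exceeds its 1-based rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# e.g. papers cited [10, 8, 5, 4, 3] times give h = 4: four papers
# have at least 4 citations, but not five papers with at least 5.
```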
While somewhat out of date, this 2011 compilation taken from ecnmag.com is worth considering as a “snapshot in time”.
Good Bye 2013, Hello 2014
While I end this year with a bit of skepticism about the current rapid rate of scientific publications, I enter 2014 with renewed optimism and excitement about the scientific discovery that awaits us in the coming year. It’s been a fun year for me researching and writing this blog, which I sincerely hope you have found to be interesting. If you’ve missed any of this year’s posts, I direct you to the archives at the top of this page to view all of the blog activity. I look forward to another year of research and commentary and I hope you will continue to follow in 2014.
As always, your comments are welcomed.