DNA Day 2014

  • How’s the World Celebrating?
  • My Top 3 “Likes” for 2014 DNA Day
  • Where’s the Love for RNA Day?
DNA Day

Taken from www.ashg.org

The Why, What and Where of DNA Day

DNA Day is celebrated on April 25 and commemorates the day in 1953 when James Watson, Francis Crick, Maurice Wilkins, Rosalind Franklin and colleagues published papers in Nature on the double-helix structure of DNA. It also marks the day in 2003 when the Human Genome Project was declared nearly 100% complete. “The remaining tiny gaps are considered too costly to fill,” according to BBC News, owing to technical issues—hence the now popular term “accessible genome”.

Book of Life: the sequence of the human genome is published in Science and Nature (taken from lifesciencesfoundation.org).

In the USA, DNA Day was first celebrated on April 25, 2003 by proclamation of both the Senate and the House of Representatives. However, that proclamation was for a one-time celebration only; from 2003 onward, annual DNA Day celebrations have been organized by the National Human Genome Research Institute (NHGRI). April 25 has since been declared “International DNA Day” and “World DNA Day” by several groups.

Metal working model used by James Watson and Francis Crick to determine the double-helical structure of the DNA molecule in 1953 (taken from lebbeuswoods.wordpress.com via Bing Images).

Researching DNA Day 2014 for this post revealed the following sampling of major international conferences, country-centric activities, local happenings, and social media—all of which struck me as a remarkable testament to how profoundly the elucidation of the structure and role of DNA has influenced science and society.

The 5th World DNA and Genome Day will be held during April 25-29, 2014 in Dalian, China. Its theme is World’s Dream of Bio-Knowledge Economy. This event aims to promote life science and biotech development, and accelerate international education and scientific information exchange in China. Eleven Nobel Laureates representing various disciplines and countries are showcased in a forum that will undoubtedly provide stimulating discussion. In addition, there will be eight concurrent tracks said to cover “major hot fields” in genetics and genomics.

The NHGRI website for National DNA Day enthusiastically proclaims it as “a unique day when students, teachers and the public can learn more about genetics and genomics!” Featured are an online chatroom (with transcripts back to 2005), various educational webcasts and podcasts, loads of great teaching tools for all levels, and even ambassadors—NHGRI researchers, trainees, and other staff who present current topics in genetics, the work they do, and career options in the field to high school audiences. As a former teacher—and continuing taxpayer—it’s gratifying to see all these educational resources and outreach!

NHGRI Ambassadors for National DNA Day educational outreach (taken from nih.gov via Bing Images)

The American Society of Human Genetics (ASHG) held its 9th Annual DNA Day Essay Contest for students in grades 9-12, in parallel with a similar contest sponsored by the European Society of Human Genetics. Cash prizes to students—and teaching-material grants to their teachers—are awarded for winning essays that address the “2014 Question,” quoted as follows:

Complex traits, such as blood pressure, height, cardiovascular disease, or autism, are the combined result of multiple genes and the environment. For ONE complex human trait of your choosing, identify and explain the contributions of at least one genetic factor AND one environmental factor. How does this interplay lead to a phenotype? Keep in mind that the environment may include nutrition, psychological elements, and other non-genetic factors. If the molecular or biological basis of the interaction between the genetic and environmental factors is known, be sure to discuss it. If not, discuss the gaps in our knowledge of how those factors influence your chosen trait.

I don’t know what life-sciences education you received in grades 9-12, but mine was limited to dissecting a smelly, formaldehyde-laced worm and a starfish, and definitely did not cover genetic factors and phenotypes! Thanks to the “DNA Revolution,” teaching introductory genetics has markedly progressed!

The pervasiveness of the ‘DNA Revolution’ extends to social media as well. National DNA Day has its own Facebook page chock full of all sorts of interesting and informative links. Back in January of this year there were already 13,000 “likes” and 250 “talking about this”. I found the following two items interesting enough to read more about them.

Taken from redorbit.com via Facebook

Something that smells wonderful to you could be offensive to your friend, but why this is so has been a mystery. The answer could lie in your genetic makeup, says a research team from Duke University. Their findings, published in the early online edition of Nature Neuroscience, reveal that a difference at the smallest level of DNA — one amino acid on one gene — determines whether or not you like a smell.

Taken from bbc.co.uk via Facebook

Behavior can be affected by events in previous generations which have been passed on through a form of genetic memory, animal studies suggest. Experiments showed that a traumatic event could affect the DNA in sperm and alter the brains and behavior of subsequent generations. A Nature Neuroscience study shows mice trained to avoid a smell passed their aversion on to their “grandchildren.”

My Top 3 “Likes” for DNA Day this Year

Reflecting on DNA Day 2014 led me to muse over which DNA-related topics were especially noteworthy for this post. It wasn’t easy, but I’ve narrowed it down to my Top 3 “Likes,” à la Facebook jargon. I was going to reveal these in beauty-pageant manner, going from runners-up to the winner, but decided that they are completely different and each is a winner in its own way.

I’m admittedly biased about DNA synthesis, which I’ve done for many years, so I’ll start with the next-generation 1536-well oligonucleotide synthesizer with on-the-fly dispenser reported by a team led by Prof. Ronald Davis at Stanford University’s Genome Technology Center. While this is a “must read” for synthetic oligonucleotide aficionados, the following snippets are the significant “punch lines”—especially regarding throughput, scale, and cost, which collectively drive applications such as the emerging field of synthetic biology.

  • Produces 1536 samples in a single run using a multi-well filtered titer plate, with the potential to synthesize up to 3456 samples per plate, using an open-well system where spent reagents are drained to waste under vacuum.
  • During synthesis, reagents are delivered on-the-fly to each micro-titer well at volumes ≤ 5μl with plate speeds up to 150 mm/s [that’s fast!].
  • Using gas-phase cleavage and deprotection, a full plate of 1536 60-mers may be processed with same-day turnaround, with an average yield per well of 3.5 nmol. Final product at only $0.00277/base [that’s cheap!] is eluted into a low-volume collection plate for immediate use in downstream applications via robotics (see the quick cost arithmetic after this list).
  • Crude oligonucleotide quality is comparable to that of commercial synthesis instrumentation, with an error rate of 1.53/717 bases. Furthermore, mass spectral analysis on strands synthesized up to 80 bases showed high purity with an average coupling efficiency of 99.5%.                     
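
For a feel of what those specs mean per plate, here is the quick cost arithmetic promised above (a back-of-envelope sketch in Python using only the figures quoted, not vendor pricing or yield data).

```python
# Back-of-envelope arithmetic from the bullet points above (a sketch, not official numbers).
wells_per_plate = 1536
oligo_length = 60            # bases per oligo
cost_per_base = 0.00277      # USD, as quoted
yield_per_well_nmol = 3.5    # average crude yield per well

cost_per_oligo = oligo_length * cost_per_base              # ~$0.17
cost_per_plate = wells_per_plate * cost_per_oligo          # ~$255
total_yield_nmol = wells_per_plate * yield_per_well_nmol   # ~5,376 nmol

print(f"~${cost_per_oligo:.2f} per 60-mer, ~${cost_per_plate:.0f} per full plate")
print(f"~{total_yield_nmol:,.0f} nmol of crude oligo per plate")
```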

Synthetic biology is the segue into my next “like,” which is somewhat controversial, namely Do-It-Yourself Biology (DIYbio). It is explained at the DIYbio organization’s website, where various activities are accessed—and information is available about the DIYbio logo & “DIYbio revolution” shown below. The website provides links to global discussions, local groups and events, the DIYbio blog, “ask a biosafety expert your safety question,” and a subscription to a quarterly “postcard update”.

An Institution for the Do-It-Yourself Biologist

DIYbio.org was founded in 2008 with the mission of establishing a vibrant, productive and safe community of DIY biologists. Central to its mission is the “belief that biotechnology and greater public understanding about it has the potential to benefit everyone.”

While this “democratization” of biology is a fascinating “grassroots movement”, a GenomeWeb article reported that some think “it is enabling weekend bioterrorists, disaffected teens, and inventive supervillains to use synthetic biology tools to whip up recipes of synthetic super viruses as easy as grandma’s ragout sauce. It’s only a matter of time until this is the reality, isn’t it?”

Probably not, according to a new report called “Seven Myths and Realities about Do-It-Yourself Biology.” Most of the fears about DIYbio are based on a “miscomprehension about the community’s ability to wield and manipulate life,” says the survey, which was conducted by the Synthetic Biology Project at the Woodrow Wilson International Center for Scholars in Washington DC.

  • The survey of 305 DIYers found that many of them work in multiple spaces, with 46 percent working at a community lab, 35 percent at hackerspaces, 28 percent at academic, corporate, or government labs, and 26 percent at home.
  • This finding goes against the ‘myth’ that most DIYers work anonymously and in solitude. The survey found that only 8 percent of respondents work exclusively at home labs.
  • The project says it is a myth that DIYers are capable of unleashing a deadly epidemic.
  • “The community survey suggests that, far from developing novel pathogens, which would require the skill set of a seasoned virologist and access to pathogens, most DIYers are still learning basic biotechnology,” it says.
  • DIYers also are not averse to oversight or ethical standards, the survey found. So far, they have largely been left out of conversations about government oversight concerning things like dangerous pathogens, and they do lack a formalized checking system. However, the survey found that, in part because most of them work in shared spaces, there are informal checks that exclude the use of animals or pathogens.
  • Lastly, group labs are not necessarily going to become havens for bioterrorists, the report says, as DIY community labs have strict rules about access. At Brooklyn’s Genspace, for example, lab community directors evaluate new members and project safety, and consult with a safety committee.
  • The Synthetic Biology Project report also lays out several policy proposals and recommendations for ways to nurture DIYbio and to keep it safe. Education programs should be fostered, academic and corporate partners should get engaged, benchmarks and risk limits should be set, and governments should fund networks of community labs, the report says.

Last but not least of my top 3 “Likes” is Illumina’s recent announcement of achieving the $1,000 Genome! As noted in my March 31st post, this truly amazing milestone—albeit with some cost caveats—has been realized some 12 years after Craig Venter convened and moderated a diverse panel of experts to discuss The Future of Sequencing: Advancing Towards the $1,000 Genome as a ‘hot topic’ at the 14th International Genome Sequencing and Analysis Conference (GSAC 14) in Boston on Oct 2nd 2002. The pre-conference press release presciently added that “the panel will explore new DNA sequencing technologies that have the potential to change the face of genomics over the next few years.” Indeed it has, and getting there has provided very powerful DNA sequencing tools that have transformed life science and enabled a new era of personalized medicine.

Congratulations to everyone who contributed in some way to make this happen!

This book has received a “4-out-of-5 star” rating at Amazon (taken from Bing Images).

Where’s the Love for RNA Day?

While writing this post, it struck me that RNA should get its day, too! Here’s why:

While DNA encodes the “blueprint” for life, its transcription into messenger RNA (mRNA) literally translates this blueprint into proteins and, ultimately, all living organisms. But mRNA and other requisite RNAs, such as ribosomal RNA (rRNA) and transfer RNA (tRNA), are only part of the story of life. A host of additional classes of RNA, namely short and long non-coding RNAs, are now recognized to exist and to play critical roles. A 2014 review in Nature puts it this way:

The importance of the non-coding transcriptome has become increasingly clear in recent years—comparative genomic analysis has demonstrated a significant difference in genome utilization among species (for example, the protein-coding genome constitutes almost the entire genome of unicellular yeast, but only 2% of mammalian genomes). These observations suggest that the non-coding transcriptome is of crucial importance in determining the greater complexity of higher eukaryotes and in disease pathogenesis. Functionalizing the non-coding space will undoubtedly lead to important insight about basic physiology and disease progression.

DNA and RNA are wonderfully intertwined in the molecular basis of life, so why shouldn’t RNA have its day like DNA? Any suggested dates for RNA Day? Let’s start the celebration!

Your comments about this or anything else in this post are welcomed.

Sequence Every Newborn?

  • Envisaged in the 2002 Challenge for Achieving a $1,000 Genome
  • Are We There Yet? Yes…and No
  • So Where Are We, Actually?

The notion of metaphorically ‘dirt cheap’ genome sequencing is now so prevalent that it seems to have always been available—and to have virtually unlimited utility—as previously touted in a provocative article in Nature Biotechnology rhetorically entitled What would you do if you could sequence everything? The notion of ‘everything’ obviously includes all people, and—as we’re reminded by the now familiar t-shirt statement—‘babies are people too.’ Unlike the ‘big bang’ origin of the universe, cheap-enough-sequencing-for-everything, including all babies (newborns, actually), just didn’t happen spontaneously, so when and how did this come about?

In the Beginning…

According to the Bible, the history of creation began when, “in the beginning God created the heavens and the earth.” The history of cheap-enough-sequencing-for-everything, according to my tongue-in-cheek reckoning, is that “In the beginning Craig Venter convened and moderated a diverse panel of experts to discuss The Future of Sequencing: Advancing Towards the $1,000 Genome.” This was a ‘hot topic’ at the 14th International Genome Sequencing and Analysis Conference (GSAC 14) in Boston on Oct 2nd 2002. The pre-conference press release added that “the panel will explore new DNA sequencing technologies that have the potential to change the face of genomics over the next few years.”

While it’s taken considerably more than a ‘few years’ to achieve the $1,000 genome, continual decrease in sequencing costs has enabled large-scale sequencing initiatives such as the 1000 Genomes Project. Nick Loman’s 2013 blog entitled The biggest genome sequencing projects: the uber-list! outlines the 15 largest sequencing projects to date and takes a look at some of the massive projects that are currently in the works.

According to GenomeWeb on Jan 14th 2014, Illumina launched a new sequencing system that can now produce a human genome for under $1,000, claiming to be the first to achieve this ‘long sought-after goal’—roughly 12 years in the making, by my reckoning. A LinkedIn post on Jan 15th by Brian Maurer, Inside Sales at Illumina, quotes Illumina’s CEO as saying that “one reagent kit to enable 16 genomes per run will cost $12,700, or $800 per genome for reagents. Hardware will add an additional $137 per genome, while sample prep will range between $55 and $65 per genome.” While Illumina is staking claim to the $1,000 genome and customers acknowledge the new system provides a drastic price reduction, the actual costs of sequencing a genome are being debated. Sequencing service provider AllSeq discusses the cost breakdown in their blog. While initially a bit negative, their outlook on the attainable costs seems to be improving.
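
Those quoted figures do add up to just under $1,000, as a quick sanity check shows (a sketch based on the LinkedIn quote above, not Illumina’s official pricing):

```python
# Per-genome cost check using only the figures quoted above.
reagent_kit_cost = 12_700            # USD per reagent kit
genomes_per_run = 16
hardware_per_genome = 137            # USD
sample_prep_per_genome = (55, 65)    # USD, quoted range

reagents_per_genome = reagent_kit_cost / genomes_per_run   # ~$794 (quoted as ~$800)
low = reagents_per_genome + hardware_per_genome + sample_prep_per_genome[0]
high = reagents_per_genome + hardware_per_genome + sample_prep_per_genome[1]
print(f"~${reagents_per_genome:.0f} reagents per genome")
print(f"~${low:.0f} to ${high:.0f} all-in per genome")    # comfortably under $1,000
```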

So, with affordable genome sequencing apparently being a reality, are we ready to sequence every newborn?

Yes…and No

Why this conflicting answer of ‘yes…and no’? The ‘yes’ part is based on the fact that NIH has recently funded four studies on the benefits—and risks—of newborn genome sequencing. The ‘no’ part reflects the added facts that these studies will take five years, will look at how genome testing of newborns could improve screening, and will address what some geneticists view as their most sensitive ethical questions yet.

Put another way, and as detailed in the following section, the requisite low-cost DNA sequencing technology is now available, but it needs to be demonstrated—through technical feasibility investigations and in clinical pilot studies—that newborns receive health benefits of the type expected, and that numerous ‘tricky’ ethical issues can be dealt with in an ‘acceptable manner’—although to whom it is acceptable is not obvious to me at this time. Likely controversial views on reimbursement also have to be addressed, but that’s a whole other topic—dare I say ‘political football’—vis-à-vis Obamacare, oops, I mean the Affordable Care Act.

So What’s the Plan?

The following answer is an adaptation of the Sep 13th 2013 Science News & Analysis by Jocelyn Kaiser entitled Researchers to Explore Promise, Risks of Sequencing Newborns’ DNA.

The National Institute of Child Health and Human Development (NICHD) is rolling out a $25 million, 5-year federal research program to explore the potential value of looking at an infant’s genome to examine all of the genes or perhaps a particularly informative subset of them. This genome testing could significantly supplement the decades-old state screening programs that take a drop of blood from nearly every newborn’s heel and test it for biochemical markers for several dozen rare disorders. Diagnosing a child at birth can help prevent irreversible damage, as in phenylketonuria, a mutant-gene metabolic disorder that can be controlled with diet.

Handle with care. Genome testing could enhance newborn screening, but it raises ethical issues [credit: Spencer Grant/Science Vol. 341, p. 1163 (2013)].

Screening for biochemical markers often turns up false positives, however, which genetic tests might help avoid. Moreover, genome sequencing of a single sample could potentially look for all of the ~4,000 (some estimate ~10,000) monogenic diseases—i.e., those caused by defects in single genes. For more information on the subject, the World Health Organization website provides a good introduction.

The ethical concern here is that genome sequencing, unlike the current newborn screening tests, could potentially reveal many more unexpected genetic risks, some for untreatable diseases. Which of these results should be divulged is already controversial, according to Kaiser, who added that “sparks are still flying” over an earlier report described in Science as follows:

Geneticists, ethicists, and physicians reacted with shock to recommendations released last week by the American College of Medical Genetics and Genomics: that patients undergoing genomic sequencing should be informed whether 57 of their genes put them at risk of serious disease in the future, even if they don’t want that information now. The recommendations also apply to children, whose parents would be told even if illness wouldn’t strike until adulthood. The advice runs counter to the long-standing belief that patients and parents have the right to refuse DNA results. This is the first time that a professional society has advised labs and doctors what to do when unanticipated genetic results turn up in the course of sequencing a patient’s genome for an unrelated medical condition.

Given this background, it’s reassuring—in my opinion—that NICHD is taking a ‘go slow’ approach. I’m further reassured that NICHD is funding research of technical and ethical/social issues in four different but interrelated studies (see table below). Kaiser adds that “all will examine whether genomic information can improve the accuracy of newborn screening tests, but they differ in which additional genes they will test and what results they will offer parents.”

New ground: four projects funded at a total of $25 million over 5 years will look at how genome testing could improve newborn screening and other questions [credit: Spencer Grant/Science Vol. 341, p. 1163 (2013)].

More specifically, Stephen Kingsmore at Children’s Mercy Hospital in Kansas City, Missouri, wants to halve the time for his current 50-hour test—discussed in the next section—which he has used to diagnose genetic disorders in up to 50% of infants in his hospital’s neonatal intensive care unit. The test homes in on a subset of genes that may explain the baby’s symptoms. While his group may ask parents if they’re interested in unrelated genetic results, the focus is on ‘a critically ill baby and a distressed family who wants answers,’ Kingsmore told Kaiser.

A team at the University of North Carolina is studying how to return results to low-income families and others who might not be familiar with genomics. It is also dividing genetic findings into three categories—mutations that should always be reported; those that parents can choose to receive, which might include risk genes for adult cancers; and a third set that should not be disclosed, e.g. untreatable adult-onset diseases such as Alzheimer’s.

A team at Brigham and Women’s Hospital in Boston and Boston Children’s Hospital hopes to learn how doctors and parents will use genomic information. ‘We’re trying to imagine a world where you have this information available, whether you’re a sick child or healthy child. How will it change the way doctors care for children?’ asks co-principal investigator Robert Green.

Ethicist Jeffrey Botkin of the University of Utah opined that sequencing might never replace existing newborn screening because of its cost and complexity, according to Kaiser. However, Kaiser said that Botkin and others believe that it’s important to explore these issues because wealthy, well-informed parents will soon be able to mail a sample of their baby’s DNA to a company to have it sequenced—regardless of whether medical experts think that’s a good idea. ‘There’s an appetite for this. It will be filled either within the medical establishment or outside of it,’ Kaiser quotes Green as saying.

I should add that this parental ‘appetite’ doesn’t seem to be easily satisfied at the moment, based on my—admittedly superficial—survey of what’s currently available in the commercial genome sequencing space. For now, companies such as Personalis and Knome restrict their offerings to researchers and clinicians, not direct-to-consumers, such as parents-on-behalf-of-newborns—yet.

Sample-to-Whole-Genome-Sequencing-Diagnosis in Only 50 Hours

Having been a laboratory investigator during the stunning evolution from manual Maxam-Gilbert sequencing to highly automated Sanger sequencing, I find that the title of this section ‘blows my mind’ and seems impossible—but it’s not! Stephen Kingsmore and collaborators at the Children’s Mercy Hospital in Kansas City, Missouri reported this remarkable achievement in Science Translational Medicine in 2012 and, as noted in the aforementioned table, aim to cut this turnaround time to within 24 hours!

In that 2012 report, they make a compelling case for whole-genome sequence-based diagnostics—and super speedy sample-to-result—by noting that monogenic diseases are frequent causes of neonatal morbidity and mortality, and that disease presentations are often undifferentiated at birth. Of the ~4,000 monogenic diseases that have been characterized, clinical testing is available for only a handful, and many feature clinical and genetic heterogeneity. Hence, an immense unmet need exists for improved molecular diagnosis in infants. Because disease progression is extremely rapid—albeit heterogeneous—in newborns, molecular diagnoses must occur quickly to be relevant for clinical decision-making.

Using the workflow and timeline outlined below, they describe 50-hour differential diagnosis of genetic disorders by whole-genome sequencing (WGS) that features automated bioinformatics analysis and is intended to be a prototype for use in neonatal intensive care units. I should add that an automated bioinformatics analysis is critical for clinical utility, and has been the subject of ‘musings’ by Elaine Mardis in Genome Medicine entitled The $1,000 genome, the $100,000 analysis?

Summary of the steps and timing (t, hours) resulting in an interval of 50 hours between consent and delivery of a preliminary, verbal diagnosis [taken from Saunders et al. Sci Transl Med 4, 154ra135 (2012)].

To validate the feasibility of automated matching of clinical terms to diseases and genes, they retrospectively entered the presenting features of 533 children, who had received a molecular diagnosis at Children’s Mercy Hospital within the last 10 years, into symptom- and sign-assisted genome analysis (SSAGA)—a new clinico-pathological correlation tool that maps the clinical features of 591 well-established, recessive genetic diseases with pediatric presentations to corresponding phenotypes and genes known to cause the symptoms. Sensitivity was 99.3%, as determined by correct disease and affected-gene nominations.

Rapid WGS was made possible by two innovations. First, a widely used WGS platform (Illumina HiSeq 2500) was modified to generate up to 140 Gb [Gb = giga base pairs = 1,000,000,000 base pairs] of sequence in less than 30 hours. Second, sample preparation took only 4.5 hours, while 2 × 100-base-pair genome sequencing took 25.5 hours. The total ‘hands-on’ time for technical staff was 5 hours.
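
For perspective, that throughput corresponds to deep coverage of a human genome well within the 50-hour window (a rough sketch; the ~3.1 Gb haploid human genome size is my assumption and is not stated above):

```python
# Rough coverage and timing arithmetic for the 50-hour workflow described above.
sequence_yield_gb = 140      # Gb generated in < 30 hours
genome_size_gb = 3.1         # approximate haploid human genome (assumed)

average_coverage = sequence_yield_gb / genome_size_gb   # ~45x
instrument_hours = 4.5 + 25.5                           # sample prep + 2 x 100 bp sequencing
print(f"~{average_coverage:.0f}x average coverage")
print(f"{instrument_hours:.0f} of the 50 hours are sample prep plus sequencing")
```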

Readers who are interested in more technical details for sample prep, sequencing, and bioinformatics/analytics should read the full text. However, the authors’ abstract provides the following succinctly described diagnostic ‘payoff’, so to speak:

Prospective WGS disclosed potential molecular diagnosis of a severe GJB2-related skin disease in one neonate; BRAT1-related lethal neonatal rigidity and multifocal seizure syndrome in another infant; identified BCL9L as a novel, recessive visceral heterotaxy gene (HTX6) in a pedigree; and ruled out known candidate genes in one infant. Sequencing of parents or affected siblings expedited the identification of disease genes in prospective cases. Thus, rapid WGS can potentially broaden and foreshorten differential diagnosis, resulting in fewer empirical treatments and faster progression to genetic and prognostic counseling.

These are compelling results, in my opinion. Let me know if you agree. As always, your comments are welcomed.

Postscript

After I finished the above post, Andrew Pollack at The New York Times published a fascinating article giving examples of how pharmaceutical companies are heavily investing in genetic studies that employ exome sequencing of large study groups in a search for clues to aid drug development. Regeneron is conducting one such study that includes 100,000 genomes.

Searching for ‘Genius Genes’ by Sequencing the Super-Smart

  • Brainchild of a high-school dropout
  • Joined by two renowned Professors in the USA and UK
  • Enabled by the world’s most powerful sequencing facility
  • Jonathan Rothberg to do same for math ability 

Prologue

Before plunging into this post: those of you who follow college basketball are eagerly awaiting the start of “March Madness” and its “bracketology” for predicting all the winners. The odds of a perfect bracket are 1-in-9.2 quintillion—that’s nine followed by 18 zeros—which is why Warren Buffett will almost certainly not have to pay out the $1 billion he offered for doing so.

The following short story of how basketball came about is worth a quick read before getting to this posting’s DNA sequencing projects, which are not “madness” but definitely long-shot bets—and criticized by some. 

The original 1891 “Basket Ball” court in Springfield College used a peach basket attached to the wall (taken from Wikipedia).

James Naismith (1861 – 1939) was a Canadian-American sports coach and innovator. He invented the sport of basketball in 1891 and wrote the original basketball rulebook. At Springfield College, Naismith struggled with a rowdy class that was confined to indoor games throughout the harsh New England winter and was thus perpetually short-tempered. Under orders from Dr. Luther Gulick, head of Springfield College Physical Education, Naismith was given 14 days to create an indoor game that would provide an “athletic distraction.” Gulick demanded that it not take up much room and that it help the track athletes keep in shape, and he explicitly emphasized to “make it fair for all players and not too rough.” Naismith did so using the actual “basket and ball” pictured above.

SNPs and GWAS assist in finding the roots of intelligence

Many studies indicate that intelligence is heritable, but to what extent is yet uncertain (taken from the Wall Street Journal via Bing Images).

Many of you are well aware of—if not actually involved in—the use of DNA sequence analysis to identify common single nucleotide polymorphisms (SNPs) that are associated with diseases or traits in a study population, relative to a normal control population. These genome-wide association studies (GWAS) were principally enabled, beginning in the 1990s, by high-density “SNP chips” developed by Affymetrix and then Agilent. While technically straightforward, there’s a lot of genetics and not-so-simple statistics to deal with in designing GWAS and—especially—properly interpreting the results.

In the future, Junior’s DNA sequence could implicate other reasons for his failing academic performance, e.g. not studying enough (taken from dailymail.co.uk via Bing Images).

Now, following the advent of massively parallel “next generation” sequencing (NGS) platforms from Illumina and Life Technologies, whole genomes of larger populations (i.e., many thousands of individuals) can be studied, and less common (aka rare) SNPs can be sought. All of this has fueled the pursuit of more challenging—and controversial—GWAS.

That brings me to the following two ongoing stories, which I’ve referred to as the search for genius genes: one conceived by Bowen Zhao—a teenaged Chinese high-school dropout—aiming to find the roots of intelligence in our DNA by sequencing “off-the-chart” super-smarties, and a newer project by Jonathan Rothberg—über-famous founder of Ion Torrent, which commercialized the game-changing semiconductor-sequencing technology acquired for mega millions by Life Tech—aimed at identifying the roots of mathematical ability by, need I say, Ion Torrent sequencing.

From Chinese high-school dropout to founder of a Cognitive Genomics Unit

It’s a gross understatement to say that Mr. Bowen Zhao is an interesting person—he’s actually an amazing person. As a 13-year-old in 2007, he skipped afternoon classes at his school in Beijing and managed to get an internship at the Chinese Academy of Agricultural Sciences, where he cleaned test tubes and did other simple jobs. In return, the graduate students let him borrow genetics textbooks and participate in experiments, including the sequencing of the cucumber genome. When the study of the cucumber genome was published in Nature Genetics in 2009, Mr. Zhao was listed as a co-author at the age of 15.

Tantalized by genomics, Mr. Zhao quit school and began to work full-time at BGI Shenzhen (near Hong Kong), one of the largest genomics research centers in the world. BGI (formerly known as the Beijing Genomics Institute) is a private company—partly funded by the Chinese government—that significantly expanded its sequencing throughput last year by acquiring Complete Genomics of Mountain View, California.

Mr. Bowen Zhao is a young researcher with amazing accomplishments (taken from thetimes.co.uk via Bing Images)

The BGI project is sequencing DNA from IQ outliers comparable to Einstein (taken from rosemaryschool.org via Bing Images).

In 2010, BGI founded the Cognitive Genomics Unit and named Mr. Zhao as its Director of Bioinformatics. The Cognitive Genomics Unit seeks to better understand human cognition with the goal of identifying the genes that influence intelligence. Mr. Zhao and his team are currently using more than 100 state-of-the-art next generation sequencers to decipher some 2,200 DNA samples from some of the brightest people in the world—extreme IQ outliers. The majority of the DNA samples come from people with IQs of 160 or higher, which puts them at the same level as Einstein. By comparison, average IQ in any population is set at 100, and the average Nobel laureate registers at around 145. Only one in every 30,000 people (0.003%) would qualify to participate in the BGI project.
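
That “one in every 30,000” figure checks out with simple arithmetic, assuming IQ is normally distributed with a mean of 100 and a standard deviation of 15 (the standard deviation is my assumption; only the mean of 100 is stated above):

```python
import math

# Fraction of a normal IQ distribution at or above the cutoff used by the BGI project.
mean, sd, cutoff = 100, 15, 160
z = (cutoff - mean) / sd                     # 4 standard deviations above the mean
tail = 0.5 * math.erfc(z / math.sqrt(2))     # P(IQ >= 160) under a normal model
print(f"P(IQ >= {cutoff}) ~ {tail:.1e}, i.e. about 1 in {1 / tail:,.0f}")
# -> ~3.2e-05, roughly 1 in 31,600 -- consistent with 'one in every 30,000'
```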

In an article by Gautam Naik of the Wall Street Journal, Mr. Zhao is quoted as saying that “people have chosen to ignore the genetics of intelligence for a long time.” Mr. Zhao, who hopes to publish his team’s initial findings this year, added that “people believe it’s a controversial topic, especially in the West [but] that’s not the case in China,” where IQ studies are regarded more as a scientific challenge and therefore are easier to fund.

According to Naik, the roots of intelligence are a mystery, and studies show that at least half of IQ variation is inherited. While scientists have identified some genes that can significantly lower IQ—in people afflicted with mental retardation, for example—truly important genes that affect normal IQ variation have yet to be pinned down.

The BGI researchers hope to crack the problem by comparing the genomes of super-high-IQ individuals with the genomes of people drawn from the general population. By studying the variation in the two groups, they hope to isolate some of the hereditary factors behind IQ. Their conclusions could lay the groundwork for a genetic test to predict a person’s inherited cognitive ability. Although such a tool could be useful, it also might be divisive.

“If you can identify kids who are going to have trouble learning, you can intervene” early on in their lives, through special schooling or other programs, says Robert Plomin, Professor of Behavioral Genetics at King’s College, London, who is involved in the BGI project and quoted by Naik.

Critics, however, worry that genetic data related to IQ could easily be misconstrued—or misused. Research into the science of intelligence has been used in the past “to target particular racial groups or individuals and delegitimize them,” said Jeremy Gruber, President of the Council for Responsible Genetics, a watchdog group based in Cambridge, Massachusetts. “I’d be very concerned that the reductionist and deterministic trends that still are very much present in the world of genetics would come to the fore in a project like this,” Gruber added.

Obtaining access to ‘genius genes’ wasn’t easy

Getting DNA to sequence from super-smart people was easier said than done. According to Naik, Zhao’s first foray into the genetics of intelligence was a plan to collect DNA from high-achieving kids at local high schools. It didn’t work. “Parents were afraid [of giving consent] because their children’s blood would be taken,” Zhao told Naik.

In the spring of 2010, Stephen Hsu—a theoretical physicist from the University of Oregon (now at Michigan State University) who was also interested in the genetics of cognitive ability—visited BGI and joined Zhao to launch the BGI intelligence project. One part of the plan called for shifting to saliva-based DNA samples obtained from mathematically gifted people, including Chinese who had participated in mathematics or science Olympiad training camps. Another involved the collection of DNA samples from high-IQ individuals from the U.S. and other countries, including those with extremely high SAT scores, and those with a doctorate in physics or math from an elite university. In addition, anyone could enroll via BGI’s website—if they met the criteria—as have about 500 qualifying volunteers to date.

Interestingly, most of the samples so far have come from outside of China. The main source is Prof. Plomin of King’s College, who for his own research had collected DNA samples from about 1,600 individuals whose IQs were off the charts. Those samples were obtained through a U.S. project known as the Study of Mathematically Precocious Youth, now in its fourth decade. Dr. Plomin tracked down 1,600 adults who had enrolled as kids in the U.S. project, now based at Vanderbilt University. Their DNA contributions make up the bulk of the BGI samples.

Frequently asked questions about the BGI intelligence project, as well as a link to the detailed project proposal, can be read by clicking here. The penultimate and last paragraphs of the introductory section of this proposal are the following:

The brain evolved to deal with a complex, information-rich environment. The blueprint of the brain is contained in our DNA, although brain development is a complicated process in which interactions with the environment play an important role. Nevertheless, in almost all cases a significant portion of cognitive or behavioral variability in humans is found to be heritable—i.e., attributable to genetic causes.

The goal of the BGI Cognitive Genomics Lab (CGL) is to investigate the genetic architecture of human cognition: the genomic locations, allele frequencies, and average effects of the precise DNA variants affecting variability in perceptual and cognitive processes. This document outlines the CGL’s proposal to investigate one trait in particular: general intelligence or general mental ability, often referred to as “g.”

On Jan 1st 2014, I contacted Prof. Hsu, who coauthored BGI’s “g” proposal, and asked him to clarify whether genome sequencing was in fact being used, as opposed to SNP genotyping chips that were specified in the aforementioned proposal’s Materials and Methods section. I also inquired as to whether any results have been published. His reply on the same day was that the “initial plan was SNPs but [was] upgraded to sequencing. No results yet.”

Stay tuned.

Jonathan Rothberg’s ‘Project Einstein’ taps 400 top mathematicians

In the October 31st 2013 issue of Nature, Erika Check Hayden reported on ‘Project Einstein,’ Ion Torrent founder/inventor/serial entrepreneur Jonathan Rothberg’s new venture aimed at identifying the genetic roots of math genius.

Jonathan Rothberg, founder of CuraGen, 454 Life Sciences, Ion Torrent, Rothberg Center for Childhood diseases, and RainDance Technologies (taken from nathanielwelch.com via Bing Images).

According to Check Hayden’s news article, Rothberg and physicist/author Max Tegmark at MIT in Cambridge “will be wading into a field fraught with controversy” by enrolling about 400 mathematicians and theoretical physicists from top-ranked US universities in ‘Project Einstein’ and sequencing the participants’ genomes using the Ion Torrent machines that Rothberg developed. Critics claim that the study population, like BGI’s “g” project, is too small to yield meaningful results for such complex traits. Check Hayden adds that “some are concerned about ethical issues. If the projects find genetic markers for math ability, these could be used as a basis for the selective abortion of fetuses or in choosing between embryos created through in vitro fertilization.” She says that Rothberg is pushing ahead, and quotes him as stating, “I’m not at all concerned about the critics.”

On the positive side, Prof. Plomin, mentioned above in connection with BGI’s project “g,” is said to believe that there is no reason why genome sequencing won’t work for math ability. To support this position, Plomin refers to his 2013 publication entitled Literacy and numeracy are more heritable than intelligence in primary school, which indicates that as much as two-thirds of a child’s mathematical aptitude seems to be influenced by genes.

I’ll be keeping tabs on the project to see how it progresses and how the ethics issue plays out.

Genetics of intelligence is complex and has foiled attempts at deciphering it

After reading about the scientifically controversial aspects of both project “g” and ‘Project Einstein,’ I became curious about the outcomes of previous attempts to decipher the genetic basis of intelligence. There was way too much literature to delve into deeply, but a 2013 New Scientist article by Debora MacKenzie entitled ‘Intelligence genes’ evade detection in largest study is worth paraphrasing, as it distills out some simplified takeaways from the referenced study by Koellinger and 200 (!) collaborators published in Science.

  • This team of researchers assembled 54 sets of data on more than 126,000 people who had their genomes analyzed for 2.5 million common SNPs, and for whom information was available on length and level of education. Study organizer Koellinger admits that educational achievement is only a rough proxy for intelligence, but this information was available for the requisite large number of people.
  • Three SNPs identified in 100,000 people correlated significantly with educational achievement and were then tested in the other 26,000 people. The same correlations held, replicating the first analysis. However, the strength of the correlations for each SNP accounted for at most 0.002% of the total variation in educational attainment.
  • “Probably thousands of SNPs are involved, each with an effect so small we need a much larger sample to see it,” says Koellinger (see the rough power calculation after this list). Either that, or intelligence is affected to a greater degree than other heritable traits by genetic variations beyond these SNPs—perhaps rare mutations or interactions between genes.
  • Robert Plomin adds that whole genome sequencing, as being done by BGI, allows researchers to “look for sequence variations of every kind.” Then, the missing genes for intelligence may finally be found, concludes MacKenzie.
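
To put Koellinger’s point about sample size in perspective, here is a rough power calculation (my own back-of-envelope sketch, not the study’s analysis): how many participants would be needed to detect, at genome-wide significance with 80% power, a single SNP explaining 0.002% of the variance in a trait, using the standard Fisher z approximation for a correlation test.

```python
import math
from scipy.stats import norm

r2 = 0.00002                                   # 0.002% of variance, the figure quoted above
r = math.sqrt(r2)
z_alpha = norm.isf(5e-8 / 2)                   # two-sided genome-wide significance, ~5.45
z_beta = norm.isf(0.20)                        # 80% power, ~0.84
fisher_z = 0.5 * math.log((1 + r) / (1 - r))   # Fisher transform; ~r for small r
n_required = ((z_alpha + z_beta) / fisher_z) ** 2 + 3
print(f"~{n_required:,.0f} participants")      # on the order of two million
```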

Parting Thoughts 

Most, if not all, of you will agree with the contention that a human being is not merely a slave to his or her genes. After all, hasn’t determinism been swept away by the broom of quantum mechanical probabilities as a physical basis of free will? If so, then what role does inherited genetics actually play in intelligence? While the answer to this rhetorical question is obviously not simple, and still hotly debated, I found my thoughts to be largely reflected by a posting at Rose Mary School, paraphrased as follows, keeping in mind that all analogies are imperfect:

Human life has been compared to a game of cards. At birth, every person is dealt a hand of cards—i.e., his or her genetic make-up. Some receive a good hand, others a less good one. Success in any game, however, is almost always a matter of education, learning, and culture. For sure, there are often certain innate qualities that will give one person an advantage over another in a specific game. However, without having learned the game and without regular and rigorous practice, nobody will ever become a champion at any game. In the same way, the outcome of the game of life is not solely determined by the quality of a person’s initial hand of cards, but also by the way in which he or she takes part in the game of life. His or her ability to take part in the game of life satisfactorily, perhaps even successfully, will be determined to a very large extent by the quality and quantity of education that he or she has enjoyed.

When I gave advice to students, as a teacher, it was very simple and what I did—and still do—myself: “study hard, work harder, and success will follow.”

As always, your comments are welcomed.

In Search of RNA Epigenetics: A Grand Challenge

  • Methylated riboA and riboC are the most commonly detected modified nucleobases in RNA epigenetics research
  • Powerful new analytical methods are key tools for progress
  • Promising PacBio sequencing and novel “Pan Probes” reported   

In a Grand Challenge Commentary published in Nature Chemical Biology in 2010, Prof. Chuan He at the University of Chicago opined that “[p]ost-transcriptional RNA modifications can be dynamic and might have functions beyond fine-tuning the structure and function of RNA. Understanding these RNA modification pathways and their functions may allow researchers to identify new layers of gene regulation at the RNA level.”

Like other scientists who get hooked by certain Grand Challenges, I became fascinated by this possibility of yet “new layers” of genetic regulation involving RNA, either as conventional messenger RNA (mRNA) or more recently recognized long noncoding RNA (lncRNA). Part of my intellectual stimulation was related to the fact that some of my past postings have dealt with both lncRNA as well as recent advances in DNA epigenetics, so the notion of RNA epigenetics seemed to tie these together.

After doing my homework on recent publications related to possible RNA epigenetics, it became apparent that this posting could be logically divided into commentary on the following three major questions: what are prevalent epigenetic RNA modifications, what might these do, and where is the field going? Future directions were addressed by interviews with two leading investigators: Prof. Chuan He, who is mentioned above, and Prof. Tao Pan, who has been involved in cutting-edge methods development.

RNA Epigenetic Modifications

More than 100 types of RNA modifications are found throughout virtually all forms of life. These are most prevalent in ribosomal RNA (rRNA) and transfer RNA (tRNA), and are associated with fine tuning the structure and function of rRNA and tRNA. Comments here will instead focus on mRNA and lncRNA in mammals, wherein the most abundant—and far less understood—modifications are N6-methyladenosine (m6A) and 5-methylcytidine (m5C).

Three Approaches to Sequencing m6A-Modified RNA

Discovered in cancer cells in the 1970s, m6A is the most abundant modification in eukaryotic mRNA and lncRNA. It is found at 3-5 sites on average in mammalian mRNA, and at up to 15 sites in some viral RNA. In addition to this relatively low density, specific loci in a given mRNA can be a mixture of unmodified and methylated A residues, making it very difficult to detect, locate, and quantify m6A patterns. Fortunately, that has changed dramatically with the advent of various high-throughput “deep sequencing” technologies, as well as other advances.

(1.) Antibody-based m6A-seq 

An impressive breakthrough publication in Nature in 2012 by a group of investigators in Israel reported novel methodology called m6A-seq for determining the positions of m6A at a transcriptome-wide level. This approach, which is a variant of methylated DNA immunoprecipitation (MeDIP or mDIP), combines the high specificity of an anti-m6A antibody with Illumina’s massively parallel sequencing of randomly fragmented transcripts following immunoprecipitation. These researchers summarize their salient findings as follows.

“We identify over 12,000 m6A sites characterized by a typical consensus in the transcripts of more than 7,000 human genes. Sites preferentially appear in two distinct landmarks—around stop codons and within long internal exons—and are highly conserved between human and mouse. Although most sites are well preserved across normal and cancerous tissues and in response to various stimuli, a subset of stimulus-dependent, dynamically modulated sites is identified. Silencing the m6A methyltransferase significantly affects gene expression and alternative splicing patterns, resulting in modulation of the p53 (also known as TP53) signaling pathway and apoptosis. Our findings therefore suggest that RNA decoration by m6A has a fundamental role in regulation of gene expression.”

Moreover, their concluding sentence refers back to He’s aforementioned Grand Challenge Commentary about RNA epigenetics in 2010, just two years earlier.

“The m6A methylome opens new avenues for correlating the methylation layer with other processing levels. In many ways, this approach is a forerunner, providing a reference and paving the way for the uncovering of other RNA modifications, which together constitute a new realm of biological regulation, recently termed RNA epigenetics.”
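
As an aside, the “typical consensus” mentioned in the quote is easy to appreciate with a trivial motif scan. The sketch below uses the RRACH consensus (R = A/G, H = A/C/U) commonly cited in the m6A literature; the paper’s exact consensus definition, and the toy transcript, are my own assumptions for illustration.

```python
import re

# Candidate m6A sites taken as the A within each RRACH match (R = A/G, H = A/C/U).
RRACH = re.compile(r"(?=([AG][AG]AC[ACU]))")   # lookahead so overlapping motifs are counted

def candidate_m6a_sites(transcript: str) -> list[int]:
    """Return 0-based positions of the central A of each RRACH motif."""
    return [m.start() + 2 for m in RRACH.finditer(transcript.upper())]

print(candidate_m6a_sites("GGACUUCAGGACAAAGAACU"))   # -> [2, 10, 17]
```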

(2.) Promising PacBio Single-Molecule Real-Time (SMRT) Sequencing of m6A

In a previous post, I praised PacBio (Pacific Biosciences) for persevering in the development of its SMRT sequencing technology, which uniquely enables, among other things, direct sequencing of various types of modified DNA bases by differentiating the kinetics of incorporating labeled nucleotides. Attempts to extend the SMRT approach to sequencing m6A were recently reported by PacBio in collaboration with Prof. Pan (see below) and others in J. Nanobiotechnology in April 2013. Using model synthetic RNA templates and HIV reverse transcriptase (HIV-RT), they demonstrated adequate discrimination of m6A from A; however, “real” RNA samples having complex ensembles of tertiary structures proved to be problematic. Alternative engineered RTs that are more processive and accommodative of labeled nucleotides were said to be under investigation in order to provide longer read lengths and appropriate incorporation kinetics.

The authors are optimistic in being able to solve these technical problems, and concluded their report by stating:

  “[w]e anticipate that the application of our method may enable the identification of the location of many modified bases in mRNA and provide detailed information about the nature and the dynamic RNA refolding in retroviral/retro-transposon reverse transcription and in 3’-5’ exosome degradation of mRNA.”

Let’s hope that this is achieved soon!

(3.) Nanopore Sequencing of m6A?

It’s too early to be sure, but continued incremental advances in possible approaches to nanopore sequencing suggest applicability to m6A. As pictured below, Bayley and coworkers describe a method that uses ionic current measurement to resolve ribonucleoside monophosphates or diphosphates (rNDPs) in α-hemolysin protein nanopores containing amino-cyclodextrin adapters.

Taken from Bayley and coworkers in Nano Lett. (2013)

The accuracy of base identification is further investigated through the use of a guanidino-modified adapter. On the basis of these findings, an exosequencing approach for single-stranded RNA (ssRNA) is envisioned in which a processive exoribonuclease (polynucleotide phosphorylase, PNPase) presents sequentially cleaved rNDPs to a nanopore. Although extension of this concept to include m6A has yet to be demonstrated, earlier feasibility studies by Ayub & Bayley have shown discrimination of m6A (and other modified bases) from unmodified ribobases.

Two Probe-Based Methods for Detecting Specific m6A Sites

(1.) “Pan Probes”

As the saying goes, “what goes around comes around,” and in this instance it’s the repurposing of 2’-O-methyl (2’OMe)-modified RNA/DNA/RNA oligos. This general class of chemically synthesized chimeric “gapmers” was originally used for RNase H-mediated cleavage of mRNA in antisense studies. Very recently, however, Pan and coworkers have cleverly adapted these probes—which I like to alliteratively refer to as “Pan Probes”—to m6A detection in mRNA and lncRNA.

For details see SCARLET workflow; taken from Pan and coworkers RNA (2013)

Pan Probes are “7-4-7 gapmers” having seven 2’OMe RNA nucleotides flanking four DNA nucleotides, the latter of which straddle known (or suspected) m6A sites, as depicted in the cartoon shown. The indicated series of steps—which involves site-specific cleavage and radioactive labeling followed by ligation-assisted extraction and thin-layer chromatography—is thankfully called SCARLET by these investigators.

SCARLET was used by Pan and coworkers to determine the m6A status at several sites in two human lncRNAs and three human mRNAs, and found that the m6A fraction varied between 6% and 80% among these sites. However, they also found that many m6A candidate sites in these RNAs were not modified. Obviously, while much more work needs to be done to collect data for deciphering dynamic patterns and implications of m6A RNA epigenetic modifications, these investigators note that SCARLET is, in principle, applicable to m5C, pseudouridine, and other types of epigenetic RNA modifications.

Readers interested in designing and investigating their own Pan Probes can obtain these 7-4-7 gapmers by using TriLink’s OligoBuilder® and simply selecting “PO 2’OMe RNA” from the Primary Backbone dropdown menu, typing the first 7 bases in the Sequence box, selecting the 4 DNA bases from the Chimeric Bases menu and then typing the remaining 7 2’OMe RNA bases.
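
For readers who like to see that 7-4-7 layout spelled out, below is a minimal sketch of how one might assemble such a probe around a known m6A site. The exact register of the 4-nt DNA window relative to the site should follow the published SCARLET protocol; here I simply place the site near the middle of the DNA window, and the example target sequence is made up.

```python
# Sketch of a 7-4-7 2'OMe-RNA / DNA / 2'OMe-RNA gapmer laid out around an m6A site.
# Uppercase = 2'OMe RNA flanks, lowercase = the central 4-nt DNA gap.
RNA_COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}
DNA_COMP = {"A": "T", "U": "A", "G": "C", "C": "G"}

def pan_probe_747(target_rna: str, m6a_index: int) -> str:
    """Return the antisense 18-mer probe (5'->3') for the A at m6a_index (0-based)."""
    window = target_rna[m6a_index - 8 : m6a_index + 10]     # 18-nt target window
    if len(window) != 18 or target_rna[m6a_index] != "A":
        raise ValueError("site too close to an end, or the queried base is not A")
    probe = []
    for i, base in enumerate(reversed(window)):             # reverse complement, 5'->3'
        if 7 <= i <= 10:
            probe.append(DNA_COMP[base].lower())            # DNA gap straddling the site
        else:
            probe.append(RNA_COMP[base])                    # 2'OMe RNA flanks
    return "".join(probe)

# The complement of the queried A lands inside the lowercase DNA gap.
print(pan_probe_747("GGCUAGCUAGCAUGGACUAGCUAGGCU", 15))     # -> CCUAGCUagtcCAUGCUA
```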

(2.) Probes for High-Resolution Melting

In a new approach very recently reported by Golovina et al. at Lomonosov Moscow State University, the presence of m6A at a specific position of an mRNA or lncRNA molecule is detected using a variant of high-resolution melting (HRM) analysis applicable to, for example, single-nucleotide genotyping. The authors suggest that this method lends itself to screening many samples in a high-throughput assay following initial identification of loci by sequencing (see above). The method uses two labeled probes—one with 5’-FAM and another with 3’-BHQ1 (both available from TriLink’s OligoBuilder®)—that hybridize to a particular query position in a total RNA sample, as shown below for a 23S rRNA model system. The presence of m6A lowers the melting temperature (Tm), relative to A, with a magnitude that is sequence-context dependent.

Taken from Golovina et al. Nucleic Acids Res. (2013).

The authors studied various probe-target constructs, and recommend 12–13-nt-long probes containing a quencher and >20-nt-long probes containing a fluorophore. They also advise that the quencher-containing oligonucleotide hybridize to the RNA such that the m6A is directly opposite the 3′-terminal nucleotide carrying the quencher. The authors point out that relatively low-abundance, non-ribosomal targets need partial enrichment by, for example, simple molecular weight-based purification or commercially available kits. In this regard, they estimate that, if a particular type of mRNA were present at 10,000 copies per mammalian cell, 10⁷ cells would be required to analyze m6A by this HRM method.

m5C Analysis by Sequencing of Bisulfite-Converted RNA

Selective reaction of bisulfite with C but not m5C in RNA, analogous to that long used for DNA, provides the basis for determining C-methylation status by sequencing. As detailed by Squires et al. in Nucleic Acids Res. in 2013, bisulfite-converted RNA can be sequenced by either of two methods: conversion to cDNA, cloning, and conventional sequencing, or conversion to a next-generation sequencing library. These authors described their salient findings as follows.

“We confirmed 21 of the 28 previously known m5C sites in human tRNAs and identified 234 novel tRNA candidate sites, mostly in anticipated structural positions. Surprisingly, we discovered 10,275 sites in mRNAs and other non-coding RNAs. We observed that distribution of modified cytosines between RNA types was not random; within mRNAs they were enriched in the untranslated regions and near Argonaute binding regions… Our data demonstrates the widespread presence of modified cytosines throughout coding and non-coding sequences in a transcriptome, suggesting a broader role of this modification in the post-transcriptional control of cellular RNA function.”
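To make the logic behind bisulfite-based m5C calling concrete, here is a minimal sketch, assuming perfectly aligned reads and complete conversion of unmethylated C; the reference, reads, and function name are invented for illustration, and this is not the Squires et al. pipeline.

```python
# Bisulfite converts unmethylated C to U (read as T after reverse transcription),
# while m5C resists conversion and is still read as C. The fraction of reads
# retaining C at a reference cytosine therefore estimates its methylation level.
# All sequences below are made up; real data come from a bisulfite-aware aligner.

def call_m5c_fraction(reference, aligned_reads, position):
    """Estimate the m5C fraction at one reference cytosine (0-based position)."""
    if reference[position] != "C":
        raise ValueError("Reference base at this position is not a cytosine")
    c_count = sum(1 for read in aligned_reads if read[position] == "C")
    t_count = sum(1 for read in aligned_reads if read[position] == "T")
    covered = c_count + t_count
    return c_count / covered if covered else float("nan")

reference = "ATGGCCATGC"                        # reference in cDNA spelling
reads = ["ATGGCCATGC", "ATGGCCATGC", "ATGGTCATGC", "ATGGCCATGC"]
print(call_m5c_fraction(reference, reads, 4))   # 3 of 4 reads keep C -> 0.75
```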

“Writing, Reading, and Erasing” RNA Epigenetic Modifications

Enzyme-mediated post-transcriptional RNA methylation (aka “writing”) and demethylation (aka “erasing”) are formally analogous to the corresponding processes in DNA epigenetics, and identifying and fully characterizing them is critical for elucidating RNA epigenetics.

Studies of RNA epigenetic “writing” have focused on the N6-adenosine-methyltransferase 70 kDa subunit, an enzyme that in humans is encoded by the METTL3 gene and is involved in the post-transcriptional methylation of internal adenosine residues in eukaryotic mRNAs to form m6A. According to Squires et al., two m5C methyltransferases in humans, NSUN2 and TRDMT1, are known to modify specific tRNAs and have roles in the control of cell growth and differentiation.

As for “erasing”, in 2011, He’s lab discovered the first RNA demethylase, abbreviated FTO, for fat mass and obesity-associated protein, which has efficient oxidative demethylation activity targeting m6A in RNA in vitro. They also showed for the first time that this erasure of m6A could significantly affect gene expression regulation. In 2013, He’s lab discovered the second mammalian demethylase for m6A, ALKBH5, which affects mRNA export and RNA metabolism, as well as the assembly of mRNA processing factors, suggesting that reversible m6A modification has fundamental and broad functions in mammalian cells.

So, if Mother Nature evolved these mechanisms for writing and erasing RNA epigenetic modifications, what about the equally important, in-between process of “reading” them? He, Pan, and collaborators have very recently reported insights into such reading. They showed that m6A is selectively recognized by the human YTH domain family 2 (YTHDF2) “reader” protein to regulate mRNA degradation. They identified over 3,000 cellular RNA targets of YTHDF2, most of which are mRNAs but which also include non-coding RNAs, with a conserved core motif of G(m6A)C. They further established the role of YTHDF2 in RNA metabolism, showing that binding of YTHDF2 relocalizes bound mRNA from the translatable pool to mRNA decay sites. The carboxy-terminal domain of YTHDF2 selectively binds to m6A-containing mRNA, whereas the amino-terminal domain is responsible for the localization of the YTHDF2–mRNA complex to cellular RNA decay sites. These findings, they say, indicate that the dynamic m6A modification is recognized by selectively binding proteins to affect the translation status and lifetime of mRNA.
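As a toy illustration of what “reading” looks like computationally, the sketch below scans a transcript for the G(m6A)C core motif mentioned above and, optionally, filters hits by the broader DRACH consensus (D = A/G/U, R = A/G, H = A/C/U) commonly used for m6A sites; the DRACH filter and the transcript sequence are my additions for illustration, not part of the YTHDF2 study.

```python
# Toy scan for candidate m6A "reader" sites: adenosines in a GAC core motif
# (the conserved core reported for YTHDF2 targets), optionally required to sit
# within the commonly cited DRACH consensus. The transcript below is invented.
import re

GAC_CORE = re.compile(r"(?=GAC)")              # lookahead allows overlapping hits
DRACH = re.compile(r"(?=[AGU][AG]AC[ACU])")    # D-R-A-C-H

def candidate_m6a_sites(rna, require_drach=False):
    """Return 0-based positions of the A in each candidate G(m6A)C site."""
    sites = [m.start() + 1 for m in GAC_CORE.finditer(rna)]       # +1 -> the A of GAC
    if require_drach:
        drach_a = {m.start() + 2 for m in DRACH.finditer(rna)}    # +2 -> the A of DRACH
        sites = [p for p in sites if p in drach_a]
    return sites

transcript = "AGGACUUCGACGAGGACA"
print(candidate_m6a_sites(transcript))                      # all GAC cores: [3, 9, 15]
print(candidate_m6a_sites(transcript, require_drach=True))  # within DRACH:  [3, 15]
```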

Expert Opinions of the Future for RNA Epigenetics

As I’ve said here before, there is no crystal ball for accurately predicting the future in science, although scientists do enjoy imagining that there is. Opinions of two hands-on experts in the emerging field of RNA epigenetics are certainly of interest in this regard. Below are some comments offered by the aforementioned Prof. Tao Pan and Prof. Chuan He, provided via an email interview in which I posed the question, ‘What do you see as the most important developments for RNA epigenetics?’ These experts have thrown down the gauntlet, so to speak, by asserting that RNA epigenetics is a Grand Challenge.

Prof. Tao Pan

“In my opinion, the biggest current challenge for the field is to develop methods that can perturb m6A modification at specific sites to assess m6A function directly in specific genes. RNA interference or overexpression of an mRNA may simply decrease or increase modified and unmodified RNA alike. In a few cases, mutation of a known m6A site in an mRNA resulted in additional modification at a nearby consensus site, so that one cannot simply assume that mutation of a known site would not lead to cryptic sites nearby that may perform the same function. Further, functional understanding of a specific site should also take into account that all currently known m6A sites in mRNA and viral RNA are incompletely modified, so that one may need to explain why cells simultaneously maintain two RNA species that differ only at the site of m6A modification.”   

Prof. Chuan He

“The m6A modification is much more abundant than other RNA modifications in mammalian and plant nuclear RNA and is currently the only known reversible RNA modification. The m6A maps of various organisms/cell types need to be obtained. High-resolution methods to obtain transcriptome-wide, base-resolution maps are important. A future focus should be to connect the reversible m6A methylation with functions, in particular, the studies of the reader proteins that specifically recognize m6A and exert biological regulation. The first example of the YTHDF2 work just published in Nature (above) is a good example. We believe many other reader proteins exist and impact almost all aspects of mRNA metabolisms or functions of lncRNA.

“Besides m6A, there are m5C, pseudoU, 2′-OMe, and potentially other modifications in mRNA and various non-coding RNAs (such as the recently discovered hm6A and f6A). The methods to map these modifications (except m5C) need to be developed and their biological functions need to be elucidated.

“Lastly, potential reversal of rRNA and tRNA modifications needs to be studied. As I stated in the Commentary in 2010, dynamic RNA modifications could impact gene expression regulation resembling well-known dynamic DNA and histone modifications. I think now we have enough convincing data to indicate this is indeed the case. The future is bright.”

Very bright, indeed! Your comments about this posting are welcomed.

DNA Barcoding Exposes Scary Data for Herbal Products

  • Americans spend $5 billion annually for products with unproven benefits
  • Recent studies show majority of herbal products are contaminated or inauthentic
  • Dietary supplements account for ~20% of drug-related liver injuries
  • What can be done to protect consumers?

In August 1873, Darwin had ‘a fit’, in which he temporarily lost his memory and could not move. He recovered, and busied himself in work on the cross- and self-fertilization of plants. Published in 1876, Effects of Cross and Self Fertilisation in the Vegetable Kingdom provided ample proof of his belief that cross-fertilized plants produced vigorous offspring that were better adapted to survive. Taken from otago.ac.nz via Bing Images

Since this post comes on the heels of Charles Darwin Day (February 12th), it’s apropos to recognize Darwin’s plant-related scientific contributions dealing with plant movement and cross-fertilization—largely the result of keen observations and powerful reasoning that we should strive to emulate in our own scientific pursuits. Darwin’s sense of scientific integrity, however, seems to have been lost on the world of herbal products, according to several recent studies and reports. It appears this industry is in need of some policing and compliance that may well come from DNA barcoding.

According to Anahad O’Connor’s sobering article in the NY Times, Americans spend $5 billion a year on unproven herbal supplements—promising everything from fighting colds to boosting memory. Given the size of this market, it’s alarming that these products are oftentimes contaminated and/or mislabeled. Hmmm, sound familiar?  You may be recalling one of my posts from last year that explored analogous DNA-based findings for meat and fish products indicating global gross negligence or fraud. To quote Yogi Berra, “it’s like déjà vu, all over again”. Before delving into the details of this unsettling—if not downright scary—situation, let’s briefly discuss some background concepts that will enable us to understand how DNA barcoding may help bring honesty and transparency to the herbal products industry.

What is DNA Barcoding?

Since the herbal products exposé described below involves DNA barcoding, some readers may want to know what this methodology entails. Fortunately, Cold Spring Harbor Laboratory hosts a website called DNA Barcoding 101 that provides the following snippets of information, as well as detailed “how to” protocols in a downloadable PDF.

Taxonomy is the science of classifying living things according to shared features. Less than two million of the estimated 5-50 million plant and animal species have been identified. Scientists agree that the yearly rate of extinction has increased from about one species per million to 100-1,000 per million. This means that thousands of plants and animals are lost each year, and most of these have not yet been identified.

Classical taxonomy falls short in this race to catalog biological diversity before it disappears. Specimens must be carefully collected and handled to preserve their distinguishing features. Distinguishing the subtle anatomical differences between closely related species requires the subjective judgment of a highly trained specialist—and few such specialists are being trained in colleges today.

Now, DNA barcodes allow non-experts to objectively identify species—even from small, damaged, or industrially processed material. Just as the unique pattern of bars in a universal product code (UPC) identifies each consumer product, a “DNA barcode” is a unique pattern of DNA sequence that identifies each living thing. DNA barcodes, about 700 nucleotides in length, can be quickly processed from thousands of specimens and unambiguously analyzed by computer programs. Barcoding relies on short, highly variable regions of the genome. With thousands of copies per cell, mitochondrial and chloroplast sequences are readily amplified by PCR, even from very small or degraded specimens, to enable DNA sequencing of the barcode.
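Conceptually, the identification step boils down to comparing a query barcode against a reference library and reporting the closest match. The toy Python sketch below shows that idea with a two-entry “library” of invented sequences; real pipelines use curated reference libraries (e.g., BOLD) and alignment tools such as BLAST rather than a simple percent-identity loop.

```python
# Toy barcode identification: match a query read against a small reference
# "library" by percent identity and report the closest species. The sequences
# are invented; real barcodes are ~700-nt regions and real pipelines use
# curated reference libraries and alignment tools such as BLAST.

def percent_identity(a, b):
    """Percent identity over the shared length of two sequences."""
    length = min(len(a), len(b))
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / length

def identify(query, library):
    """Return (species, barcode) of the library entry most similar to query."""
    return max(library.items(), key=lambda item: percent_identity(query, item[1]))

library = {
    "Hypericum perforatum (St. John's Wort)": "ATGGCTTACCGATCCTGAA",
    "Senna alexandrina": "ATGGATTACGGTTCCTGCA",
}
query = "ATGGATTACGGTTCCTGCA"    # hypothetical read from a capsule's contents
species, barcode = identify(query, library)
print(species, f"{percent_identity(query, barcode):.1f}% identity")
```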

Herbal product DNA barcodes are, in principle, similar to those pictured below that exemplify unique characterization of two different cryptic species of a butterfly, which appear visually to be nearly identical, and two genera of owl that are visually distinct.

Four-color DNA sequencing trace showing sequence of T (red), C (blue), G (black), and A (green) bases that comprises a barcode and can be redrawn as depicted below (taken from srmgenetics.info via Bing Images).

DNA barcodes identify all living things (taken from boomersinfokiosk.blogspot.com via Bing Images).

Selling Herbal Products is BIG Business…But Who’s Watching Out for You?

The international trade in herbal products is a major force in the global economy and the demand is increasing in both developing and developed nations. According to a recent report, there are currently more than 1,000 companies producing medicinal plant products with annual revenues in excess of $60 billion. Notably, medicinal herbs now constitute the most rapidly growing segment of the North American alternative medicine market, with over 29,000 herbal substances generating billions of dollars in trade. For those of you who may be interested, I found this list of the top 10 best-selling herbal supplements in the US, among which are Ginseng (#9), Purple coneflower (#6), Ginkgo (#4), Cranberry (#2), and—surprising to me—Soy (#1).

Ginseng berry growth is restricted to certain areas of the world due to climate and weather conditions, making it difficult to cultivate. When this rare berry is picked, it is after a three- to four-year wait, and then the harvest must occur during a short, two-week period. Ginseng berry contains potent antioxidants called ginsenosides—such as Ginsenoside Rg1 pictured here—that are more abundant and different than those found in the root of the ginseng plant. These nutrients are believed to enhance the body’s natural defense system to aid against invading free radicals (taken from eexcel.net via Bing Images and Wikipedia.org).

Although soy has been a staple of Asian cuisine for centuries, Westerners have only recently become aware of its valuable health benefits. Soy is a source of easily digestible protein and contains no saturated fat or cholesterol. Scientists have demonstrated through over 40 studies, spanning a 20-year period, that eating 25 grams of soy per day helps reduce the risk of America’s number one killer—heart disease. According to the US Food and Drug Administration (FDA), eating 25 grams of soy per day as a part of a low fat, low-cholesterol diet may reduce the risk of coronary heart disease (taken from eexcel.net via Bing Images).

Unfortunately, product adulteration and ingredient substitution is not uncommon in the medicinal herb and dietary supplement markets, as species of inferior quality are often substituted for those of a higher value. This practice constitutes not only product fraud, but according to the World Health Organization (WHO), it is a serious threat to consumer safety, as commented on below.

Currently, there are no best practices in place for identifying the species of the various ingredients used in herbal products. This is because the diagnostic morphological features of the plants cannot typically be assessed from powdered or otherwise processed biomaterials. As a result, the marketplace is prone to contamination and possible product substitution, which dilute the effectiveness of otherwise potentially useful remedies. Fortunately, DNA barcoding can now be used to combat this serious situation.

Report Reveals Rampant Contamination and Substitution

Dr. Steven G. Newmaster, Associate Professor, Centre for Biodiversity Genomics, Biodiversity Institute of Ontario, University of Guelph, Guelph, Ontario, Canada (taken from uoguelph.ca).

In October of 2013, a team of Canadian and Indian scientists led by Dr. Steven G. Newmaster published in BMC Medicine a report entitled DNA barcoding detects contamination and substitution in North American herbal products that has received widespread media attention. This study utilized blind sampling of commercially available herbal products, which were tested for authentication of plant ingredients using a Standard Reference Material (SRM) herbal DNA barcode library. The research questions focused on the following three areas. (1) Authentication: is the herbal species on the label found in the product? (2) Substitution: is the main herbal ingredient substituted by other species? (3) Fillers: are any unlabeled fillers used?

They tested the authenticity of 44 herbal products (41 capsules, 2 powders, 1 tablet) representing 12 companies. The samples were collected in the greater Toronto area in Canada, with several samples mailed from distributors in the US; all products are available to consumers in both Canada and the US. The samples covered 30 herbal species, each represented by 2 or 3 different companies, and were submitted as a blind test for authentication (labeled only with a product number) using PCR and BigDye® Sanger sequencing-based DNA barcoding at the Centre for Biodiversity Genomics within the Biodiversity Institute of Ontario, University of Guelph.

The following are some of their sobering findings:

  • Only 2 of the 12 companies tested provided authentic products without substitutions, contaminants or fillers.
  • Nearly 60% of the herbal products contained plant species not listed on the label.
  • Product substitution was detected in 32% of the samples.
  • More than 20% of the products included fillers such as rice, soybeans and wheat not listed on the label.

This graphical summary of the results taken from the NY Times speaks for itself.

In a follow-up press release from the University of Guelph, Dr. Newmaster said that “[c]ontamination and substitution in herbal products present considerable health risks for consumers.” He added that “[w]e found contamination in several products with plants that have known toxicity, side effects and/or negatively interact with other herbs, supplements and medications.” The statement added that one product labeled as St. John’s Wort contained Senna alexandrina, a plant with laxative properties—not intended for prolonged use, as it can cause chronic diarrhea and liver damage, and negatively interacts with immune cells in the colon. Also, several herbal products contained Parthenium hysterophorus (feverfew), which can cause swelling and numbness in the mouth, oral ulcers and nausea. It also reacts with medications metabolized by the liver.

Furthermore, one ginkgo product was contaminated with Juglans nigra (black walnut), which could endanger people with nut allergies. Unlabeled fillers such as wheat, soybeans and rice are also a concern for people with allergies or who are seeking gluten-free products, Newmaster said. “It’s common practice in natural products to use fillers such as these, which are mixed with the active ingredients. But a consumer has a right to see all of the plant species used in producing a natural product on the list of ingredients.”

Dietary Supplements Account for ~20% of Drug-related Liver Injuries

Consumer safety is not a theoretical concern. According to Anahad O’Connor’s follow-on and lengthier December 22nd, 2013 NY Times article entitled Spike in Harm to Liver is Tied to Dietary Aids, “[d]ietary supplements account for nearly 20 percent of drug-related liver injuries that turn up in hospitals, up from 7 percent a decade ago, according to an analysis by a national network of liver specialists. The research included only the most severe cases of liver damage referred to a representative group of hospitals around the country, and the investigators said they were undercounting the actual number of cases.” The article features a 17-year-old male who suffered severe liver damage after using a concentrated green tea extract he bought at a nutrition store as a “fat burning” supplement. The damage was so extensive that he was put on the waiting list for a liver transplant. Fortunately, he recovered and the transplant wasn’t necessary.

This NY Times article is well worth reading, and it provides further evidence that tighter regulations are needed.

Conclusions

Given the sobering significance of the above study by Newmaster and coworkers, I thought it was better to directly quote concluding remarks by these authors, rather than selectively paraphrase and oversimplify the situation:

“Currently there are no standards for authentication of herbal products. Although there is considerable evidence of the health benefits of herbal medicine, the industry suffers from unethical activities by some of the manufacturers, which includes false advertising, product substitution, contamination and use of fillers. This practice constitutes not only food fraud, but according to the WHO, serious health risks for consumers. A study of health claims made by herbal product manufacturers on the internet found that 55% of manufacturers illegally claimed to treat, prevent, diagnose or cure specific diseases. Regulators such as the FDA and Canadian Food Inspection Agency (CFIA) may not have the resources to adequately monitor the dietary supplement manufacturers and their advertising claims, and there are concerns that the current regulatory system is not effective in protecting consumers from the risks associated with certain herbal products. Chemical research studies have documented poor quality control and high content variability of active ingredients among products from a variety of manufacturers of herbal supplements. This is partly because herbs contain complicated mixtures of organic chemicals, the levels of which may vary substantially depending upon many factors related to the growth, production and processing of each specific herbal product. Although many manufacturers provide products with consistent levels of active ingredients through a process known as chemical standardization, this technique has uncertain effects on the safety and efficacy of the final product.”

“Many of the dangers of commercial plant medicine have been brought to light by DNA technology based studies that have identified contamination of herbal products with poisonous plants. Eroding consumer confidence is driving the demand for a product authentication service that utilizes molecular biotechnology. One approach to vetting herbal product substitution and contamination is product authentication using DNA barcoding. Research studies such as ours and others reinforce the importance of using DNA barcoding in the authentication of socioeconomically important herbal species. We suggest that the herbal industry should voluntarily embrace DNA barcoding for authenticating herbal products through testing of raw materials used in manufacturing products, which would support sovereign business interests and protect consumers. This would be a minor cost to industry with a limited amount of bulk product testing, which would certify a high quality, authentic product. If the herb is known to have health benefits and it is in the product, then this would provide a measure of quality assurance in addition to consistent levels of active ingredients. Currently we are building an SRM DNA barcode library for commercial herbal species and standard testing procedures that could be integrated into cost effective ‘best practices’ in the manufacturing of herbal products. This would provide industry with a competitive advantage as they could advertise that they produce an authentic, high quality product, which has been tested using DNA-based species identification biotechnology, therefore gaining consumer confidence and preference. This approach would support the need to address considerable health risks to consumers who expect to have access to high quality herbal products that promote good health.”

I for one would gladly pay a premium for herbal products that are “DNA Barcode Certified”.

Wouldn’t you?

As always, your comments are welcomed.

Postscript

After I finished writing this post, I went back to the aforementioned BMC Medicine article by Newmaster and coworkers to use its Google-based link to blogs citing this publication. While most were recaps of the report, one entitled It’s Now Herbal Products’ Turn to (Unfairly) Wear the Scarlet Letter caught my attention. I won’t provide a synopsis here, but you may want to read the blogger’s rather different, über-critical commentary, which seems (unfairly) biased to me.

If you’re interested in all facets of DNA barcoding, I highly recommend visiting the DNA Barcoding blog by Dirk Steinke at the University of Guelph, which has new posts daily.

For those of you who are educators, the BioBelize website provides an example of how secondary education programs in New York City and Belize came together to define and implement DNA Barcoding curricula to encourage young student interest and involvement in this aspect of science.

Sex, Money and Brains: What do They Have in Common?

  • Human brain connectome: the last frontier?
  • Is Europe making a real HAL 9000?
  • MRI imaging of love and other emotions

To be perfectly honest, writing a blog about the big-buck-backed ‘brain initiatives’ in the USA and Europe was on my mental—pun intended—‘to do’ list for some time, but I couldn’t see any direct relevance to trends in nucleic acids research. I changed my mind—pun intended—and opted to write this post after a recent publication in the highly respected PNAS grabbed my attention. The publication dealt with brains and sex—and since Valentine’s Day is just around the corner, the time seemed right.

In this publication, brains were shown to be ‘hard wired’ very differently based on sex, and—more interestingly—these so-called ‘structural connectomes’ were associated with different behavioral characteristics. According to the authors, male brains are structured to facilitate perception and coordinated action, whereas female brains facilitate communication between analytical and intuitive processing modes. My first reaction was that these ‘connectome’-derived behaviors aligned with males being ‘action’-oriented and females having ‘women’s intuition’.

Brain networks show increased connectivity from front to back and within one hemisphere in males (upper) and left to right in females (lower); Ragini Verma, PNAS 2013 (taken from planet.infowars.com via Bing Images).

I will first put this mapping study into broader context by briefly outlining the big ‘brain initiatives’ in the USA and Europe mentioned at the outset. Then I’ll move on to how these ‘hard-wiring’ connections were mapped by Prof. Ragini Verma at the University of Pennsylvania, and the commentary/controversy that has followed.

For historical reference, I should note that Lawrence Summers—Director of the White House National Economic Council for President Barack Obama in 2009-2010—created a maelstrom when he suggested in 2005, as then Harvard University President, that innate differences between the sexes may explain why fewer women succeed in science and math careers. Although he subsequently apologized for any misunderstanding his remarks may have caused, I strongly suspect the newly revealed ‘hard wiring’ brain-connection differences between the sexes will reignite this hot topic.

What is the NIH BRAIN Initiative?

Rather than paraphrase, here’s what the opening paragraph states at the official NIH.gov website:

  • The NIH Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative is part of a new Presidential focus aimed at revolutionizing our understanding of the human brain. By accelerating the development and application of innovative technologies, researchers will be able to produce a revolutionary new dynamic picture of the brain that, for the first time, shows how individual cells and complex neural circuits interact in both time and space. Long desired by researchers seeking new ways to treat, cure, and even prevent brain disorders, this picture will fill major gaps in our current knowledge and provide unprecedented opportunities for exploring exactly how the brain enables the human body to record, process, utilize, store, and retrieve vast quantities of information, all at the speed of thought.

This website provides the following information-rich links for BRAIN, which is being backed by $100 million in FY2014 alone:

  • Why is this needed? With nearly 100 billion neurons and 100 trillion connections, the human brain remains one of the greatest mysteries in science and one of the greatest challenges in medicine. Neurological and psychiatric disorders, such as Alzheimer’s disease, Parkinson’s disease, autism, epilepsy, schizophrenia, depression, and traumatic brain injury, exact a tremendous toll on individuals, families, and society. Despite the many advances in neuroscience in recent years, the underlying causes of most neurological and psychiatric conditions remain largely unknown, due to the vast complexity of the human brain. If we are ever to develop effective ways of helping people suffering from these devastating conditions, researchers will first need a more complete arsenal of tools and information for understanding how the brain functions both in health and disease.
  • How will it work? A high-level working group, co-chaired by Dr. Cornelia “Cori” Bargmann (The Rockefeller University) and Dr. William Newsome (Stanford University), was asked to articulate the scientific goals of the BRAIN Initiative and develop a multi-year scientific plan for achieving these goals, including timetables, milestones, and cost estimates. As part of this planning process, input will be sought broadly from the scientific community, patient advocates, and the general public. An interim report was issued identifying high priority research areas for NIH funding in FY2014.
  • Funding Opportunities: The following six opportunities can be applied for in March 2014:
    • Transformative Approaches for Cell-Type Classification in the Brain (RFA-MH-14-215)
    • Development and Validation of Novel Tools to Analyze Cell-Specific and Circuit-Specific Processes in the Brain (RFA-MH-14-216)
    • New Technologies and Novel Approaches for Large-Scale Recording and Modulation in the Nervous System (RFA-NS-14-007)
    • Optimization of Transformative Technologies for Large Scale Recording and Modulation in the Nervous System (RFA-NS-14-008)
    • Integrated Approaches to Understanding Circuit Function in the Nervous System (RFA-NS-14-009)
    • Planning for Next Generation Human Brain Imaging (RFA-MH-14-217)

Based on the above, it’s clear that BRAIN is a very big deal, so to speak, scientifically and for public health. President Barack Obama’s 2013 State of the Union Address announcing BRAIN compared this initiative to mapping the human genome—wherein every $1 invested returned $140 to our economy—and to the earlier ‘Space Race’. While not a race per se, a parallel effort is underway in Europe: the Human Brain Project, which the next section examines.

What is the Human Brain Project?

The Human Brain Project (HBP)—headquartered in Switzerland, and soon to be relocated from Lausanne to its new base in Geneva—has 135 partner institutes and is blessed with a plenitude of money and planning, according to an Editorial in Nature in November 2013. Like the USA BRAIN Initiative, HBP is characterized as having “a romantic Moon-landing-level goal,” which for HBP is to simulate the human brain in a computer within ten years—think ‘HAL’ in the sci-fi classic Space Odyssey—and provide it to scientists as a research resource. Program leaders were said to have committed €72 million (US$97 million) to the 30-month ramp-up stage; those monies started to flow into labs after the project’s launch last month. The project has a detailed ten-year road map—and projected €1 billion (US$1.35 billion) budget—laden with explicit milestones, all of which can be perused in detail at the HBP website.

Will HBP name their simulated human brain computer HAL Jr.? HAL 9000 is a fictional character in Arthur C. Clarke’s Space Odyssey series. The primary antagonist of 2001: A Space Odyssey, HAL (Heuristically programmed ALgorithmic computer) is a sentient computer (aka artificial intelligence) that controls the systems of the Discovery One spacecraft and interacts with the ship’s astronaut crew. HAL’s physical form is not depicted, though it is visually represented as a red television camera eye located on equipment panels throughout the ship (taken from Wikipedia and Bing Images).

According to Nature, many have raised doubts that HBP will achieve its goal, arguing that too little is understood about how the brain works to make the ambition feasible. Nevertheless, the Editorial adds, the project’s scope was originally conceived as a program whereby ‘bottom-up’ experimental data—electrophysiological, anatomical or cellular—would feed into a supercomputer without preconceived ideas of how the simulated neuronal circuitry might organize itself. A ‘top-down’ element has now been introduced.

An fMRI of the brain. Green areas were active while subjects remembered information presented visually. Red areas were active while they remembered information presented aurally (by ear). Yellow areas were active for both types (taken from stanford.edu via Bing Images)

The ‘bottom-up’ data feed—mostly from research on mice—remains a core component, but how it is processed in the brain simulator will be guided by the findings of one of the Human Brain Project’s 15 subprojects, on high-level human cognitive architecture. This will generate data for both animals and humans, describing how cognitive tasks, such as those involving space, time and numbers, are processed in the brain. For example, in one major research project, around ten people will be selected for repeated study during the decade-long project. Their ‘reference brains’ will be measured using a range of non-invasive techniques such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to work out how the relevant neurocircuitry is organized during specific tasks. The detailed bottom-up data will have to align with this broad architecture.

An EEG measures voltage fluctuations resulting from ionic current flows within the neurons of the brain. In clinical contexts, EEG refers to the recording of the brain’s spontaneous electrical activity over a short period of time, usually 20–40 minutes (taken from buzz-master.com via Bing Images).

Supercomputing has proved too slow for real-time brain simulation, so other subprojects will focus on developing faster supercomputers, as well as neuromorphic computing, which can theoretically simulate brain activity orders of magnitude faster than occurs in a real brain.

According to the Editorial, HBP may still fail to deliver on its central promise, at least at the desired degree of sophistication, and it remains a high-risk initiative; keeping the unwieldy, multidisciplinary consortium of participants—mostly in Europe, but also in countries including Israel, Japan and the United States—on track may also prove difficult. But the risks are spread over the subprojects, some of which will inevitably add significantly to our sum of neuroscience knowledge.

The Editorial concludes by cautioning that, before getting too ‘starry-eyed about mega-projects, let’s remember that major breakthroughs in understanding the brain will continue to emerge from the labs of individual investigators. The journey towards a full understanding of the brain will be long and uncertain, and there will be ample opportunity for individual contributions to help point the way.’

I fully agree with this outlook, and believe it applies equally to the USA BRAIN Initiative.

Sex and brains: vive la différence!

Ragini Verma, PhD, Associate Professor of Radiology, Department of Radiology (taken from academic.research.microsoft.com via Bing Images).

This is the catchy title of a Science and Technology article in the December 13th issue of The Economist covering Prof. Ragini Verma’s PNAS publication on the brain ‘hard wiring’ differences between the sexes pictured in my introduction above.

Before reiterating her conclusions, let’s briefly consider how these ‘hard wiring’ connections are experimentally determined. Dr. Verma’s technique is diffusion tensor imaging (DTI), which—according to the Miami Children’s Brain Institute—is a magnetic resonance imaging (MRI) modality that measures local microstructural characteristics of water diffusion within tissues. As with other types of MRI, the measurements are registered to anatomical coordinates, which allows visual representations of the data, i.e. images derived from water diffusion throughout the brain. Because the fibers that connect nerve cells have fatty sheaths, the water in them can diffuse only along a fiber, not through the sheath. So DTI is able to detect bundles of such fibers, and see how they’re connected—voilà!

While Verma’s DTI results showing sex-related ‘hard wiring’ differences are open to interpretation, her take according to Nature is that they underlie some of the variations in male and female cognitive skills. The left and right sides of the cerebrum, in particular, are believed to be specialized for logical and intuitive thought, respectively.

In her view, the wiring-diagram cross-talk between hemispheres in women helps explain their better memories, social adeptness, and ability to multitask, all of which benefit from the hemispheres collaborating, so to speak.

In men, by contrast, within-hemisphere links let them focus on things that do not need complex inputs from both hemispheres, hence the preoccupation with a single idea or thought. Because each hemisphere controls, by itself, only one half of the body, there is a physical basis for men having better motor abilities—or, in common language, being better coordinated than women.

Dr. Verma’s other salient finding is that these connectome differences develop with age. The brains of boys and girls aged 8 to 13 showed only a few differences, but these differences became more pronounced at ages 13 to 17, and even more pronounced after 17 years of age. Differences in brains visible by DTI thus manifest themselves mainly when sex itself begins to matter.

By the way, it dawned on me that, based on Dr. Verma’s findings, the HBP’s ‘human brain in a computer’ could/should have two alternative modes of operation: male or female!

Closing Caveats

When researching Dr. Verma’s PNAS publication, I came across a Los Angeles Times account by Geoffrey Mohan that referred to researchers who ‘cautioned that the imagery is an indirect measure of axons, not a cell-by-cell census and map. And the results are strictly statistical averages, although in a very large sample.’ Put another way, individual DTIs of other males and females may show significant deviations from these average patterns of connectivity.

Mohan’s article generated over 80 comments—some more interesting or humorous than others. I found that the following comment by ‘craigbhill’ on December 5, 2013 raised some additional caveats, mixed with some wit.

The key would be whether these gender-brain-based abilities cross all cultural bounds, then we would know they are innate. There’s no denying the basic male-female global tendencies, it’s not like females only in the West or even just in the US act thus and so when it’s the bleepin’ same everywhere, including in cultures removed from the broader worldwide culture. Sure, there are differences culturally, but not basically, and those changes are relatively minute within the basic natural behaviors.

So there’s both nature and nurture, and we’re now learning better than ever which and where.

(Accomplished by my basically male mind which has also been shown by a brain scan to act within the hemispheres in a more female way than most men’s—I am blessed.)

Valentine’s Day, Love and fMRI

In the now classic 1984 track What’s Love Got To Do With It?, Tina Turner sings:

What’s love got to do, got to do with it?
What’s love but a second hand emotion?

You’ll have to read the rest of the lyrics to find out her thoughts about that, but in the meantime, I’d like to connect the three topics of this section’s heading by first sharing what Wikipedia says about Valentine’s Day, partly because—honestly—I never knew its interesting history.

St. Valentine’s Day—also known as Saint Valentine’s Day, Valentine’s Day or the Feast of Saint Valentine—is observed on February 14th each year in many countries around the world, and began as a liturgical celebration of one or more early Christian saints named Valentinus. The most popular martyrology associated with Saint Valentine holds that he was imprisoned for performing weddings for soldiers who were forbidden to marry and for ministering to Christians, who were persecuted under the Roman Empire. During his imprisonment, he is said to have healed the daughter of his jailer. Legend states that before his execution he wrote her a letter signed “Your Valentine” as a farewell.

Antique Valentine’s card (taken from Wikipedia).

The day was first associated with romantic love in the circle of Geoffrey Chaucer in the High Middle Ages, when the tradition of courtly love flourished. In 18th-century England, it evolved into an occasion in which lovers expressed their love for each other by presenting flowers, offering confectionery, and sending greeting cards (known as “valentines”).

OK, back to science. Obviously, love is an emotion, and I wondered whether fMRI has been applied to mapping brain patterns of love and other human emotions—none of which I regard as ‘second hand’, unlike Tina Turner. Somewhat to my surprise, this has indeed been done by fMRI. Here are some interesting—I think—findings posted on Psychcentral.com on June 21st, 2013.

For the first time, scientists have identified which emotion a person is experiencing based on brain activity. A team at Carnegie Mellon University (CMU) led by Karim Kassam, PhD, combined fMRI and machine learning to measure brain signals and read emotions in individuals. The findings illustrate how the brain categorizes feelings, giving researchers the first reliable process to analyze emotions.

Until now, research on emotions has long been stymied by the lack of reliable methods to evaluate them, mostly because people are often reluctant to honestly report their feelings. Further complicating matters, many emotional responses may not be consciously experienced.

For the study, 10 actors were scanned at CMU’s Scientific Imaging & Brain Research Center while viewing the words of nine emotions: anger, disgust, envy, fear, happiness, lust, pride, sadness and shame.

While inside the fMRI scanner, the actors were instructed to enter each of these emotional states multiple times, in random order.

The computer model was able to correctly identify the emotional content of photos being viewed using the brain activity of the viewers.

The computer model achieved a rank accuracy of 0.84. Rank accuracy refers to the percentile rank of the correct emotion in an ordered list of the computer model guesses; random guessing would result in a rank accuracy of 0.50.
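Since rank accuracy is less familiar than plain accuracy, here is a minimal sketch of how such a metric can be computed, assuming the model outputs a score for each candidate emotion on each trial; the labels and scores below are made up, and this is not the CMU team’s code.

```python
# Rank accuracy as described above: the percentile rank of the correct emotion
# within the model's ordered list of guesses, averaged over trials (1.0 = always
# the top guess, 0.5 = chance). Scores and labels are invented toy values.

def rank_accuracy(score_dicts, true_labels):
    """Average percentile rank of the true label across trials."""
    total = 0.0
    for scores, truth in zip(score_dicts, true_labels):
        ordered = sorted(scores, key=scores.get, reverse=True)   # best guess first
        rank = ordered.index(truth)                              # 0 = top guess
        total += 1.0 - rank / (len(ordered) - 1)
    return total / len(score_dicts)

trials = [
    {"happiness": 0.90, "sadness": 0.06, "anger": 0.04},
    {"happiness": 0.20, "sadness": 0.50, "anger": 0.30},
]
print(rank_accuracy(trials, ["happiness", "sadness"]))   # -> 1.0 for this toy data
```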

Next, the team took the machine learning analysis of the self-induced emotions to guess which emotion the subjects were experiencing when they were exposed to the disgusting photographs. The computer model achieved a rank accuracy of 0.91. With nine emotions to choose from, the model listed disgust as the most likely emotion 60 percent of the time and as one of its top two guesses 80 percent of the time.

Finally, they applied machine learning analysis of neural activation patterns from all but one of the participants to predict the emotions experienced by the hold-out participant.

This answers an important question: If we took a new individual, put them in the scanner and exposed them to an emotional stimulus, how accurately could we identify their emotional reaction? Here, the model achieved a rank accuracy of 0.71, once again well above the chance guessing level of 0.50.
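The “hold out one participant” evaluation described here is simply leave-one-subject-out cross-validation. Below is a schematic sketch using a trivial nearest-centroid classifier on invented two-dimensional “activation” vectors; it illustrates only the cross-validation structure, not the CMU team’s actual features or model.

```python
# Leave-one-subject-out evaluation: train on all participants except one, then
# predict the held-out participant's trials. The classifier here is a trivial
# nearest-centroid rule on made-up 2-D feature vectors, for illustration only.

def nearest_centroid_predict(train, test_vec):
    """Predict the label whose training centroid is closest to test_vec."""
    grouped = {}
    for label, vec in train:
        grouped.setdefault(label, []).append(vec)
    best_label, best_dist = None, float("inf")
    for label, vecs in grouped.items():
        cx = sum(v[0] for v in vecs) / len(vecs)
        cy = sum(v[1] for v in vecs) / len(vecs)
        dist = (test_vec[0] - cx) ** 2 + (test_vec[1] - cy) ** 2
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# participant -> list of (emotion, feature vector) trials; all values invented.
data = {
    "subj1": [("happiness", (1.0, 0.1)), ("disgust", (0.1, 1.0))],
    "subj2": [("happiness", (0.9, 0.2)), ("disgust", (0.2, 0.9))],
    "subj3": [("happiness", (1.1, 0.0)), ("disgust", (0.0, 1.1))],
}

for held_out in data:                          # leave one participant out
    train = [trial for subj, trials in data.items() if subj != held_out
             for trial in trials]
    correct = sum(nearest_centroid_predict(train, vec) == label
                  for label, vec in data[held_out])
    print(held_out, f"{correct}/{len(data[held_out])} correct")
```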

“Despite manifest differences between people’s psychology, different people tend to neurally encode emotions in remarkably similar ways,” noted Amanda Markey, a graduate student in the Department of Social and Decision Sciences.

The research team also found that while on average the model ranked the correct emotion highest among its guesses, it was best at identifying happiness and least accurate in identifying envy. And, it was least likely to misidentify lust as any other emotion, suggesting that lust produces a pattern of neural activity that is distinct from all other emotional experiences.

Frankly, I did not do well in my college ‘Psychology 101’ course, so I’m not sure what all of these fMRI results mean, but I suspect that several PhD theses will offer detailed interpretations. It’s nice to see that happiness was recognized the best. I do wonder, however, whether the results apply to non-actors.

As always, your comments are welcomed.

Postscript

After I finished the above post, the New York Times’ weekly Science Times section ran a lengthy article by James Gorman entitled The Brain, in Exquisite Detail. The article featured state-of-the-art MRI imaging by the Human Connectome Project, a $40 million, 5-year consortium effort supported by the NIH and aimed at the “first interactive wiring diagram of the living, working human brain.”

To build this diagram, researchers are said to be “doing brain scans and cognitive, psychological, physical and genetic assessments of 1,200 volunteers. They are more than a third of the way through collecting information.” Data processing will yield a 3-D interactive map of the healthy human brain showing structure and function, with detail to one and a half cubic millimeters.

While this article is definitely worth a quick scan—pun intended—its closing remarks given below provide a sobering dose of reality in metaphorical terms.

“Perhaps the greatest challenge is that the brain functions and can be viewed at so many levels, from a detail of a synapse to brain regions trillions of times larger. There are electrical impulses to study, biochemistry, physical structure, networks at every level and between levels. And there are more than 40,000 scientists worldwide trying to figure it out.

This is not a case of an elephant examined by 40,000 blindfolded experts, each of whom comes to a different conclusion about what it is they are touching. Everyone knows the object of study is the brain. The difficulty of comprehending the brain may be more aptly compared to a poem by Wallace Stevens, “13 Ways of Looking at a Blackbird.”

Each way of looking, not looking, or just being in the presence of the blackbird reveals something about it, but only something. Each way of looking at the brain reveals ever more astonishing secrets, but the full and complete picture of the human brain is still out of reach.”

De-Extinction: Hope or Hype?

  • Can scientists “revive” woolly mammoths?
  • Passenger Pigeons, possibly?
  • Is “facilitated adaptation” more realistic?

If you haven’t seen the 1993 movie Jurassic Park, the plot involves a tropical island theme park populated with cloned dinosaurs created by a bioengineering company, InGen. The cloning was accomplished by extracting the DNA of dinosaurs from mosquitoes that had been preserved in amber—not unlike extraction of ancient yeast DNA from extinct bees preserved in amber for brewing “Jurassic beer” that I featured in a previous posting. However, in Jurassic Park the strands of DNA were incomplete, so DNA from frogs was used to fill in the gaps. The dinosaurs were cloned genetically as females in order to prevent breeding.

This is all a great premise for a movie, but will Jurassic Park-like fantasy become reality in the near future? What’s being investigated now, and what concerns are being voiced? These are just some of the questions touched upon below.

Woolly Mammoths May One Day Roam Real-Life Jurassic Park

Dr. Hendrik Poinar, Director of the Ancient DNA Centre at McMaster University in Hamilton, Ontario (taken from fhs.mcmaster.ca via Bing Images).

Dr. Hendrik Poinar, Associate Professor at McMaster University in Canada, was trained as a molecular evolutionary geneticist and biological anthropologist, and now specializes in novel techniques to extract and analyze “molecular information (DNA and/or protein sequences)” from ancient samples. His work included such projects as sequencing the mitochondrial genome of woolly mammoths that went extinct long ago. Based on that work, Dr. Poinar was recently interviewed by CBC News about the likelihood of reestablishing woolly mammoths. Here are some excerpts:

Q: Without getting too technical, describe what you’re doing to bring back animals like the woolly mammoth?

A: We’re interested in the evolutionary history of these beasts. These lumbering animals lived about 10,000 years ago and went extinct. We’ve been recreating their genome in order to understand their origins and migrations and their extinction. That led to the inevitable discussion about if we could revive an extinct species and is it a good thing.

Q: Why is this so interesting to you?

A: There are reasons why these animals went extinct. It could be climate, it could be human-induced over-hunting. If we can understand the processes that caused extinction, maybe we can avoid them for current endangered species. Maybe we need to think about what we can do to bring back extinct species and restore ecosystems that are now dwindling.

Q: Is it possible to bring these things back to life?

A: Not now. We’re looking at 30 to 50 years.

Woolly mammoths roamed both North America and Asia for hundreds of thousands of years. Many went extinct during the most recent period of global warming (taken from CBC News via Bing Images).

Q: How would you do something like that?

A: First thing you have to do is to get the entire blueprint. We have mapped the genome of the woolly mammoth. We’re almost completely done with that as well as a couple other extinct animals. We can look at the discrete differences between a mammoth and an Asian elephant. We would take an Asian elephant chromosome and modify it with mammoth information. Technology at Harvard can actually do that. Take the modified chromosomes and put them into an Asian elephant egg. Inseminate that egg and put that into an Asian elephant and take it to term. It could be as soon as 20 years.

Q: Is this such a good idea?

A: That’s the million-dollar question. We’re not talking about dinosaurs. We’ll start with the herbivores—the non-meat eaters. We could use the technology to re-introduce diversity to populations that are dwindling like the cheetah or a wolf species we know are on the verge of extinction. Could we make them less susceptible to disease? Is it good for the environment? We know that the mammoths were disproportionately important to ecosystems. All the plant species survived on the backs of these animals. If we brought the mammoth back to Siberia, maybe that would be good for the ecosystems that are changing because of climate change.

Q: You are tinkering with the evolutionary process?

A: Yes, but would you feel differently if the extinction was caused by man like it was with the passenger pigeon or the Tasmanian wolf, which were killed by humans? Even the large mammoth, there are two theories on their extinction, one is overhunting by humans…and the other is climate. Do we have a moral obligation?

Bringing Back Passenger Pigeons

Ben Novak has a BS in Ecology and worked with mastodon fossils toward a master’s degree at McMaster University, but he abandoned that to pursue his long-time passion for passenger-pigeon genetics (taken from wfs.org via Bing Images).

Ben Novak, according to an interview in Nature last year, has spent his young career endeavoring to resurrect extinct species. Although he has no graduate degree, he has amassed the skills and funding to start a project to bring back the Passenger Pigeon—once the United States’ most numerous bird (about 5 billion according to Audubon)—which died out in 1914. Following are comments from Ben, taken from the Nature article referenced above, about how his work is funded and its prospects.

“Once I had passenger-pigeon tissue [from the Field Museum of Natural History in Chicago, Illinois], I started applying for grants to do population analysis, but I couldn’t secure funding. I got about $4,000 from family and friends to sequence the DNA of the samples. When I got data, I contacted George Church, a molecular geneticist at Harvard Medical School in Boston, Massachusetts, who was working in this area. He and members of Long Now Foundation in San Francisco, California, which fosters long-term thinking, were planning a meeting on reviving the passenger pigeon….The more we talked, the more they discovered how passionate I was. Eventually, Long Now offered me full-time work so that nothing was standing in my way.”

“I have just moved to the University of California, Santa Cruz, to work with Beth Shapiro. She has her own sample of passenger pigeons, and we want to do population genetics and the genome. It’s a good fit. Long Now pays me, and we do the work in her lab, taking advantage of her team’s expertise in genome assemblies and ancient DNA.”

Male passenger pigeon (taken from swiftbirder.wordpress.com via Bing Images).

For the sad story of how this creature went extinct, click here to access an account written by Edward Howe Forbush in 1917.

Doing more searching about Ben Novak led me to another 2013 interview, this time in Audubon. When asked if it’s realistic to get a healthy population from a few museum specimens, here’s what he said.

“If we’re willing to create one individual [passenger pigeon], then through the same process we can produce individuals belonging to completely different genetic families. We can make 10 individuals that, when they’re mated, will have an inbreeding coefficient near zero…First we need to discern what the actual genetic structure of the species was. We can analyze enough tissue samples to get that genetic diversity.”

While perusing the Long Now Foundation’s website, I was pleased to read a passenger pigeon progress report posted by Ben Novak on October 18th, 2013. The posting gives a detailed update on genomic sequencing of “Passenger Pigeon 1871” [date of preservation] at the University of California, San Francisco’s Mission Bay campus sequencing facility, as well as some nice pictures. Given what he said above about 10 individuals being theoretically adequate for reviving and restoring an extinct population, you’ll be as pleased as Ben is about the following.

“Passenger Pigeon 1871 was selected as the candidate for the full genome sequence for its superb quality compared to other passenger pigeon specimens. Over the last two years Dr. Shapiro, myself and colleagues have scrutinized the quality of 77 specimens including bones and tissues. Our first glimpses of data confirmed that the samples would be able to provide the DNA needed for a full genome sequence, but as we delved into the work, the specimens exceeded our expectations. Not only do we have one specimen of high enough quality for a full genome, we have more than 20 specimens to perform population biology research with bits of DNA from all over the genome.”

Revive and Restore

Reading about Ben Novak’s support from the Long Now Foundation led me to discover the organization’s Revive and Restore Project, aimed at genetic rescue of endangered and extinct species. Its mission is stated as follows:

“Thanks to the rapid advance of genomic technology, new tools are emerging for conservation. Endangered species that have lost their crucial genetic diversity may be restored to reproductive health. Those threatened by invasive diseases may be able to acquire genetic disease-resistance.

It may even be possible to bring some extinct species back to life. The DNA of many extinct creatures is well preserved in museum specimens and some fossils. Their full genomes can now be read and analyzed. That data may be transferable as working genes into their closest living relatives, effectively bringing the extinct species back to life. The ultimate aim is to restore them to their former home in the wild.

Molecular biologists and conservation biologists all over the world are working on these techniques. The role of Revive and Restore is to help coordinate their efforts so that genomic conservation can move ahead with the best current science, plenty of public transparency, and the overall goal of enhancing biodiversity and ecological health worldwide.”

This Project’s website is well worth visiting, as it provides a fascinating mix of species under consideration (such as the Passenger Pigeon and the woolly mammoth), various video presentations by advocates, and an engaging blog. It also provides a very convenient “donate” button should you be so inclined.

While the Passenger Pigeon project and other Revive and Restore efforts are well intentioned, I’m inclined at this time to be neutral-to-negative about them, and will reserve a final opinion until all parties, pro and con, have had extensive debates similar to those held in the past over then (and still) controversial recombinant DNA technology. Given the amount of concern and caution then for what we can now view as conventional genetic engineering, it seems reasonable to me that, with far more powerful tools for genomics and synthetic biology now available, “an abundance of caution” is in order when dealing with the possibility of resurrecting extinct species. If Jurassic Park serves as any sort of model for what science can accomplish, perhaps we should also consider what the movie highlights as the potential implications of those accomplishments.

For now, I’m intently interested in the continuing debates, and I find it fascinating to consider alternatives such as rescuing threatened species from extinction, as outlined next.

“Facilitated Adaptation” Pros & Cons

Michael A. Thomas, Professor of Biology at Idaho State University, and colleagues authored a Comment in Nature last year entitled Gene tweaking for conservation that is freely available (yeh!) and well worth reading. Some highlights are as follows:

Sadly, if not shockingly, conservative estimates predict that 15–40% of living species will be effectively extinct by 2050 as a result of climate change, habitat loss and other consequences of human activities. Among the interventions being debated, facilitated adaptation has received little discussion. It would involve rescuing a target population or species by endowing it with adaptive alleles, or gene variants, using genetic engineering—not unlike the genetically modified crops that now occupy 12% of today’s arable land worldwide. Three options for facilitated adaptation are outlined.

“Poster Child” for facilitated adaptation: an endangered Florida panther population was bolstered through hybridization with a related subspecies — a technique that could be refined using genomic tools (taken from Thomas et al. Nature 2013).

First, threatened populations could be crossed with individuals of the same species from better-adapted populations to introduce beneficial alleles. A good example is the crossing of a remnant Florida panther population with a related subspecies from Texas, which significantly boosted the former population’s numbers and its heterozygosity (a measure of genetic variation), the desired outcome. Risks of this approach include dilution of locally adaptive alleles.

Second, specific alleles taken from a well-adapted population could be spliced into the genomes of threatened populations of the same species. This was exemplified by recent work wherein heat-tolerance alleles in a commercial trout were identified for possible insertion into fish eggs in populations threatened by rising water temperature. Such an approach was viewed as low risk because it involves genetic manipulations within the same species.

Third, genes from a well-adapted species could be incorporated into the genomes of endangered individuals of a different species. This transgenic approach has been used extensively to improve crop tolerance to drought and temperature extremes. However, outcomes are hard to predict, and a major concern is that such an approach could bring unintended and unmanageable consequences—definitely a scary possibility.

What do you think about reintroducing extinct species?  Do you see other pros and cons to facilitated adaptation?  As always, your comments are welcomed.

Postscript

The following, entitled ‘De-Evolving’ Dinosaurs from Birds, recently appeared in GenomeWeb:

Ancient animals could be resurrected through the genomes of their modern-day descendants, Alison Woollard, an Oxford biochemist, tells the UK’s Daily Telegraph. For instance, the DNA of birds could be “de-evolved” to resemble the DNA of dinosaurs, the paper adds.

“We know that birds are the direct descendants of dinosaurs, as proven by an unbroken line of fossils which tracks the evolution of the lineage from creatures such as the velociraptor or T-Rex through to the birds flying around today,” Woollard says, later adding that “[i]n theory we could use our knowledge of the genetic relationship of birds to dinosaurs to ‘design’ the genome of a dinosaur.”

In both the book and movie Jurassic Park, the fictional resurrection of dinosaurs relied on dinosaur DNA preserved in fossilized biting insects, but as the Daily Telegraph notes, a study in PLOS One earlier this year found no evidence of DNA surviving in amber-preserved insects.

The Daily Telegraph adds that any dinosaur DNA recovered from bird genomes would be fragmented and difficult to piece back together. A mammoth, it says, might have a better shot.

Not-too-Direct Commercialization of Direct-to-Consumer Genetic Testing

  • Navigenics-and-Me: my “genetic selfie” for personalized medicine 
  • 23andMe: a widely watched and hotly debated story

When working at Life Technologies in 2009, I took advantage of the company’s health-oriented initiative to generously subsidize employees’ genetic analysis by Navigenics, which at the time was one of the first providers of direct-to-consumer (DTC) SNP-based genetic testing and medical counseling. Since then, there has been explosive interest—and intense debate—about a person’s “right” to their genetic information and the pros and cons of DTC genetic testing, as well as considerable corporate maneuvering in this marketplace, such as Life Technologies’ acquisition of Navigenics in 2012.

In addition to my being both a “consumer” and “technophile” of nucleic acid-based DTC genetic testing, a number of recent events triggered my decision to write this post. On the technology side was the sad passing last November of Frederick Sanger, winner of two Nobel Prizes and “father” of the eponymous sequencing method that provided the foundation for sequencing the human genome some ten years ago. Also, last November the FDA granted marketing authorization for the first high-throughput (next-generation) genomic sequencer, Illumina’s MiSeqDx, which will allow development and use of innumerable new genome-based tests, as reported in the New England Journal of Medicine. On the consumer side was the “upbeat” Consumer Genetics Conference held in September 2013 that, only two months later, was followed by the FDA’s “bombshell” cease-and-desist letter to 23andMe, which is perhaps the most widely watched DTC genetic-testing company.

So, with all of these events swirling around in my brain, I decided to offer the following comments on what Navigenics did and found for me, several aspects related to 23andMe, and an outline of some of the other corporate players in the rapidly expanding, new world of advanced genetic-testing that aims to go far beyond all current FDA-approved nucleic acid-based tests.

“Navigenics and Me”


David B. Agus, M.D.


Dietrich Stephan, Ph.D.

Before the “me” part, here’s the backstory (detailed elsewhere) on Navigenics. The company was founded in 2006 by David B. Agus, M.D., a prostate cancer specialist and Professor of Medicine and Engineering at the University of Southern California, and Dietrich Stephan, Ph.D., a human geneticist and Chairman of the Department of Human Genetics at the University of Pittsburgh. Navigenics began selling its genetic testing services in 2008, based on SNP analysis to assess risk for a variety of common health conditions. The company also launched an online portal allowing doctors to access the genomic information of consenting patients. The portal allows a physician to integrate patients’ genetic information into personalized health plans designed to aid early diagnosis or prevention of a number of health conditions.

In June 2008, California health regulators sent cease-and-desist letters to Navigenics and 12 other genetic testing firms, including 23andMe. The state regulators asked the companies to prove a physician was involved in the ordering of each test and that state clinical laboratory licensing requirements were being fulfilled. The controversy sparked a flurry of interest in the relatively new field, as well as a number of media articles, including an opinion piece on Wired.com entitled Attention, California Health Dept.: My DNA Is My Data. Two months later Navigenics and 23andMe received state licenses allowing the companies to continue to do business in California.

In July 2012, genetic analysis tools-provider Life Technologies announced its acquisition of Navigenics, with Ronnie Andrews, president of Medical Sciences at Life Technologies, commenting that “[t]he advent of personalized medicine will require a combination of technologies and informatics focused on delivering relevant information to the treating physician. Navigenics has pioneered the synthesis and communication of complex genomic information, and we will now pivot the company’s effort to date and focus on becoming a comprehensive provider of technology and informatics to pathologists and oncologists worldwide.”

I was unable to find more recent or specific information about this acquisition, perhaps largely due to the fact that Life Technologies itself is in the process of being acquired by Thermo Fisher Scientific, so things are in flux. However, Thermo Chief Executive Marc Casper said in a press release that “advanced genetic testing was an important field going forward, and his company wanted to get into it as an industry leader” through the acquisition of Life Technologies. Stay tuned.

Now for the “me” part. I’ll start with some basic info about what the Navigenics methodology involved at the time, what’s provided, and an important disclaimer, all taken from my Navigenics Health Compass Report:

  • DNA is collected via a saliva sample, and the DNA is probed for appropriate SNP markers in a CLIA-certified lab.
  • Included SNPs have been reliably shown in cited publications to be associated with diseases.
  • Presence of such markers does not mean that the individual will definitely develop a given health condition, but can raise risk, especially if other lifestyle or environmental risk factors are present.
  • Complex results are analyzed with mathematical formulae to calculate an individual’s risk for the conditions and medication outcomes.
  • The result is an estimate of the individual’s own lifetime risk, compared with the population average, and is provided in a report that is easily understood, but extensively documented.
  • Navigenics emphasizes that these results are “not a diagnostic test”, but rather “highlight genetic predisposition to common conditions and medication outcomes, so that prevention measures may be taken, early diagnosis made, or appropriate medications chosen.”

My Genetic Selfie 

This hopefully not too trendy section heading is just another way of referring to my Navigenics Health Conditions Results, which are cut-and-pasted from my Navigenics Health Compass Report and given below in alphabetical order. Based on my “flagged” (in orange) risk results, I obviously did some homework on Graves’ disease, the most common type of hyperthyroidism (overactive thyroid), which occurs more often in women than in men. People with Graves’ disease usually have lower than normal levels of thyroid-stimulating hormone—mine is currently normal. As for risk of heart attack, I’m following my doctor’s advice for dealing with my elevated blood pressure. Interestingly, I’m not (yet) lactose intolerant, but I do have some arthritic symptoms and use occasional medication for psoriasis. As for obesity, I exercise regularly and (try to) avoid fattening foods.

[Tables: Navigenics Health Conditions Results]

Regarding the eight medications assessed, I learned that I have “moderate risk” of a severe reaction to irinotecan (Camptosar®), which is used to treat cancers—mainly colon cancer—and works by preventing DNA from unwinding through inhibition of topoisomerase I.

As for assessed medication effectiveness, my results tabulated below are self-explanatory, and prompted me to consider wearing a medical-alert bracelet noting my warfarin result.

[Table: medication effectiveness results]

By the way, I thought it was quite apropos for Dr. Agus, who co-founded Navigenics, to share some of his own Navigenics Health Compass Report on his website, which you can view here; you can also check out his two books, A Short Guide to a Long Life and The End of Illness.

You may not like what your “genetic selfie” tells you (taken from dukehealth.org via Bing Images).

What’s with 23andMe?

The short backstory on 23andMe is that—like Navigenics—it too was founded in 2006, by Anne E. Wojcicki and Linda Avey. It began offering services about a year or so later, with the stated goal of “empowering individuals to access, explore, share, and better understand their genetic information, making use of recent advances in DNA analysis technologies and proprietary web-based software tools.” Just as Navigenics and Life Technologies have connected, so to speak, 23andMe partnered with Illumina—even earlier—to leverage the latter’s SNP genotyping platform technology, as discussed elsewhere.

Anne E. Wojcicki

Linda Avey

Fast-forward to December 2013 and these snippets from a Nature Editorial entitled—cleverly—The FDA and me, wherein I’ve added bolding for emphasis of certain key perspectives with which I agree.

“Late last month, US regulators dropped a bombshell on…23andMe in an exasperated cease-and-desist letter that prompted a fast and contrite response from the company—and a flurry of criticism of both parties among scientists and self-styled Health 2.0 activists who advocate the use of Internet tools in medicine.

The company has walked a fine line between promising that this activity will revolutionize medicine and averring that it is not actually medical at all, in an attempt to simultaneously lure in customers and avoid the need to conform to medical regulations.

The US Food and Drug Administration (FDA) has now called 23andMe’s bluff, complaining that the company has ‘not completed’ some studies that would prove the soundness of its methods and ‘not even started’ others; that 23andMe has shunned communication with the FDA since May; and that the company has launched a large advertising campaign without getting marketing approval. The agency demanded that 23andMe stop marketing its testing kit until it received proper authorization.

But the big question is not whether regulators will stop people from understanding their own DNA—they cannot. The question is whether such understanding has reached the point at which companies can exploit it, and if so, how to protect their customers. Part of answering that question is determining whether a company’s claim is true. This is what the FDA is trying to do, and until earlier this year, it seemed that 23andMe was happy to aid that mission—FDA approval, after all, would dispel worrying chatter about whether regulators would ultimately shut the company down. Mainstream biotechnology companies learned a long time ago that it pays to play nice with regulators.

Consumer demand is low in part because genetic tests on healthy people still cannot be relied on to produce consistent predictions about medical risks. Customers of 23andMe have detailed how the service variously provides lifesaving information and misleading results. This is simply the state of the science today. Silicon Valley ‘health disrupters’ who plan to revolutionize health care…like to think that they can apply their successful data-mining strategies to medicine, but it turns out that biology is more complicated than they perhaps first assumed.

No one should be fooled into thinking that direct-to-consumer genetic testing is doomed to fail. The science is moving so much faster than medical education that motivated and self-taught laypersons can learn and understand just as much about their genetic medical risks as can their doctors. Indeed, there are already public crowd-sourced tools that customers can use to interpret their genetic data for free. So even if regulators or doctors want to, they will not be able to stand between ordinary people and their DNA for very long.

In the meantime, it seems short-sighted for companies to rebuff regulators. If it is too onerous to prove the accuracy of the information they offer, they should not be selling this information in the first place. And if they turn up their noses at regulators, they may run afoul of an even more powerful force: the US system of civil litigation. Consumers are already joining class-action lawsuits alleging that 23andMe is selling misleading information. Such suits are much more effective than anything the government can do to get companies to change their practices.

To its credit, 23andMe seems to have learned this: on 26 November, [its CEO] acknowledged in a blog post both that the ‘FDA needs to be convinced of the quality of our data’ and that ‘we are behind schedule with our responses’ to the agency. The company has also stopped marketing.

It seems, then, that 23andMe’s experience with the FDA is less about the growing pains of a new industry than about affirming a principle—the need for truth in advertising—that is as old as business itself.” 

As mentioned in the above Editorial, the internet is chock full of dueling opinions about FDA v. 23andMe, and includes this poll in GenomeWeb that clearly reflects mixed and widely varying public thought on this matter:

Question: Do you think the FDA was right to send a warning letter to 23andMe?

  • 42% Yes. Regulators need to ensure that tests and their interpretation are valid.
  • 15% Yes. Such tests influence people’s healthcare decisions.
  • 10% Maybe. It’s unclear as to what the issue is.
  • 17% No. People have a right to their genetic data.
  • 13% No. No one will make a serious medical decision without getting a second opinion.

Stay tuned.

What about concordance of DTC genetic testing? What of it?

When mulling over my “Navigenics-and-Me” risk results in light of the 23andMe controversy, I wondered—as an experimental scientist—whether the same risk results would be obtained in an independent analysis of my saliva by 23andMe. Checking the literature for evidence of DTC genetic-testing concordance, I found a spot-on and revealing publication by Imai et al. in Clinical Chemistry entitled Concordance Study of 3 Direct-to-Consumer Genetic-Testing Services. Briefly, here are important snippets of what was reported.

BACKGROUND: Massive-scale testing of thousands of SNPs is not [in practice] error free, and such errors could translate into misclassification of risk and produce a false sense of security or unnecessary anxiety in an individual. We evaluated 3 DTC services and a genomics service that are based on DNA microarray or solution genotyping with hydrolysis probes (TaqMan® analysis) and compared the test results obtained for the same individual. 

METHODS: We evaluated the results from 3 DTC services (23andMe, deCODEme, Navigenics) and a genomics-analysis service (Expression Analysis). 

RESULTS: The concordance rates between the services for SNP data were >99.6%; however, there were some marked differences in the relative disease risks assigned by the DTC services (e.g., for rheumatoid arthritis, the range of relative risk was 0.9–1.85). A possible reason for this difference is that different SNPs were used to calculate risk for the same disease. The reference population also had an influence on the relative disease risk. 

CONCLUSIONS: Our study revealed excellent concordance between the results of SNP analyses obtained from different companies with different platforms, but we noted a disparity in the data for risk, owing to both differences in the SNPs used in the calculation and the reference population used. The larger issues of the utility of the information and the need for risk data that match the user’s ethnicity remain, however.

Although I’m not an expert in SNP-based genetic analysis, it seems that the aforementioned issues are addressable by industry-wide agreement to use the same SNP markers and associated medical databases (i.e., harmonization), and to account for ethnicity—akin to what has recently been reported for SNP-based human identity testing of different ethnic populations worldwide.
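
To make the point concrete, here is a minimal, purely hypothetical sketch in Python (not any company’s actual algorithm; all marker names and numbers are invented) of the two sources of divergence Imai et al. identify: even with >99.6% concordant genotype calls, services that use different SNP subsets and different reference populations will report different risks.

```python
# Hypothetical illustration: identical genotype calls, different reported risks.
from math import prod

# One individual's per-SNP relative risks (invented values).
per_snp_rr = {"rs0001": 1.15, "rs0002": 0.90, "rs0003": 1.30, "rs0004": 0.85}

services = {
    # service: (SNP panel used, assumed population-average lifetime risk)
    "Service A": (["rs0001", "rs0002", "rs0003"], 0.24),
    "Service B": (["rs0001", "rs0003", "rs0004"], 0.30),
}

for name, (panel, avg_risk) in services.items():
    rr = prod(per_snp_rr[s] for s in panel)   # simple multiplicative composite risk
    print(f"{name}: relative risk {rr:.2f}, estimated lifetime risk {rr * avg_risk:.0%}")
    # Service A: relative risk 1.35, estimated lifetime risk 32%
    # Service B: relative risk 1.27, estimated lifetime risk 38%
```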

Further reading, if you’re interested (and I hope you are)

Frankly, I was amazed by how much has been published, said, or debated on the subject of DTC genetic testing that I’ve only touched on here, based on my personal experience to date. So, in closing, I decided to share a few links to items that I found to be especially thought provoking.

A November 2013 study abstract entitled No easy explanation for divergent attitudes regarding the medical utility of consumer genetic testing: Findings from the PGen study. This study was funded by the NIH and announced in 2012 as having “[t]he goal to produce results that can be translated into recommendations to guide policy and practice in this rapidly emerging area.” Participants include more than one thousand customers of 23andMe and Pathway Genomics.

A scholarly (but not-so-easy-to-read) online paper entitled Myths, Misconceptions and Myopia: Searching for Clarity in the Debate about the Regulation of Consumer Genetics by Stuart Hogarth of Global Biopolitics Research Group, King’s College London, London, UK, funded by the Wellcome Trust. It concludes by stating that “[t]he choices we make as citizens about the technologies we use can have profound implications for the nature of our society. Shaping the future of genetic testing may be something which is better done as a collective policy rather than as individual consumers.”

Perspectives published in 2011 in Nature Reviews Genetics offered by five highly regarded experts—each in different fields—entitled The future of direct-to-consumer clinical genetic tests. The two questions posed to them for comment are:

  • What would be the fairest and safest way to regulate DTC genetic tests?
  • What should be the role of health professionals?

A blog post entitled 23andMe DNA Test Review: It’s Right for Me but Is It Right for You? provides a basic primer on DTC genetic analysis, along with a string of ~70 comments accumulated over several years that give a flavor, so to speak, of the differences in opinion.

As always, your comments are welcomed.

Are Scientific Publications Accelerating at a Faster Rate Than the Science Itself?

  • A new paper publishes every 20 seconds…but retractions are plentiful
  • Publishing (and retracting) at an unprecedented pace
  • How to cope with the never-ending flood of information?
  • Some mind-numbing stats for nucleic acid-related publications in 2013

A recent series of perspectives in the October 4th, 2013 issue of Science dealt with the accelerating volume of scientific literature due in part to the advent of the web and proliferation of journals—some good, some not so good—and the trend toward free (open-access) journals online, which in 2011 reached the “tipping point” of accounting for 50% of new research. It also stated that a new paper is now published about every 20 seconds, which equates to 1,576,800 papers each year—yikes!
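
For the arithmetically inclined, the “one paper every 20 seconds” figure checks out, as this quick sanity check (in Python) shows:

```python
# One new paper every 20 seconds, over a non-leap year:
seconds_per_year = 60 * 60 * 24 * 365
print(seconds_per_year // 20)   # 1,576,800 papers per year
```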

Ok, I know, these include many subjects that don’t directly impact your area of expertise; however, as you’ll see at the end of this posting, even narrowing these subjects down to nucleic acid-related terms involves many more publications in 2013 than you probably would guess.

So, with this year coming to an end, and the next one just around the corner, I began to reflect on the sheer volume of literature that I’ve “read”—mostly scanning titles, occasionally abstracts, but rarely reading in detail—to select and cobble into these postings. Among those articles was one of the monthly opinion pieces by Derek Lowe in Chemistry World entitled The never-ending story—keeping up with the literature is impossible, which strongly resonated with my volume-driven ruminations. He usually has a cleverly humorous way of communicating his thoughts, and this item was no exception. So it’s best to quote him verbatim rather than have his clever humor “lost in translation”, so to speak, by paraphrasing:

“You should be keeping up with the literature, you know. And you should be flossing your teeth. And checking the air pressure on your tyres [sic]. There’s probably some insurance or tax paperwork that you’ve been putting off, too, so you might as well get on with all of these at once. You’ll feel better about yourself, honestly.”

“The literature situation isn’t quite that bad, but if you get chemists [actually, any scientist] in a confessional frame of mind, they’ll probably tell you that they really don’t read the current journals as well as they ought to. In fact, I don’t think I’ve ever met anyone who thought that they were keeping up well enough. One of the problems, of course, is that the literature itself is a geyser, a phalanx of firehoses [sic] and it never stops gushing out information. There are more journals than ever before, publishing more papers, and they’re coming from all directions at once.”

“But most of this is junk. If you find that too strong a term, then lower the size of the junk pile and reassign some of it to the ‘doesn’t have to get read’ pile. That’s surely the largest one; it’s where all the reference-data papers go, the ones that no one looks at until their own research bumps into the same compound or topic.”

Mind-blowing geyser of science publications (taken from rcs.org © DieKleinert / Alamy)

Derek Lowe goes on to recommend “filtering and prioritizing” as the key to coping, which I also recommend and, in fact, do on a daily basis. While he continues by praising the utility of the now familiar web-feed icon for Really Simple Syndication (RSS) technology, I suggest—in addition or alternatively—taking advantage of equally simple feeds “freely” available via PubMed (paid for by U.S. taxes) and Google (paid for by advertisers, etc.).

The NIH provides a short instructional video (Quick Tour) on how to easily search and register with NCBI to be provided with über-convenient email links to “What’s new for ‘[your search item]’ in PubMed” on a daily, or less frequent, basis as you wish. Alternatively, search in Google Scholar and then click the Create Alert icon.
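
If you prefer scripting to clicking, the same kind of alert can be approximated programmatically. Here is a minimal sketch (assuming Python with the third-party requests library; the search term is just an example) that uses NCBI’s public E-utilities to pull records added to PubMed within the past week:

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def new_pubmed_records(term, days=7, retmax=20):
    """Return (count, PMIDs) for PubMed records matching `term` added in the last `days` days."""
    params = {
        "db": "pubmed",
        "term": term,
        "reldate": days,      # look back this many days
        "datetype": "edat",   # Entrez date, i.e., when the record was added
        "retmode": "json",
        "retmax": retmax,
    }
    r = requests.get(ESEARCH, params=params, timeout=30)
    r.raise_for_status()
    result = r.json()["esearchresult"]
    return int(result["count"]), result["idlist"]

count, pmids = new_pubmed_records("modified mRNA vaccine")  # example search term
print(f"{count} new records this week; first few PMIDs: {pmids[:5]}")
```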

Quick Tour demo for PubMed email alerts.

Keep in mind that PubMed excludes patents, while Google Scholar will include them (if you wish) as well as find various types of websites. Another benefit of Google Scholar is that, for any publication, it provides the number of citations and links thereto.

Dark Side of Open-Access Journals

Some 8,250 open-access scientific journals are now listed in a directory supported by publishers. Unlike traditional science journals that charge for subscriptions or fees from those who wish to read their contents, open-access journals make research studies free to the public. In return, study authors pay up-front publishing costs if the paper is accepted for publication.

“From humble and idealistic beginnings a decade ago, open-access scientific journals have mushroomed into a global industry, driven by author publication fees,” says journalist John Bohannon, writing in an October 4th, 2013 Science report entitled Who’s Afraid of Peer Review? Basically, a “spoof paper” concocted by Science, claiming that a cancer drug discovered in a humble lichen was ready for testing in patients, received little or no scrutiny at many open-access journals.

“The goal was to create a credible but mundane scientific paper, one with such grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable,” Bohannon says. He submitted versions of his study to 304 open-access journals; of the 255 journals that completed the review process, 157 accepted the fake study for publication. “Acceptance was the norm, not the exception,” he writes. While spoof papers are not new, the Bohannon study represents a first systematic test of review practices, or their absence, across many journals at once.

The spoof study had at least three problems which should have been caught by reviewers:

  • The paper claimed the study drug killed cancer cells in a dose-dependent manner, even though its own data showed no such effect.
  • The paper claimed the drug had an increasing effect on cancer cells exposed to medical radiation, even though the experiments as described never exposed the cells to radiation.
  • The study author concluded the paper by promising to start treating people with the drug immediately, without further safety testing.

“If the scientific errors aren’t motivation enough to reject the paper, its apparent advocacy of bypassing clinical trials certainly should be,” Bohannon writes. But in many cases, it appears the study wasn’t peer-reviewed at all by the journals that responded to the spoof submission. Many of the reviews were just requests to format the study for publication.

This raises the question of how many legitimate but fundamentally flawed manuscripts get through the reviewing system for either traditional or open-access journals due to poor reviewing. In the end, scientists realize that publications are “self-correcting” in that erroneous results—and obviously phony data—are not reproducible, and eventually will be revealed as such.

In the next section, you’ll see the impact of these flawed manuscripts; it’s both sad and scary. There are so many publications being retracted that a daily blog tracks these, and provides back-stories ranging from “honest mistakes” to intentional fraud.

Retractions Run Rampant

You might think that retractions are relatively rare events, but there are now so many that a website, Retraction Watch, gives daily accounts of them. Adam Marcus and Ivan Oransky have provided a great service to the scientific community by starting Retraction Watch, and they deserve kudos for putting in what must be lots of time and thought to carefully research and write this blog. It is well worth visiting and, if you’re so inclined, subscribing to, as I did, for daily email postings that oftentimes prompt numerous reader comments. As of November 20th there were 5,751 subscriber-followers, an indication of quite widespread interest. Some retractions involve articles that had been cited hundreds of times—the current “mega-correction” record is 319—while others involve bizarre, if not sad, circumstances.

Why write a blog about retractions? This FAQ at Retraction Watch is answered by referring to the first post in 2010, which reads in part as follows:

“First, science takes justifiable pride in the fact that it is self-correcting — most of the time. Usually, that just means more or better data, not fraud or mistakes that would require a retraction.”

“Second, retractions are not often well-publicized. Sure, there are the high-profile…[b]ut most retractions live in obscurity in Medline and other databases. That means those who funded the retracted research — often taxpayers — aren’t particularly likely to find out about them. Nor are investors always likely to hear about retractions on basic science papers whose findings may have formed the basis for companies into which they pour dollars.”

“Third, they’re often the clues to great stories about fraud or other malfeasance….”

“Finally, we’re interested in whether journals are consistent. How long do they wait before printing a retraction? What requires one? How much of a public announcement, if any, do they make? Does a journal with a low rate of retractions have a better peer review and editing process, or is it just sweeping more mistakes under the rug?”

Another FAQ: why are so many of the retractions you cover from the life sciences?

The answer given is that “[t]here are a number of reasons for this. The two most important are that 1) we’re both medical reporters in our day jobs, so our sources and knowledge base are both deeper in the life sciences and 2) there are more papers published in the life sciences than in other areas.”

Also, is there a reliable database of retractions? The reply is “[n]o. There are ways to search Medline and the Web of Science for retractions, but there’s no single database”.

So what are people saying about Retraction Watch? Here’s a sampling of what’s been posted about this:

  • Columbia Journalism Review Regret the Error columnist Craig Silverman calls Retraction Watch “a new blog that should be required reading for anyone interested in scientific journalism or the issue of accuracy.”
  • Retraction Watch is a “somewhat addictive” blog, writes radiation oncology journal editor-in-chief Anthony Zeitman.
  • A “fascinating and worthwhile blog,” writes Andrew Revkin in Dot Earth, the New York Times’ environmental blog. Revkin has also called Retraction Watch “invaluable.”

Well, enough said about this topic, so let’s switch to some stats I collected.

Mind-numbing stats for nucleic acid-related publications in 2013

As I mentioned at the outset, a new paper being published about every 20 seconds—or 1,576,800 papers per year—prompted me to look into some stats for nucleic acid-related publications during 2013. Because practical applications of science are reflected in patents, I decided to use SciFinder, which covers academic publications of all sorts as well as patents. The search statistics obtained on October 30th were multiplied by 12 and divided by 10 (i.e., annualized from roughly ten months of data) to arrive at estimated total publication numbers for 2013. These totals are listed below in decreasing order:

  • Gene expression: 61,500
  • PCR: 58,000
  • Sequencing: 29,000
  • Hybridization: 15,500
  • SNP: 9,500
  • Primers: 9,200
  • Oligonucleotides: 8,500

So, if you think you can keep up with all of the gene expression, or PCR, or any of this literature, FORGET ABOUT IT!!  There will be approximately 191,200 nucleic-acid related publications in 2013.
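
For transparency, here is a minimal sketch (in Python) of the annualization described above: a count obtained on October 30th covers roughly ten months of 2013, so it was scaled by 12/10. As a check, the 7,129 oligonucleotide items mentioned in the next section annualize to about 8,555, consistent with the 8,500 listed above.

```python
# Estimated 2013 totals from the list above.
estimates_2013 = {
    "Gene expression": 61_500,
    "PCR": 58_000,
    "Sequencing": 29_000,
    "Hybridization": 15_500,
    "SNP": 9_500,
    "Primers": 9_200,
    "Oligonucleotides": 8_500,
}

def annualize(count_through_oct_30):
    """Scale a ~10-month count (Jan 1 through Oct 30) to a 12-month estimate."""
    return count_through_oct_30 * 12 / 10

print(sum(estimates_2013.values()))   # 191200 nucleic acid-related publications
print(round(annualize(7_129)))        # 8555, i.e., roughly the 8,500 listed for oligonucleotides
```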

Although it’s admittedly oversimplified, this rank ordering kind of made sense to me inasmuch as gene expression tools and technology have been available longer, utilize PCR, and are less expensive than those for sequencing. And sequencing—especially the new high-throughput methods—is used for gene expression analysis rather than genomics, which also employ PCR. Similarly, hybridization has been used for quite some time, giving way to high-throughput microarrays for both gene expression and single nucleotide polymorphism (SNP) analyses, also employing PCR. Finally, my rationalization of why primers and oligonucleotides are low on this list is that, while integral to the other topics, they are often unnamed components of kits or listed only as sequence IDs.

Oligonucleotides…let me count the ways:

Anyway, my oligonucleotide background led me to take a “deeper dive” into this segment, and here’s what I found. Through October 30th, 2013, there were actually 7,129 items, comprising 1,365 patents and 5,773 non-patents, the latter including 230 reviews.

Analysis by author nationality—my guess from surnames rather than stated nationality—gave the following “top-10” rank order:

  • Ranks 1–8: Korean, tied at 22 publications each
  • Rank 9: Chinese, 21
  • Rank 10 (tie): Danish, 20
  • Rank 10 (tie): Indian, 20
  • Rank 10 (tie): American, 20

Aside from the somewhat surprising—at least to me—prevalence of Korean authors, I was pleased to personally know one of the authors tied for 10th, namely Jesper Wengel, whose unlocked nucleic acid (UNA)-modified oligonucleotides are offered by TriLink as flexible RNA mimics that enable modulation of affinity and specificity. Also, in the amazing “small world of science”, the U.S. author tied for 10th is Eric Swayze at Isis Pharmaceuticals in Carlsbad, California, near TriLink.

Analysis by company or organization was likewise surprising—at least to me—in that two Chinese entities ranked highest in this list, followed by Isis, then a Korean entity, and then various U.S. entities I culled out.

  • People’s Republic of China: 33
  • Chinese Academy of Sciences: 28
  • Isis Pharmaceuticals: 25
  • Korean Inst. Biosci. & Biotechnol.: 25
  • University of California: 22
  • Ohio State University: 15
  • National Institutes of Health: 13
  • Life Technologies: 12
  • Yale University School of Med.: 11
  • Harvard University: 10
  • University of Utah: 10

The middle of this list provides yet another example of the sometimes scarily “small world of science”. During my professional career I first did a postdoc at Ohio State University, then worked at NIH, and eventually was with Life Technologies, before joining TriLink.

Quality Not Quantity

While the aforementioned has dealt with quantity, it’s perhaps more important to ask about quality. How to meaningfully measure quality or impact of scientific publications has been a topic of discussion and debate for a long time, certainly at institutions dealing with promotions and tenure.

One well-known metric is the h-index, which attempts to measure both the productivity and impact of the published work of a scientist or scholar based on the set of the scientist’s most cited papers and the number of citations that they have received in other publications. The index can also be applied to the productivity and impact of a group of scientists, such as a department, university or country, as well as a scholarly journal. The index was suggested by Jorge E. Hirsch, a physicist at UCSD, as a tool for determining theoretical physicists’ relative quality and is sometimes called the Hirsch index or Hirsch number.
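
To make the definition concrete, here is a minimal sketch (in Python, independent of any particular citation database) of how an h-index is computed from a list of citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have been cited at least h times each."""
    ranked = sorted(citations, reverse=True)   # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: seven papers with these citation counts give an h-index of 5.
print(h_index([42, 17, 9, 6, 6, 3, 1]))   # 5
```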

While somewhat out of date, this 2011 compilation taken from ecnmag.com is worth considering as a “snapshot in time”.


Good Bye 2013, Hello 2014

While I end this year with a bit of skepticism about the current rapid rate of scientific publications, I enter 2014 with renewed optimism and excitement about the scientific discovery that awaits us in the coming year. It’s been a fun year for me researching and writing this blog, which I sincerely hope you have found to be interesting. If you’ve missed any of this year’s posts, I direct you to the archives at the top of this page to view all of the blog activity. I look forward to another year of research and commentary and I hope you will continue to follow in 2014.

As always, your comments are welcomed.


Modified mRNA Mania

  • Biosynthetic modified mRNA for gene-based therapy without the gene!
  • AstraZeneca bets up to $420M on Moderna’s “messenger RNA therapeutics”
  • “Me-too” Pharma frenzy to follow?

In a perspective on gene therapy published in Science this year, Inder M. Verma starts by observing that the concept of gene therapy is disarmingly simple: introduce a healthy gene into a patient, and its protein product should alleviate the defect caused by a faulty gene or slow the progression of the disease. He then asks the rhetorical question: ‘why then, over the past three decades, have there been so few clinical successes in treating patients with this approach?’ The answer in part has to do with challenges of cell- or tissue-specific delivery, which admittedly is an issue for virtually any type of therapeutic agent. There is also concern about adverse events, generally ascribed to unintended vector integration leading to neoplasias. Nevertheless, according to Verma, the present clinical trials pipeline is jammed with more than 1,700 (!) clinical trials worldwide, drawing on a wide array of gene therapy approaches for both acquired and inherited diseases.

In view of this scientifically laudable but undeniably—if not frustratingly—slow progress, it’s not surprising that various groups of investigators—and investors—have recently opted to pursue a strategy that eliminates a DNA-encoded gene entirely! Instead, biosynthetic mRNA is delivered in order to directly produce the desired therapeutic protein product—this is now being referred to as “mRNA therapeutics”.

Having said this, let’s consider some pivotal scientific publications, patents, and the emerging commercial landscape for what looks to be a very hot area for research and corporate competition.

Modified mRNA Therapeutic Vaccines

An excellent review published in 2010 by Bringmann et al. entitled RNA Vaccines in Cancer Treatment covers various approaches to using mRNA encoding tumor-associated antigens to induce specific cytotoxic T lymphocyte and antibody responses. RNA-transfected dendritic cell vaccines have been extensively investigated and are currently in numerous clinical trials (the details for which can be found at the NIH ClinicalTrials.gov website by simply searching “RNA vaccines”).

Interestingly, clinical feasibility and safety assessment for direct intradermal injection of “naked” unmodified mRNA was reported back in 2008 by Weide et al., who removed metastatic tissue from each of 15 melanoma patients for total RNA extraction, reverse-transcription to cDNA, amplification, cloning, and transcription to produce unlimited amounts of copy mRNA.

Stabilizing unmodified mRNA by packaging it in liposomes or complexing it with cationic polymers has been widely investigated, as has introducing chemical modifications to make mRNA more resistant to degradation and more efficiently translated. The latter include elongation of the poly-A tail at the 3′-end of the molecule and modifications to the cap structure at the 5′-end. For example, if the original 7-methylguanosine triphosphate cap is replaced by an anti-reverse cap analog (ARCA), the efficiency of translation is strongly enhanced. To provide the immune system with even more potent signals, Scheel et al. modified mRNA with a phosphorothioate backbone in early commercial vaccine development work at CureVac GmbH (Tübingen, Germany) that continues today (see image below).

Effects of mRNA vaccines (taken from an article in Drug Discovery & Development by Ingmar Hoerr, PhD, CEO and Cofounder of CureVac).

In summary, in a 2013 review entitled RNA: The new revolution in nucleic acid vaccines, Geall et al. from Novartis Vaccines & Diagnostics (Cambridge, MA, USA) stated that “prospects for success are bright.” They cite several reasons for this optimistic outlook, including the potential of RNA vaccines to address safety and effectiveness issues sometimes associated with vaccines based on live attenuated viruses and recombinant viral vectors. In addition, methods to manufacture RNA vaccines are suitable as generic platforms and for rapid response, both of which will be very important for addressing newly emerging pathogens in a timely fashion. Plasmid DNA is the more widely studied form of nucleic acid vaccine, and proof of principle in humans has been demonstrated, although no licensed human products have yet emerged. The RNA vaccine approach, based on mRNA, is gaining increased attention, and several vaccines are under investigation for infectious diseases, cancer and allergy.

Modified mRNA for Expressing Clinically Beneficial Proteins

Dr. Katalin Karikó, Adjunct Associate Professor of Neurosurgery and Senior Research Investigator, Department of Neurosurgery, University of Pennsylvania (taken from upenn.edu).

In a landmark publication in 2008, Karikó et al. reasoned that the suitability of mRNA as a direct source of therapeutic proteins in vivo required muting its immunogenicity and boosting its effectiveness. Clues as to how this might be achieved were provided by their earlier work demonstrating that mRNA enzymatically synthesized in vitro from base-modified triphosphates, so as to contain modified nucleosides [such as pseudouridine (Ψ), 5-methylcytidine (m5C), N6-methyladenosine (m6A), 5-methyluridine (m5U), or 2-thiouridine (s2U)], had greatly diminished immunostimulatory properties. They reasoned that, “if any of the in vitro transcripts containing nucleoside modifications would remain translatable and also avoid immune activation in vivo, such an mRNA could be developed into a new therapeutic tool for both gene replacement and vaccination”.

Using the aforementioned and other base-modified nucleotide triphosphates—all obtained from TriLink BioTechnologies—Karikó et al. found, surprisingly, that mRNA containing pseudouridine had a higher translational capacity than unmodified mRNA when tested in mammalian cells and lysates or administered intravenously into mice at 0.015–0.15 mg/kg doses. The delivered mRNA and the encoded protein could be detected in the spleen at 1, 4, and 24 hours after the injection, and at each time-point there was more of the reporter protein when pseudouridine-containing mRNA was administered. Moreover, even at higher doses, only the unmodified mRNA was immunogenic. [Note: a fascinating follow-on publication provides a non-obvious—at least to me—molecular-level rationale for the surprising enhanced translation of pseudouridine-modified mRNA].


Uridine and pseudouridine differ in bonding to ribose but hydrogen-bond similarly to adenine. Pseudouridine is the most prevalent of the 100+ naturally occurring modified nucleosides found in RNA.

They concluded that, “[t]hese collective findings are important steps in developing the therapeutic potential of mRNA, such as using modified mRNA as an alternative to conventional vaccination and as a means for expressing clinically beneficial proteins in vivo safely and effectively.” Prior to publishing this pivotal report, Katalin Karikó and co-author Drew Weissman filed a patent application in 2006 entitled RNA containing modified nucleosides and methods of use thereof that was issued on October 2, 2012 as US 8,278,036 and is assigned to the University of Pennsylvania.

 

Blood Boosting with Erythropoietin

EPO stimulates the production of red blood cells (taken from proactiveinvestors.com via Bing Images)

In a very persuasive demonstration of the real possibility of mRNA therapeutics, Karikó et al. reported in 2012 that non-immunogenic pseudouridine-modified mRNA encoding erythropoietin (EPO) was translated in mice and non-human primates. Indeed, a single injection of 100 ng (0.005 mg/kg) of HPLC-purified mRNA complexed to a delivery agent elevated serum EPO levels significantly, and levels were maintained for 4 days. In comparison, mRNA containing uridine produced 10–100-fold lower levels of EPO lasting only 1 day. EPO translated from pseudouridine-mRNA was functional and caused a significant increase of both reticulocyte counts and hematocrits. As little as 10 ng of mRNA doubled reticulocyte numbers. Weekly injection of 100 ng of EPO mRNA was sufficient to increase the hematocrit from 43 to 57%, which was maintained with continued treatment. Even when a large amount of pseudouridine-mRNA was injected, no inflammatory cytokines were detectable in plasma.

Rhesus macaque (taken from flickr.com via Bing Images)

Using rhesus macaques (aka rhesus monkeys), they could also detect significantly increased serum EPO levels following intraperitoneal injection of rhesus EPO mRNA. Other researchers (Kormann et al.) independently used a single injection of modified murine mRNA to produce EPO in mice.

Kick-Start Cardiac Repair with VEGF-A

That’s the catchy title of a News & Views article in the October 2013 issue of Nature Biotechnology with an equally catchy byline that reads “[t]he survival of mice after experimental heart attack is greatly improved by a pulse of RNA therapy.” The featured report by Zangi et al., which is characterized as “a masterpiece of multidisciplinary studies…that will advance our thinking about therapeutic options in the cardiovascular arena,” is indeed impressive. These investigators report that intra-myocardial injections of vascular endothelial growth factor-A (VEGF-A) mRNA modified with 5-methylcytidine, pseudouridine, and 5’ cap structure resulted in expansion and directed differentiation of endogenous heart progenitors in a mouse model of myocardial infarction. They found markedly improved heart function and enhanced long-term survival of recipients. Moreover, “pulse-like” delivery of VEGF-A using modified mRNA was found to be superior to use of DNA vectors in vivo.

A heart attack (myocardial infarction) occurs when one of the heart’s coronary arteries is blocked suddenly, usually by a blood clot (thrombus), which typically forms inside a coronary artery that already has been narrowed by atherosclerosis, a condition in which fatty deposits (plaques) build up along the inside walls of blood vessels (taken from drugs.com via Bing Images).

Notwithstanding these promising results, the aforementioned News & Views article points out that microgram-scale doses of modified mRNA in mice used by Zangi et al. “would probably correspond to several hundred milligrams…in humans delivered in volumes that might exceed 10 ml per heart. In clinical practice, it would be very difficult to administer such volumes to infarcted hearts.” In my humble opinion, these are legitimate but purely hypothetical issues at this time and, given that it’s very “early days” for therapeutic modified mRNA technologies, it’s not unreasonable to assume that new modifications and/or improved delivery strategies can be developed to enable clinical utility.

From a technical perspective, this work by Zangi et al. involves a form of cell-free reprogramming and, as such, is a good segue into the next section. 

Modified mRNA for Cellular Reprogramming

In 2005, when I first heard of the concept of cellular reprogramming and dedifferentiation—which is to somehow coax a mature, differentiated cell to ‘run in reverse and go backwards biologically’ to a more primitive cell—my immediate impression as a chemist was that this was impossible. Surely, I thought, this must violate the Second Law of Thermodynamics or, if not, be completely counterintuitive to how life works. Wow, was I wrong!

Reprogramming of differentiated cells to pluripotency is now firmly established and holds great promise as a tool for studying normal development.  It also offers hope that patient-specific induced pluripotent stem cells (iPSCs) could be used to model disease or to generate clinically useful cell types for autologous therapies aimed at repairing deficits arising from injury, illness, and aging. Induction of pluripotency was originally reported by Takahashi & Yamanaka by enforced retroviral expression of four transcription factors, KLF4, c-MYC, OCT4, and SOX2 (aka “Yamanaka factors”)—collectively abbreviated as KMOS. (TriLink sells these and other factors used to direct cell fate.) Viral integration into the genome initially presented a formidable obstacle to therapeutic use of iPSCs. The search for ways to induce pluripotency without incurring genetic change has thus become the focus of intense research effort.

Consequently, much attention has been given to the 2010 publication by Warren et al. entitled Highly efficient reprogramming to pluripotency and directed differentiation of human cells with synthetic modified mRNA. In this work complete substitution of either 5-methylcytidine for cytidine or pseudouridine for uridine in protein-encoding transcripts markedly improved protein expression, although the most significant improvement was seen when both modifications were used together. Transfection of modified mRNAs encoding the above mentioned Yamanaka factors led to robust expression and correct localization to the nucleus. Expression kinetics showed maximal protein expression 12 to 18 hours after transfection, followed by rapid turnover of these transcription factors. From this it was concluded that daily transfections would be required to maintain high levels of expression of the Yamanaka factors during long-term, multifactor reprogramming regimens.

They went on to demonstrate that repeated administration of modified mRNA encoding these (and other) factors led to reprogramming various types of differentiated human cells to pluripotency with conversion efficiencies and kinetics substantially superior to established viral protocols. Importantly, this simple, non-mutagenic, and highly controllable technology was shown to be applicable to a range of tissue-engineering tasks, exemplified by mRNA-mediated directed differentiation of mRNA-generated iPSCs to terminally differentiated myogenic (e.g. heart muscle) cells.

Modified mRNA reprogramming fibroblasts into induced pluripotent cells for directed differentiation into myofibers, according to Warren et al. in Cell Stem Cell (2010)

Warren et al. concluded that “we believe that our approach has the potential to become a major enabling technology for cell-based therapies and regenerative medicine.” According to the Acknowledgements section of this 2010 publication, corresponding author Derrick J. Rossi recently founded a company, ModeRNA [sic] Therapeutics, “dedicated to the clinical translation of this technology.” That, as we shall see below, has had stunning commercial investment consequences.

By the way, and not surprisingly, Rossi & Warren filed a U.S. patent application in 2012 claiming, among other things, iPSC induction kits using 5-methylcytidine- and pseudouridine-modified mRNA encoding the KMOS human cellular reprogramming factors.

AstraZeneca’s Big Bet on Moderna’s Modified-mRNA Therapeutics

AstraZeneca aims to use Moderna Therapeutics’ modified-messenger RNA technology to develop and commercialize new drugs for cancer and serious cardiovascular, metabolic, and renal diseases, under a multi-year deal that could net Moderna more than $420 million. Moderna is also eligible for royalties on drug sales ranging from high single digits to low double digits per product.

AstraZeneca—ranked 7th in sales in 2010 among the world’s pharmaceutical companies—has the option to select up to 40 drug products for clinical development of what the companies are calling messenger RNA Therapeutics™, which they say could dramatically reduce the time and expense associated with creating therapeutic proteins using current recombinant technologies. Moreover, “where current drug discovery technologies can target only a fraction of the disease-relevant proteins in the human genome, we have the potential to create completely new medicines to treat patients with serious cardiometabolic diseases and cancer,” AstraZeneca CEO Pascal Soriot said in a statement. Mr. Soriot, formerly a senior executive at Roche, joined AstraZeneca only recently. The company has said it will reorganize R&D and eliminate 1,600 jobs by 2016 as part of a plan to address failures in clinical trials of several drugs, just as big sellers like the antipsychotic Seroquel and the heartburn drug Nexium have lost, or are about to lose, patent protection.

Moderna, based in Cambridge, Massachusetts, is privately held and was founded in 2010 by Flagship VentureLabs in association with leading scientists from Boston Children’s Hospital and Massachusetts Institute of Technology. Moderna has developed a broad intellectual property estate including 144 patent applications with 6,910 claims ranging from novel nucleotide chemistries to specific drug compositions, according to its website.

DARPA also Bets Big on Moderna’s Modified-mRNA Therapeutics

As the saying goes, “when it rains it pours”, and for Moderna it’s pouring money!

On October 2nd, Moderna announced that the U.S. Defense Advanced Research Projects Agency (DARPA)—whose most successful bets so far have been internet technologies—has awarded the company up to $25 million for R&D using its modified-mRNA therapeutics platform as a “rapid and reliable way to make antibody-producing drugs to protect against a wide range of known and emerging infectious diseases and engineered biological threats.” The statement goes on to say that Moderna’s approach can “tap directly into the body’s natural processes to produce antibodies without exposing people to a weakened or inactivated virus or pathogen, as is the case with the vaccine approaches currently being tested.”

The grant could support research for up to 5 years to advance promising antibody-producing drug candidates into preclinical testing and human clinical trials. The company also received a $700,000 ‘seeding’ grant from DARPA in March to begin work on the project.

If you’re interested in some of the possible ideas associated with the project, go to the 2013 patent application by Moderna entitled Methods of responding to a biothreat, which even envisages a portable, battery-operated device for synthesizing modified mRNA. Oh well, never let it be said that DARPA fears a risky bet; on the other hand, since DARPA’s “playing with house money” (aka our taxes!), I suppose it’s easy for them. Let’s hope they/we all win.

Other Commercial Players

In addition to TriLink’s mRNA products, related services, and new cGMP facility, there are other companies to mention here, which I’ll do in alphabetical order.

  • Acuitas Therapeutics has compared the effectiveness of its lipid nanoparticle (LNP) carriers in vivo with the most potent delivery systems reported in the scientific literature, and found that Acuitas LNPs demonstrate much greater luciferase expression in the liver after systemic administration.
  • CureVac is combining both the antigenic and adjuvant properties of mRNA to develop novel and effective mRNA vaccines. CureVac is currently developing therapeutic mRNA vaccines in oncology and therapeutic/prophylactic vaccines for infectious diseases. Information on five of its clinical studies is available at ClinicalTrials.gov.
  • Dendreon has a U.S. patent application for a method to make dendritic cell vaccines from embryonic stem cells that are genetically modified with mRNA encoding tumor antigen. However, no mRNA-searchable items are currently listed on Dendreon’s website.
  • In-Cell-Art is investigating new and improved nanocarriers for mRNA vaccines, and has collaborated with Sanofi Pasteur and CureVac in DARPA-funded studies.
  • Mirus Bio offers a TransIT®-mRNA Transfection Kit for high-efficiency, low-toxicity mRNA transfection of mammalian cells, as described by Karikó et al.

Also noteworthy, the 1st International mRNA Health Conference, recently held on October 23-24 at the University of Tübingen, included talks by numerous key scientists from academia and industry; the Conference Program is well worth a look.

In conclusion, I hope that you found this emerging area of modified mRNA therapeutics as interesting and exciting as I did while researching this blog posting, and I welcome your comments.

Postscript

After finishing the above blog, I came across these additional publications on possible mRNA therapies.

Huang and coworkers reported earlier this year that systemic delivery of liposome-protamine-formulated modified mRNA encoding herpes simplex virus 1 thymidine kinase for targeted cancer gene therapy was significantly more effective than plasmid DNA in a therapeutic human lung carcinoma xenograft model in nude mice.

Zimmermann et al. reported successful use of mRNA-nucleofection for overexpression of interleukin-10 in murine monocytes/macrophages for anti-inflammatory therapy in a murine model of autoimmune myocarditis. [Note: for a related report on mRNA-engineered mesenchymal stem cells for targeted delivery of interleukin-10 to sites of inflammation see Levy et al.]

Cystic fibrosis (CF) is the most frequent lethal genetic disease in the Caucasian population. CF is caused by a defective gene coding for the cystic fibrosis transmembrane conductance regulator (CFTR). Bangel-Ruland et al. reported in vitro results indicating that CFTR-mRNA delivery provides a novel alternative for cystic fibrosis “gene therapy”.