
Showing posts with label Biochemistry. Show all posts

Saturday, March 16, 2024

How do proteins move around amidst the jumble of molecules inside a living cell?

I've been reading Philip Ball's book "How Life Works" and I find it increasingly frustrating because he consistently describes things that he's "discovered" that biochemists like me must have missed. Here's an example from pages 231-232.

He presents a cartoon image of a cell showing that it's full of all kinds of molecules packed closely together, then he says,

Friday, March 15, 2024

Nils Walter disputes junk DNA: (9) Reconciliation

I'm discussing a recent paper published by Nils Walter (Walter, 2024). He is arguing against junk DNA by claiming that the human genome contains large numbers of non-coding genes.

This is the ninth and last post in the series. I'm going to discuss Walter's view on how to tone down the dispute over the amount of junk in the human genome. Here's a list of the previous posts.


"Conclusion: How to Reconcile Scientific Fields"

Walter concludes his paper with some thoughts on how to deal with the controversy going forward. I'm using the title that he chose. As you can see from the title, he views this as a squabble between two different scientific fields, which he usually identifies as geneticists and evolutionary biologists versus biochemists and molecular biologists. I don't agree with this distinction. I'm a biochemist and molecular biologist, not a geneticist or an evolutionary biologist, and still I think that many of his arguments are flawed.

Let's see what he has to say about reconciliation.

Science thrives from integrating diverse viewpoints—the more diverse the team, the better the science.[107] Previous attempts at reconciling the divergent assessments about the functional significance of the large number of ncRNAs transcribed from most of the human genome by pointing out that the scientific approaches of geneticists, evolutionary biologists and molecular biologists/biochemists provide complementary information[42] was met with further skepticism.[74] Perhaps a first step toward reconciliation, now that ncRNAs appear to increasingly leave the junkyard,[35] would be to substitute the needlessly categorical and derogative word RNA (or DNA) “junk” for the more agnostic and neutral term “ncRNA of unknown phenotypic function”, or “ncRNAupf”. After all, everyone seems to agree that the controversy mostly stems from divergent definitions of the term “function”,[42, 74] which each scientific field necessarily defines based on its own need for understanding the molecular and mechanistic details of a system (Figure 3). In addition, “of unknown phenotypic function” honors the null hypothesis that no function manifesting in a phenotype is currently known, but may still be discovered. It also allows for the possibility that, in the end, some transcribed ncRNAs may never be assigned a bona fide function.

First, let's take note of the fact that this is a discussion about whether a large percentage of transcripts are functional or not. It is not about the bigger picture of whether most of the genome is junk in spite of the fact that Nils Walter frames it in that manner. This becomes clear when you stop and consider the implications of Walter's claim. Let's assume that there really are 200,000 functional non-coding genes in the human genome. If we assume that each one is about 1000 bp long then this amounts to 6.5% of the genome—a value that can easily be accommodated within the 10% of the genome that's conserved and functional.
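That back-of-the-envelope calculation is easy to verify. Here's a minimal sketch; the gene count and average gene length are just the round numbers assumed above, not measured values.

```python
# Back-of-the-envelope check: what fraction of the genome would
# 200,000 non-coding genes of ~1000 bp each occupy?
# All numbers are the round figures assumed in the text.
GENOME_SIZE_BP = 3.1e9          # approximate size of the human genome
NUM_NONCODING_GENES = 200_000   # Walter's upper-end claim
MEAN_GENE_LENGTH_BP = 1_000     # assumed average length

total_bp = NUM_NONCODING_GENES * MEAN_GENE_LENGTH_BP
fraction = total_bp / GENOME_SIZE_BP
print(f"{fraction:.1%} of the genome")  # ~6.5%
```

Even doubling the assumed average gene length only brings the total to about 13% of the genome, still close to the conserved, functional fraction.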

Now let's look at how he frames the actual disagreement. He says that the groups on both sides of the argument provide "complementary information." Really? One group says that if you can delete a given region of DNA with no effect on the survival of the individual or the species then it's junk and the other group says that it still could have a function as long as it's doing something like being transcribed or binding a transcription factor. Those don't look like "complementary" opinions to me.

His first step toward reconciliation starts with "now that ncRNAs appear to increasingly leave the junkyard." That's not a very conciliatory way to start a conversation because it immediately brings up the question of how many ncRNAs we're talking about. Well-characterized non-coding genes include ribosomal RNA genes (~600), tRNA genes (~200), the collection of small non-coding genes (snRNA, snoRNA, microRNA, siRNA, piRNA) (~200), several lncRNAs (<100), and genes for several specialized RNAs such as 7SL and the RNA component of RNase P (~10). I think that there are no more than 1000 extra non-coding genes falling outside these well-known examples and that's a generous estimate. If he has evidence for large numbers that have left the junkyard then he should have presented it.

Walter goes on to propose that we should divide non-coding transcripts into two categories: those with well-characterized functions and "ncRNA of unknown function." That's ridiculous. It is not an "agnostic and neutral term." It implies that non-conserved transcripts present at less than one copy per cell could still have a function, in spite of the fact that spurious transcription is well documented. In fact, he basically admits this interpretation at the end of the paragraph, where he says that using this description (ncRNA of unknown function) preserves the possibility that a function might be discovered in the future. He thinks this is the "null hypothesis."

The real null hypothesis is that a transcript has no function until a function can be demonstrated. Notice that I use the word "transcript" to describe these RNAs instead of "ncRNA" or "ncRNA of unknown phenotypic function." I don't think we lose anything by using the word "transcript."

Walter also addresses the meaning of "function" by claiming that different scientific fields use different definitions, as though that excuses the conflict. But that's not an accurate portrayal of the problem. All scientists, no matter what field they identify with, are interested in coming up with a way of identifying functional DNA. There are many biochemists and molecular biologists who accept the maintenance definition as the best available definition of function. As scientists, they are more than willing to entertain any reasonable scientific arguments in favor of a different definition but nobody, including Nils Walter, has come up with such arguments.

Now let's look at the final paragraph of Walter's essay.

Most bioscientists will also agree that we need to continue advancing from simply cataloging non-coding regions of the human genome toward characterizing ncRNA functions, both elementally and phenotypically, an endeavor of great challenge that requires everyone's input. Solving the enigma of human gene expression, so intricately linked to the regulatory roles of ncRNAs, holds the key to devising personalized medicines to treat most, if not all, human diseases, rendering the stakes high, and unresolved disputes counterproductive.[108] The fact that newly ascendant RNA therapeutics that directly interface with cellular RNAs seem to finally show us a path to success in this challenge[109] only makes the need for deciphering ncRNA function more urgent. Succeeding in this goal would finally fulfill the promise of the human genome project after it revealed so much non-protein coding sequence (Figure 1). As a side effect, it may make updating Wikipedia and encyclopedia entries less controversial.

I agree that it's time for scientists to start identifying those transcripts that have a true function. I'll go one step further; it's time to stop pretending that there might be hundreds of thousands of functional transcripts until you actually have some data to support such a claim.

I take issue with the phrase "solving the enigma of human gene expression." I think we already have a very good understanding of the fundamental mechanisms of gene expression in eukaryotes, including the transitions between open and closed chromatin domains. There may be a few odd cases that deviate from the norm (e.g. Xist) but that hardly qualifies as an "enigma." He then goes on to say that this "enigma" is "intricately linked to the regulatory roles of ncRNAs" but that's not a fact, it's what's in dispute and why we have to start identifying the true function (if any) of most transcripts. Oh, and by the way, sorting out which parts of the genome contain real non-coding genes may contribute to our understanding of genetic diseases in humans but it won't help solve the big problem of how much of our genome is junk because mutations in junk DNA can cause genetic diseases.

Sorting out which transcripts are functional and which ones are not will help fill in the 10% of the genome that's functional but it will have little effect on the bigger picture of a genome that's 90% junk.

We've known that less than 2% of the genome codes for proteins since the late 1960s—long before the draft sequence of the human genome was published in 2001—and we've known for just as long that lots of non-coding DNA has a function. It would be helpful if these facts were made more widely known instead of implying that they were only discovered when the human genome was sequenced.

Once we sort out which transcripts are functional, we'll be in a much better position to describe all the facts when we edit Wikipedia articles. Until that time, I (and others) will continue to resist the attempts by the students in Nils Walter's class to remove all references to junk DNA.


Walter, N.G. (2024) Are non‐protein coding RNAs junk or treasure? An attempt to explain and reconcile opposing viewpoints of whether the human genome is mostly transcribed into non‐functional or functional RNAs. BioEssays:2300201. [doi: 10.1002/bies.202300201]

Thursday, March 14, 2024

Nils Walter disputes junk DNA: (8) Transcription factors and their binding sites

I'm discussing a recent paper published by Nils Walter (Walter, 2024). He is arguing against junk DNA by claiming that the human genome contains large numbers of non-coding genes.

This is the eighth post in the series. The first one outlines the issues that led to the current paper and the second one describes Walter's view of a paradigm shift/shaft. The third post describes the differing views on how to define key terms such as 'gene' and 'function.' In the fourth post I discuss his claim that differing opinions on junk DNA are mainly due to philosophical disagreements. The fifth, sixth, and seventh posts address specific arguments in the junk DNA debate.

Friday, December 08, 2023

What really happened between Rosalind Franklin, James Watson, and Francis Crick?

That's part of the title of a podcast by Kat Arney, who interviews Matthew Cobb [Double helix double crossing? What really happened between Rosalind Franklin, James Watson and Francis Crick?].

Matthew Cobb is one of the world's leading experts on the history of molecular biology.

The way it’s usually told, Franklin was effectively ripped off and belittled by the Cambridge team, especially Watson, and has only recently been restored to her rightful place as one of the key discoverers of the double helix. It’s a dramatic narrative, with heroes, villains and a grand prize. But, as I found out when I sat down for a chat with Matthew Cobb, science author and Professor of Zoology at the University of Manchester, the real story is a lot more nuanced.

Photo 51 did not belong to Rosalind Franklin and it had (almost) nothing to do with solving the structure of DNA. Franklin and Wilkins would never have gotten the structure on their own. Crick and Watson did not "steal" any data. Whether they behaved ethically is debatable.


Tuesday, August 08, 2023

Donald Voet (1938-2023)

I just learned that Don Voet died on April 11th, 2023. Don and Judy Voet were the authors of one of the most successful biochemistry textbooks of all time and for a long time they were the editors of the journal Biochemistry and Molecular Biology Education (BAMBED). I've known Don for over thirty years and we met often at conferences.

He will be greatly missed. Here's an excerpt from the obituary from the American Society for Biochemistry and Molecular Biology written by my old friend Charlotte Pratt, who collaborated with me on my textbook and with Don and Judy Voet on theirs [Don Voet (1938-2023)].

Don’s work over the years demonstrated his conviction that biochemical knowledge has limited value unless it is transmitted fully and honestly to the next generation of scientists. His writing style was intentionally aimed at students of all levels, never dumbed down, and straightforward — a way to invite readers to enter a conversation among professional scientists.

Ever collegial, Don insisted on dropping names into the text, referring to the discoveries of specific researchers wherever possible and borrowing figures from the original publications rather than rendering simplified versions. In cases where visual information was lacking, Don created his own molecular graphics, at a time when modeling software was not accessible to amateurs.


Tuesday, April 25, 2023

Happy DNA Day 2023!

It was 70 years ago today that the famous Watson and Crick paper was published in Nature along with papers by Franklin & Gosling and Wilkins, Stokes, & Wilson. There's a great deal of misinformation circulating about this discovery so I wrote up a brief history of the events based largely on Horace Freeland Judson's book The Eighth Day of Creation. Every biochemistry and molecular biology student must read this book or they don't qualify to be an informed scientist. However, if you are not a biochemistry student then you might enjoy my short version.

Some practising scientists might also enjoy refreshing their memories so they have an accurate view of what happened in case their students ask questions.

The Story of DNA (Part 1)

Where Rosalind Franklin teaches Jim and Francis something about basic chemistry.

The Story of DNA (Part 2)

Where Jim and Francis discover the secret of life.

Here's the latest version of Rosalind Franklin's contribution written by Matthew Cobb and Nathaniel Comfort: What Rosalind Franklin truly contributed to the discovery of DNA's structure. If you want to know the accurate version of her history then this is a must-read. Cobb is working on a biography of Crick and Comfort is writing a biography of Watson.

Here are some other posts that might interest you on DNA Day.



Saturday, November 19, 2022

How many enhancers in the human genome?

In spite of what you might have read, the human genome does not contain one million functional enhancers.

The Sept. 15, 2022 issue of Nature contains a news article on "Gene regulation" [Two-layer design protects genes from mutations in their enhancers]. It begins with the following sentence.

The human genome contains only about 20,000 protein-coding genes, yet gene expression is controlled by around one million regulatory DNA elements called enhancers.

Sandwalk readers won't need to be told the reference for such an outlandish claim because you all know that it's the ENCODE Consortium summary paper from 2012—the one that kicked off their publicity campaign to convince everyone of the death of junk DNA (ENCODE, 2012). ENCODE identified several hundred thousand transcription factor (TF) binding sites and in 2012 they estimated that the total number of base pairs involved in regulating gene expression could account for 20% of the genome.

How many of those transcription factor binding sites are functional and how many are due to spurious binding to sites that have nothing to do with gene regulation? We don't know the answer to that question but we do know that there will be a huge number of spurious binding sites in a genome of more than three billion base pairs [Are most transcription factor binding sites functional?].
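A toy calculation shows why spurious sites are unavoidable in a genome this size. This is only a sketch assuming exact matches in random sequence; real transcription factor motifs are short and degenerate, so the true numbers of chance binding sites are even larger.

```python
# Expected chance occurrences of an exact n-bp recognition sequence
# in a random genome of ~3.1 billion bp, counting both strands.
# Illustrative only: real TF motifs tolerate mismatches, which
# inflates the spurious-site count well beyond this estimate.
GENOME_SIZE_BP = 3.1e9

def expected_exact_matches(motif_len: int) -> float:
    per_site = 0.25 ** motif_len          # chance of an exact match at one position
    return 2 * GENOME_SIZE_BP * per_site  # two strands

for n in (6, 8, 10):
    print(f"{n}-bp motif: ~{expected_exact_matches(n):,.0f} chance occurrences")
```

Even a highly specific 10-bp site is expected to occur by chance thousands of times in a genome of this size, which is why binding alone can't demonstrate function.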

The scientists in the ENCODE Consortium didn't know the answer either but what's surprising is that they didn't even know there was a question. It never occurred to them that some of those transcription factor binding sites have nothing to do with regulation.

Fast forward ten years to 2022. Dozens of papers have been published criticizing the ENCODE Consortium for their lack of knowledge of the basic biochemical properties of DNA binding proteins. Surely nobody who is interested in this topic believes that there are one million functional regulatory elements (enhancers) in the human genome?

Wrong! The authors of this Nature article, Ran Elkon at Tel Aviv University (Israel) and Reuven Agami at the Netherlands Cancer Institute (Amsterdam, Netherlands), didn't get the message. They think it's quite plausible that the expression of every human protein-coding gene is controlled by an average of 50 regulatory sites even though there's not a single known example of any such gene.

Not only that, for some reason they think it's only important to mention protein-coding genes in spite of the fact that the reference they give for 20,000 protein-coding genes (Nurk et al., 2022) also claims there are an additional 40,000 noncoding genes. This is an incorrect claim since Nurk et al. have no proof that all those transcribed regions are actually genes but let's play along and assume that there really are 60,000 genes in the human genome. That reduces the number of enhancers to an average of "only" 17 per gene. I don't know of a single gene that has 17 or more proven enhancers, do you?

Why would two researchers who study gene regulation say that the human genome contains one million enhancers when there's no evidence to support such a claim and it doesn't make any sense? Why would Nature publish this paper when surely the editors must be aware of all the criticism that arose out of the 2012 ENCODE publicity fiasco?

I can think of only two answers to the first question. Either Elkon and Agami don't know of any papers challenging the view that most TF binding sites are functional (see below) or they do know of those papers but choose to ignore them. Neither answer is acceptable.

I think that the most important question in human gene regulation is how much of the genome is devoted to regulation. How many potential regulatory sites (enhancers) are functional and how many are spurious non-functional sites? Any paper on regulation that does not mention this problem should not be published. All results have to be interpreted in light of conflicting claims about function.

Here are some examples of papers that raise the issue. The point is not to prove that these authors are correct (although they are) but to show that there's a controversy. You can't just state that there are one million regulatory sites as if it were a fact when you know that the results are being challenged.

"The observations in the ENCODE articles can be explained by the fact that biological systems are noisy: transcription factors can interact at many nonfunctional sites, and transcription initiation takes place at different positions corresponding to sequences similar to promoter sequences, simply because biological systems are not tightly controlled." (Morange, 2014)

"... ENCODE had not shown what fraction of these activities play any substantive role in gene regulation, nor was the project designed to show that. There are other well-studied explanations for reproducible biochemical activities besides crucial human gene regulation, including residual activities (pseudogenes), functions in the molecular features that infest eukaryotic genomes (transposons, viruses, and other mobile elements), and noise." (Eddy, 2013)

"Given that experiments performed in a diverse number of eukaryotic systems have found only a small correlation between TF-binding events and mRNA expression, it appears that in most cases only a fraction of TF-binding sites significantly impacts local gene expression." (Palazzo and Gregory, 2014)

One surprising finding from the early genome-wide ChIP studies was that TF binding is widespread, with thousands to tens of thousands of binding events for many TFs. These numbers do not fit with existing ideas of the regulatory network structure, in which TFs were generally expected to regulate a few hundred genes, at most. Binding is not necessarily equivalent to regulation, and it is likely that only a small fraction of all binding events will have an important impact on gene expression. (Slattery et al., 2014)

Detailed maps of transcription factor (TF)-bound genomic regions are being produced by consortium-driven efforts such as ENCODE, yet the sequence features that distinguish functional cis-regulatory sites from the millions of spurious motif occurrences in large eukaryotic genomes are poorly understood. (White et al., 2013)

One outstanding issue is the fraction of factor binding in the genome that is "functional", which we define here to mean that disturbing the protein-DNA interaction leads to a measurable downstream effect on gene regulation. (Cusanovich et al., 2014)

... we expect, for example, accidental transcription factor-DNA binding to go on at some rate, so assuming that transcription equals function is not good enough. The null hypothesis after all is that most transcription is spurious and alternative transcripts are a consequence of error-prone splicing. (Hurst, 2013)

... as a chemist, let me say that I don't find the binding of DNA-binding proteins to random, non-functional stretches of DNA surprising at all. That hardly makes these stretches physiologically important. If evolution is messy, chemistry is equally messy. Molecules stick to many other molecules, and not every one of these interactions has to lead to a physiological event. DNA-binding proteins that are designed to bind to specific DNA sequences would be expected to have some affinity for non-specific sequences just by chance; a negatively charged group could interact with a positively charged one, an aromatic ring could insert between DNA base pairs and a greasy side chain might nestle into a pocket by displacing water molecules. It was a pity the authors of ENCODE decided to define biological functionality partly in terms of chemical interactions which may or may not be biologically relevant. (Jogalekar, 2012)


Nurk, S., Koren, S., Rhie, A., Rautiainen, M., Bzikadze, A. V., Mikheenko, A., et al. (2022) The complete sequence of a human genome. Science, 376:44-53. [doi:10.1126/science.abj6987]

The ENCODE Project Consortium (2012) An integrated encyclopedia of DNA elements in the human genome. Nature, 489:57-74. [doi: 10.1038/nature11247]

Friday, April 01, 2022

Illuminating dark matter in human DNA?

A few months ago, the press office of the University of California at San Diego issued a press release with a provocative title ...

Illuminating Dark Matter in Human DNA - Unprecedented Atlas of the "Book of Life"

The press release was posted on several prominent science websites and Facebook groups. According to the press release, much of the human genome remains mysterious (dark matter) even 20 years after it was sequenced. According to the senior author of the paper, Bing Ren, we still don't understand how genes are expressed and how they might go awry in genetic diseases. He says,

A major reason is that the majority of the human DNA sequence, more than 98 percent, is non-protein-coding, and we do not yet have a genetic code book to unlock the information embedded in these sequences.

We've heard that story before and it's getting very boring. We know that 90% of our genome is junk, about 1% encodes proteins, and another 9% contains lots of functional DNA sequences, including regulatory elements. We've known about regulatory elements for more than 50 years so there's nothing mysterious about that component of noncoding DNA.

Monday, October 25, 2021

Electron transport supercomplexes in the mitochondria

The generation of ATP in mitochondrial membranes is driven by the energy of a protonmotive force arising from the difference in proton concentration between the space between the inner and outer membranes (high) and the inside of the mitochondrion (low). The discovery of this mechanism, called the chemiosmotic theory, is due to Peter Mitchell, who won a Nobel Prize in 1978. It's one of the classic examples of a real paradigm shift because the mechanism completely changed the way biochemists looked at chemical reactions [Ode to Peter Mitchell].

The creation of the proton gradient is due to a series of oxidation-reduction reactions that involve the transfer of electrons from one molecule to another. The electrons pass down a chain of reactions from a relatively high-energy molecule such as NADH to successively lower-energy molecules and end up in water. As the electrons lose energy, the excess energy is coupled to the pumping of protons from the inside of the mitochondrion to the intermembrane space, creating a proton gradient. This gradient is what drives ATP production.
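As a rough illustration, the protonmotive force can be computed from the standard relationship Δp = Δψ - (2.303RT/F)ΔpH. The membrane potential and ΔpH used below are typical textbook values for the inner mitochondrial membrane, not measurements from any particular study.

```python
# Protonmotive force across the inner mitochondrial membrane:
#   delta_p = delta_psi - (2.303 * R * T / F) * delta_pH
# delta_psi and delta_pH are illustrative textbook values.
R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K
F = 96485.0    # Faraday constant, C/mol

delta_psi_mV = -160.0   # membrane potential (inside negative), mV
delta_pH = 0.75         # pH(matrix) - pH(intermembrane space)

z_factor_mV = 2.303 * R * T / F * 1000   # ~61.5 mV per pH unit at 310 K
delta_p_mV = delta_psi_mV - z_factor_mV * delta_pH
print(f"protonmotive force ≈ {delta_p_mV:.0f} mV")  # ~ -206 mV
```

Both terms push in the same direction (inside negative and alkaline), so the membrane potential and the pH gradient add up to a substantial driving force for ATP synthesis.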

The chemical reactions are catalyzed by three large enzyme complexes: Complex I (NADH-ubiquinone oxidoreductase), Complex III (ubiquinol-cytochrome c oxidoreductase), and Complex IV (cytochrome c oxidase). The electrons are transported between the complexes by ubiquinone (Q) and cytochrome c.

The structure and function of each of these complexes are usually described in separate sections of a biochemistry textbook because they are so complex but we've always known that they function together as an electron transport chain. When I first started writing this section of my textbook it was obvious that the three complexes would cluster together in the inner mitochondrial membrane and there was even some evidence to suggest that they were in close contact. The idea that they would form a well-defined supercomplex was hypothesized in the 1990s but the evidence was weak so it didn't make it into most textbooks.

The structures of Complexes I, III, and IV were eventually solved by X-ray diffraction studies of the isolated complexes, but isolating any supercomplexes was difficult. With the advent of high-resolution three-dimensional electron microscopy, one could finally see the predicted supercomplexes. The latest result, just published in Nature, characterizes the CI-CIII-CIV supercomplex from mammalian mitochondria (Vercellino and Sazanov, 2021).

The nice thing about this result, and the earlier studies on supercomplexes in yeast and plants, is that it confirms earlier speculations that such complexes must exist even though the evidence was weak. It shows you that a deep understanding of biochemical structure and function can lead to reasonable hypotheses that can be tested when the technology improves.


Image credit: The first image is a modified version of Figure 14.6 from Moran et al. Principles of Biochemistry, 5th edition. The second image is from Vercellino and Sazanov (2021).

Vercellino, I. and Sazanov, L.A. (2021) Structure and assembly of the mammalian mitochondrial supercomplex CIII2CIV. Nature 598:364-367. [doi: 10.1038/s41586-021-03927-z]

Monday, May 10, 2021

MIT Professor Rick Young doesn't understand junk DNA

Richard ("Rick") Young is a Professor of Biology at the Massachusetts Institute of Technology and a member of the Whitehead Institute. His area of expertise is the regulation of gene expression in eukaryotes.

He was interviewed by Jorge Conde and Hanne Winarsky on a recent podcast (Feb. 1, 2021) where the main topic was "From Junk DNA to an RNA Revolution." They get just about everything wrong when they talk about junk DNA including the Central Dogma, historical estimates of the number of genes, confusing noncoding DNA with junk, alternative splicing, the number of functional RNAs, the amount of regulatory DNA, and assuming that scientists in the 1970s were idiots.

In this episode, a16z General Partner Jorge Conde and Bio Eats World host Hanne Winarsky talk to Professor Rick Young, Professor of Biology and head of the Young Lab at MIT—all about “junk” DNA, or non-coding DNA.

Which, it turns out—spoiler alert—isn’t junk at all. Much of this so-called junk DNA actually encodes RNA—which we now know has all sorts of incredibly important roles in the cell, many of which were previously thought of as only the domain of proteins. This conversation is all about what we know about what that non-coding genome actually does: how RNA works to regulate all kinds of different gene expression, cell types, and functions; how this has dramatically changed our understanding of how disease arises; and most importantly, what this means we can now do—programming cells, tuning functions up or down, or on or off. What we once thought of as “junk” is now giving us a powerful new tool in intervening in and treating disease—bringing in a whole new category of therapies.

Here's what I don't understand. How could a prominent scientist at one of the best universities in the world be so ignorant of a topic he chooses to discuss on a podcast? Perhaps you could excuse a busy scientist who doesn't have the time to research the topic but what excuse can you offer to explain why the entire culture at MIT and the Whitehead must also be ignorant? Does nobody there ever question their own ideas? Do they only read the papers that support their views and ignore all those that challenge those views?

This is a very serious question. It's the most difficult question I discuss in my book. Why has the false narrative about junk DNA, and many other things, dominated the scientific literature and become accepted dogma among leading scientists? Something is seriously wrong with science.


Sunday, April 25, 2021

Happy DNA day 2021!

It was 68 years ago today that the famous Watson and Crick paper was published in Nature along with papers by Franklin & Gosling and Wilkins, Stokes, & Wilson. There's a great deal of misinformation circulating about this discovery so I wrote up a brief history of the events based largely on Horace Freeland Judson's book The Eighth Day of Creation. Every biochemistry and molecular biology student must read this book or they don't qualify to be an informed scientist. However, if you are not a biochemistry student then you might enjoy my short version.

Some practising scientists might also enjoy refreshing their memories so they have an accurate view of what happened in case their students ask questions.

The Story of DNA (Part 1)

Where Rosalind Franklin teaches Jim and Francis something about basic chemistry.

The Story of DNA (Part 2)

Where Jim and Francis discover the secret of life.

Here are some other posts that might interest you on DNA Day.



Thursday, December 31, 2020

On the importance of controls

When doing an experiment, it's important to keep the number of variables to a minimum and it's important to have scientific controls. There are two types of controls. A negative control covers the possibility that you will get a signal by chance; for example, if you are testing an enzyme to see whether it degrades sugar then the negative control will be a tube with no enzyme. Some of the sugar may degrade spontaneously and you need to know this. A positive control is when you deliberately add something that you know will give a positive result; for example, if you are doing a test to see if your sample contains protein then you want to add an extra sample that contains a known amount of protein to make sure all your reagents are working.

Lots of controls are more complicated than the examples I gave but the principle is important. It's true that some experiments don't appear to need the appropriate controls but that may be an illusion. The controls might still be necessary in order to properly interpret the results but they're not done because they are very difficult. This is often true of genomics experiments.

Tuesday, November 17, 2020

Using modified nucleotides to make mRNA vaccines

The key features of the mRNA vaccines are the use of modified nucleotides in their synthesis and the use of lipid nanoparticles to deliver them to cells. The main difference between the Pfizer/BioNTech vaccine and the Moderna vaccine is in the delivery system. The lipid vesicles used by Moderna are somewhat more stable and that vaccine doesn't need to be kept constantly at ultra-low temperatures.

Both vaccines use modified RNAs. They synthesize the RNA using modified nucleotides based on variants of uridine and cytidine; namely, pseudouridine, N1-methylpseudouridine, and 5-methylcytidine. (The structures of the nucleosides are from Andries et al., 2015.) The best versions are those that use both 5-methylcytidine and N1-methylpseudouridine.

I'm not an expert on these mRNAs and their delivery systems but the way I understand it is that regular RNA is antigenic—it induces antibodies against it, presumably when it is accidentally released from the lipid vesicles outside of the cell. The modified versions are much less antigenic. As an added bonus, the modified RNA is more stable and more efficiently translated.

Two of the key papers are ...

Andries, O., Mc Cafferty, S., De Smedt, S.C., Weiss, R., Sanders, N.N. and Kitada, T. (2015) "N1-methylpseudouridine-incorporated mRNA outperforms pseudouridine-incorporated mRNA by providing enhanced protein expression and reduced immunogenicity in mammalian cell lines and mice." Journal of Controlled Release 217: 337-344. [doi: 10.1016/j.jconrel.2015.08.051]

Pardi, N., Tuyishime, S., Muramatsu, H., Kariko, K., Mui, B.L., Tam, Y.K., Madden, T.D., Hope, M.J. and Weissman, D. (2015) "Expression kinetics of nucleoside-modified mRNA delivered in lipid nanoparticles to mice by various routes." Journal of Controlled Release 217: 345-351. [doi: 10.1016/j.jconrel.2015.08.007]


Sunday, November 15, 2020

Why is the Central Dogma so hard to understand?

The Central Dogma of molecular biology states ...

... once (sequential) information has passed into protein it cannot get out again (F.H.C. Crick, 1958).

The central dogma of molecular biology deals with the detailed residue-by-residue transfer of sequential information. It states that such information cannot be transferred from protein to either protein or nucleic acid (F.H.C. Crick, 1970).

This is not difficult to understand since Francis Crick made it very clear in his original 1958 paper and again in his 1970 paper in Nature [see Basic Concepts: The Central Dogma of Molecular Biology]. There's nothing particularly complicated about the Central Dogma. It merely states the obvious fact that sequence information can flow from nucleic acid to protein but not the other way around.
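Crick's 1970 paper sorts the nine conceivable residue-by-residue transfers into general transfers, special transfers, and forbidden ones. That classification can be written as a small lookup table; the sketch below follows Crick's categories, but the Python representation is mine:

```python
# Crick (1970): the nine conceivable sequence-information transfers.
# General transfers occur in all cells; special transfers occur only
# under special circumstances (e.g. reverse transcription).
general = {("DNA", "DNA"), ("DNA", "RNA"), ("RNA", "protein")}
special = {("RNA", "RNA"), ("RNA", "DNA"), ("DNA", "protein")}

def allowed(source, target):
    """The Central Dogma forbids only transfers OUT of protein."""
    return (source, target) in general | special

assert allowed("RNA", "DNA")          # reverse transcription is allowed
assert not allowed("protein", "RNA")  # information cannot leave protein
assert not allowed("protein", "protein")
```

Note that RNA → DNA is perfectly consistent with the Central Dogma; only the three transfers out of protein are ruled out, which is exactly the point the misreaders miss.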

So, why do so many scientists have trouble grasping this simple idea? Why do they continue to misinterpret the Central Dogma while quoting Crick? It seems obvious that they haven't read the paper(s) they are referencing.

I just came across another example of such ignorance and it is so outrageous that I just can't help sharing it with you. Here are a few sentences from a recent review in the 2020 issue of the Annual Review of Genomics and Human Genetics (Zerbino et al., 2020).

Once the role of DNA was proven, genes became physical components. Protein-coding genes could be characterized by the genetic code, which was determined in 1965, and could thus be defined by the open reading frames (ORFs). However, exceptions to Francis Crick's central dogma of genes as blueprints for protein synthesis (Crick, 1958) were already being uncovered: first tRNA and rRNA and then a broad variety of noncoding RNAs.

I can't imagine what the authors were thinking when they wrote this. If the Central Dogma actually said that the only role for genes was to make proteins then surely the discovery of tRNA and rRNA would have refuted the Central Dogma and relegated it to the dustbin of history. So why bother even mentioning it in 2020?


Crick, F.H.C. (1958) On protein synthesis. Symp. Soc. Exp. Biol. XII:138-163. [PDF]

Crick, F. (1970) Central Dogma of Molecular Biology. Nature 227, 561-563. [PDF file]

Zerbino, D.R., Frankish, A. and Flicek, P. (2020) "Progress, Challenges, and Surprises in Annotating the Human Genome." Annual review of genomics and human genetics 21:55-79. [doi: 10.1146/annurev-genom-121119-083418]

Saturday, August 01, 2020

ENCODE 3: A lesson in obfuscation and opaqueness

The Encyclopedia of DNA Elements (ENCODE) is a large-scale, and very expensive, attempt to map all of the functional elements in the human genome.

The preliminary study (ENCODE 1) was published in 2007 and the main publicity campaign surrounding that study focused on the fact that much of the human genome was transcribed. The implication was that most of the genome is functional. [see: The ENCODE publicity campaign of 2007].

The ENCODE 2 results were published in 2012 and the publicity campaign emphasized that up to 80% of our genome is functional. Many stories in the popular press touted the death of junk DNA. [see: What did the ENCODE Consortium say in 2012]

Both of these publicity campaigns, and the published conclusions, were heavily criticized for not understanding the distinction between fortuitous transcription and real genes and for not understanding the difference between fortuitous binding sites and functional binding sites. Hundreds of knowledgeable scientists pointed out that it was ridiculous for ENCODE researchers to claim that most of the human genome is functional based on their data. They also pointed out that ENCODE researchers ignored most of the evidence supporting junk DNA.

ENCODE 3 has just been published and the hype has been toned down considerably. Take a look at the main publicity article just published by Nature (ENCODE 3). The Nature article mentions ENCODE 1 and ENCODE 2 but it conveniently ignores the fact that Nature heavily promoted the demise of junk DNA back in 2007 and 2012. The emphasis now is not on how much of the genome is functional—the main goal of ENCODE—but on how much data has been generated and how many papers have been published. You can read the entire article and not see any mention of previous ENCODE/Nature claims. In fact, they don't even tell you how many genes ENCODE found or how many functional regulatory sites were detected.

The News and Views article isn't any better (Expanded ENCODE delivers invaluable genomic encyclopedia). Here's the opening paragraph of that article ...
Less than 2% of the human genome encodes proteins. A grand challenge for genomic sciences has been mapping the functional elements — the regions that determine the extent to which genes are expressed — in the remaining 98% of our DNA. The Encyclopedia of DNA Elements (ENCODE) project, among other large collaborative efforts, was established in 2003 to create a catalogue of these functional elements and to outline their roles in regulating gene expression. In nine papers in Nature, the ENCODE consortium delivers the third phase of its valuable project.1
You'd think with such an introduction that you would be about to learn how much of the genome is functional according to ENCODE 3 but you will be disappointed. There's nothing in that article about the number of genes, the number of regulatory sites, or the number of other functional elements in the human genome. It's almost as if Nature wants to tell you about all of the work involved in "mapping the functional elements" without ever describing the results and conclusions. This is in marked contrast to the Nature publicity campaigns of 2007 and 2012 where they were more than willing to promote the (incorrect) conclusions.

In 2020 Nature seems to be more interested in obfuscation and opaqueness. One other thing is certain, the Nature editors and writers aren't the least bit interested in discussing their previous claims about 80% of the genome being functional!

I guess we'll have to rely on the ENCODE Consortium itself to give us a summary of their most recent findings. The summary paper has an intriguing title (Perspectives on ENCODE) that almost makes you think they will revisit the exaggerated claims of 2007 and 2012. No such luck. However, we do learn a little bit about the human genome.
  • 20,225 protein-coding genes [almost 1000 more than the best published estimates - LAM]
  • 37,595 noncoding genes [I strongly doubt they have evidence for that many functional genes]
  • 2,157,387 open chromatin regions [what does this mean?]
  • 1,224,154 transcription factor binding sites [how many are functional?]
That's it. The ENCODE Consortium seems to have learned only two things from 2012. They learned that it's better to avoid mentioning how much of the genome is functional in order to avoid controversy and criticism, and they learned that it's best to ignore any of their previous claims for the same reason. This is not how science is supposed to work but the ENCODE Consortium has never been good at showing us how science is supposed to work.

Note: I've looked at some of the papers to try to find out if ENCODE stands by its previous claim that most of the genome is functional but they all seem to be written in a way that avoids committing to such a percentage or addressing the criticisms from 2007 and 2012. The only exception is a paper stating that cis-regulatory elements occupy 7.9% of the human genome (Expanded encyclopaedias of DNA elements in the human and mouse genomes). Please let me know if you come across anything interesting in those papers.


1. Isn't it about time to stop dwelling on the fact that 2% (actually less than 1%) of our genome encodes protein? We've known for decades that there are all kinds of other functional regions of the genome. No knowledgeable scientist thinks that the remaining 98% (99%) has no function.

Monday, July 06, 2020

A storm of cytokines

Cytokines are a diverse group of small signal proteins that act like hormones to turn on genes in blood cells and cells of the immune system. In COVID-19 the production of cytokines can be over-stimulated to produce a cytokine storm that activates immune cells, producing all kinds of severe, sometimes lethal, effects. There are dozens of different cytokines but they all act in a similar manner. Each one binds to a receptor on the membrane of a target cell and this stimulates the cytoplasmic side of the receptor to activate a transcription factor that enters the nucleus and turns on a specific set of genes. The activation step requires phosphorylation just like dozens of other signalling pathways. (See Morris et al. (2018) for a recent review.)
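The sequence of events just described—binding, phosphorylation, translocation, gene activation—can be sketched as a toy state machine. This is only an illustration of the logic of the pathway (JAK/STAT-style); the class and function names are mine and nothing here models real kinetics:

```python
# Toy model of the signaling logic: a transcription factor is activated
# by phosphorylation at the receptor, then moves to the nucleus.
class TranscriptionFactor:
    def __init__(self):
        self.phosphorylated = False
        self.location = "cytoplasm"

def receive_cytokine(tf):
    """Cytokine binding stimulates the cytoplasmic side of the receptor,
    which phosphorylates the transcription factor."""
    tf.phosphorylated = True

def translocate(tf):
    """Only the phosphorylated factor enters the nucleus,
    where it can turn on its target genes."""
    if tf.phosphorylated:
        tf.location = "nucleus"
    return tf.location

tf = TranscriptionFactor()
receive_cytokine(tf)
assert translocate(tf) == "nucleus"
```

The point of the sketch is the dependency: without the phosphorylation step the factor stays in the cytoplasm and the target genes stay off.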

I was curious about the structures of these cytokines so I looked up a few of them on PDB. Here are three fairly representative structures.



Morris, R., Kershaw, N.J., and Babon, J.J. (2018) The molecular details of cytokine signaling via the JAK/STAT pathway. Protein Science 27: 1984-2009. [doi: 10.1002/pro.3519]

Thursday, June 11, 2020

Dan Graur proposes a new definition of "gene"

I've thought a lot about how to define the word "gene." It's clear that no definition will capture all the possibilities but that doesn't mean we should abandon the term. Traditionally, the biochemical definition attempts to describe the part of the genome that produces a functional product. Most scientists seem to think that the only possible product is a protein so it's common to see the word "gene" defined as a DNA sequence that produces a protein.

But from the very beginning of molecular biology the textbooks also talked about genes for ribosomal RNAs and tRNAs so there was never a time when knowledgeable scientists restricted their definition of a gene to protein-coding regions. My best molecular definition is described in What Is a Gene?.

A gene is a DNA sequence that is transcribed to produce a functional product.

Dan Graur has also thought about the issue and he comes up with a different definition in a recent blog post: What Is a Gene? A Very Short Answer with a Very Long Footnote

A gene is a sequence of genomic material (DNA or RNA) that has a selected effect function.

This is obviously an attempt to equate "function" with "gene" so that all functional parts of the genome are genes, by definition. You might think this is rather silly because it excludes some obvious functional regions but Dan really does want to count them as genes.
Performance of the function may or may not require the gene to be translated or even transcribed.

Genes can, therefore, be classified into three categories:

(1) protein-coding genes, which are transcribed into RNA and subsequently translated into proteins.

(2) RNA-specifying genes, which are transcribed but not translated

(3) nontranscribed genes.
Really? Is it useful to think of centromeres and telomeres as genes? Is it useful to define an origin of replication as a gene? And what about regulatory sequences? Should each functional binding site for a transcription factor be called a gene?

The definition also leads to some other problems. Genes (my definition) occupy about 30% of the human genome but most of this is introns, which are mostly junk (i.e. no selected effect function). How does that make sense using Dan's definition?


Friday, February 07, 2020

The Function Wars Part VI: The problem with selected effect function

The term "Function Wars" refers to the debate over the meaning of 'function,' especially in the context of junk DNA.1 That debate intensified in 2012 after the ENCODE publicity campaign that tried to redefine function to mean anything they want as long as it refutes junk DNA. This is the sixth in a series of posts exploring the debate and why it's important, or not. Links to the other five posts can be found at the bottom or this post.

The world is not inhabited exclusively by fools and when a subject arouses intense interest and debate, as this one has, something other than semantics is usually at stake.
Stephen Jay Gould (1982)
Much of the discussion seems like quibbling over semantics but I'm reminded of a similar debate over the mode of evolution: is it gradual or punctuated? As Gould pointed out in 1982, there's a serious issue underlying the debate—an issue that shouldn't get lost in bickering over the meaning of 'gradualistic.' The same warning applies here. It's important to determine how much of the human genome is junk and that requires an understanding of what we mean by junk DNA. However, it's easy to get distracted by focusing on the exact meaning of the word 'function' instead of looking at the big picture.

Sunday, December 15, 2019

The evolution of citrate synthase

Citrate synthase [EC 2.3.3.1] is one of the key enzymes of the citric acid cycle. It catalyzes the joining of acetyl-CoA and oxaloacetate to produce citrate.
acetyl-CoA + H2O + oxaloacetate → citrate + HS-CoA + H+
We usually think of this reaction in terms of energy production since acetyl-CoA is derived from pyruvate, the end product of glycolysis, and the citric acid cycle produces substrates that enter the electron transport system leading to production of ATP. However, it's important to keep in mind that the enzyme also catalyzes the reverse reaction.
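The carbon bookkeeping of the reaction is easy to check: the two carbons of the acetyl group plus the four carbons of oxaloacetate account for all six carbons of citrate. A trivial sketch (only the acetyl carbons of acetyl-CoA are counted, since the CoA moiety is released unchanged as HS-CoA):

```python
# Carbon atoms in each metabolite. Acetyl-CoA is counted as its
# 2-carbon acetyl group because CoA leaves the reaction unchanged.
carbons = {"acetyl group": 2, "oxaloacetate": 4, "citrate": 6}

substrate_carbons = carbons["acetyl group"] + carbons["oxaloacetate"]
assert substrate_carbons == carbons["citrate"]  # 2 + 4 = 6
```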

Thursday, May 10, 2018

Fixing carbon by reversing the citric acid cycle

The citric acid cycle1 is usually taught as depicted in the diagram on the right.2 A four-carbon compound called oxaloacetate is joined to a two-carbon compound called acetyl-CoA to produce a six-carbon tricarboxylic acid called citrate. In subsequent reactions, two carbons are released in the form of carbon dioxide to regenerate the original oxaloacetate. The cycle then repeats. The reactions produce one ATP equivalent (ATP or GTP), three NADH molecules, and one QH2 molecule.

The GTP/ATP molecule and the reduced coenzymes (NADH and QH2) are used up in a variety of other reactions. In the case of NADH and QH2, one of the many pathways to oxidation is the membrane-associated electron transport system that creates a proton gradient across a membrane. The electron transport complexes are buried in membranes—plasma and internal membranes in bacteria and the inner mitochondrial membrane in eukaryotes. Students are often taught that this is the only fate of NADH and QH2 but that's not true.

One of the other common misconceptions is that the citric acid cycle runs exclusively in one direction; namely, the direction shown in the diagram. That's also not true. The reactions of the citric acid cycle are near-equilibrium reactions like most reactions in the cell. What this means is that the concentrations of the reactants and products are close to the equilibrium values so that a slight increase in one of them will lead to a rapid equilibration. The reactions can run in either direction.3
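The direction of a near-equilibrium reaction follows directly from ΔG = ΔG°′ + RT ln Q: when the mass-action ratio Q sits below the equilibrium constant, ΔG is negative and the reaction runs forward; when Q drifts above it, ΔG changes sign and the reaction runs in reverse. A sketch in Python (the ΔG°′ value and the factors of 0.5 and 2.0 are illustrative, not measured concentrations):

```python
import math

R = 8.314e-3   # gas constant in kJ/(mol·K)
T = 310.0      # physiological temperature, K

def delta_g(delta_g_standard, Q):
    """Actual free-energy change for a reaction with mass-action ratio Q."""
    return delta_g_standard + R * T * math.log(Q)

dg0 = -2.0  # kJ/mol: an illustrative near-equilibrium reaction

# At Q == K_eq the actual free-energy change is zero:
K_eq = math.exp(-dg0 / (R * T))

assert delta_g(dg0, K_eq * 0.5) < 0  # Q below K_eq: runs forward
assert delta_g(dg0, K_eq * 2.0) > 0  # Q above K_eq: runs in reverse
```

This is why a slight rise in product concentration is enough to push a near-equilibrium reaction backward, whereas a reaction held far from equilibrium (ΔG strongly negative at cellular concentrations) is effectively one-way.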