Wednesday, September 12, 2007

More on Transcribed But Non-Translated RNA

On the same subject as the previous post (The Gene Is Dead, Long Live The Gene!), here is a follow-up exchange:

MikeGene at Telic Thoughts (see Error Correction Runs Yet Deeper) wrote this about the new findings vis-à-vis transcribed but not translated RNAs:

According to Mats Ljungman, a researcher at the University of Michigan Medical School, as many as 20,000 lesions occur daily in a cell’s DNA. To repair all this continual damage, how does the cell first detect it? Ljungman’s research identified the logical candidate – RNA polymerase (the machine that reads the DNA and makes an RNA copy). Apparently, whenever the RNA polymerase encounters a lesion, it signals to p53, a master protein that activates all sorts of DNA repair processes.

According to the press release:

“These two proteins are saying, ‘Transcription has stopped,’” says Ljungman. These early triggers act like the citizen who smells smoke and sounds a fire alarm, alerting the fire department. Then p53, like a team of fire fighters, arrives and evaluates what to do. To reduce the chance of harmful mutations that may result from DNA damage, p53 may kill cells or stop them temporarily from dividing, so that there is time for DNA repair.


Recently, the ENCODE consortium determined that the majority of DNA in the human genome is transcribed:

This broad pattern of transcription challenges the long-standing view that the human genome consists of a relatively small set of discrete genes, along with a vast amount of so-called junk DNA that is not biologically active.


Of course, one could also argue that all this transcription simply speaks to the sloppy and wasteful nature of the cell. Yet here’s a thought. It would seem to me that Ljungman’s research now raises a third possibility: all that transcription is just another layer of error surveillance.

To which I replied:

That is a VERY interesting hypothesis. It could work like this: by incorporating large amounts of transcribed (but not translated) DNA into the human genome, the cell is essentially presenting a much larger "target" for mutation-detection by the p53 surveillance system. In essence, a cell that has been especially challenged by mutation-producing processes would be much more likely to send out the "fire alarm," since it would be much more likely to have transcription terminated, thereby triggering the p53 "stopped transcription" alarm. To extend the "fire alarm" analogy, imagine a house that is unusually likely to have a fire; perhaps it's very hot, or dry, or has smoldering fires in several locations. As the old saying goes, "where there's smoke, there's fire," and a fail-safe cancer/mutation detection system would be much more likely to detect potential "hot" cells if there were a large amount of transcription going on.
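To put a rough number on this "larger target" idea, here is a back-of-the-envelope sketch (the transcribed fractions below are illustrative assumptions on my part; the lesion count is Ljungman's figure of roughly 20,000 per day):

\[
\mathbb{E}[\text{lesions intercepted}] \;=\; f \cdot L
\]

where \(L\) is the number of lesions per day and \(f\) is the fraction of the genome that is transcribed (and hence surveilled by a passing RNA polymerase). With \(L = 20{,}000\), a genome in which only about 2% is transcribed (roughly the protein-coding fraction) intercepts on the order of 400 lesions per day, whereas one in which about 80% is transcribed (an ENCODE-scale assumption) intercepts on the order of 16,000: a forty-fold more sensitive fire alarm.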

Indeed, this would be most important in cells in which relatively little transcription of functional (i.e. protein-encoding) genes normally takes place, but which are still subject to mutation and potential cancer induction. By running the "non-coding transcription" program constantly in the background, such cells could still alert the cancer/mutation surveillance system, even when they themselves aren't actively coding for protein.

Now, since transcription is itself a costly process, doing a lot of it for non-coding genes would also be costly. Cells would therefore be selected via a cost-benefit process for the amount of non-coding "surveillance transcription" they could do. That is, the more likely a cell/organism is to have a cancer/mutation event, the more valuable its non-coding/surveillance transcription system would be, and therefore the more non-coding DNA it should have. This immediately suggests a possible test of the hypothesis: those cells (or organisms) that are more likely to suffer from cancer/mutation events would therefore have more non-coding "surveillance transcription" DNA sequences.
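Here is a minimal Python sketch of that cost-benefit logic (every parameter value is invented purely for illustration; none of it comes from real data). Fitness is modeled as the avoided cost of undetected cancer-initiating mutations minus a linear metabolic cost of transcription, and the optimal amount of surveillance DNA then rises with mutation risk:

import numpy as np

def fitness(s, mu, v=10.0, c=0.01, k=0.05):
    """Toy fitness model; all parameter values are made up for illustration.
    s  : amount of surveillance (transcribed non-coding) DNA, arbitrary units
    mu : rate of cancer-initiating mutations per generation
    v  : fitness cost of an undetected cancer-initiating mutation
    c  : metabolic cost of transcribing one unit of surveillance DNA
    k  : per-unit chance that surveillance transcription intercepts a mutation
    A fraction exp(-k * s) of mutations escapes detection."""
    return -mu * v * np.exp(-k * s) - c * s

s = np.linspace(0, 300, 3001)      # candidate amounts of surveillance DNA
for mu in (0.01, 0.05, 0.25):      # low, medium, and high mutation risk
    s_opt = s[np.argmax(fitness(s, mu))]
    print(f"mutation rate {mu:.2f}: optimal surveillance DNA ~ {s_opt:.0f} units")

With these made-up numbers a low-risk cell does best with no surveillance DNA at all, while higher-risk cells are pushed toward progressively larger "mutation sponges," which is exactly the cost-benefit prediction above.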

For example, since animals are much more likely to be harmed by uncontrolled cell division (i.e. cancer, induced by mutation), one would predict that animals would have more non-coding/surveillance transcription sequences than, say, plants. Also, animals that live longer (and would therefore have a larger "window" for suffering mutations) should have relatively large amounts of non-coding/surveillance transcription sequences.

Somebody should check this out (if they haven't already).

Nick (Matzke?) then commented:

The old C-value paradox may have some relevance here. Does the amount of non-coding/surveillance transcribed sequence correlate with the total amount of non-coding sequence? For example, do pufferfish have fewer non-coding transcribed sequences than zebrafish, or do they have the same amount of transcribed DNA, with the difference in genome size being due to non-coding, non-transcribed sequence?

ENCODE's data would seem to argue for a close correlation between total genome size and amount of transcribed non-coding sequence. If that observation is generally applicable to other organisms, then C-value might be one way to test MikeGene's and Allen's hypotheses. The idea that transcription of non-coding DNA is another layer of mutation detection/error correction would imply that organisms with larger genomes have more mutation detection capability. Do animals with smaller genomes require less error detection because they live in less mutagenic environments? The dramatic differences in genome size among related organisms that live in similar environments would seem to argue against that hypothesis. Compare the genome sizes of freshwater pufferfish and zebrafish, both of which live in freshwater streams, or look at the variation in genome size among salamanders of the genus Plethodon.

To explore this issue, check out the very cool Animal Genome Size Database.

You can also test Allen's lifespan hypothesis. For example, zebrafish and small tetras with lifespans of 2 or 3 years have approximately the same genome size as common carp with lifespans of 20+ years.
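As a concrete sketch of how one might run these tests (the C-values and lifespans below are rough placeholders from memory, not actual database entries; pull the real numbers from the Animal Genome Size Database before concluding anything):

from scipy.stats import spearmanr

# Placeholder data: approximate haploid genome sizes (C-values, in picograms)
# and typical lifespans (years). These are illustrative guesses only.
species = {
    "Takifugu (pufferfish)": (0.4, 6),
    "Danio (zebrafish)":     (1.8, 3),
    "Paracheirodon (tetra)": (1.5, 3),
    "Cyprinus (carp)":       (1.7, 20),
}

c_values  = [c for c, _ in species.values()]
lifespans = [t for _, t in species.values()]

rho, p = spearmanr(c_values, lifespans)
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")
# Allen's lifespan hypothesis predicts rho > 0 (longer-lived species carry
# more surveillance sequence); the carp/zebrafish comparison above already
# hints that the correlation may be weak or absent.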

One of the ID supporters on the list then challenged me to explain how such a complex error-surveillance system could have evolved via non-directed natural selection. This was my reply (nota bene: the following is, of course, a HYPOTHESIS only):

Consider two virtually identical phylogenetic lines, A and B. At time zero, individuals in both lines start out with virtually no transcribable but non-coding DNA (abbreviated TNCDNA). If we assume a constant mutation rate for both lines, individuals in both lines would have essentially the same probability of dying from cancer.

Assume further that, over time, sequences of non-TNCDNA accumulate in the genomes of each line. This can happen by any one (or more) of several known mechanisms, such as gene duplication (without active promoter sequences), random multiplication of tandem repeats, retroviral or transposon insertions of non-TNCDNA, etc.

Then, at time one, an individual (or more than one) in line B has an active promoter inserted in front of one or more of its non-TNCDNA sequences in one or more of its cells, by the same mechanisms listed above. Now, these individuals have a lower probability of dying from cancer, since their p53-regulated surveillance systems would be more likely to eliminate the affected cells. Again, this would be a side-effect of the larger "mutation sponge" their cells would present to potentially mutagenic processes. Such individuals would therefore have more descendants, and over time the average size of all of the "mutation sponges" in the subsequent populations would increase. Natural selection in action, folks.
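A toy haploid simulation makes the arithmetic of this scenario explicit (the population size, death probabilities, and starting frequency are all invented for illustration; this is a sketch of the selective logic, not a model of any real organism):

import random

random.seed(42)                # reproducible run

POP_SIZE     = 1000
GENERATIONS  = 200
P_CANCER_A   = 0.30            # chance a plain (line A) individual dies of cancer
P_CANCER_B   = 0.20            # lower chance for "mutation sponge" (line B) carriers
START_FREQ_B = 0.05            # configuration B starts rare

freq_b = START_FREQ_B
for gen in range(GENERATIONS):
    # Survival stage: each individual risks dying of cancer before reproducing.
    survivors = []
    for _ in range(POP_SIZE):
        is_b = random.random() < freq_b
        p_death = P_CANCER_B if is_b else P_CANCER_A
        if random.random() > p_death:
            survivors.append(is_b)
    # Reproduction stage: survivors reproduce with equal fecundity, so the
    # frequency among survivors becomes the next generation's frequency.
    freq_b = sum(survivors) / len(survivors)
    if gen % 40 == 0:
        print(f"generation {gen:3d}: freq(B) = {freq_b:.3f}")
print(f"after {GENERATIONS} generations: freq(B) = {freq_b:.3f}")

With these numbers, B climbs from 5% to near fixation within roughly a hundred generations; the particular rates don't matter, only that carriers of the larger "mutation sponge" die of cancer less often before reproducing.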

Now, as to the question of where the p53 surveillance system came from in the first place: proteins like p53 are common intermediates in intracellular signalling systems. Assume that the ancestor of p53 was a protein with some other signalling function. At some point, an individual that had p53 doing that other function has a mutation that changes the shape of p53 in such a way that it becomes part of a regulatory pathway that triggers apoptosis, thereby eliminating the cell. If the altered p53 no longer participates in the original pathway, and if that alteration is damaging, such individuals would be eliminated, and the original function of p53 would be preserved.

However, if the altered p53 (now participating in the regulation of apoptosis) were also activated by the cells' normal "transcription termination signalling system" as described in Mike's original post, then individuals with the altered p53 would be less likely to die from cancer, and their descendants (who now produce the altered form of p53) would become more common over time.

Mike's original post notes that the research report cited the relatively recent observation that many cells actually suffer multiple mutations much of the time. This is precisely the situation that Darwin originally stated was a prerequisite for natural selection: not genetic mutations (Darwin didn't know about them), but increased heritable variation (which Darwin couldn't explain, but could point to as an observable phenomenon in living organisms). In other words, as both EBers and IDers point out, phenotypic variations are very, very common, and so are the genetic changes with which they are correlated. Most of these variants are either selectively neutral (cf. Kimura), nearly neutral (cf. Ohta), or deleterious to some degree. Such changes either accumulate (if they are neutral or nearly so) or are eliminated (if they are deleterious).
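For reference, the standard quantitative result behind that accumulate-or-be-eliminated claim is Kimura's fixation probability for a new mutant with selection coefficient s in a population of size N (written here with effective size equal to census size, for simplicity):

\[
u(s) \;=\; \frac{1 - e^{-2s}}{1 - e^{-4Ns}},
\qquad
\lim_{s \to 0} u(s) \;=\; \frac{1}{2N}
\]

When \(|4Ns| \ll 1\) the variant is effectively neutral (Ohta's nearly neutral regime) and fixes at close to the neutral rate \(1/(2N)\); when \(s < 0\) and \(|4Ns| \gg 1\) it is almost certain to be eliminated; and even a clearly beneficial variant fixes with probability of only about \(2s\).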

But, on those relatively rare occasions when they result in increased relative survival and reproduction, they increase in frequency in those populations in which they exist. This process of "natural preservation" (Darwin's preferred name for the process he and Alfred Russel Wallace proposed as the primary mechanism for descent with modification) results in the accumulation of both neutral and beneficial characters and the elimination of deleterious ones.

And by the way, the foregoing is why Darwin (and not Edward Blyth) is credited with the concept of "natural selection/preservation": Blyth only described the elimination of deleterious characters, and never realized that the preservation of beneficial characters could result in the origin of adaptations. Blyth, in other words, only recognized what EBers call "stabilizing selection," but missed the much more interesting and important "directional selection," which Darwin cited as the causal basis for the evolution of adaptations.

Comments, criticisms, and suggestions are warmly welcomed!

--Allen


1 Comment:

At 9/30/2007 08:22:00 PM, Anonymous Anonymous said...

On the evolutionary view that RNA came before DNA (that DNA evolved from RNA and took over what was originally RNA's role of being the initiating source of the genetic code), how does the discovery of non-coding RNA challenge or support this theory?

 
