Genetics and the Explanatory Filter
AUTHOR: Salvador Cordova
SOURCE: An Instance of Design Detection?
COMMENTARY: Allen MacNeill
There is a thread at Uncommon Descent in which the development of a commercial service for identifying Genetically Modified Organisms (GMOs) is given as an example of industry use of William Dembski's "explanatory filter." Dembski claims that the "explanatory filter" can unambiguously identify "intelligently designed" entities, especially entities in which information is encoded in a sequence of digital bits (as in the genetic code in DNA).
One of the comments on the thread suggested that I would have a difficult time refuting this argument. My interest was piqued, so here is my reply:
(1) As has been pointed out numerous times (not least by William Dembski himself), Dr. Dembski asserts that all biological entities are designed, as indicated by the fact that their nucleotide sequences are highly improbable yet tied to a necessary biological function. If this is truly the case, however, then it should be literally impossible to separate GMO sequences from naturally evolved sequences using Dembski's "explanatory filter," since both types of sequences conform to his definition of "complex specified information."
However, companies are able to distinguish between "natural" and GMO sequences with a level of reliability for which real-world clients will pay handsomely. There must therefore be something fundamentally different between GMO sequences (i.e. sequences that really are designed by intelligent entities) and "natural" sequences (i.e. sequences that have evolved by natural selection and/or genetic drift). I must conclude that, rather than providing evidence for the efficacy of the "explanatory filter," the ability to distinguish between genuinely "intelligently designed" and "natural" nucleotide sequences provides powerful evidence that the difference between the two is the result of fundamentally different processes: "design" in the case of the former, and natural selection/genetic drift in the case of the latter.
(2) The idea that an "explanatory filter" can clearly and unambiguously distinguish between "intelligently designed" and "naturally evolved" nucleotide sequences is directly contradicted by our experience with the structure and function of most adaptive genetic sequences. As just one example, consider the following nucleotide sequence: TTGACA-17 base pairs-TATAAT. Those of you with some knowledge of molecular genetics should immediately recognize this sequence as the "core" of a typical promoter; that is, a nucleotide sequence that is "recognized" by (i.e. provides a binding site for) RNA polymerase during gene transcription. According to Dr. Dembski's model of "CSI", this sequence can only have come about via "intelligent design", because its probability of having arisen by chance is negligible.
However, as some of you may know, this sequence is actually the "consensus sequence" for the promoter. There are functional variants, including (but not necessarily limited to) TAGACA-17 base pairs-TATAAT, TACACA-17 base pairs-TATAAT, ACCACA-17 base pairs-TATAAT, and TTCACA-17 base pairs-TATAAT. The probability of RNA polymerase binding to one of these alternative sequences is purely a function of how much the sequence deviates from the consensus sequence (i.e. it will bind least often to ACCACA-17 base pairs-TATAAT, as this sequence differs from the consensus sequence by three base pairs, whereas the other sequences differ by only one or two base pairs). The biological significance of this variability in promoter base sequence is this: the regulation of gene expression is at least partly a function of the frequency with which such promoter sequences are bound by RNA polymerase.
This means that deviations from the consensus sequence, rather than being "mistakes" which the "explanatory filter" should be able to identify as such, are actually tied to the rate of gene transcription, which is in turn tied to the rate at which the gene product functions in the cell. For example, a gene product (i.e. protein) that is used very often in the cell would be coded for by a gene whose promoter is very close to the consensus sequence, thereby causing the gene product to be synthesized more often. By contrast, a gene product used less often by the cell would be coded for by a gene whose promoter sequence deviated more from the consensus sequence, and therefore would be transcribed and translated less often.
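To make the arithmetic concrete, here is a minimal Python sketch (purely illustrative, not any published scoring method) that ranks the promoter variants listed above by their Hamming distance from the -35 consensus hexamer; by the reasoning above, fewer mismatches correspond to more frequent RNA polymerase binding and therefore to higher transcription rates:

```python
# A toy scorer for the -35 promoter hexamer variants discussed above.
# Fewer mismatches from the consensus ~ more frequent RNA polymerase
# binding ~ higher transcription rate (a deliberate simplification).

CONSENSUS_35 = "TTGACA"  # the -35 consensus hexamer

def mismatches(variant: str, consensus: str = CONSENSUS_35) -> int:
    """Count positions where the variant differs from the consensus."""
    return sum(1 for v, c in zip(variant, consensus) if v != c)

variants = ["TTGACA", "TAGACA", "TACACA", "ACCACA", "TTCACA"]
for v in sorted(variants, key=mismatches):
    print(f"{v}: {mismatches(v)} mismatch(es) from consensus")
```

Run on the variants above, this correctly ranks ACCACA (three mismatches) as the weakest binder, with the one- and two-mismatch variants falling between it and the consensus itself.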
Deviations from the consensus sequence, then, rather than having less biological significance (and therefore a greater likelihood of existing by chance, and a lesser likelihood of being identified by Dembski's "explanatory filter"), are actually just as biologically significant as the consensus sequence itself. In other words, if the "explanatory filter" is to be of any use at all, it must explain why random deviations from the consensus sequence (i.e. the "designed" sequence) are in reality just as important to cellular function as the consensus sequence itself, until suddenly (when none of the base pairs match the consensus sequence) the promoter stops functioning as a promoter at all. You can't have it both ways: either the functions of "deviant" promoter sequences are just as "designed" as the consensus sequences, or they aren't. But this means that essentially all nucleotide sequences are "intelligently designed", making the "explanatory filter" totally useless for any meaningful investigation of genetic processes. Philosophically intriguing to a few theologically inclined non-scientists, perhaps, but totally irrelevant to biology.
From the standpoint of natural selection, however, functions arising from deviations from the consensus sequence are exactly what one would expect, as natural selection is just as capable of exploiting random deviations as it is of exploiting "designed" (i.e. adaptive) sequences. Indeed, from the standpoint of natural selection, there are no such things as "designed" sequences; nucleotide sequences are only more or less adaptive, as reflected in their frequencies in populations. Some sequences are apparently not adaptive at all (i.e. they are not conserved as the result of natural selection) - we sometimes refer to such sequences as "junk DNA", although that term carries implications that do not reflect what we currently understand about non-adaptive DNA sequences. Other sequences (the ones that the "explanatory filter" is supposed to be able to distinguish) are adaptive at some level. However, the only way to tell if a sequence is actually adaptive is to be able to show, from the level of nucleotide sequence all the way up to phenotypic differences, that there is a statistically significant difference between the reproductive success (i.e. "fitness") associated with one sequence as compared with another. Until this is possible (and we are a long way from it), any attempt to rule out selection as the efficient cause of nucleotide sequences is pointless (as is the "explanatory filter").
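To give a sense of what such a demonstration would involve, here is a minimal Python sketch of a fitness comparison; the offspring counts and variant labels are entirely fabricated for illustration, and a real study would require far more than this:

```python
# A toy permutation test for a fitness difference between carriers of
# two sequence variants. All numbers below are fabricated illustrations,
# NOT real data.
import random
from statistics import mean

offspring_a = [3, 2, 4, 3, 5, 2, 4, 3]  # hypothetical counts, variant A carriers
offspring_b = [2, 1, 3, 2, 2, 1, 3, 2]  # hypothetical counts, variant B carriers

observed = mean(offspring_a) - mean(offspring_b)
pooled = offspring_a + offspring_b
n_a = len(offspring_a)

# How often does a random relabeling of individuals produce a
# difference at least as large as the one observed?
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n_a]) - mean(pooled[n_a:]) >= observed:
        extreme += 1

print(f"one-sided permutation p ~ {extreme / trials:.3f}")
```

Even this toy version makes the point: the inference runs from measured reproductive success back to the sequence, not from the sequence's improbability alone.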
--Allen
Labels: creationism, design, design in nature, explanatory filter, intelligent design, irreducible complexity, Michael Behe, William Dembski
5 Comments:
This is a superb post. One to save for the files...
You missed the obvious refutation.
Genetic ID is not detecting design. The detection work is already done and sits in their PCR library; it was accomplished by having the companies that did the inserting spell out the designed sequences for them.
What Genetic ID is doing is simply searching for DNA sequences already known to have been artificially placed in the GMO by human intelligent agents. This is not remotely comparable to detecting design in a sequence where the origin is not known to be artificial ahead of time.
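In outline, the procedure amounts to something like the following minimal Python sketch (the library entries and sequence fragments are invented placeholders, not Genetic ID's actual assay, which uses PCR rather than string search):

```python
# A caricature of screening for KNOWN transgene sequences. The entries
# below are invented placeholders, not real constructs; the real assay
# uses PCR primers, but the logic is the same: match the sample against
# a library of sequences already known to be artificial.

KNOWN_TRANSGENES = {
    "hypothetical insert A": "GATTACAGATTACAGATT",
    "hypothetical insert B": "CCGGAATTCCGGAATTCC",
}

def screen_sample(sample_dna: str) -> list[str]:
    """Return names of known transgene fragments found in the sample."""
    return [name for name, seq in KNOWN_TRANSGENES.items() if seq in sample_dna]

print(screen_sample("AAAT" + "GATTACAGATTACAGATT" + "CCGT"))
# -> ['hypothetical insert A']
```

No step in this procedure asks whether a sequence looks designed; it only asks whether it matches a record of known human tampering.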
Allen,
Could you please clarify the argument in your first paragraph? I am having difficulty understanding the point you are trying to make.
First, you argue that if all the sequences are designed, then we should not be able to tell the GMO sequences from naturally evolved sequences. If all the sequences are designed, then we would not be trying to distinguish between GMO and naturally evolved sequences.
Second, you argue that one cannot use the EF to infer that a sequence is designed if some other sequence is also designed, since both exhibit SC. I just don't see how it follows. Because both are designed neither can be designed? That seems a strange thing to argue, but perhaps I am missing something.
Davescot: This is not remotely comparable to detecting design in a sequence where the origin is not known to be artificial ahead of time.
Excellent observation, DS; indeed, that's a major shortcoming of the Explanatory Filter. IDers will find MacNeill's lecture enlightening in many respects, as it may help them understand the many limitations of the EF's claims.
Mung: Second, you argue that one cannot use the EF to infer that a sequence is designed if some other sequence is also designed, since both exhibit SC. I just don't see how it follows.
Simply because SC is not a reliable method for detecting 'design' unless one has additional information, such as, in this case, the fact that we know what the designed sequences looked like.
What Allen is arguing is that the EF is useless, since it cannot explain much of anything: it is forced to infer design for almost anything that has a function. Since science has mechanisms to explain DNA sequences and their frequency distributions, it seems a bit early to use the EF to claim design until all the relevant pathways have been explored.
Of course the main problem with Sal's argument is that we are matching a sequence to a known sequence, one already known to have been 'designed'. By conflating the EF with 'design detection', Sal is following Dembski's 'argument' that the EF is how we detect design in archaeology and elsewhere, while ignoring the evidence that contradicts this. In fact, neither one has made much effort to support his claims in the first place. Proof by assertion (Dembski) and proof by authority (Sal) somehow fails to impress.
PvM wrote:
"...the EF is useless since it cannot explain much of anything as it is forced to infer design for almost anything which has function."
Indeed, that is the singular failing of the EF in my opinion. Only nucleotide sequences that actually code for something (i.e. are adaptive) can be "identified" by the EF. Non-coding, non-essential nucleotide sequences are literally irrelevant, and therefore "invisible" to the EF, as they do not "specify" anything (i.e. the "information" contained in such sequences is literally blank).
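To see why bare improbability cannot carry the load, consider a minimal sketch under a uniform base model (an assumption of this illustration only): every specific sequence of a given length gets exactly the same probability, whether it is a functional promoter or random filler:

```python
# Under a uniform model, the "surprisal" of any exact n-base sequence
# is n * log2(4) = 2n bits, regardless of whether the sequence does
# anything biologically. Improbability alone cannot separate function
# from junk; all the work falls on "specification".
import math

def surprisal_bits(seq: str) -> float:
    """Bits of surprise for an exact sequence under a uniform base model."""
    return len(seq) * math.log2(4)

for seq in ["TTGACA", "ACCACA", "GGGGGG"]:
    print(f"{seq}: {surprisal_bits(seq):.0f} bits")  # all print 12 bits
```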
I admit that this problem provides a rationalization for why ID theorists insist (without empirical evidence) that every single nucleotide in the DNA of every organism must have some significant biological function. If this were not the case, the non-coding sequences would have to have come about by some process other than the de novo "intelligent design" by which CSI supposedly comes about.
In this way, ID theorists fall into the same trap that many "pan-adaptationist" evolutionary biologists have fallen prey to: the mistaken idea that all "significant" biological information is somehow tied to adaptation. But, as Gould and Lewontin pointed out in their landmark "spandrels" paper ("The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme," available online at http://ethomas.web.wesleyan.edu/wescourses/2004s/ees227/01/spandrels.html), the idea that all biological information is somehow causally related to adaptations is neither supported by the evidence nor central to the modern synthetic theory of evolution. Indeed, Kimura and Ohta's "neutral theory" undermines much of the "pan-adaptationist" argument, and thereby also undermines the adaptationist assumptions of ID theorists.