Tuesday, May 16, 2006

The Resurrection of Formal and Final Causes

SOURCE: Telic Thoughts

COMMENTARY: Allen MacNeill

Over at Telic Thoughts, g arago commented:

"It would perhaps help to bring in Aristotle's causes again, to the effect that final causes are virtually eliminated from modern science. Postmodernity enables the case for formal causality to re-emerge as a legitimate source of (scientific or non-scientific) knowledge."

It's interesting that this should be proposed, as that is precisely what I will be doing during the very first meeting of my "purpose in nature" seminar at Cornell this summer. It is a sad fact that most undergraduates (and an alarming number of philosophers and scientists) do not know anything about Aristotle's doctrine of causes, nor how they relate to the work they are doing.

Aristotle identified four causes for every phenomenon:

Material Cause: What the object in question is composed of (e.g. a house is composed of boards, bricks, mortar, etc.);

Formal Cause: What formal category the object is an exemplar of (e.g. any particular house is a "house" or dwelling place for people);

Efficient Cause: What immediate processes bring about the existence of the object (e.g. the carpenters, etc. are the efficient cause of the house); and

Final Cause: The purpose of the object (e.g. carpenters et al. build houses "in order to" provide dwelling places for people).

In modern science, both formal and final causes are considered to be unnecessary, and are therefore not generally included in scientific explanations of natural objects and processes. However, it is not strictly true that formal causes have been completely eliminated from science. Much of physics, for example, has taken on some of the characteristics of "formal cause" insofar as physical processes are describable and predictable using formal mathematics. This is particularly the case for physicists who believe that actual physical phenomena are "the working out of underlying mathematical relationships."

The same could be said for evolutionary theory insofar as the "modern evolutionary synthesis" initiated by R. A. Fisher, J. B. S. Haldane, and Sewall Wright sought to lay a formal mathematical foundation for biological evolution.

The problem for "intelligent design theory," therefore, is to show (if possible) that final causes are necessary (i.e. not just psychologically gratifying or theologically convenient) for evolutionary explanations of natural objects and processes. Final causes (or "purposes") are not entirely missing from evolutionary biology, as shown by the work of Colin Pittendrigh, Francisco Ayala, and Ernst Mayr, all of whom debated the appropriateness of teleological language when referring to adaptations. However, no evolutionary biologist has recently resorted to teleological explanations for the existence or operation of natural selection, speciation, evolutionary development, or other central processes in evolution. The reason for this exclusion has not been an antipathy to theologically based explanations per se, but rather the simple fact that teleological explanations for evolutionary processes have been shown repeatedly to be unnecessary, and therefore irrelevant (notice that I did not say "untrue," as "truth" is also irrelevant in this context).

What W. Dembski and M. Behe and other ID theorists have attempted to do (in my opinion, so far unsuccessfully) is to re-integrate teleology into evolutionary processes. The more recent discussion by some ID theorists of "'front-loaded' intelligent design" is simply a reinvention of Aristotelian formal cause, and as such is indistinguishable from classical deism. Neither of these approaches to "design or purpose in nature" has yet succeeded as a scientific enterprise, because neither has been shown to be indispensable to scientific explanations. Until they are, they will not be integrated into mainstream science. While I personally do not believe they can be, I am willing to be shown otherwise by anyone who uses direct empirical evidence and strong inference to show that teleological explanations are necessary for scientific explanations.



Friday, May 12, 2006

Genetics and the Explanatory Filter

AUTHOR: Salvador Cordova

SOURCE: An Instance of Design Detection?

COMMENTARY: Allen MacNeill

There is a thread at Uncommon Descent in which the development of a commercial service for identifying Genetically Modified Organisms (GMOs) is given as an example of industry use of William Dembski's "explanatory filter." Dembski claims that the "explanatory filter" can unambiguously identify "intelligently designed" entities, especially entities in which information is encoded in a sequence of digital bits (as in the genetic code in DNA).

As one of the comments on the thread suggested that I would have a difficult time refuting this argument, my interest was piqued, and so here is my reply:

(1) As has already been pointed out numerous times (not least by William Dembski himself), Dr. Dembski has asserted that all biological entities are designed, as indicated by the fact that their nucleotide sequences are highly improbable, yet tied to a necessary biological function. However, if this is truly the case, then it should be literally impossible to separate GMO sequences from naturally evolved sequences using Dembski's "explanatory filter," since both types of sequences conform to his definition of "complex specified information."

However, since companies are able to distinguish between "natural" and GMO sequences reliably enough that real-world clients will pay handsomely for their services, it is clear that there is something fundamentally different about GMO sequences (i.e. sequences that really are designed by intelligent entities) as opposed to "natural" sequences (i.e. sequences that have evolved by natural selection and/or genetic drift). Therefore, I must conclude that, rather than providing evidence for the efficacy of the "explanatory filter," the ability to distinguish between genuinely "intelligently designed" and "natural" nucleotide sequences provides powerful evidence for the assertion that the difference between the two is the result of fundamentally different processes: "design" in the case of the former, and natural selection and/or genetic drift in the case of the latter.
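It is worth noting how such commercial detection actually works: not by any general-purpose design detector, but by screening for known transgenic elements (the CaMV 35S promoter and NOS terminator are standard targets in GMO assays). Here is a toy sketch in Python, with shortened placeholder motifs standing in for the real marker sequences (this is illustrative, not any company's actual assay):

```python
# Illustrative sketch: commercial GMO testing typically screens for known
# transgenic elements rather than applying a general "design detector".
# The motif strings below are shortened PLACEHOLDERS, not real primer
# or marker sequences.

KNOWN_TRANSGENIC_MARKERS = {
    "CaMV 35S promoter (fragment)": "GCTCCTACAAATGCCATCA",
    "NOS terminator (fragment)": "GAATTTCCCCGATCGTTCAAA",
}

def screen_for_gmo(sequence: str) -> list[str]:
    """Return the names of known transgenic markers found in the sequence."""
    return [name for name, motif in KNOWN_TRANSGENIC_MARKERS.items()
            if motif in sequence.upper()]

natural = "ATGGCCATTGTAATGGGCCGC"
modified = "ATGGCC" + "GCTCCTACAAATGCCATCA" + "TGTAATG"

print(screen_for_gmo(natural))   # no known markers found
print(screen_for_gmo(modified))  # flags the 35S placeholder fragment
```

The point of the sketch is that the assay succeeds precisely because the "designed" insertions are known in advance, which is the opposite of what the "explanatory filter" claims to do.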

(2) The idea that an "explanatory filter" can clearly and unambiguously distinguish between "intelligently designed" and "naturally evolved" nucleotide sequences is directly contradicted by our experience with the structure and function of most adaptive genetic sequences. As just one example, consider the following nucleotide sequence: TTGACA-17 base pairs-TATAAT. Those of you with some knowledge of molecular genetics should immediately recognize this sequence as the "core" of a typical promoter; that is, a nucleotide sequence that is "recognized" by (i.e. provides a binding site for) RNA polymerase during gene transcription. According to Dr. Dembski's model of "CSI," this sequence can only have come about via "intelligent design," because its probability of arising by chance is negligible.
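For concreteness, the low-probability claim can be made explicit: with the 17-bp spacer unconstrained, only the 12 bases of the two hexamers are specified, and treating each as an independent draw from the four bases gives (1/4)^12, roughly six in a hundred million:

```python
# Back-of-the-envelope version of the low-probability claim: only the
# 12 bases of the two hexamers (TTGACA + TATAAT) are specified; the 17-bp
# spacer is unconstrained. Each specified base is treated as an
# independent draw from {A, C, G, T}.
specified_bases = len("TTGACA") + len("TATAAT")  # 12
p_chance = 0.25 ** specified_bases
print(f"P(exact consensus by chance) = {p_chance:.2e}")  # 5.96e-08
```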

However, as some of you may know, this sequence is actually the "consensus sequence" for the promoter. There are others, including (but not necessarily limited to) TAGACA-17 base pairs-TATAAT, TACACA-17 base pairs-TATAAT, ACCACA-17 base pairs-TATAAT, and TTCACA-17 base pairs-TATAAT. The probability of RNA polymerase binding to one of these alternative sequences is purely a function of how much the sequence deviates from the consensus sequence (i.e. it will bind least often to ACCACA-17 base pairs-TATAAT, as this sequence differs from the consensus sequence by three base pairs, whereas the other sequences differ by only one or two base pairs). The biological significance of this variability in base sequence in gene promoters is this: the regulation of gene expression is at least partly a function of the frequency at which such promoter sequences are bound by RNA polymerase.
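The mismatch-counting argument above can be sketched in a few lines of Python (the counting rule is illustrative; real promoter strength also depends on spacer length, thermodynamics, and sequence context):

```python
# Toy model of the point above: deviation from the -35/-10 consensus
# (TTGACA ... TATAAT) measured as a simple mismatch count per hexamer box.
# Fewer mismatches = more frequent RNA polymerase binding, per the text.

CONSENSUS_35, CONSENSUS_10 = "TTGACA", "TATAAT"

def mismatches(box35: str, box10: str) -> int:
    """Count deviations from the consensus in the two hexamer boxes."""
    return (sum(a != b for a, b in zip(box35, CONSENSUS_35))
            + sum(a != b for a, b in zip(box10, CONSENSUS_10)))

promoters = {
    "consensus": ("TTGACA", "TATAAT"),
    "one-off":   ("TAGACA", "TATAAT"),
    "three-off": ("ACCACA", "TATAAT"),  # weakest binder of the examples
}

for name, (b35, b10) in promoters.items():
    print(f"{name}: {mismatches(b35, b10)} mismatch(es)")
```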

This means that deviations from the consensus sequence, rather than being "mistakes" which the "explanatory filter" should be able to identify as such, are actually tied to the rate of gene transcription, which is in turn tied to the rate at which the gene product functions in the cell. For example, a gene product (i.e. protein) that is used very often in the cell would be coded for by a gene whose promoter is very close to the consensus sequence, thereby causing the gene product to be synthesized more often. By contrast, a gene product used less often by the cell would be coded for by a gene whose promoter sequence deviated more from the consensus sequence, and therefore would be transcribed and translated less often.

This means that deviations from the consensus sequence, rather than having less biological significance (and therefore more likelihood of existing by chance, and therefore less likelihood of being identified by Dembski's "explanatory filter"), are actually just as biologically significant as the consensus sequence. In other words, if the "explanatory filter" is to be of any use at all, it must explain why random deviations from the consensus sequence (i.e. the "designed" sequence) are in reality just as important to cellular function as the consensus sequence itself, until suddenly (when none of the base pairs match the consensus sequence) the promoter stops functioning as a promoter at all. You can't have it both ways: either the functions of "deviant" promoter sequences are just as "designed" as the consensus sequences, or they aren't. But this means that essentially all nucleotide sequences are "intelligently designed," making the "explanatory filter" totally useless for any meaningful investigation of genetic processes. Philosophically intriguing to a few theologically inclined non-scientists, perhaps, but totally irrelevant to biology.

From the standpoint of natural selection, however, functions arising from deviations from the consensus sequence are exactly what one would expect, as natural selection is just as capable of exploiting random deviations as it is of exploiting "designed" (i.e. adaptive) sequences. Indeed, from the standpoint of natural selection, there are no such things as "designed" sequences; nucleotide sequences are only more or less adaptive, as reflected in their frequencies in populations. Some sequences are apparently not adaptive at all (i.e. they are not conserved as the result of natural selection) - we sometimes refer to such sequences as "junk DNA", although that term carries implications that do not reflect what we currently understand about non-adaptive DNA sequences. Other sequences (the ones that the "explanatory filter" is supposed to be able to distinguish) are adaptive at some level. However, the only way to tell if a sequence is actually adaptive is to be able to show, from the level of nucleotide sequence all the way up to phenotypic differences, that there is a statistically significant difference between the reproductive success (i.e. "fitness") associated with one sequence as compared with another. Until this is possible (and we are a long way from it), any attempt to rule out selection as the efficient cause of nucleotide sequences is pointless (as is the "explanatory filter").
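The point that selection sees only relative fitness, not "design," can be illustrated with a minimal deterministic model of two promoter variants (the 2% fitness advantage and 400-generation horizon are arbitrary assumptions for illustration, not values from the post):

```python
# Minimal haploid selection sketch (assumed parameters): two sequence
# variants with slightly different fitness. Frequencies shift purely
# through differential reproduction; no "design" distinction is needed.

def select_generation(freq_a: float, w_a: float, w_b: float) -> float:
    """Deterministic frequency of variant A after one round of selection."""
    mean_w = freq_a * w_a + (1 - freq_a) * w_b
    return freq_a * w_a / mean_w

freq = 0.01                      # variant A starts rare
for gen in range(400):
    freq = select_generation(freq, w_a=1.02, w_b=1.00)  # 2% advantage

# A modest advantage compounds: the once-rare variant approaches fixation.
print(f"frequency of A after 400 generations: {freq:.3f}")
```

Note the design choice: the model is deterministic (infinite population), so it isolates selection; adding binomial sampling per generation would also capture the genetic drift mentioned above.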

