Wednesday, April 22, 2009

Why Intelligent Design Supporters Insist That ID Must Be True


It has taken me a very long while, but I think I finally understand why Intelligent Design (ID) exists, why websites like Uncommon Descent exist, and why the regular commentators who support ID at those websites are so determined to assert the absolute reality of ID, in spite of a complete lack of empirical evidence.

It’s all right here in this quote about the ultimate justification for morality:
“Of course [the validity of an objective moral code] is all dependent upon the truth of the existence of God and the truthfulness of scripture - most of us here are aware of that.”

I believe that this is the crux of the whole science versus ID debate: if there is no empirical evidence for the existence of God, then it all comes down to pure, unsupported supposition. Yes, one can assert that God exists, and can assert that whatever God asserts must therefore, by definition, be the absolute objective truth. But by the standards of scientific logic (now almost universally accepted as providing the most reliable descriptions of reality), arguments based purely and solely on assertion are no longer considered valid.

Ergo, without some independent source of evidence – independent of the original assertion, that is – then it all comes down to dueling assertions, which means that eventually it all comes down to force majeure: whoever can make the most forceful assertion gets to define the Truth.

Therefore, there must be some kind of empirical evidence for the existence of God. The fact that no one has ever found any is completely irrelevant, and will remain so indefinitely. It also explains why it is perfectly legitimate to deliberately distort, misinterpret, omit, or otherwise alter empirical evidence if it does not support the otherwise unsupportable assertion that God exists. [1]

Here is the way it looks to me:

Condition #1:

• If a moral code is not objective, it is ipso facto invalid.

• The moral code asserted by God is the only objective moral code. [2]

• If God does not exist, then there is no basis for the assertion that there is an objective moral code.

• Therefore, if God does not exist, anything is permitted.

Condition #2:

• An argument supported purely by assertion(s) is invalid. [3]

• Ever since Francis Bacon’s Novum Organum, it has generally been considered necessary that there should be empirical evidence (either direct or indirect) in support of arguments.

• Ergo, there must be empirical evidence in support of the assertion that God exists. Otherwise, there can be no objective morals, and therefore anything is permitted.

Conclusion:

Since God must exist (otherwise there are no morals and anything is permitted), then there must be empirical evidence for His existence. Finding none, it is therefore necessary to pretend that some exists, or to make some up. Otherwise there can be no objective basis for morals, society will necessarily collapse into chaos, and we will all inevitably become insatiable, maniacal, cannibalistic, orgiastic mass murderers, rapists, and thieves.
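The deductive skeleton of Condition #1 can be checked mechanically. The sketch below is my own toy encoding (the proposition names and the brute-force checker are invented for illustration, not anything from the post itself); it confirms that the chain of implications is formally valid, which is a separate matter from whether its premises are true, and that separation is exactly the point at issue.

```python
from itertools import product

def implies(p, q):
    """Material implication: 'if p then q'."""
    return (not p) or q

propositions = ["god_exists", "objective_morals", "anything_permitted"]

def valid(premises, conclusion):
    """An argument form is valid iff the conclusion holds in every
    truth assignment that satisfies all of the premises."""
    for values in product([True, False], repeat=len(propositions)):
        model = dict(zip(propositions, values))
        if all(p(model) for p in premises) and not conclusion(model):
            return False  # found a counterexample model
    return True

premises = [
    # "If God does not exist, there is no objective moral code."
    lambda m: implies(not m["god_exists"], not m["objective_morals"]),
    # "If there is no objective moral code, anything is permitted."
    lambda m: implies(not m["objective_morals"], m["anything_permitted"]),
]
# "Therefore, if God does not exist, anything is permitted."
conclusion = lambda m: implies(not m["god_exists"], m["anything_permitted"])

print(valid(premises, conclusion))  # True: the form is valid
```

Dropping either premise breaks the chain, and the checker finds a counterexample; validity of the form, of course, says nothing about whether the premises themselves are supported by evidence.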

It also seems to me that this is the reason why ethical philosophers now virtually unanimously agree that ethical prescriptions cannot be derived from the statements of empirical science (i.e. "ought" cannot be derived from "is"). To do so not only conflates two separate domains of logic (i.e. deductive versus inductive), but also requires that there be empirical evidence for something (i.e. ethical prescriptions) that is not and cannot be justified by empirical analysis (i.e. the workings of nature). Yes, we can use empirical analysis to determine whether our ethical prescriptions have brought about the goals which we have decided to pursue, but we cannot use empirical analysis to formulate those goals.

Notes:

[1] Unsupportable on the basis of empirical evidence, that is.

[2] An obvious corollary to this is that each and every one of God’s moral prescriptions is both objective and absolutely True, by definition. Hence the argument that anything God prescribes (such as the massacre of the Canaanites) is morally right, simply by virtue of His saying so.

[3] To be specific, arguments based purely on deductive (i.e. Aristotelian) logic have been largely superseded by arguments based on inductive logic.

************************************************

As always, comments, criticisms, and suggestions are warmly welcomed!

--Allen


Wednesday, January 07, 2009

Natural Theology, Theodicy, and The Name of the Rose


AUTHOR: Allen MacNeill

SOURCE: Original essay

COMMENTARY: That's up to you...
"Before, we used to look to heaven, deigning only a frowning glance at the mire of matter; now we look at the earth, and we believe in the heavens because of earthly testimony."
- Jorge of Burgos, The Name of the Rose, by Umberto Eco (William Weaver, translator)

It's a new year and a new administration (in more ways than one), and over at Uncommon Descent (the former weblog of mathematician and theologian William Dembski), social epistemologist and "intelligent design" apologist Steve Fuller has begun a series of posts on the subject of theodicy.

I read his first post on the subject with some interest, as I have just finished re-reading (for the fifth time) Umberto Eco's novel, The Name of the Rose. When I was a kid, it was inconceivable to me that a person could re-read a book. That was like seeing a movie over again; it just never happened. But now I often re-read books, and any movie or television show can be viewed as many times as one can possibly stand it.

One of the reasons I re-read books is that I've found that I often discover new things in them on re-reading. What I had never noticed before about The Name of the Rose is that one of its main themes is the relationship between empirical evidence (that is, evidence that we can observe, either directly or indirectly) and faith, as exemplified by the epigraph for this blogpost.

What Jorge of Burgos (a thinly veiled portrait of Jorge Luis Borges) is speaking about is the relationship between empirical evidence and faith. He laments that, whereas in past times one's belief was justified entirely by faith, now (in the 14th century) it was grounded in empirical observation; that is, in evidence derived from the observation of "base matter". Jorge's theology, which could be called revealed theology, was based on scripture and religious experiences of various kinds (especially as portrayed in the Holy Bible and the biographies of the Christian saints).

The "new" way of thinking that Jorgé laments is natural theology, a branch of theology based on reason and ordinary experience, according to which the existence and intentions of God are investigated rationally, based on evidence from the observable physical world. Natural theology has a long history, reaching back to the Antiquitates rerum humanarum et divinarum of Marcus Terentius Varro (116-27 BC). However, for almost two millennia natural theology was a minority tradition in Christian theology.

The replacement of revealed theology by natural theology represents a fundamental shift in the theological basis of belief in the existence of God, one which began in the 1st century BC but which reached its tipping point in the early 19th century. In 1802 the Reverend William Paley published Natural Theology: or, Evidences of the Existence and Attributes of the Deity, Collected from the Appearances of Nature. Charles Darwin himself praised Paley's work, and it had a profound effect on the direction of Christian theology, especially in England and America.

Paley's argument in Natural Theology is that one can logically infer the existence and attributes of God by the empirical study of the natural world (hence the name "natural" theology). Paley's famous argument of the "watch on the heath" was based on the idea that complex entities (such as a pocketwatch) cannot come about by accident, the way simple "natural" objects such as boulders do. Rather, Paley observes that a pocketwatch clearly has a purpose (i.e. to indicate the time) and is composed of a set of designed, complex, interactive parts (the gears, springs, hands, face, case, and crystal of the watch) which we know for a fact are designed. He then argues by means of analogy that living organisms are even more clearly purposeful entities that must have a designer.

I have already pointed out the weaknesses of arguments of analogy. I have also criticized Steve Fuller's arguments vis-a-vis "intelligent design theory" (see here as well).

What I want to do in this blogpost is to analyze Fuller's first blogpost at Uncommon Descent on "ID and the Science of God". Fuller begins with a recapitulation of the definition of "intelligent design" contained in the mission statement of Uncommon Descent:
ID is the science of design detection — how to recognize patterns arranged by an intelligent cause for a purpose [emphasis added]

Fuller takes this definition quite seriously, arguing that the "intelligence" that does the designing in ID exists "outside of matter" (i.e. outside of the natural, physical universe). He then points out that this "intelligence" is "...a deity who exists in at least a semi-transcendent state." But then he poses the crucial question: "[H]ow can you get any scientific mileage from that?"

I would extend Fuller's question by turning it around: How can one get any theological mileage out of the idea that the existence and attributes of the deity can be inferred from observations of the natural, physical universe? This is precisely the program of natural theology, and it is the reason that I believe that natural theology is both intellectually bankrupt and ultimately destructive of belief in God. And, I am apparently not alone in this second belief; several of the comments on Fuller's post express essentially the same misgivings.

The problem here is the problem of theodicy. Fuller asserts that theodicy was originally a much broader topic than it is today. According to him,
Theodicy exists today as a boutique topic in philosophy and theology, where it’s limited to asking how God could allow so much evil and suffering in the world.

However, according to Fuller, theodicy once encompassed
"...issues that are nowadays more naturally taken up by economics, engineering and systems science – and the areas of biology influenced by them: How does the deity optimise, given what it’s trying to achieve (i.e. ideas) and what it’s got to work with (i.e. matter)? This broader version moves into ID territory, a point that has not escaped the notice of theologians who nowadays talk about theodicy." [emphasis in original]

Setting aside Fuller's historical analysis of the meaning(s) of theodicy (which I believe is both incorrect and the reverse of the actual historical evolution of the idea), I believe that Fuller gives Christians who still believe in the primacy of revelation over reason good reason to be concerned about the theological implications of ID:
"[Some theists are] uneasy about concepts like ‘irreducible complexity’ for being a little too clear about how God operates in nature. The problem with such clarity, of course, is that the more we think we know the divine modus operandi, the more God’s allowance of suffering and evil looks deliberate, which seems to put divine action at odds with our moral scruples. One way out – which was the way taken by the original theodicists – is to say that to think like God is to see evil and suffering as serving a higher good, as the deity’s primary concern is with the large scale and the long term."

I have pointed out in an earlier blogpost that this line of reasoning necessarily leads to the conclusion that God (i.e. the "intelligent designer" of ID theory) is a utilitarian Whose means are justified by His ends. As I have pointed out, this conclusion is both morally abhorrent and contrary to Christian doctrine. Fuller agrees, pointing out that "...religious thinkers complained about theodicy from day one":
"...a devout person might complain that this whole way of thinking about God is blasphemous, since it presumes that we can get into the mind of God – and once we do, we find a deity who is not especially loveable, since God seems quite willing to sacrifice His creatures for some higher design principle."

This was precisely my point in my earlier post, and it parallels Darwin's feeling about the more negative attributes of the deity.

However, Fuller takes a different tack in his analysis of theodicy:
"...it’s blasphemous to suppose that God operates in what humans recognise as a ‘rational’ fashion. So how, then, could theodicy have acquired such significance among self-avowed Christians in the first place...and...how could its mode of argumentation have such long-lasting secular effects...in any field [such as evolutionary theory] concerned with optimisation?"

He then goes on to make essentially the same argument as that put forth by almost all ID supporters, an argument by analogy:
We tend to presume that any evidence of design is, at best, indirect evidence for a designer. But this is not how the original theodicists thought about the matter. They thought we could have direct (albeit perhaps inconclusive) evidence of the designer, too. Why? Well, because the Bible says so. In particular, it says that we humans are created in the image and likeness of God. At the very least, this means that our own and God’s beings overlap in some sense. (For Christians, this is most vividly illustrated in the person of Jesus.)

And how, precisely, is this an argument by analogy? Here it is:
The interesting question, then, is to figure out how much of our own being is divine overlap and how much is simply the regrettable consequence of God’s having to work through material reality to embody the divine ideas ‘in’ – or, put more controversially, ‘as’ — us. Theodicy in its original full-blooded sense took this question as its starting point. [emphasis added]

By "overlap" Fuller clearly means "analogy"; that is, how analogous is the "design" of nature (presumably brought about by the "intelligent designer", i.e. God) to human (and therefore divine) "design"? This inquiry, therefore, is based on the assumption that finding such analogies is prima facie proof that "design" in nature is the result of "intelligence" (and therefore, by extension, "divine intelligence").

But, as any undergraduate in elementary logic has learned, arguments by analogy alone are not valid evidence for anything. This is because there is nothing intrinsic to analogies that can allow us to determine their validity. As I have pointed out in an earlier blogpost, all analogies are false to some degree: the only "true" analogy to a thing is the thing itself.

Fuller lists four reasons why theodicy became important at about the same time as natural theology. These are:
• that the widespread publication of the Holy Bible not only facilitated the rise of Protestantism, it also made possible "individual confirmation" of one's "overlap" (i.e. analogy) with the deity;

• that "...theodicists...read the Bible as the literal yet fallible word of God. There is scope within Christianity for this middle position because of known problems in crafting the Bible, whose human authorship is never denied...."

• that "...theodicists...claimed legitimacy from Descartes, whose ‘cogito ergo sum’ proposed an example of human-divine overlap, namely, humanity’s repetition of how the deity establishes its own existence. After all, creation is necessary only because God originally exists apart from matter, and so needs to make its presence felt in the world through matter...."; and

• that the Scientific Revolution shifted the focus of theology from revelation to empirical investigation, grounding belief in God and His intentions in observable reality via arguments by analogy.

Let's summarize all of this before going on. According to Fuller, theodicy entails that:
1) the Holy Bible illustrates the analogies between humans and God;

2) the Holy Bible is an imperfect document, written by imperfect humans (and, by extension, should not necessarily be taken literally);

3) the Cartesian cogito ergo sum provides a paradigm of the analogy between human and divine "intelligence" by pointing to the connections between "supernatural" ideas and "natural" phenomena, and

4) the scientific method, fundamentally grounded in empirical verification, provides the most valid paradigm for understanding reality.

Here is where I find the connection to The Name of the Rose. Umberto Eco has pointed out that the title of his novel has several allusions, including Dante's mystic rose, "go lovely rose", the War of the Roses, "rose thou art sick", too many rings around Rosie, "a rose by any other name", "a rose is a rose is a rose", the Rosicrucians...there are probably as many meanings as there are readers, and more. Eco asserts that the concluding Latin hexameter,
stat rosa pristina nomine, nomina nuda tenemus ("the rose of old remains only in its name; we hold bare names")

points to a nominalist interpretation of his novel (see "Accuracy, Precision, Nominalism, and Occam's Razor").

And I agree with his assessment; the name of the rose is not the rose. Or, as Korzybski put it, the map is not the territory. However, this conclusion can be taken in one of two ways. According to the first (which is based on Platonic idealism), the idea of the rose is what "matters". That is, the idea of the rose pre-exists the rose, and therefore brings the rose into existence. The idea of the rose, therefore, is what is real (hence "Platonic realism"). This is the approach taken by revelation theologists, natural theologists, and ID supporters: that the "design" of the rose (i.e. the "idea" in the "mind" of the "intelligent designer") comes first, and is made manifest in the actual, physical rose.

However, an alternative interpretation is that the rose comes first; our name for the entities which exhibit "roseness" is based on our perception of the analogies between those observed entities we come to call "roses". This is the approach taken by virtually all natural scientists, especially evolutionary biologists. As I have pointed out elsewhere, the "designer" in this case is nature itself; the environment (both external and internal) of the phylogenetic lineage of the entities we call "roses". The "design" produced by this "designer" is encoded within the genome of the rose, and expressed within its phenotype, which is made manifest by an interaction between the rose's genome and its environment.

This view is perhaps most succinctly expressed by Darwin himself, in the concluding paragraph of the Origin of Species:
It is interesting to contemplate an entangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner, have all been produced by laws acting around us. [emphasis added]

Darwin saw the physical world as being entirely regulated by a set of natural laws, including laws which had the effect of producing the "origin of species" and evolutionary adaptations. In his published writings, he declined to attribute the authorship of such laws to a deity, and in his private correspondence he generally refused to speculate on it as well.

This is precisely the same position taken by almost all evolutionary biologists, and is echoed in the words of William of Baskerville, Umberto Eco's protagonist in The Name of the Rose, who at the conclusion of the book says:
"It's hard to accept the idea that there cannot be an order in the universe because it would offend the free will of God and His omnipotence."
- William of Baskerville, The Name of the Rose, by Umberto Eco (William Weaver, translator)


REFERENCES CITED:

Eco, U. (Weaver, W., translator) (1983) A Postscript to The Name of the Rose. Harcourt Brace Jovanovich Publishers, New York, NY, ISBN #015173156X, 84 pages.

For those who are interested, I will be keeping up with Steve Fuller's later posts on this subject at Uncommon Descent. For now, have a happy new year!

As always, comments, criticisms, and suggestions are warmly welcomed!

--Allen


Tuesday, December 30, 2008

On Weird Theories



AUTHOR: Allen MacNeill

SOURCE: Original essay

COMMENTARY: That's up to you...

"... say what you like about the tenets of National Socialism, Dude, at least it's an ethos."
- Walter Sobchak, The Big Lebowski (1998)

I had a next-door neighbor who worked for an aerospace company. One summer afternoon we were sitting beside his barbeque in his back yard, having one of those stream-of-consciousness conversations that often accompanies the guzzling of a six pack or two (his brand was Pabst Blue Ribbon™). I don't remember how the subject came up, but somewhere along the line I must have done what my wife calls "hitting the core-dump button." And he did; for the next couple of hours I listened in semi-horrified fascination as he expounded on his "theory" of reality. Basically, it was a weird variant on the "eagle and snake" mythology of the Aztecs, except that in his own weird theory the snake was the major icon. He went on and on about how the world (and time and everything else) was, at some much deeper level of reality, a snake. Ouroboros and the Midgard Serpent and Satan in the Garden of Eden and Freud's phallic symbols and the Caduceus and on and on and on...it was all tied together in a huge, complicated, and ultimately deranged web of relationships. It clearly was very meaningful to him; at times he seemed on the verge of tears. He showed me a medal of the Aztec eagle-and-snake image, which he wore around his neck at all times (even to bed and in the shower). He told me how it got him through some bad times in Vietnam, and later when he almost broke up with his wife. The emotional connections were so intense that he was shaking at times, and there was a catch in his voice.

This wasn't the first (or the last) time something like this has happened to me. Several times – on the bus or in the bus terminal, on a long car trip with a friend, at the airport, over lunch, at a picnic for work or a fraternal organization – someone hears or thinks of the word or phrase that "hits the core-dump button" and out it all comes. You sit there, in awe and trepidation, while the core-dumper gives you their entire "weird theory" of reality, all in one huge, steaming, highly charged, stream-of-consciousness pile. Sometimes it's clear that they have never articulated this before to anyone. Other times it's clear that they've been working on this particular monolog, maybe for years, and have already "gifted" others with a very similar version. Every time it's always intensely emotional for them, as the whole weird mess unspools and they search your face for some sign of recognition, of empathy, of understanding.

And, with me at least, they don't get that. I listen politely, trying not to look perplexed or horrified, waiting for the whole thing to come tumbling out, and hoping for something to then divert us – the burgers starting to burn or the bus arriving or the teller asking for my driver's license. I nod sometimes, and grunt in what I hope is a non-judgmental way, and quietly wish for someone or something to intervene before the core-dumper realizes that, not only do I not empathize, I think they're nuts.

Because they are, at some very deep level. Almost all of us are; completely whacked. What we almost all have, buried deep in our psyche, is what I call a "weird theory of reality," in which we believe passionately, and into which we shoehorn almost every perception we have about reality. Furthermore, it's clear to me that people have always had such "weird theories" about reality. Today it's alien abductions or UFOs, astral projections, mental telepathy, ESP, clairvoyance, spirit photography, telekinetic movement, full trance mediums, the Loch Ness monster and the theory of Atlantis. Yesterday it was angels and demons, fairies, gnomes, trolls, heaven and Hell, transubstantiation, faith healing, walking on water, flying, and speaking in tongues. Tomorrow...well, I can't say for sure, but I am sure it will be something weird.

What makes modern "weird theories" different from those of the past is that today everyone has their own "weird theory". When people lived in small agricultural villages or even smaller hunter-gatherer groups, people had weird theories, but these were pretty similar within those groups. Heresy was difficult, if not virtually unthinkable, because everyone in a particular group was in constant verbal and emotional contact with virtually everyone else, and there was a strong incentive to conform to group norms of belief.

This pattern persisted into historic times with the establishment and enforcement of "state religions" - that is, weird theories of reality that had the force of political coercion behind them. People may have had personally idiosyncratic versions of the group's weird theory, but they generally kept these to themselves. These "group weird theories" (GWT) were the mythologies that held such groups together, that gave them a sense of shared experience and shared purpose, and that facilitated group coordination. This was often a good thing, but sometimes a bad thing: it made possible group coordination in agriculture and response to natural disasters, but also facilitated warfare and small-scale genocide.

What characterizes us now is that our weird theories are almost entirely idiosyncratic, especially in the First World. We have largely given up the large-scale group mythologies and religions of the past, and replaced them with what could be called "personal mythologies and religions". Protestantism is the most influential religion in America today precisely because it isn't a single religion: it's thousands, even millions of little idiosyncratic religions, with some shared similarities. Schism, right down to the individual level (and even within individuals at different times in our lives), is the norm, and so our weird theories are not only weird, they're mutually incomprehensible.

So, are the various sciences also "weird theories"? Anyone acquainted with the current state of quantum physics would almost certainly agree, as would most evolutionary biologists. But, it's not really the same, because although there are many weird theories in science, there is also an underlying agreement that is deeply "unweird" – the idea that empirical verification and logical inference are the basis for all of our weird theories.

Ultimately, the difference between non-scientific and scientific "weird theories" is that eventually the latter become generally accepted by the scientific community in the same way that the grand overarching religious weird theories of past centuries were. Yes, there are still schisms in science (think of the controversies surrounding punctuated equilibrium versus phyletic gradualism), but in the long run these schisms tend to heal themselves. Thomas Kuhn described this process well, but he also asserted (and most scientists would agree) that eventually the various scientific communities agree on their dominant paradigms. Science, in other words, tends to become more unified over time, as deep connections between the various weird theories stitch them together into "grand unified theories".

By contrast, the non-scientific "weird theories" schism and schism and schism, until they become the incomprehensible idiosyncratic messes that one taps into when one hits the "core-dump button". Indeed, one of my personal weird theories is that this is a good way to distinguish between useful (i.e. "true") and pointless (i.e. "false") weird theories: the former tend to unify your ideas with those of the other members of your community, whereas the latter tend to separate us to the point of mutual incomprehensibility.

Hence, the quote from The Big Lebowski: say what you like about the tenets of (insert scientific discipline here), Dude, at least it's an ethos.

RECOMMENDED READING:

Shermer, M. (2002) Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time. Holt, New York, NY, ISBN #0805070893 ($17.00, paperback), 384 pages. Available here.

Sowin, J. (2008) 25 reasons people believe weird things. Pseudoscience, Life, Science, Religion, 28 April 2008. Available online here.

As always, comments, criticisms, and suggestions are warmly welcomed!

--Allen


Saturday, June 17, 2006

Identity, Analogy, and Logical Argument in Science (Updated)


AUTHOR: Allen MacNeill

SOURCE: Original essay

COMMENTARY: That's up to you...
"...analogy may be a deceitful guide."
- Charles Darwin, Origin of Species

The descriptions and analysis of the functions of analogy in logical reasoning that I am about to present are, in my opinion, not yet complete. I have been working on them for several years (actually, about 25 years all told), but I have yet to be completely satisfied with them. I am hoping, therefore, that by making them public here (and eventually elsewhere) they can be clarified to everyone’s satisfaction.

SECTION ONE: ON ANALOGY

To begin with, let us define an analogy as “a similarity between separate (but perhaps related) objects and/or processes”. As we will see, this definition may require refinement (and may ultimately rest on premises that cannot be proven - that is, axioms - rather than formal proof). But for now, let it be this:

DEFINITION 1.0: Analogy = Similarity between separate objects and/or processes (from the Greek analogia, meaning “proportion”: ana, “according to,” and logos, “ratio” or “reason”).

AXIOM 1.0: The only perfect analogy to a thing is the thing itself.

COMMENTARY 1.0: This is essentially a statement of the logical validity of tautology (from the Greek to auto, meaning “the same,” and logos, meaning “word” or “information”). As Ayn Rand (and, according to her, Aristotle) asserted:

AXIOM 1.0 (formal statement): A = A

From this essentially unprovable axiom, the following corollary may be derived:

COROLLARY 1.1: All analogies that are not identities are necessarily imperfect.

AXIOM 2.0: Only perfect analogies are true.

COROLLARY 2.1: Only identities (i.e. tautologies, or "perfect" analogies) are true.

COROLLARY 2.2: Since only tautologies are prima facie "true", this implies that all analogical statements (except tautologies) are false to some degree. This leads us to:

AXIOM 3.0: All imperfect analogies are false to some degree.

AXIOM 3.0 (formal statement): A ≠ not-A

COROLLARY 3.1: Since all non-tautological analogies are false to some degree, all arguments based on non-tautological analogies are also false to the same degree.

COMMENTARY 2.0: The validity of all logical arguments that are not based on tautologies is a matter of degree, with some arguments being based on less false analogies than others.

CONCLUSION 1: As we will see in the next sections, all forms of logical argument (i.e. transduction, induction, deduction, and abduction) necessarily rely upon non-tautological analogies. Therefore, to summarize:
All forms of logical argument (except for tautologies) are false to some degree.

Our task, therefore, is not to determine if non-tautological logical arguments are true or false, but rather to determine the degree to which they are false (and therefore the degree to which they are also true), and to then use this determination as the basis for establishing confidence in the validity of our conclusions.
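The idea that every non-tautological analogy is "false to some degree" can be given a toy numerical rendering. The sketch below is my illustration, not the essay's: it scores the overlap between two invented feature sets with the Jaccard index, so that identity (the "perfect analogy") scores exactly 1.0 and every imperfect analogy scores less.

```python
def degree_of_analogy(a, b):
    """Score the analogy between two feature sets with the Jaccard
    index: shared features divided by total features. Identity
    scores 1.0 (the 'perfect analogy'); all other analogies less."""
    if not a and not b:
        return 1.0  # two empty feature sets are trivially identical
    return len(a & b) / len(a | b)

# Invented feature lists, loosely modeled on Paley's watch argument.
watch = {"has parts", "has a purpose", "metallic", "made by humans"}
organism = {"has parts", "has a purpose", "organic", "self-reproducing"}

print(degree_of_analogy(watch, watch))     # identity: 1.0
print(degree_of_analogy(organism, watch))  # a partial, imperfect analogy
```

On this crude measure, Paley's watch-organism analogy scores 2/6: some genuine overlap, but far from identity, which is all the argument-by-analogy critique requires.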

SECTION TWO: ON VALIDITY, CONFIDENCE, AND LOGICAL ARGUMENT

Based on the foregoing, let us define validity as “the degree to which a logical statement is free of false analogies.” Therefore, the closer an analogy is to a tautology, the more valid that analogy is.

DEFINITION 2.0: Validity = The degree to which a logical statement is free of false analogies.

COMMENTARY 3.0: Given the foregoing, it should be clear at this point that (with the exception of tautologies):
There is no such thing as absolute truth; there are only degrees of validity.

In biology, it is traditional to determine the validity of an hypothesis by calculating confidence levels using statistical analyses. According to these analyses, if a hypothesis is supported by at least 95% of the data (that is, if the similarity between the observed data and the values predicted by the hypothesis being tested is at least 95%), then the hypothesis is considered to be valid. In the context of the definitions, axioms, and corollaries developed in the previous section, this means that valid hypotheses in biology may be thought of as being at least 95% tautological (and therefore less than 5% false).

DEFINITION 2.1: Confidence = The degree to which an observed phenomenon conforms to (i.e. is similar to) a hypothetical prediction of that phenomenon.

This means that, in biology:
Validity (i.e. truth) is, by definition, a matter of degree.
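The 95% criterion described above can be made concrete with a small numerical sketch. The following Python snippet is not from the original essay; the function name and the normal-approximation method are illustrative choices. It computes a 95% confidence interval around an observed rate of agreement between data and prediction:

```python
import math

def proportion_ci(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation confidence interval for an observed proportion.
    z = 1.96 corresponds to the conventional 95% level."""
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return max(0.0, p - half), min(1.0, p + half)

# 95 of 100 observations match the hypothesis's predictions:
low, high = proportion_ci(95, 100)
print(f"95% CI for the agreement rate: {low:.3f} to {high:.3f}")
```

If the whole interval lies above the chosen threshold, the hypothesis would be accepted at that confidence level; the sketch is only meant to show that "95% confidence" is a computable quantity, not a metaphor.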

Following long tradition, an argument (from the Latin arguere, meaning “to make clear”) is considered to be a statement in which a premise (or premises, if more than one, from the Latin prae, meaning “before” and mittere, meaning “to place”) is related to a conclusion (i.e. the end of the argument). There are four kinds of argument, based on the means by which a premise (or premises) are related to a conclusion: transduction, induction, deduction, and abduction, which will be considered in order in the following sections.

DEFINITION 2.2: Argument = A statement of a relationship between a premise (or premises) and a conclusion.

Given the foregoing, the simplest possible argument is a statement of a tautology, as in A = A. Unlike all other arguments, this statement is true by definition (i.e. on the basis of AXIOM 1.0). All other arguments are only true by matter of degree, as established above.

SECTION THREE: ON TRANSDUCTION

The simplest (and least effective) form of logical argument is argument by analogy. The Swiss child psychologist Jean Piaget called this form of reasoning transduction (from the Latin trans, meaning “across” and ducere, meaning “to lead”), and showed that it is the first and simplest form of logical analysis exhibited by young children. We may define transduction as follows:

DEFINITION 3.0: Transduction = Argument by analogy alone (i.e. by simple similarity between a premise and a conclusion).

A tautology is the simplest transductive argument, and is the only one that is “true by definition.” As established above, all other arguments are “true only by matter of degree.” But to what degree? How many examples of a particular premise are necessary to establish some degree of confidence? That is, how confident can we be of a conclusion, given the number of supporting premises?

As the discussion of confidence in Section 2 states, in biology at least 95% of the observations that we make when testing a prediction that flows from an hypothesis must be similar to those predicted by the hypothesis. This, in turn, implies that there must be repeated examples of observations such that the 95% confidence level can be reached.

However, in a transductive argument, all that is usually stated is that a single object or process is similar to another object or process. That is, the basic form of a transductive argument is:

Ai => Aa

where:

Ai is an individual object or process

and

Aa is an analogous (i.e. similar, but not identical, and therefore non-tautological) object or process

Since there is only a single example in the premise of such an argument, to state that there is any degree of confidence in the conclusion is very problematic (since it is nonsensical to state that a single example constitutes 95% of anything).

In science, this kind of reasoning is usually referred to as “anecdotal evidence,” and is considered to be invalid for the support of any kind of generalization. For this reason, arguments by analogy are generally not considered valid in science. As we will see, however, they are central to all other forms of argument, but there must be some additional content to such arguments for them to be considered generally valid.

EXAMPLE 3.0: To use an example that can be extended to all four types of logical argument, consider a green apple. Imagine that you have never tasted a green apple before. You do so, and observe that it is sour. What can you conclude at this point?

The only thing that you can conclude as the result of this single observation is that the individual apple that you have tasted is sour. In the formalism introduced above:

Ag => As

where:

Ag = green apple

and

As = sour apple

While this statement is valid for the particular case noted, it cannot be generalized to all green apples (on the basis of a single observation). Another way of saying this is that the validity of generalizing from a single case to an entire category that includes that case is extremely low; so low that it can be considered to be invalid for most intents and purposes.
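The weakness of generalizing from a single case can itself be quantified. As an illustrative aside (the formula is the standard exact Clopper-Pearson lower bound for the special case in which every trial gives the same result; it is not part of the original argument), consider the lower 95% confidence bound on "the proportion of green apples that are sour" after n uniformly sour tastings:

```python
def lower_bound(trials: int, alpha: float = 0.05) -> float:
    """Exact (Clopper-Pearson) lower confidence bound on a proportion,
    for the special case where every one of `trials` observations agreed:
    lower = (alpha / 2) ** (1 / n)."""
    return (alpha / 2) ** (1 / trials)

print(round(lower_bound(1), 3))    # 0.025 - one sour apple tells us almost nothing
print(round(lower_bound(100), 3))  # 0.964 - repetition pins the rate near 1
```

After a single tasting, the true proportion of sour green apples could be almost anything above 2.5%, which is exactly the sense in which anecdotal evidence is "invalid for the support of any kind of generalization."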

SECTION FOUR: ON INDUCTION

A more complex form of logical argument is argument by induction. According to the Columbia Encyclopedia, induction (from the Latin in, meaning “into” and ducere, meaning “to lead”) is a form of argument in which multiple premises provide grounds for a conclusion, but do not necessitate it. Induction is contrasted with deduction, in which true premises do necessitate a conclusion.

An important form of induction is the process of reasoning from the particular to the general. The English philosopher and scientist Francis Bacon, in his Novum Organum (1620), elucidated the first formal theory of inductive logic, which he proposed as a logic of scientific discovery, as opposed to deductive logic, the logic of argumentation. The Scottish philosopher David Hume, who famously questioned whether inductive inference can ever be fully justified, has influenced 20th-century philosophers of science who have focused on the question of how to assess the strength of different kinds of inductive argument (see Nelson Goodman and Karl Popper).

We may therefore define induction as follows:

DEFINITION 4.0: Induction = Argument from individual observations to a generalization that applies to all (or most) of the individual observations.

EXAMPLE 4.0: You taste one green apple; it is sour. You taste another green apple; it is also sour. You taste yet another green apple; once again, it is sour. You continue tasting green apples until, at some relatively arbitrary point (which can be stated in formal terms, but which is unnecessary for the current analysis), you formulate a generalization: “(all) green apples are sour.”

In symbolic terms:

A1 + A2 + A3 + …An => As

where:

A1 + A2 + A3 + …An = individual cases of sour green apples

and

As = green apples are sour

As we have already noted, the number of similar observations (i.e. An in the formula, above) has an effect on the validity of any conclusion drawn on the basis of those observations. In general, enough observations must be made that a confidence level of 95% can be reached, either in accepting or rejecting the hypothesis upon which the conclusion is based. In practical terms, conclusions formulated on the basis of induction have a degree of validity that is directly related to the number of similar observations; the more similar observations one makes, the greater the validity of one’s conclusions.

IMPLICATION 4.0: Conclusions reached on the basis of induction are necessarily tentative and depend for their validity on the number of similar observations that support such conclusions. In other words:
Inductive reasoning cannot reveal absolute truth, as it is necessarily limited only to degrees of validity.
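One simple way to put a number on how repetition strengthens an inductive generalization is Laplace's rule of succession, which estimates the probability that the next case will resemble those already observed. This is an illustrative aside, not part of the original argument; the function name is a choice for the example:

```python
def rule_of_succession(sour: int, tasted: int) -> float:
    """Laplace's rule of succession: the estimated probability that the NEXT
    green apple is sour, after observing `sour` sour apples out of `tasted`."""
    return (sour + 1) / (tasted + 2)

print(rule_of_succession(1, 1))      # ~0.667 - a single (transductive) case is weak
print(rule_of_succession(10, 10))    # ~0.917
print(rule_of_succession(100, 100))  # ~0.990 - repetition strengthens the induction
```

The estimate never reaches 1.0 no matter how many sour apples are tasted, which mirrors the claim that induction yields degrees of validity rather than absolute truth.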

It is important to note that, although transduction alone is invalid as a basis for logical argument, transduction is nevertheless an absolutely essential part of induction. This is because, before one can formulate a generalization about multiple individual observations, it is necessary that one be able to relate those individual observations to each other. The only way that this can be done is via transduction (i.e. by analogy, or similarity, between the individual cases).

In the example of green apples, before one can conclude that “(all) green apples are sour” one must first conclude that “this green apple and that green apple (and all those other green apples) are similar.” Since transductive arguments are relatively weak (for the reasons discussed above), this seems to present an unresolvable paradox: no matter how many similar repetitions of a particular observation, each repetition depends for its overall validity on a transductive argument that it is “similar” to all other repetitions.

This could be called the “nominalist paradox,” in honor of the philosophical tradition founded by the English cleric and philosopher William of Ockham, of “Ockham’s razor” fame. On the face of it, there seems to be no resolution for this paradox. However, I believe that a solution is entailed by the logic of induction itself. As the number of “similar” repetitions of an observation accumulate, the very fact that there are a significant number of such repetitions provides indirect support for the assertion that the repetitions are necessarily (rather than accidentally) “similar.” That is, there is some “law-like” property that is causing the repetitions to be similar to each other, rather than such similarities being the result of random accident.

SECTION FIVE: ON DEDUCTION

A much older form of logical argument than induction is argument by deduction. According to the Columbia Encyclopedia, deduction (from the Latin de, meaning “out of” and ducere, meaning “to lead”) is a form of argument in which individual cases are derived from (and validated by) a generalization that subsumes all such cases. Unlike inductive argument, in which no amount of individual cases can prove a generalization based upon them to be “absolutely true,” the conclusion of a deductive inference is necessitated by the premises. That is, the conclusions (i.e. the individual cases) can’t be false if the premise (i.e. the generalization) is true, provided that they follow logically from it.

Deduction may be contrasted with induction, in which the premises suggest, but do not necessitate a conclusion. The ancient Greek philosopher Aristotle first laid out a systematic analysis of deductive argumentation in the Organon. As noted above, Francis Bacon elucidated the formal theory of inductive logic, which he proposed as the logic of scientific discovery.

Both processes, however, are used constantly in scientific research. By observation of events (i.e. induction) and from principles already known (i.e. deduction), new hypotheses are formulated; the hypotheses are tested by applications; as the results of the tests satisfy the conditions of the hypotheses, laws are arrived at (i.e. by induction again); from these laws future results may be determined by deduction.

We may therefore define deduction as follows:

DEFINITION 5.0: Deduction = Argument from a generalization to an individual case, and which applies to all such individual cases.

EXAMPLE 5.0: You assume that all green apples are sour. You are confronted with a particular green apple. You conclude that, since this is a green apple and green apples are sour, then “this green apple is sour.”

In symbolic terms:

As => Ai

where:

As = all green apples are sour

Ai = any individual case of a green apple

As noted above, the conclusions of deductive arguments are necessarily true if the premise (i.e. the generalization) is true. However, it is not clear how such generalizations are themselves validated. In the scientific tradition, the only valid source of such generalizations is induction, and so (contrary to the Aristotelian tradition), deductive arguments are no more valid than the inductive arguments by which their major premises are validated.

IMPLICATION 5.0: Conclusions reached on the basis of deduction are, like conclusions reached on the basis of induction, necessarily tentative and depend for their validity on the number of similar observations upon which their major premises are based. In other words:
Deductive reasoning, like inductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which its major premise is based.
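IMPLICATION 5.0 can be sketched in code: a deduced conclusion simply inherits whatever degree of validity its major premise earned by induction. This is a minimal illustrative sketch; the rule store, the function name, and the 0.95 figure are all assumptions made for the example, not part of the original essay:

```python
# Each generalization carries the degree of validity earned by induction;
# the 0.95 figure here is purely illustrative.
generalizations = {"green apple": ("sour", 0.95)}

def deduce(kind: str) -> tuple[str, float]:
    """Apply a stored generalization to an individual case. The conclusion
    can be no more valid than its major premise (IMPLICATION 5.0)."""
    prop, validity = generalizations[kind]
    return f"this {kind} is {prop}", validity

conclusion, validity = deduce("green apple")
print(conclusion, "| validity:", validity)
```

The point of the sketch is structural: deduction transmits validity downward from premise to case, but it cannot manufacture any validity of its own.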

Hence, despite the fact that induction and deduction “argue in opposite directions,” we come to the conclusion that, in terms of natural science, the validity of both is ultimately dependent upon the number and degree of similarity of the observations that are used to infer generalizations. Therefore, unlike the case in purely formal logic (in which the validity of inductive inferences is always conditional, whereas the validity of deductive inferences is not), there is an underlying unity in the source of validity in the natural sciences:
All arguments in the natural sciences are validated by inductive inference.

SECTION SIX: ON ABDUCTION

A somewhat newer form of logical argument is argument by abduction. According to the Columbia Encyclopedia, abduction (from the Latin ab, meaning “away” and ducere, meaning “to lead”) is the process of reasoning from individual cases to the best explanation for those cases. In other words, it is a reasoning process that starts from a set of facts and derives their most likely explanation from an already validated generalization that explains them. In simple terms, the new observation(s) is/are "abducted" into the already existing generalization.

The American philosopher Charles Sanders Peirce (last name pronounced like "purse") introduced the concept of abduction into modern logic. In his works before 1900, he generally used the term abduction to mean “the use of a known rule to explain an observation,” e.g., “if it rains, the grass is wet” is a known rule used to explain why the grass is wet:

Known Rule: “If it rains, the grass is wet.”

Observation: “The grass is wet.”

Conclusion: “The grass is wet because it has rained.”

Peirce later used the term abduction to mean “creating new rules to explain new observations,” emphasizing that abduction is the only logical process that actually creates new knowledge. He described the process of science as a combination of abduction, deduction, and induction, stressing that new knowledge is only created by abduction.

This is contrary to the common use of abduction in the social sciences and in artificial intelligence, where Peirce's older meaning is used. Contrary to this usage, Peirce stated in his later writings that the actual process of generating a new rule is not constrained by traditional rules of logic. Rather, he pointed out that humans have an innate ability to make correct logical inferences, and he explained the possession of this ability by the evolutionary advantage it confers.

We may therefore define abduction as follows (using Peirce's original formulation):

DEFINITION 6.0: Abduction = Argument that validates a set of individual cases via an explanation that cites the similarities between the set of individual cases and an already validated generalization.

EXAMPLE 6.0: You have a green fruit, which is not an apple. You already have a tested generalization about green apples that states that green apples are sour. You reason that, since the fruit you have in hand is green and resembles a green apple, it is probably sour (i.e. it is analogous to green apples, which you have already validated as sour).

In symbolic terms:

(Fg = Ag) + (Ag = As) => Fg = Fs

where:

Fg = a green fruit

Ag = green apple

As = sour green apple

and

Fs = a sour green fruit

In the foregoing example, it is clear why Peirce asserted that abduction is the only way to produce new knowledge (i.e. knowledge that is not strictly derived from existing observations or generalizations). The new generalization (“this new green fruit is sour”) is a new conclusion, derived by analogy to an already existing generalization about green apples. Notice that, once again, the key to formulating an argument by abduction is the inference of an analogy between the green fruit (the taste of which is currently unknown) and green apples (which we already know, by induction, are sour).
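The transductive core of abduction can be sketched as a similarity-weighted transfer of a known property. In this illustrative sketch (the Jaccard similarity measure, the feature sets, and the 0.95 validity figure are all assumptions for the example, not part of the original), confidence in the abduced conclusion is discounted by how closely the new case resembles the already validated one:

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two feature sets: the transductive step."""
    return len(a & b) / len(a | b)

# Already-validated generalization about green apples (0.95 is illustrative):
apple_features = {"green", "round", "fruit"}
apple_is_sour_validity = 0.95

def abduce(new_features: set) -> float:
    """Transfer 'sour' to a new case, with confidence discounted by the
    new case's similarity to the already validated category."""
    return apple_is_sour_validity * jaccard(new_features, apple_features)

conf = abduce({"green", "round", "fruit", "fuzzy"})  # an unfamiliar green fruit
print(f"probably sour, with confidence {conf:.2f}")
```

A perfect match would transfer the premise's full validity; any imperfection in the analogy weakens the conclusion, which is the essay's point about transduction limiting abduction.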

IMPLICATION 6.0: Conclusions reached on the basis of abduction, like conclusions reached on the basis of induction and deduction, are ultimately based on analogy (i.e. transduction). That is, a new generalization is formulated in which an existing analogy is generalized to include a larger set of cases.

Again, since transduction, like induction and deduction, is only validated by repetition of similar cases (see above), abduction is ultimately just as limited as the other three forms of argument:
Abductive reasoning, like inductive and deductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which it is premised.

SECTION SEVEN: ON CONSILIENCE

The newest form of logical argument is argument by consilience. According to Wikipedia, consilience (from the Latin con, meaning “with” and salire, meaning “to jump”: literally "to jump together") is the process of reasoning from several similar generalizations to a generalization that covers them all. In other words, it is a reasoning process that starts from several inductive generalizations and derives a "covering" generalization that is both validated by and strengthens them all.

The English philosopher and scientist William Whewell (pronounced like "hewel") introduced the concept of consilience into the philosophy of science. In his book, The Philosophy of the Inductive Sciences, published in 1840, Whewell defined the term consilience by saying “The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.”

The concept of consilience has more recently been applied to science in general and evolutionary biology in particular by the American evolutionary biologist Edward O. Wilson. In his book, Consilience: The Unity of Knowledge, published in 1998, Wilson reintroduced the term and applied it to the modern evolutionary synthesis. His main point was that multiple lines of evidence and inference all point to evolution by natural selection as the most valid explanation for the origin of evolutionary adaptations and new phylogenetic taxa (e.g. species) as the result of descent with modification (Darwin's term for "evolution").

To extend the example for abduction given above, if the grass is wet (and rain is known to make the grass wet), the road is wet (and rain is known to make the road wet), and the car in the driveway is wet (and rain is known to make the car in the driveway wet), then rain can make everything outdoors wet, including objects whose wetness is not yet verified to be the result of rain.

Independent Observation: “The grass is wet.”

Already validated generalization: "Rain makes grass wet."

Independent Observation: “The road is wet.”

Already validated generalization: "Rain makes roads wet."

Independent Observation: “The car in the driveway is wet.”

Already validated generalization: "Rain makes cars in driveways wet."

Conclusion: “Rain makes everything outdoors wet.”

One can immediately generate an application of this new generalization to new observations:

New observation: "The picnic table in the back yard is wet."

New generalization: “Rain makes everything outdoors wet.”

Conclusion: "The picnic table in the back yard is wet because it has rained."

We may therefore define consilience as follows:

DEFINITION 7.0: Consilience = Argument that validates a new generalization about a set of already validated generalizations, based on similarities between the set of already validated generalizations.

EXAMPLE 7.0: You have a green peach, which, when you taste it, is sour. You already have a generalization about green apples that states that green apples are sour, and a generalization about green oranges that states that green oranges are sour. You reason that, since the peach you have in hand is green and sour, all green fruits are probably sour. You may then apply this new generalization to all new green fruits whose taste is currently unknown.

In symbolic terms:

(Ag = As) + (Og = Os) + (Pg = Ps) => Fg = Fs

where:

Ag = green apples

As = sour apples

Og = green oranges

Os = sour oranges

Pg = green peaches

Ps = sour peaches

Fg = green fruit

Fs = sour fruit

Given the foregoing example, it should be clear that consilience, like abduction (according to Peirce), is another way to produce new knowledge. The new generalization (“all green fruits are sour”) is a new conclusion, derived from (but not strictly reducible to) its premises. In essence, inferences based on consilience are "meta-inferences," in that they involve the formulation of new generalizations based on already existing generalizations.
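A minimal sketch of this "meta-inference" step, assuming we represent each already validated generalization as a subject-conclusion pair (the representation and the function name are illustrative choices, not from the original):

```python
def consilience(rules: dict[str, str]) -> dict[str, list[str]]:
    """Group already-validated generalizations by their shared conclusion,
    yielding candidate 'covering' generalizations."""
    covering: dict[str, list[str]] = {}
    for subject, conclusion in rules.items():
        covering.setdefault(conclusion, []).append(subject)
    return covering

validated = {
    "green apples": "sour",
    "green oranges": "sour",
    "green peaches": "sour",
}
for conclusion, supports in consilience(validated).items():
    print(f"covering generalization: green fruits are {conclusion} "
          f"(supported by {len(supports)} independent inductions)")
```

Each supporting induction was validated independently, so the covering generalization rests on several separate lines of evidence at once, which is why consilience earns the highest confidence of the five forms.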

IMPLICATION 7.0: Conclusions reached on the basis of consilience, like conclusions reached on the basis of induction, deduction, and abduction, are ultimately based on analogy (i.e. transduction). That is, a new generalization is formulated in which existing generalizations are generalized to include all of them, and can then be applied to new, similar cases.

Again, since consilience, like induction, deduction, and abduction, is only validated by repetition of similar cases, consilience is ultimately just as limited as the other four forms of argument:
Consilient reasoning, like inductive, deductive, and abductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which it is premised.

However, there is an increasing degree of confidence involved in the five forms of logical argument described above. Specifically, simple transduction produces the smallest degree of confidence, induction somewhat more (depending on the number of individual cases used to validate a generalization), deduction more so (since generalizations are ultimately based on induction), abduction even more (because a new set of observations is related to an already existing generalization, validated by induction), and consilience most of all (because new generalizations are formulated by induction from sets of already validated generalizations, themselves validated by induction).

CONCLUSIONS:

Transduction relates a single premise to a single conclusion, and is therefore the weakest form of logical validation.

Induction validates generalizations only via repetition of similar cases, the validity of which is strengthened by repeated transduction of similar cases.

Deduction validates individual cases based on generalizations, but is limited by the induction required to formulate such generalizations and by the transduction necessary to relate individual cases to each other and to the generalizations within which they are subsumed.

Abduction validates new generalizations via analogy between the new generalization and an already validated generalization; however, it too is limited by the formal limitations of transduction, in this case in the formulation of new generalizations.

Consilience validates a new generalization by showing via analogy that several already validated generalizations together validate the new generalization; once again, consilience is limited by the formal limitations of transduction, in this case in the validation of new generalizations via inferred analogies between existing generalizations.

• Taken together, these five forms of logical reasoning (call them "TIDAC" for short) represent five different but related means of validating statements, listed in order of increasing confidence.

• The validity of all forms of argument is therefore ultimately limited by the same thing: the logical limitations of transduction (i.e. argument by analogy).

• Therefore, there is (and can be) no ultimate certainty in any description or analysis of nature insofar as such descriptions or analyses are based on transduction, induction, deduction, abduction, and/or consilience.

• All we have (and can ever have) is relative degrees of confidence, based on repeated observations of similar objects and processes.

• Therefore, we can be most confident about those generalizations for which we have the most evidence.

• Based on the foregoing analysis, generalizations formulated via simple analogy (transduction) are the weakest and generalizations formulated via consilience are the strongest.

Comments, criticisms, and suggestions are warmly welcomed!

--Allen


Thursday, February 23, 2006

Incommensurate Worldviews



AUTHOR: Allen MacNeill

SOURCE: Original essay

COMMENTARY: That's up to you...

I am beginning to understand more about the differences between the physical sciences (such as astronomy, chemistry, and physics) and the biological sciences, and why the worldview of a physical scientist with a strongly mathematical predilection is apparently so different from mine and that of most other biologists (at least, of those biologists of whom I have personal and/or reputable knowledge). Furthermore, it seems to me that these differences are central to the apparent inability of non-biologists to fully comprehend the "darwinian" worldview upon which much of biology (and all of evolutionary theory) has been constructed (and vice versa, of course).

To me, these appear to be the basic differences that inform our worldviews:

1) CONTINGENCY: The biological sciences (i.e. anatomy & physiology, parts of biochemistry, botany, development & embryology, ecology, ethology, evolution, genetics, marine biology, neurobiology, and the allied subdisciplines), like the "earth sciences" (i.e. atmospheric sciences, geology, etc.) are both contingent and historical. That is, they cannot be derived from "first principles" in the way that algebra, calculus, geometry (both euclidean and non-euclidean), probability, symbolic logic, topology, trigonometry, and other "non-empirical" sciences can be. As both Ernst Mayr and Karl Popper have pointed out, historical contingency is inextricably intertwined with biological causation, in a way that it is not in mathematics and the physical sciences. This would appear to be true, by the way, for both "darwinist" and ID models of biological evolution and the fields derived from them. Indeed, even the Judeo-Christian-Muslim worldview is contingent and historical, in ways antithetical to both mathematics and pre-"big bang" cosmological physics.

2) UNIVERSALITY: The biological sciences are also not "universal" in the way that chemistry and physics are. We assume that the processes described by physical "laws" are universal and ahistorical. That is, we assume that they are the same regardless of where, when, and by whom they are investigated. Furthermore, it is tacitly assumed by physical scientists that the "laws" they discover apply everywhere and everywhen, without empirical verification that this is, in fact, the case. It seems to me that this assumption is reinforced by the mathematical precision with which physical processes can be analyzed and described.

By contrast, the entities and processes studied by biologists are necessarily "messy" and often "non-quantifiable," in the sense that they cannot be entirely reduced to purely mathematical abstractions. The great beauty and elegance of Newton's physics and Pauling's chemistry are that the objects and processes they describe can be so reduced, and when they are, they reveal an underlying mathematical regularity, a regularity so precise and so elegant that one is tempted to believe that the mathematical formalism is what is "real" and the physical entities and processes that they describe are, at best, somewhat imperfect expressions of the underlying perfect regularities.

To me, however, what has always been appealing about biology is its very "messiness." As the so-called Law of Experimental Psychology states "Under carefully controlled conditions, the organism does whatever it damn well pleases." Biological entities and processes are not quantifiable in the same way that physical ones are. This is probably due to the immensely greater complexity of biological entities and processes, in which causal mechanisms are tangled and often auto-catalytic.

3) STOCHASTICITY: The biological sciences are irreducibly statistical/stochastic, in ways that neither the physical nor mathematical sciences generally are (although they are becoming more so as they intrude deeper into biology). R. A. Fisher was not only the premier mathematical modeler of evolution, he was also the founder of modern statistical biometry. This is no accident: both field and laboratory biology (but not 19th century natural history) depend almost completely on statistical analysis. Again, this is probably because the underlying causes for biological processes are so multifarious and intertwined.

Physicists, chemists, and astronomers can accept hypotheses at confidence levels that biologists can never aspire to. Indeed, until recently the whole idea of "confidence levels" was generally outside the vocabulary of the physical sciences. When you repeatedly drop a rock and measure its acceleration, the measurements you get are so precise and fit so well with Newton's descriptive formalism that the idea that one would necessarily need to statistically verify that they do not depart significantly from predictions derived from that formalism seems superfluous. Slight deviations from the predicted behavior of non-living falling objects are considered to be just that: deviations (and most likely the result of observer error, rather than actual deviant causation). Rarely does any physical scientist look at such deviations as indicative of some new, perhaps deeper formalism (but consider, of course, Einstein's explanation of the precession of the perihelion of Mercury, which did not fit Newton's predictions).

4) FORMALIZATION: There are many processes in biology, and especially in organismal (i.e. "skin out") biology, that are so resistant to quantification or mathematical formalization that there is the nagging suspicion that they cannot in principle be so quantified or formalized. It is, of course, logically impossible to "prove" a negative assertion like this - after all, our inability to produce a Seldonian "psychohistory" that perfectly formalizes and therefore predicts animal (and human) behavior could simply be the result of a deficiency in our mathematics or our ability to measure and separately analyze all causative factors.

However, my own experience as a field and laboratory biologist (I used to study field voles - Microtus pennsylvanicus - and now I study people) has instilled in me what could be called "Haldane's Suspicion": that biology "is not only queerer than we imagine, but queerer than we can imagine." That is, given the complexity and interlocking nature of biological causation, it may be literally impossible to convert biology into a mathematically formal science like astronomy, chemistry, or physics.

But that's one of the main reasons I love biology so much. Mathematical formalisms, to me, may be elegant, but they are also sterile. The more perfect the formalism, the more boring and unproductive it seems to me. The physicists' quest for a single unifying "law of everything" is apparently very exciting to people who are enamored of mathematical formalism for its own sake. But to me, it is the very multifariousness – one could even say "cussedness" – of biological organisms and processes that makes them interesting. That biology may not have a single, mathematical "grand unifying theory" (yes, evolution isn't it ;-)) means to me that there will always be a place for people like me, who marvel at the individuality, peculiarity, and outright weirdness of life and living things.

5) PLATONIC VS. DARWINIAN WORLDVIEWS: It seems to me that many ID theorists come at science from what could be called a "platonic" approach. That is, a philosophical approach that assumes a priori that platonic "ideal forms" exist and are the basis for all natural forms and processes. To a person with this worldview, mathematics is the most "perfect" of the sciences, as it literally deals only with platonic ideal forms. Astronomy, chemistry, and physics are only slightly less "perfect," as the objects and processes they describe can be reduced to purely mathematical formalisms (without stochastic elements, at least at the macroscopic level), and when they are so reduced, the predictive precision of such formalisms increases, rather than decreases.

By contrast, I come at science from what could be called a "darwinian" approach. Darwin's most revolutionary (and subversive) idea was not natural selection. Indeed, the idea had already been suggested by Edward Blyth. Rather, Darwin's most "dangerous" idea was that the variations between individual organisms (and, by extension, between different biological events) were irreducibly "real." As Ernst Mayr has pointed out, this kind of "population thinking" fundamentally violates platonic idealism, and therefore represents a revolutionary break with mainstream western philosophical traditions.

I am and have always been partial to the "individualist" philosophical stance represented by darwinian variation. It informs everything I think about reality, from the idea that every individual living organism is irreducibly unique to the idea that my life (and, by extension, everybody else's) is irreducibly unique (and non-replicable). Such a philosophical position might seem to lead to a kind of radical "loneliness," and indeed there have been times when that was the case for me. But since all of us are equal in our "aloneness," it paradoxically becomes one of the things we universally share.

And so, I don't think a "darwinian worldview" applies to the physical sciences (and certainly does not apply to non-empirical sciences, such as mathematics), for the reasons I have detailed above. In particular, it seems clear to me that although it may be possible to mathematically model microevolutionary processes (as R. A. Fisher and J. B. S. Haldane first did back in the early 20th century), it is almost certainly impossible to mathematically model macroevolutionary processes. The reason for this impossibility is that macroevolutionary processes are necessarily contingent on non-repeatable (i.e. "historical") events, such as asteroid collisions, volcanic eruptions, sea level alterations, and other large-scale ecological changes, plus the occurrence (or non-occurrence) of particular (and especially major) genetic changes in evolving phylogenies. While it may be possible to model what happens after such an event (e.g. adaptive radiation), the interactions between events such as these are fundamentally unpredictable, and therefore cannot be incorporated into prospective mathematical models of macroevolutionary changes.
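The kind of microevolutionary model that Fisher and Haldane pioneered really is this tractable; here is a minimal sketch of the standard deterministic one-locus, two-allele selection recursion (the fitness values are invented for illustration):

```python
# Minimal sketch of one-locus, two-allele haploid selection - the sort of
# deterministic microevolutionary model Fisher and Haldane formalized.
# Fitness values below are made up for illustration.

def next_freq(p, w_A, w_a):
    """Frequency of allele A after one generation of selection."""
    mean_fitness = p * w_A + (1 - p) * w_a
    return p * w_A / mean_fitness

p = 0.01               # starting frequency of allele A
w_A, w_a = 1.05, 1.00  # A carries a 5% fitness advantage

for generation in range(400):
    p = next_freq(p, w_A, w_a)

print(f"frequency of A after 400 generations: {p:.3f}")
# A rare beneficial allele sweeps nearly to fixation - smooth, repeatable,
# and fully predictable from the starting conditions.
```

The contrast with macroevolution is exactly the point: nothing in this recursion can anticipate the asteroid that resets the ecological stage mid-run.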

It's like that famous cartoon by Sidney Harris: "Then a miracle occurs..." The kinds of events that are often correlated with major macroevolutionary changes (such as mass extinctions and subsequent adaptive radiations) are like miracles, in that they are unpredictable and unrepeatable, and therefore can't be integrated into mathematical models that require monotonically changing dynamical systems (like newtonian mechanics, for example).

So, to sum up, I believe that the "darwinian worldview" applies only to those natural sciences that are both contingent and intrinsically historical, such as biology, geology, and parts of astrophysics/cosmology. Does this make such sciences less "valid" than the non-historical (i.e. physical) sciences? Not at all; given that physical laws now appear to critically depend on historical/unrepeatable events such as the "big bang," it may turn out to be the other way around. In the long run, even the physical sciences may have to be reinterpreted as depending on contingent/historical events, leaving the non-empirical sciences (mathematics and metaphysics) as the only "universal" (i.e. non-contingent/ahistorical) sciences.

To summarize it in a bullet point:

• Platonic/physical scientists describe reality with equations, whereas darwinian/biological scientists describe reality with narratives.

--Allen

P.S. Alert readers may recognize some of the hallmarks of the so-called Apollonian vs. Dionysian dichotomy in the preceding analysis. That such characteristics are recognizable in my analysis is not necessarily an accident.

P.P.S. It is also very important to keep in mind, when considering any analysis of this sort, that sweeping generalizations are always wrong ;-)
