Wednesday, January 28, 2009

TIDAC: Identity, Analogy, and Logical Argument in Science


AUTHOR: Allen MacNeill

SOURCE: Original essay

COMMENTARY: That's up to you...
"...analogy may be a deceitful guide."
- Charles Darwin, Origin of Species

The descriptions and analysis of the functions of analogy in logical reasoning that I am about to present are, in my opinion, not yet complete. I have been working on them for several years (actually, about 25 years all told), but I have yet to be completely satisfied with them. I am hoping, therefore, that by making them public here (and eventually elsewhere) they can be clarified to everyone’s satisfaction.

This version of this analysis is a revised and updated version of an earlier post at this blog.

SECTION ONE: ON ANALOGY

To begin with, let us define an analogy as “a similarity between separate (but perhaps related) objects and/or processes”. As we will see, this definition may require refinement (and may ultimately rest on premises that cannot be proven - that is, axioms - rather than formal proof). But for now, let it be this:

DEFINITION 1.0: Analogy = Similarity between separate objects and/or processes (from the Greek ana, meaning “according to,” and logos, meaning “word” or “account”).

AXIOM 1.0: The only perfect analogy to a thing is the thing itself.

COMMENTARY 1.0: This is essentially a statement of the logical validity of tautology (from the Greek tautó, meaning “the same,” and logos, meaning “word” or “information”). As Ayn Rand (and, according to her, Aristotle) asserted:

AXIOM 1.0 (symbolic form): A = A

From this essentially unprovable axiom, the following corollary may be derived:

COROLLARY 1.1: All analogies that are not identities are necessarily imperfect.

AXIOM 2.0: Only perfect analogies are true.

COROLLARY 2.1: Only identities (i.e. tautologies, or "perfect" analogies) are true.

COROLLARY 2.2: Since only tautologies are prima facie "true", this implies that all analogical statements (except tautologies) are false to some degree. This leads us to:

AXIOM 3.0: All imperfect analogies are false to some degree.

AXIOM 3.0 (symbolic form): A ≠ not-A

COROLLARY 3.1: Since all non-tautological analogies are false to some degree, all arguments based on non-tautological analogies are also false to the same degree.

COMMENTARY 2.0: The validity of all logical arguments that are not based on tautologies is a matter of degree, with some arguments being based on less false analogies than others.

CONCLUSION 1: As we will see in the next sections, all forms of logical argument (i.e. transduction, induction, deduction, abduction, and consilience) necessarily rely upon non-tautological analogies. Therefore, to summarize:
All forms of logical argument (except for tautologies) are false to some degree.

Our task, therefore, is not to determine if non-tautological logical arguments are true or false, but rather to determine the degree to which they are false (and therefore the degree to which they are also true), and to then use this determination as the basis for establishing confidence in the validity of our conclusions.

SECTION TWO: ON VALIDITY, CONFIDENCE, AND LOGICAL ARGUMENT

Based on the foregoing, let us define validity as “the degree to which a logical statement avoids reliance upon false analogies.” Therefore, the closer an analogy is to a tautology, the more valid that analogy is.

DEFINITION 2.0: Validity = The degree to which a logical statement avoids reliance upon false analogies.

COMMENTARY: Given the foregoing, it should be clear at this point that (with the exception of tautologies):
There is no such thing as absolute truth; there are only degrees of validity.

In biology, it is traditional to determine the validity of an hypothesis by calculating confidence levels using statistical analyses. According to these analyses, if a hypothesis is supported by at least 95% of the data (that is, if the similarity between the observed data and the values predicted by the hypothesis being tested is at least 95%), then the hypothesis is considered to be valid. In the context of the definitions, axioms, and corollaries developed in the previous section, this means that valid hypotheses in biology may be thought of as being at least 95% tautological (and therefore less than 5% false).

DEFINITION 2.1: Confidence = The degree to which an observed phenomenon conforms to (i.e. is similar to) a hypothetical prediction of that phenomenon.
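To make DEFINITION 2.1 concrete, here is a minimal Python sketch of my own (the function names and the use of 95% as a simple cutoff are illustrative assumptions, not a piece of real statistical methodology): it scores the agreement between a set of observations and the value predicted by a hypothesis, and treats the hypothesis as provisionally valid only if that agreement reaches the 95% level discussed above.

# A rough sketch of DEFINITION 2.1: confidence as the degree to which
# observations conform to a hypothetical prediction. Illustrative only;
# real hypothesis testing in biology uses formal statistics.

def confidence(observations, predicted):
    """Fraction of observations that match the predicted value."""
    return sum(1 for obs in observations if obs == predicted) / len(observations)

def is_provisionally_valid(observations, predicted, threshold=0.95):
    """Treat the hypothesis as valid if agreement reaches the threshold."""
    return confidence(observations, predicted) >= threshold

# Usage: 19 of 20 green apples tasted sour.
tastings = ["sour"] * 19 + ["sweet"]
print(confidence(tastings, "sour"))              # 0.95
print(is_provisionally_valid(tastings, "sour"))  # True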

This means that, in biology:
Validity (i.e. truth) is, by definition, a matter of degree.

Following long tradition, an argument (from the Latin arguere, meaning “to make clear”) is considered to be a statement in which a premise (or premises, if more than one, from the Latin prae, meaning “before” and mittere, meaning “to place”) is related to a conclusion (i.e. the end of the argument). There are five kinds of argument, based on the means by which a premise (or premises) are related to a conclusion: transduction, induction, deduction, abduction, and consilience, which will be considered in order in the following sections.

DEFINITION 2.2: Argument = A statement of a relationship between a premise (or premises) and a conclusion.

Given the foregoing, the simplest possible argument is a statement of a tautology, as in A = A. Unlike all other arguments, this statement is true by definition (i.e. on the basis of AXIOM 1.0). All other arguments are only true by matter of degree, as established above.

SECTION THREE: ON TRANSDUCTION

The simplest (and least effective) form of logical argument is argument by analogy. The Swiss child psychologist Jean Piaget called this form of reasoning transduction (from the Latin trans, meaning “across” and ducere, meaning “to lead”), and showed that it is the first and simplest form of logical analysis exhibited by young children. We may define transduction as follows:

DEFINITION 3.0: Transduction = Argument by analogy alone (i.e. by simple similarity between a premise and a conclusion).

A tautology is the simplest transductive argument, and is the only one that is “true by definition.” As established above, all other arguments are “true only by matter of degree.” But to what degree? How many examples of a particular premise are necessary to establish some degree of confidence? That is, how confident can we be of a conclusion, given the number of supporting premises?

As the discussion of confidence in Section 2 states, in biology at least 95% of the observations that we make when testing a prediction that flows from an hypothesis must be similar to those predicted by the hypothesis. This, in turn, implies that there must be repeated examples of observations such that the 95% confidence level can be reached.

However, in a transductive argument, all that is usually stated is that a single object or process is similar to another object or process. That is, the basic form of a transductive argument is:

Ai => Aa

where:

Ai is an individual object or process

and

Aa is an analogous (i.e. similar, but not identical, and therefore non-tautological) object or process

Since there is only a single example in the premise in such an argument, to state that there is any degree of confidence in the conclusion is very problematic (since it is nonsensical to state that a single example constitutes 95% of anything).

In science, this kind of reasoning is usually referred to as “anecdotal evidence,” and is considered to be invalid for the support of any kind of generalization. For this reason, arguments by analogy are generally not considered valid in science. As we will see, however, they are central to all other forms of argument, but there must be some additional content to such arguments for them to be considered generally valid.

EXAMPLE 3.0: To use an example that can be extended to all four types of logical argument, consider a green apple. Imagine that you have never tasted a green apple before. You do so, and observe that it is sour. What can you conclude at this point?

The only thing that you can conclude as the result of this single observation is that the individual apple that you have tasted is sour. In the formalism introduced above:

Ag => As

where:

Ag = green apple

and

As = sour apple

While this statement is valid for the particular case noted, it cannot be generalized to all green apples (on the basis of a single observation). Another way of saying this is that the validity of generalizing from a single case to an entire category that includes that case is extremely low; so low that it can be considered to be invalid for most intents and purposes.
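To make the weakness of transduction explicit, here is a small Python sketch (again, an illustration of my own; the names are arbitrary) in which a generalization is built from a single observed case. The point of the sketch is simply that with one supporting case there is no meaningful sense in which the 95% confidence level discussed above can even be computed.

# Transduction: a single observed case (Ag => As), generalized by analogy alone.
observations = [("green apple #1", "sour")]  # one anecdote

def transductive_generalization(observations):
    """Generalize from a single case by analogy alone (anecdotal evidence)."""
    case, observed_property = observations[0]
    return {
        "claim": f"all things like '{case}' are {observed_property}",
        "supporting cases": len(observations),
        "caveat": "a single case cannot approach a 95% confidence level",
    }

print(transductive_generalization(observations))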

SECTION FOUR: ON INDUCTION

A more complex form of logical argument is argument by induction. According to the Columbia Encyclopedia, induction (from the Latin in, meaning “into” and ducere, meaning “to lead”) is a form of argument in which multiple premises provide grounds for a conclusion, but do not necessitate it. Induction is contrasted with deduction, in which true premises do necessitate a conclusion.

An important form of induction is the process of reasoning from the particular to the general. The English philosopher and scientist Francis Bacon, in his Novum Organum (1620), elucidated the first formal theory of inductive logic, which he proposed as a logic of scientific discovery, as opposed to deductive logic, the logic of argumentation. The Scottish philosopher David Hume's skeptical analysis of induction has influenced 20th-century philosophers of science who have focused on the question of how to assess the strength of different kinds of inductive argument (see Nelson Goodman and Karl Popper).

We may therefore define induction as follows:

DEFINITION 4.0: Induction = Argument from individual observations to a generalization that applies to all (or most) of the individual observations.

EXAMPLE 4.0: You taste one green apple; it is sour. You taste another green apple; it is also sour. You taste yet another green apple; once again, it is sour. You continue tasting green apples until, at some relatively arbitrary point (which can be stated in formal terms, but which is unnecessary for the current analysis), you formulate a generalization: “(all) green apples are sour.”

In symbolic terms:

A1 + A2 + A3 + …An => As

where:

A1 + A2 + A3 + …An = individual cases of sour green apples

and

As = green apples are sour

As we have already noted, the number of similar observations (i.e. An in the formula, above) has an effect on the validity of any conclusion drawn on the basis of those observations. In general, enough observations must be made that a confidence level of 95% can be reached, either in accepting or rejecting the hypothesis upon which the conclusion is based. In practical terms, conclusions formulated on the basis of induction have a degree of validity that is directly related to the number of similar observations; the more similar observations one makes, the greater the validity of one’s conclusions.
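The following Python sketch (an illustration of mine; the 97%-sour population and the sample sizes are arbitrary assumptions) shows the inductive pattern A1 + A2 + A3 + …An => As in action: as the number of similar observations grows, the observed proportion of sour apples stabilizes, and our confidence in the generalization "green apples are sour" grows with it.

# Induction: repeated similar observations support a generalization.
import random

random.seed(1)
# Assume (arbitrarily) that 97% of the green apples in the orchard are sour.
tastings = ["sour" if random.random() < 0.97 else "sweet" for _ in range(200)]

def inductive_support(observations, predicted="sour"):
    """Proportion of observations consistent with the generalization."""
    return sum(obs == predicted for obs in observations) / len(observations)

for n in (1, 10, 50, 200):
    print(n, "apples tasted, support =", round(inductive_support(tastings[:n]), 3))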

IMPLICATION 4.0: Conclusions reached on the basis of induction are necessarily tentative and depend for their validity on the number of similar observations that support such conclusions. In other words:
Inductive reasoning cannot reveal absolute truth, as it is necessarily limited only to degrees of validity.

It is important to note that, although transduction alone is invalid as a basis for logical argument, transduction is nevertheless an absolutely essential part of induction. This is because, before one can formulate a generalization about multiple individual observations, it is necessary that one be able to relate those individual observations to each other. The only way that this can be done is via transduction (i.e. by analogy, or similarity, between the individual cases).

In the example of green apples, before one can conclude that “(all) green apples are sour” one must first conclude that “this green apple and that green apple (and all those other green apples) are similar.” Since transductive arguments are relatively weak (for the reasons discussed above), this seems to present an unresolvable paradox: no matter how many similar repetitions of a particular observation one accumulates, each repetition depends for its overall validity on a transductive argument that it is “similar” to all other repetitions.

This could be called the “nominalist paradox,” in honor of the philosophical tradition founded by the English cleric and philosopher William of Ockham, of “Ockham’s razor” fame. On the face of it, there seems to be no resolution for this paradox. However, I believe that a solution is entailed by the logic of induction itself. As the number of “similar” repetitions of an observation accumulates, the very fact that there are a significant number of such repetitions provides indirect support for the assertion that the repetitions are necessarily (rather than accidentally) “similar.” That is, there is some “law-like” property that is causing the repetitions to be similar to each other, rather than such similarities being the result of random accident.

SECTION FIVE: ON DEDUCTION

A much older form of logical argument than induction is argument by deduction. According to the Columbia Encyclopedia, deduction (from the Latin de, meaning “out of” and ducere, meaning “to lead”) is a form of argument in which individual cases are derived from (and validated by) a generalization that subsumes all such cases. Unlike inductive argument, in which no amount of individual cases can prove a generalization based upon them to be “absolutely true,” the conclusion of a deductive inference is necessitated by the premises. That is, the conclusions (i.e. the individual cases) can’t be false if the premise (i.e. the generalization) is true, provided that they follow logically from it.

Deduction may be contrasted with induction, in which the premises suggest, but do not necessitate a conclusion. The ancient Greek philosopher Aristotle first laid out a systematic analysis of deductive argumentation in the Organon. As noted above, Francis Bacon elucidated the formal theory of inductive logic, which he proposed as the logic of scientific discovery.

Both processes, however, are used constantly in scientific research. By observation of events (i.e. induction) and from principles already known (i.e. deduction), new hypotheses are formulated; the hypotheses are tested by applications; as the results of the tests satisfy the conditions of the hypotheses, laws are arrived at (i.e. by induction again); from these laws future results may be determined by deduction.

We may therefore define deduction as follows:

DEFINITION 5.0: Deduction = Argument from a generalization, which applies to all individual cases of a given kind, to any particular individual case of that kind.

EXAMPLE 5.0: You assume that all green apples are sour. You are confronted with a particular green apple. You conclude that, since this is a green apple and green apples are sour, then “this green apple is sour.”

In symbolic terms:

As => Ai

where:

As = all green apples are sour

Ai = any individual case of a green apple

As noted above, the conclusions of deductive arguments are necessarily true if the premise (i.e. the generalization) is true. However, it is not clear how such generalizations are themselves validated. In the scientific tradition, the only valid source of such generalizations is induction, and so (contrary to the Aristotelian tradition), deductive arguments are no more valid than the inductive arguments by which their major premises are validated.
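Here is a corresponding Python sketch for deduction (my own illustration; the 0.95 figure attached to the major premise is an assumed value standing in for its inductive support). The point it makes is the one just argued: the conclusion about the individual case can be no more valid than the inductively established generalization from which it is deduced.

# Deduction: from the generalization As ("green apples are sour") to an
# individual case Ai.
generalization = {"category": "green apple", "property": "sour", "support": 0.95}

def deduce(case, generalization):
    """Apply a generalization to an individual case that falls under it."""
    if case["category"] == generalization["category"]:
        return (f"{case['name']} is {generalization['property']} "
                f"(validity no greater than {generalization['support']})")
    return "the generalization does not apply to this case"

print(deduce({"name": "green apple #201", "category": "green apple"}, generalization))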

IMPLICATION 5.0: Conclusions reached on the basis of deduction are, like conclusions reached on the basis of induction, necessarily tentative and depend for their validity on the number of similar observations upon which their major premises are based. In other words:
Deductive reasoning, like inductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which its major premise is based.

Hence, despite the fact that induction and deduction “argue in opposite directions,” we come to the conclusion that, in terms of natural science, the validity of both is ultimately dependent upon the number and degree of similarity of the observations that are used to infer generalizations. Therefore, unlike the case in purely formal logic (in which the validity of inductive inferences is always conditional, whereas the validity of deductive inferences is not), there is an underlying unity in the source of validity in the natural sciences:
All arguments in the natural sciences are validated by inductive inference.

SECTION SIX: ON ABDUCTION

A somewhat newer form of logical argument is argument by abduction. According to the Columbia Encyclopedia, abduction (from the Latin ab, meaning “away” and ducere, meaning “to lead”) is the process of reasoning from individual cases to the best explanation for those cases. In other words, it is a reasoning process that starts from a set of facts and derives their most likely explanation from an already validated generalization that explains them. In simple terms, the new observation(s) is/are "abducted" into the already existing generalization.

The American philosopher Charles Sanders Peirce (last name pronounced like "purse") introduced the concept of abduction into modern logic. In his works before 1900, he generally used the term abduction to mean “the use of a known rule to explain an observation,” e.g., “if it rains, the grass is wet” is a known rule used to explain why the grass is wet:

Known Rule: “If it rains, the grass is wet.”

Observation: “The grass is wet.”

Conclusion: “The grass is wet because it has rained.”
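In Peirce's earlier sense, then, abduction selects a known rule whose consequent matches the observation and offers its antecedent as the likely explanation. The following toy Python sketch (my own; the second rule about a sprinkler is an added example, not part of Peirce's text) makes the point that abduction in this sense picks out plausible causes rather than proving any one of them.

# Abduction (Peirce's earlier sense): use known rules to explain an observation.
known_rules = [
    ("it has rained", "the grass is wet"),
    ("the sprinkler ran", "the grass is wet"),
]

def abduce(observation, rules):
    """Return the antecedents of all rules whose consequent matches the observation."""
    return [cause for cause, effect in rules if effect == observation]

print(abduce("the grass is wet", known_rules))
# ['it has rained', 'the sprinkler ran'] -- plausible explanations, not proofs.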

Peirce later used the term abduction to mean “creating new rules to explain new observations,” emphasizing that abduction is the only logical process that actually creates new knowledge. He described the process of science as a combination of abduction, deduction, and induction, stressing that new knowledge is only created by abduction.

This is contrary to the common use of abduction in the social sciences and in artificial intelligence, where Peirce's older meaning is used. In his later writings, Peirce argued that the actual process of generating a new rule is not hampered by the traditional rules of logic. Rather, he pointed out that humans have an innate ability to make correct logical inferences, an ability he explained by the evolutionary advantage it confers.

We may therefore define abduction as follows (using Peirce's original formulation):

DEFINITION 6.0: Abduction = Argument that validates a set of individual cases via an explanation that cites the similarities between the set of individual cases and an already validated generalization.

EXAMPLE 6.0: You have a green fruit, which is not an apple. You already have a tested generalization about green apples that states that green apples are sour. You reason that, since the fruit you have in hand is green and resembles a green apple, it is probably sour (i.e. it is analogous to green apples, which you have already validated as sour).

In symbolic terms:

(Fg = Ag) + (Ag = As) => Fg = Fs

where:

Fg = a green fruit

Ag = green apple

As = sour green apple

and

Fs = a sour green fruit

In the foregoing example, it is clear why Peirce asserted that abduction is the only way to produce new knowledge (i.e. knowledge that is not strictly derived from existing observations or generalizations). The new generalization (“this new green fruit is sour”) is a new conclusion, derived by analogy to an already existing generalization about green apples. Notice that, once again, the key to formulating an argument by abduction is the inference of an analogy between the green fruit (the taste of which is currently unknown) and green apples (which we already know, by induction, are sour).
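The role of the transductive step in abduction can be sketched in Python as follows (my own illustration; the feature sets, the crude overlap measure, and the idea of multiplying similarity by inductive support are all assumptions introduced for the sketch, not part of any formal theory): the green fruit of unknown taste is related by similarity to green apples, and the property "sour" is provisionally transferred, with a confidence bounded by both the strength of the analogy and the strength of the underlying induction.

# Abduction as used above: transfer a validated property to a new case by analogy.
validated = {"category": "green apple", "property": "sour", "support": 0.95}

def similarity(case_features, category_features):
    """Crude stand-in for the transductive step: feature overlap (Jaccard)."""
    shared = case_features & category_features
    return len(shared) / len(case_features | category_features)

green_apple_features = {"green", "fruit", "firm"}
green_fruit_features = {"green", "fruit", "round"}

s = similarity(green_fruit_features, green_apple_features)
print(f"similarity = {s:.2f}; provisional conclusion: the green fruit is "
      f"{validated['property']} (confidence bounded by {s * validated['support']:.2f})")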

IMPLICATION 6.0: Conclusions reached on the basis of abduction, like conclusions reached on the basis of induction and deduction, are ultimately based on analogy (i.e. transduction). That is, a new generalization is formulated in which an existing analogy is generalized to include a larger set of cases.

Again, since transduction, like induction and deduction, is only validated by repetition of similar cases (see above), abduction is ultimately just as limited as the other forms of argument:
Abductive reasoning, like inductive and deductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which it is premised.

SECTION SEVEN: ON CONSILIENCE

The newest form of logical argument is argument by consilience. According to Wikipedia, consilience (from the Latin con, meaning “with” and salire, meaning “to jump”: literally "to jump together") is the process of reasoning from several similar generalizations to a generalization that covers them all. In other words, it is a reasoning process that starts from several inductive generalizations and derives a "covering" generalization that is both validated by and strengthens them all.

The English philosopher and scientist William Whewell (pronounced like "hewel") introduced the concept of consilience into the philosophy of science. In his book, The Philosophy of the Inductive Sciences, published in 1840, Whewell defined the term consilience by saying “The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.”

The concept of consilience has more recently been applied to science in general and evolutionary biology in particular by the American evolutionary biologist Edward O. Wilson. In his book, Consilience: The Unity of Knowledge, published in 1998, Wilson reintroduced the term and applied it to the modern evolutionary synthesis. His main point was that multiple lines of evidence and inference all point to evolution by natural selection as the most valid explanation for the origin of evolutionary adaptations and new phylogenetic taxa (e.g. species) as the result of descent with modification (Darwin's term for "evolution").

To extend the example for abduction given above, if the grass is wet (and rain is known to make the grass wet), the road is wet (and rain is known to make the road wet), and the car in the driveway is wet (and rain is known to make the car in the driveway wet), then rain can make everything outdoors wet, including objects whose wetness is not yet verified to be the result of rain.

Independent Observation: “The grass is wet.”

Already validated generalization: "Rain makes grass wet."

Independent Observation: “The road is wet.”

Already validated generalization: "Rain makes roads wet."

Independent Observation: “The car in the driveway is wet.”

Already validated generalization: "Rain makes cars in driveways wet."

Conclusion: “Rain makes everything outdoors wet.”

One can immediately generate an application of this new generalization to new observations:

New observation: "The picnic table in the back yard is wet."

New generalization: “Rain makes everything outdoors wet.”

Conclusion: "The picnic table in the back yard is wet because it has rained."

We may therefore define consilience as follows:

DEFINITION 7.0: Consilience = Argument that validates a new generalization about a set of already validated generalizations, based on similarities between the set of already validated generalizations.

EXAMPLE 7.0: You have a green peach, which when you taste it, is sour. You already have a generalization about green apples that states that green apples are sour and a generalization about green oranges that states that green oranges are sour. You observe that since the peach you have in hand is green and sour, then all green fruits are probably sour. You may then apply this new generalization to all new green fruits whose taste is currently unknown.

In symbolic terms:

(Ag = As) + (Og = Os) + (Pg = Ps) => Fg = Fs

where:

Ag = green apples

As = sour apples

Og = green oranges

Os = sour oranges

Pg = green peaches

Ps = sour peaches

Fg = green fruit

Fs = sour fruit

Given the foregoing example, it should be clear that consilience, like abduction (according to Peirce), is another way to produce new knowledge. The new generalization (“all green fruits are sour”) is a new conclusion, derived from (but not strictly reducible to) its premises. In essence, inferences based on consilience are "meta-inferences", in that they involve the formulation of new generalizations based on already existing generalizations.
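The "jumping together" of separate inductions can also be put in Python (my own sketch; the support values are assumed numbers, and taking their minimum is just one conservative way of expressing the idea that the covering generalization is no better validated than its weakest contributor):

# Consilience: several independently validated generalizations support a
# covering generalization.
validated_generalizations = [
    {"category": "green apples",  "property": "sour", "support": 0.97},
    {"category": "green oranges", "property": "sour", "support": 0.95},
    {"category": "green peaches", "property": "sour", "support": 0.96},
]

def consilience(generalizations, covering_category="green fruits"):
    """Form a covering generalization if all inputs agree on the property."""
    properties = {g["property"] for g in generalizations}
    if len(properties) != 1:
        return None  # the separate inductions do not "jump together"
    return {
        "claim": f"(all) {covering_category} are {properties.pop()}",
        "support": min(g["support"] for g in generalizations),
        "based on": [g["category"] for g in generalizations],
    }

print(consilience(validated_generalizations))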

IMPLICATION 7.0: Conclusions reached on the basis of consilience, like conclusions reached on the basis of induction, deduction, and abduction, are ultimately based on analogy (i.e. transduction). That is, a new generalization is formulated in which existing generalizations are generalized to include all of them, and can then be applied to new, similar cases.

Again, since consilience, like induction, deduction, and abduction, is only validated by repetition of similar cases, consilience is ultimately just as limited as the other forms of argument:
Consilient reasoning, like inductive, deductive, and abductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which it is premised.

However, there is an increasing degree of confidence involved in the five forms of logical argument described above. Specifically, simple transduction produces the smallest degree of confidence, induction somewhat more (depending on the number of individual cases used to validate a generalization), deduction more so (since generalizations are ultimately based on induction), abduction even more (because a new set of observations is related to an already existing generalization, validated by induction), and consilience most of all (because new generalizations are formulated by induction from sets of already validated generalizations, themselves validated by induction).

CONCLUSIONS:

Transduction relates a single premise to a single conclusion, and is therefore the weakest form of logical validation.

Induction validates generalizations only via repetition of similar cases, the validity of which is strengthened by repeated transduction of similar cases.

Deduction validates individual cases based on generalizations, but is limited by the induction required to formulate such generalizations and by the transduction necessary to relate individual cases to each other and to the generalizations within which they are subsumed.

Abduction validates new generalizations via analogy between the new generalization and an already validated generalization; however, it too is limited by the formal limitations of transduction, in this case in the formulation of new generalizations.

Consilience validates a new generalization by showing via analogy that several already validated generalizations together validate the new generalization; once again, consilience is limited by the formal limitations of transduction, in this case in the validation of new generalizations via inferred analogies between existing generalizations.

• Taken together, these five forms of logical reasoning (call them "TIDAC" for short) represent five different but related means of validating statements, listed in order of increasing confidence.

• The validity of all forms of argument is therefore ultimately limited by the same thing: the logical limitations of transduction (i.e. argument by analogy).

• Therefore, there is (and can be) no ultimate certainty in any description or analysis of nature insofar as such descriptions or analyses are based on transduction, induction, deduction, abduction, and/or consilience.

• All we have (and can ever have) is relative degrees of confidence, based on repeated observations of similar objects and processes.

• Therefore, we can be most confident about those generalizations for which we have the most evidence.

• Based on the foregoing analysis, generalizations formulated via simple analogy (transduction) are the weakest and generalizations formulated via consilience are the strongest.

Comments, criticisms, and suggestions are warmly welcomed!

--Allen


Friday, November 10, 2006

Thank Goodness! for Daniel Dennett



SOURCE: The Edge.com

AUTHOR: Daniel Dennett

COMMENTARY: Allen MacNeill

I thought readers of this blog might be interested in the fact that notorious atheist (and "Darwinist") Daniel Dennett very nearly died of a dissecting aortic aneurysm last week. Did he "find God" like Antony Flew and A. J. Ayer? Not quite…

THANK GOODNESS!
by Daniel C. Dennett

There are no atheists in foxholes, according to an old but dubious saying, and there is at least a little anecdotal evidence in favor of it in the notorious cases of famous atheists who have emerged from near-death experiences to announce to the world that they have changed their minds. The British philosopher Sir A. J. Ayer, who died in 1989, is a fairly recent example. Here is another anecdote to ponder.

Two weeks ago, I was rushed by ambulance to a hospital where it was determined by c-t scan that I had a "dissection of the aorta"—the lining of the main output vessel carrying blood from my heart had been torn up, creating a two—channel pipe where there should only be one. Fortunately for me, the fact that I'd had a coronary artery bypass graft seven years ago probably saved my life, since the tangle of scar tissue that had grown like ivy around my heart in the intervening years reinforced the aorta, preventing catastrophic leakage from the tear in the aorta itself. After a nine-hour surgery, in which my heart was stopped entirely and my body and brain were chilled down to about 45 degrees to prevent brain damage from lack of oxygen until they could get the heart-lung machine pumping, I am now the proud possessor of a new aorta and aortic arch, made of strong Dacron fabric tubing sewn into shape on the spot by the surgeon, attached to my heart by a carbon-fiber valve that makes a reassuring little click every time my heart beats.

As I now enter a gentle period of recuperation, I have much to reflect on, about the harrowing experience itself and even more about the flood of supporting messages I've received since word got out about my latest adventure. Friends were anxious to learn if I had had a near-death experience, and if so, what effect it had had on my longstanding public atheism. Had I had an epiphany? Was I going to follow in the footsteps of Ayer (who recovered his aplomb and insisted a few days later "what I should have said is that my experiences have weakened, not my belief that there is no life after death, but my inflexible attitude towards that belief"), or was my atheism still intact and unchanged?

Yes, I did have an epiphany. I saw with greater clarity than ever before in my life that when I say "Thank goodness!" this is not merely a euphemism for "Thank God!" (We atheists don't believe that there is any God to thank.) I really do mean thank goodness! There is a lot of goodness in this world, and more goodness every day, and this fantastic human-made fabric of excellence is genuinely responsible for the fact that I am alive today. It is a worthy recipient of the gratitude I feel today, and I want to celebrate that fact here and now.

To whom, then, do I owe a debt of gratitude? To the cardiologist who has kept me alive and ticking for years, and who swiftly and confidently rejected the original diagnosis of nothing worse than pneumonia. To the surgeons, neurologists, anesthesiologists, and the perfusionist, who kept my systems going for many hours under daunting circumstances. To the dozen or so physician assistants, and to nurses and physical therapists and x-ray technicians and a small army of phlebotomists so deft that you hardly know they are drawing your blood, and the people who brought the meals, kept my room clean, did the mountains of laundry generated by such a messy case, wheel-chaired me to x-ray, and so forth. These people came from Uganda, Kenya, Liberia, Haiti, the Philippines, Croatia, Russia, China, Korea, India—and the United States, of course—and I have never seen more impressive mutual respect, as they helped each other out and checked each other's work. But for all their teamwork, this local gang could not have done their jobs without the huge background of contributions from others. I remember with gratitude my late friend and Tufts colleague, physicist Allan Cormack, who shared the Nobel Prize for his invention of the c-t scanner. Allan—you have posthumously saved yet another life, but who's counting? The world is better for the work you did. Thank goodness. Then there is the whole system of medicine, both the science and the technology, without which the best-intentioned efforts of individuals would be roughly useless. So I am grateful to the editorial boards and referees, past and present, of Science, Nature, Journal of the American Medical Association, Lancet, and all the other institutions of science and medicine that keep churning out improvements, detecting and correcting flaws.

Do I worship modern medicine? Is science my religion? Not at all; there is no aspect of modern medicine or science that I would exempt from the most rigorous scrutiny, and I can readily identify a host of serious problems that still need to be fixed. That's easy to do, of course, because the worlds of medicine and science are already engaged in the most obsessive, intensive, and humble self-assessments yet known to human institutions, and they regularly make public the results of their self-examinations. Moreover, this open-ended rational criticism, imperfect as it is, is the secret of the astounding success of these human enterprises. There are measurable improvements every day. Had I had my blasted aorta a decade ago, there would have been no prayer of saving me. It's hardly routine today, but the odds of my survival were actually not so bad (these days, roughly 33 percent of aortic dissection patients die in the first twenty-four hours after onset without treatment, and the odds get worse by the hour thereafter).

One thing in particular struck me when I compared the medical world on which my life now depended with the religious institutions I have been studying so intensively in recent years. One of the gentler, more supportive themes to be found in every religion (so far as I know) is the idea that what really matters is what is in your heart: if you have good intentions, and are trying to do what (God says) is right, that is all anyone can ask. Not so in medicine! If you are wrong—especially if you should have known better—your good intentions count for almost nothing. And whereas taking a leap of faith and acting without further scrutiny of one's options is often celebrated by religions, it is considered a grave sin in medicine. A doctor whose devout faith in his personal revelations about how to treat aortic aneurysm led him to engage in untested trials with human patients would be severely reprimanded if not driven out of medicine altogether. There are exceptions, of course. A few swashbuckling, risk-taking pioneers are tolerated and (if they prove to be right) eventually honored, but they can exist only as rare exceptions to the ideal of the methodical investigator who scrupulously rules out alternative theories before putting his own into practice. Good intentions and inspiration are simply not enough.

In other words, whereas religions may serve a benign purpose by letting many people feel comfortable with the level of morality they themselves can attain, no religion holds its members to the high standards of moral responsibility that the secular world of science and medicine does! And I'm not just talking about the standards 'at the top'—among the surgeons and doctors who make life or death decisions every day. I'm talking about the standards of conscientiousness endorsed by the lab technicians and meal preparers, too. This tradition puts its faith in the unlimited application of reason and empirical inquiry, checking and re-checking, and getting in the habit of asking "What if I'm wrong?" Appeals to faith or membership are never tolerated. Imagine the reception a scientist would get if he tried to suggest that others couldn't replicate his results because they just didn't share the faith of the people in his lab! And, to return to my main point, it is the goodness of this tradition of reason and open inquiry that I thank for my being alive today.

What, though, do I say to those of my religious friends (and yes, I have quite a few religious friends) who have had the courage and honesty to tell me that they have been praying for me? I have gladly forgiven them, for there are few circumstances more frustrating than not being able to help a loved one in any more direct way. I confess to regretting that I could not pray (sincerely) for my friends and family in time of need, so I appreciate the urge, however clearly I recognize its futility. I translate my religious friends' remarks readily enough into one version or another of what my fellow brights have been telling me: "I've been thinking about you, and wishing with all my heart [another ineffective but irresistible self-indulgence] that you come through this OK." The fact that these dear friends have been thinking of me in this way, and have taken an effort to let me know, is in itself, without any need for a supernatural supplement, a wonderful tonic. These messages from my family and from friends around the world have been literally heart-warming in my case, and I am grateful for the boost in morale (to truly manic heights, I fear!) that it has produced in me. But I am not joking when I say that I have had to forgive my friends who said that they were praying for me. I have resisted the temptation to respond "Thanks, I appreciate it, but did you also sacrifice a goat?" I feel about this the same way I would feel if one of them said "I just paid a voodoo doctor to cast a spell for your health." What a gullible waste of money that could have been spent on more important projects! Don't expect me to be grateful, or even indifferent. I do appreciate the affection and generosity of spirit that motivated you, but wish you had found a more reasonable way of expressing it.

But isn't this awfully harsh? Surely it does the world no harm if those who can honestly do so pray for me! No, I'm not at all sure about that. For one thing, if they really wanted to do something useful, they could devote their prayer time and energy to some pressing project that they can do something about. For another, we now have quite solid grounds (e.g., the recently released Benson study at Harvard) for believing that intercessory prayer simply doesn't work. Anybody whose practice shrugs off that research is subtly undermining respect for the very goodness I am thanking. If you insist on keeping the myth of the effectiveness of prayer alive, you owe the rest of us a justification in the face of the evidence. Pending such a justification, I will excuse you for indulging in your tradition; I know how comforting tradition can be. But I want you to recognize that what you are doing is morally problematic at best. If you would even consider filing a malpractice suit against a doctor who made a mistake in treating you, or suing a pharmaceutical company that didn't conduct all the proper control tests before selling you a drug that harmed you, you must acknowledge your tacit appreciation of the high standards of rational inquiry to which the medical world holds itself, and yet you continue to indulge in a practice for which there is no known rational justification at all, and take yourself to be actually making a contribution. (Try to imagine your outrage if a pharmaceutical company responded to your suit by blithely replying "But we prayed good and hard for the success of the drug! What more do you want?")

The best thing about saying thank goodness in place of thank God is that there really are lots of ways of repaying your debt to goodness—by setting out to create more of it, for the benefit of those to come. Goodness comes in many forms, not just medicine and science. Thank goodness for the music of, say, Randy Newman, which could not exist without all those wonderful pianos and recording studios, to say nothing of the musical contributions of every great composer from Bach through Wagner to Scott Joplin and the Beatles. Thank goodness for fresh drinking water in the tap, and food on our table. Thank goodness for fair elections and truthful journalism. If you want to express your gratitude to goodness, you can plant a tree, feed an orphan, buy books for schoolgirls in the Islamic world, or contribute in thousands of other ways to the manifest improvement of life on this planet now and in the near future.

Or you can thank God—but the very idea of repaying God is ludicrous. What could an omniscient, omnipotent Being (the Man Who has Everything?) do with any paltry repayments from you? (And besides, according to the Christian tradition God has already redeemed the debt for all time, by sacrificing his own son. Try to repay that loan!) Yes, I know, those themes are not to be understood literally; they are symbolic. I grant it, but then the idea that by thanking God you are actually doing some good has got to be understood to be just symbolic, too. I prefer real good to symbolic good.

Still, I excuse those who pray for me. I see them as like tenacious scientists who resist the evidence for theories they don't like long after a graceful concession would have been the appropriate response. I applaud you for your loyalty to your own position—but remember: loyalty to tradition is not enough. You've got to keep asking yourself: What if I'm wrong? In the long run, I think religious people can be asked to live up to the same moral standards as secular people in science and medicine.

DANIEL C. DENNETT is University Professor, Professor of Philosophy, and Director of the Center for Cognitive Studies at Tufts University. His most recent book is Breaking the Spell: Religion as a Natural Phenomenon.

See also: The Brights

COMMENTARY:

While I have sometimes found myself disagreeing rather vehemently with Daniel Dennett, in this case I think he is absolutely right on. I had a somewhat similar experience last month - I was rushed to the hospital with my right ureter blocked by a HUGE kidney stone. It was extraordinarily painful, and was causing my right kidney to swell, and would probably have eventually caused it to die, with me following not long after. But, with the help of the ER staffs in two hospitals, an ambulance crew who drove me from the little town of Howell, Michigan to the University of Michigan Hospital, and my urologist and the surgical staff at my home hospital here in Ithaca, I am now much better. Anyway, while I lay there in a drug-induced haze (dilaudid, a morphine analog), I too mused on the question of prayer and the efficacy of religious belief versus "belief" in the protocols and practices of modern medicine (which is, of course, entirely based on the empirical sciences), and concluded just what Daniel Dennett did: that I would much rather have an atheist medical doctor, well trained in medical science, operating on me than a deeply religious person without such training.

Don't get me wrong: I don't begrudge religious believers their beliefs. But, if I had to make a choice and my life (or the life of someone I loved) were on the line, I would choose science every time. In other words, if it were a choice between a deeply religious but poorly trained doctor without much "bedside manner" and an atheist but highly trained doctor with the bedside manner of a Marine drill sergeant, I would choose the latter every time.

And would I appreciate anyone praying for me? I would of course appreciate the sentiment, but would not expect it to have any effect whatsoever on the outcome. Unlike science, prayer has no observable effect on the course of events in the real, physical world...which is, as far as I know, the only world there is.

--Allen


Wednesday, November 01, 2006

An Evolutionary Theory Of Right And Wrong



AUTHOR: Nicholas Wade

SOURCE: An Evolutionary Theory Of Right And Wrong

COMMENTARY: Allen MacNeill

Just in time, here is a review of a new book on the subject of the relationship between evolution and ethics/morals. Those of you who are currently students in evolution at Cornell will already be familiar with this topic: Essay #3 asks you to focus your attention on the very same question. Does knowing that our evolutionary past may strongly bias us in the direction of increased sociality have anything at all to do with answering the question "how should we behave?" Read the book review, and then meet me following it for my own opinion:

FULL TEXT OF ARTICLE:

Who doesn’t know the difference between right and wrong? Yet that essential knowledge, generally assumed to come from parental teaching or religious or legal instruction, could turn out to have a quite different origin.

Primatologists like Frans de Waal have long argued that the roots of human morality are evident in social animals like apes and monkeys. The animals’ feelings of empathy and expectations of reciprocity are essential behaviors for mammalian group living and can be regarded as a counterpart of human morality.

Marc D. Hauser, a Harvard biologist, has built on this idea to propose that people are born with a moral grammar wired into their neural circuits by evolution. In a new book, “Moral Minds” (HarperCollins 2006), he argues that the grammar generates instant moral judgments which, in part because of the quick decisions that must be made in life-or-death situations, are inaccessible to the conscious mind.

People are generally unaware of this process because the mind is adept at coming up with plausible rationalizations for why it arrived at a decision generated subconsciously.

Dr. Hauser presents his argument as a hypothesis to be proved, not as an established fact. But it is an idea that he roots in solid ground, including his own and others’ work with primates and in empirical results derived by moral philosophers.

The proposal, if true, would have far-reaching consequences. It implies that parents and teachers are not teaching children the rules of correct behavior from scratch but are, at best, giving shape to an innate behavior. And it suggests that religions are not the source of moral codes but, rather, social enforcers of instinctive moral behavior.

Both atheists and people belonging to a wide range of faiths make the same moral judgments, Dr. Hauser writes, implying “that the system that unconsciously generates moral judgments is immune to religious doctrine.” Dr. Hauser argues that the moral grammar operates in much the same way as the universal grammar proposed by the linguist Noam Chomsky as the innate neural machinery for language. The universal grammar is a system of rules for generating syntax and vocabulary but does not specify any particular language. That is supplied by the culture in which a child grows up.

The moral grammar too, in Dr. Hauser’s view, is a system for generating moral behavior and not a list of specific rules. It constrains human behavior so tightly that many rules are in fact the same or very similar in every society — do as you would be done by; care for children and the weak; don’t kill; avoid adultery and incest; don’t cheat, steal or lie.

But it also allows for variations, since cultures can assign different weights to the elements of the grammar’s calculations. Thus one society may ban abortion, another may see infanticide as a moral duty in certain circumstances. Or as Kipling observed, “The wildest dreams of Kew are the facts of Katmandu, and the crimes of Clapham chaste in Martaban.”

Matters of right and wrong have long been the province of moral philosophers and ethicists. Dr. Hauser’s proposal is an attempt to claim the subject for science, in particular for evolutionary biology. The moral grammar evolved, he believes, because restraints on behavior are required for social living and have been favored by natural selection because of their survival value.

Much of the present evidence for the moral grammar is indirect. Some of it comes from psychological tests of children, showing that they have an innate sense of fairness that starts to unfold at age 4. Some comes from ingenious dilemmas devised to show a subconscious moral judgment generator at work. These are known by the moral philosophers who developed them as “trolley problems.”

Suppose you are standing by a railroad track. Ahead, in a deep cutting from which no escape is possible, five people are walking on the track. You hear a train approaching. Beside you is a lever with which you can switch the train to a sidetrack. One person is walking on the sidetrack. Is it O.K. to pull the lever and save the five people, though one will die?

Most people say it is.

Assume now you are on a bridge overlooking the track. Ahead, five people on the track are at risk. You can save them by throwing down a heavy object into the path of the approaching train. One is available beside you, in the form of a fat man. Is it O.K. to push him to save the five?

Most people say no, although lives saved and lost are the same as in the first problem.

Why does the moral grammar generate such different judgments in apparently similar situations? It makes a distinction, Dr. Hauser writes, between a foreseen harm (the train killing the person on the track) and an intended harm (throwing the person in front of the train), despite the fact that the consequences are the same in either case. It also rates killing an animal as more acceptable than killing a person.

Many people cannot articulate the foreseen/intended distinction, Dr. Hauser says, a sign that it is being made at inaccessible levels of the mind. This inability challenges the general belief that moral behavior is learned. For if people cannot articulate the foreseen/intended distinction, how can they teach it?

Dr. Hauser began his research career in animal communication, working with vervet monkeys in Kenya and with birds. He is the author of a standard textbook on the subject, “The Evolution of Communication.” He began to take an interest in the human animal in 1992 after psychologists devised experiments that allowed one to infer what babies are thinking. He found he could repeat many of these experiments in cotton-top tamarins, allowing the cognitive capacities of infants to be set in an evolutionary framework.

His proposal of a moral grammar emerges from a collaboration with Dr. Chomsky, who had taken an interest in Dr. Hauser’s ideas about animal communication. In 2002 they wrote, with Dr. Tecumseh Fitch, an unusual article arguing that the faculty of language must have developed as an adaptation of some neural system possessed by animals, perhaps one used in navigation. From this interaction Dr. Hauser developed the idea that moral behavior, like language behavior, is acquired with the help of an innate set of rules that unfolds early in a child’s development.

Social animals, he believes, possess the rudiments of a moral system in that they can recognize cheating or deviations from expected behavior. But they generally lack the psychological mechanisms on which the pervasive reciprocity of human society is based, like the ability to remember bad behavior, quantify its costs, recall prior interactions with an individual and punish offenders. “Lions cooperate on the hunt, but there is no punishment for laggards,” Dr. Hauser said.

The moral grammar now universal among people presumably evolved to its final shape during the hunter-gatherer phase of the human past, before the dispersal from the ancestral homeland in northeast Africa some 50,000 years ago. This may be why events before our eyes carry far greater moral weight than happenings far away, Dr. Hauser believes, since in those days one never had to care about people remote from one’s environment.

Dr. Hauser believes that the moral grammar may have evolved through the evolutionary mechanism known as group selection. A group bound by altruism toward its members and rigorous discouragement of cheaters would be more likely to prevail over a less cohesive society, so genes for moral grammar would become more common.

Many evolutionary biologists frown on the idea of group selection, noting that genes cannot become more frequent unless they benefit the individual who carries them, and a person who contributes altruistically to people not related to him will reduce his own fitness and leave fewer offspring.

But though group selection has not been proved to occur in animals, Dr. Hauser believes that it may have operated in people because of their greater social conformity and willingness to punish or ostracize those who disobey moral codes.

“That permits strong group cohesion you don’t see in other animals, which may make for group selection,” he said.

His proposal for an innate moral grammar, if people pay attention to it, could ruffle many feathers. His fellow biologists may raise eyebrows at proposing such a big idea when much of the supporting evidence has yet to be acquired. Moral philosophers may not welcome a biologist’s bid to annex their turf, despite Dr. Hauser’s expressed desire to collaborate with them.

Nevertheless, researchers’ idea of a good hypothesis is one that generates interesting and testable predictions. By this criterion, the proposal of an innate moral grammar seems unlikely to disappoint.

COMMENTARY:

So, do you think that we have an innate predisposition (inherited from our primate ancestors) to behave in ways we would recognize as "moral?" I think so, and so apparently do Frans de Waal and Marc Hauser (not to mention E. O. Wilson, Jane Goodall, and a host of other evolutionary biologists).

But, does it therefore follow that this predisposition necessarily dictates how we should behave? I believe that the answer to this question is no. More than that, I believe that to make the jump from the former to the latter is to commit a fundamental logical fallacy (indeed, it has a formal name - the "naturalistic fallacy"). It conflates statements about what "is" the case (i.e. what is "natural" behavior for us and our fellow primates) and what "ought" to be the case. This fallacy was pointed out a century ago, most forcefully by G. E. Moore, who argued that "is" statements cannot logically be made equivalent to "ought" statements.

This distinction is crucially important, and nowhere more so than in the application of evolutionary theory to human behavior. The economic and social movement known as "social darwinism" was fundamentally based on the "naturalistic fallacy," as exemplified by the words of the well-known English hymn, "All Things Bright And Beautiful," written by Cecil Frances Humphreys Alexander:

All things bright and beautiful,
All creatures great and small,
All things wise and wonderful:
The Lord God made them all.


All well and good, but here's the next verse:

The rich man in his castle,
The poor man at his gate,
God made them, high or lowly,
And ordered their estate.


(Interestingly, if you do a Google search for these particular lyrics, you will not find them. Apparently, the social darwinist overtones of the second verse do not sit well with modern audiences, including those in church.)

Alexander was most emphatically not a "social darwinist," yet the moral equation presented in his hymn is essentially equivalent to that of Herbert Spencer and the other social darwinists: that one's position in life (and, by implication, one's behavior) are determined by a force outside one's self (God or natural selection), and that all that remains for us is to "get with the program."

In a word: bullshit. That way lies the gas chambers at Auschwitz. No amount of science can tell us what we ought to do. At most, scientific knowledge can tell us how difficult (or easy) what we ought to do might be, but to conflate the two is to commit both a logical fallacy and monstrous evil. I sincerely hope that most evolutionary biologists will not agree with either the opinions of Marc Hauser or Frans de Waal on this subject, no matter how encouraging they may be. Long and hard experience has shown us that the "naturalistic fallacy" can be used to justify monstrous injustices (as has the belief in the authority of a "supernatural lawgiver," and for the same reason).

As adults, we must face up to the difficult responsibility of deciding what we "ought" to do, and then do it, no matter how easy or difficult that may be. If it is easy (and if this is because of our evolutionary heritage), then perhaps the road ahead will not be as rocky as the one we have trodden to get here. However, if it is difficult (and I suspect it may be, once again based on my understanding of our evolutionary heritage - more on this later), then that's just tough: but (to paraphrase Charles Darwin), one must do one's duty.

--Allen


Friday, July 28, 2006

Update on the Cornell Evolution and Design Seminar

Things have been developing in rather interesting ways in the Cornell "Evolution and Design" seminar. We have worked our way through all of the articles/papers and books in our required reading list, along with several in the recommended list. Before I summarize our "findings", let me point out that for most of the summer our seminar has consisted almost entirely of registered students (all but one undergrads, with one employee taking the course for credit), plus invited guests (Hannah Maxson and Rabia Malik of the Cornell IDEA Club). Two other faculty members (Warren Alman and Will Provine) attended for a while, but stopped in the middle of the second week, leaving me as the only faculty member still attending (not all that surprising, as it is my course after all - however, at this point I view my job mostly as facilitator, rather than teacher).

Anyway, here is how we've evaluated the books and articles/papers we've been "deconstructing":

Dawkins/The Blind Watchmaker: The "Weasel" example is unconvincing, and parts of the book are somewhat polemical, by which we mean substituting assertion, arguments by analogy, arguments from authority, and various other forms of non-logical argument for legitimate logical argument (i.e. based on presentation and evaluation of evidence, especially empirical evidence). Dawkins' argument for non-teleological adaptation (the "as if designed" argument), although intriguing, seems mostly to be supported by assertion and abstract models, rather than by empirical evidence.

Behe/Darwin's Black Box: The argument for "irreducible complexity", while interesting, appears to leave almost all of evolutionary biology untouched. Behe's argument is essentially focused on the origin of life from abiotic materials, and arguments for the "irreducible complexity" of the genetic code and a small number of biochemical pathways and processes. Therefore, generalizing his conclusions to all of evolutionary biology (and particularly to descent with modification from common ancestors, which he clearly agrees is "strongly supported by the evidence") is not logically warranted. Attempts to make such extensions are therefore merely polemics, rather than arguments supported by evidence.

Dembski/The Design Inference and "Specification: The Pattern that Signifies Intelligence": Dembski's mathematical models are intriguing, especially his recent updating of the mathematical derivation of chi, his measure for "design" in complex, specified systems. However, it is not clear if empirical evidence (i.e. counted or measured quantities) can actually be plugged into the equation to yield an unambiguous value for chi, nor is it clear what value for chi would unambiguously allow for "design detection." Dembski suggests chi equal to or greater than one, but we agreed that it would make more sense to use repeated tests, using actual designed and undesigned systems, to derive an empirically based value for chi, which could then be used to identify candidates for "design" in nature. If, as some have suggested, plugging empirically derived measurements into Dembski's formula for chi is problematic, then his equation, however interesting, carries no real epistemic weight (i.e. no more than Dawkins' "Weasel", as noted above).

Johnson/The Wedge of Truth: To my surprise, both the ID supporters and critics in the class almost immediately agreed that Johnson's book was simply a polemic, with no real intellectual (and certainly no scientific) merit. His resort to ad hominem arguments, guilt by association, and the drawing of spurious connections via arguments by analogy were universally agreed to be "outside the bounds of this course" (and to exceed in some cases Dawkins' use of similar tactics), and we simply dropped any further consideration of it as unproductive. Indeed, one ID supporter stated quite clearly that "this book isn't ID", and that the kinds of assertions and polemics that Johnson makes could damage the credibility of ID as a scientific enterprise in the long run.

Ruse/Darwin and Design (plus papers on teleology in biology by Ayala, Mayr, and Nagel): Both ID supporters and evolution supporters quickly agreed that all of these authors make a convincing case for the legitimacy of inferring teleology (or what Mayr and others call “teleonomy”) in evolutionary adaptations. That is, adaptations can legitimately be said to have “functions,” and that the genomes of organisms constitute “designs” for their actualization, which is accomplished via organisms' developmental biology interacting with their environments.

Moreover, we were able to come to some agreement that there are essentially two different types of “design”:

Pre-existing design, in which the design for an object/process is formulated prior to the actualization of that object/process (as exemplified by Mozart’s composing of his final requiem mass); note that this corresponds to a certain extent with what ID supporters are now calling “front-loaded design”, and

Emergent design, in which the design for an object/process arises out of a natural process similar to that by which the actualization takes place (as exemplified by Mayr’s “teleonomy”).

In addition, the ID supporters in the seminar class agreed that “emergent design” is not the kind of design they believe ID is about, as it is clearly a product of natural selection. A discussion of “pre-existing design” then ensued, going long past our scheduled closing time without resolution. We will return to a discussion of it for our last two meetings next week.

As we did not use the two days scheduled for “deconstruction” of Johnson’s Wedge of Truth, we opened the floor to members of the class to present rough drafts/outlines of their research papers for the course. It is interesting to note that both papers so presented concerned non-Western/non-Christian concepts of “design” (one focusing on Hindu/Indian and Chinese concepts of teleology in nature, and the other on Buddhist concepts of design and naturalistic causation).

Overall, the discussion taking place in our seminar classes has been both respectful and very spirited, as we tussle with difficult ideas and arguments. For my part, I have come to a much more nuanced perception of both sides of this issue, and to a much greater appreciation of the difficulties involved with coming to conclusions on what is clearly one of the core issues in all of philosophy. And, I believe we have all come to appreciate each other and our commitments to fair and logical argument, despite our differences…and even to have become friends in the process. What more could one ask for in a summer session seminar?


Thursday, July 13, 2006

D'Arcy Thompson and "Front-Loaded" Intelligent Design



AUTHOR: Salvador Cordova

SOURCE: Marsupials and placentals: A case of front-loaded, pre-programmed, designed evolution?

COMMENTARY: Allen MacNeill

The concept of "front-loading" as described in Salvador Cordova's post at Telic Thoughts bears a remarkable resemblance to the ideas of the Scottish biomathematician D'Arcy Thompson (1860-1948). In his magnum opus, On Growth and Form, Thompson proposed that biologists had over-emphasized evolution (and especially natural selection) and under-emphasized the constraints and parameters within which organisms develop, constraints that "channel" animal forms into particular patterns that are repeated over and over again across the phyla.

However, while Thompson's ideas strongly imply that there is a kind of teleology operating at several levels in biology (especially developmental biology), Thompson himself did not present hypotheses that were empirically testable (sound familiar?):

Thompson did not articulate his insights in the form of experimental hypotheses that can be tested. Thompson was aware of this, saying that 'This book of mine has little need of preface, for indeed it is 'all preface' from beginning to end.'

Thompson's huge book (over 1,000 heavily illustrated pages) is a veritable gold mine of ideas along the lines articulated in Sal's post. However, Thompson's underlying thesis is just as inimical to ID as is the explanation from evolutionary biology. His argument is essentially that biological form is constrained by the kind of mathematical relationships that characterize classical physics. That is, there are "built-in" laws of form that constrain the forms that biological organisms can take. And therefore, physical law provides the “front-loading”, not a supernatural “intelligent designer.”

For example, Thompson pointed out that the shapes that droplets of viscous liquid take when dropped into water are virtually identical to the medusa forms of jellyfish, and that this "convergence of form" is therefore not accidental. Rather, it is fundamentally constrained by the physics of moving fluids, as described in the equations of fluid mechanics. Thompson's book is filled with similar examples, all pointing to the same conclusion: that biological form is constrained by the laws of physics (especially classical mechanics).

Evolutionary convergence, far from departing from Thompson's ideas, is based on essentially the same kinds of constraints. Sharks, dolphins (the fish, not the mammals), tunas, ichthyosaurs, and porpoises all appear superficially similar (despite significant anatomical differences) because their external shapes are constrained by the fluid medium through which they swim. In the language of natural selection, any ancestor of a shark, dolphin, tuna, ichthyosaur, or porpoise that (through its developmental biology) could take the shape of a torpedo could move more efficiently through the water than one that had a different (i.e. less efficient) shape, and therefore would have a selective advantage that would, over time, result in similar shapes among its proliferating descendants. The same concept is applied to the parallel evolution of marsupial and placental mammals: similar environments and subsistence patterns place similar selective constraints on marsupial and placental mammals in different locations, resulting in strikingly similar anatomical and physiological adaptations, despite their only distant common ancestry.

This evolutionary argument is now being strongly supported by findings in the field of evolutionary developmental biology ("evo-devo"), in which arguments based on "deep homology" are providing explanations for at least some of the seemingly amazing convergences we see in widely separated groups of organisms. Recent discoveries about gene regulation via hierarchical sets of regulatory genes indicate that these genes have been conserved through deep evolutionary time, from the first bilaterally symmetric metazoans to the latest placental mammals, as shown by their relative positions in the genome and relatively invariant nucleotide sequences. These genes channel the arrangement of overall anatomy and body form throughout the course of development, producing the overall shapes of organisms and the relationships between body parts that we refer to when discussing evolutionary convergence.

However, as should be obvious by now, this in no way provides evidence for the currently popular ID hypothesis of “front-loading”, except insofar as it states that the hierarchical control of overall development evolved very early among the metazoa. It provides no empirically testable way to distinguish between an evolutionary explanation and a “design” explanation. Indeed, all of the evidence to date could be explained using either theory.

And so, by the rules of empirical science, since the evolutionary explanation is both sufficient to explain the phenomena and does not require causes that are outside of nature (i.e. a supernatural designer that is neither itself natural nor works through natural, i.e. material and efficient, causes), evolutionary biologists are fully justified in accepting the evolutionary explanation (and disregarding the “front-loaded ID” explanation).

Only in the case that the kinds of natural causes described above (especially the ability of evo-devo processes to constrain the development of overall form by purely natural means, through the known biochemistry of development) can NOT explain the patterns we observe in convergent evolution should we entertain other hypotheses (especially if those other hypotheses are not empirically testable). Only then, and not before…and therefore certainly not now.

FOR FURTHER READING:

For more on Thompson and his work, see:
http://www.google.com/search?hl=en&q=D%27Arcy+Thompson&btnG=Google+Search
and especially:
http://www-history.mcs.st-andrews.ac.uk/Mathematicians/Thompson_D'Arcy.html
and follow the links at:
http://en.wikipedia.org/wiki/D'Arcy_Thompson

Also, a thread that included a discussion of Thompson's work has already appeared at Telic Thoughts http://telicthoughts.com/?p=763

--Allen


Saturday, June 17, 2006

Identity, Analogy, and Logical Argument in Science (Updated)


AUTHOR: Allen MacNeill

SOURCE: Original essay

COMMENTARY: That's up to you...
"...analogy may be a deceitful guide."
- Charles Darwin, Origin of Species

The descriptions and analysis of the functions of analogy in logical reasoning that I am about to describe are, in my opinion, not yet complete. I have been working on them for several years (actually, about 25 years all told), but I have yet to be completely satisfied with them. I am hoping, therefore, that by making them public here (and eventually elsewhere) they can be clarified to everyone’s satisfaction.

SECTION ONE: ON ANALOGY

To begin with, let us define an analogy as “a similarity between separate (but perhaps related) objects and/or processes”. As we will see, this definition may require refinement (and may ultimately rest on premises that cannot be proven - that is, axioms - rather than formal proof). But for now, let it be this:

DEFINITION 1.0: Analogy = Similarity between separate objects and/or processes (from the Greek ana, meaning “a collection” and logos, meaning “that which unifies or signifies.”)

AXIOM 1.0: The only perfect analogy to a thing is the thing itself.

COMMENTARY 1.0: This is essentially a statement of the logical validity of tautology (from the Greek tó autos, meaning “the same” and logos, meaning “word” or “information”). As Ayn Rand (and, according to her, Aristotle) asserted:

AXIOM 1.0 (in symbolic form): A = A

From this essentially unprovable axiom, the following corollary may be derived:

COROLLARY 1.1: All analogies that are not identities are necessarily imperfect.

AXIOM 2.0: Only perfect analogies are true.

COROLLARY 2.1: Only identities (i.e. tautologies, or "perfect" analogies) are true.

COROLLARY 2.2: Since only tautologies are prima facie "true", this implies that all analogical statements (except tautologies) are false to some degree. This leads us to:

AXIOM 3.0: All imperfect analogies are false to some degree.

AXIOM 3.0 (in symbolic form): A ≠ not-A

COROLLARY 3.1: Since all non-tautological analogies are false to some degree, then all arguments based on non-tautological analogies are also false to the same degree.

COMMENTARY 2.0: The validity of all logical arguments that are not based on tautologies is a matter of degree, with some arguments being based on less false analogies than others.

CONCLUSION 1: As we will see in the next sections, all forms of logical argument (i.e. transduction, induction, deduction, abduction, and consilience) necessarily rely upon non-tautological analogies. Therefore, to summarize:
All forms of logical argument (except for tautologies) are false to some degree.

Our task, therefore, is not to determine if non-tautological logical arguments are true or false, but rather to determine the degree to which they are false (and therefore the degree to which they are also true), and to then use this determination as the basis for establishing confidence in the validity of our conclusions.

SECTION TWO: ON VALIDITY, CONFIDENCE, AND LOGICAL ARGUMENT

Based on the foregoing, let us define validity as “the degree to which a logical statement is free of reliance on false analogies.” Therefore, the closer an analogy is to a tautology, the more valid that analogy is.

DEFINITION 2.0: Validity = The degree to which a logical statement is free of reliance on false analogies.

COMMENTARY: Given the foregoing, it should be clear at this point that (with the exception of tautologies):
There is no such thing as absolute truth; there are only degrees of validity.

In biology, it is traditional to determine the validity of an hypothesis by calculating confidence levels using statistical analyses. According to these analyses, if a hypothesis is supported by at least 95% of the data (that is, if the similarity between the observed data and the values predicted by the hypothesis being tested is at least 95%), then the hypothesis is considered to be valid. In the context of the definitions, axioms, and corollaries developed in the previous section, this means that valid hypotheses in biology may be thought of as being at least 95% tautological (and therefore less than 5% false).

DEFINITION 2.1: Confidence = The degree to which an observed phenomenon conforms to (i.e. is similar to) a hypothetical prediction of that phenomenon.

This means that, in biology:
Validity (i.e. truth) is, by definition, a matter of degree.
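
To make the notion of confidence used here concrete, what follows is a minimal sketch in Python of the simple "fraction of conforming observations" reading described above. It is not a full statistical test; the function name, the tolerance parameter, and the example data are my own illustrative assumptions, not part of the essay.

def confidence(observations, predicted, tolerance=0.0):
    """Fraction of observations that conform to the predicted value."""
    matches = sum(1 for obs in observations if abs(obs - predicted) <= tolerance)
    return matches / len(observations)

# Example: 19 of 20 measurements agree with the predicted value.
data = [1.0] * 19 + [0.5]
level = confidence(data, predicted=1.0)
print(level)           # 0.95
print(level >= 0.95)   # True: the hypothesis would be treated as "valid" in this loose sense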

Following long tradition, an argument (from the Latin argueré, meaning “to make clear”) is considered to be a statement in which a premise (or premises, if more than one, from the Latin prae, meaning “before” and mitteré, meaning “to send”) is related to a conclusion (i.e. the end of the argument). There are five kinds of argument, based on the means by which a premise (or premises) are related to a conclusion: transduction, induction, deduction, abduction, and consilience, which will be considered in order in the following sections.

DEFINITION 2.2: Argument = A statement of a relationship between a premise (or premises) and a conclusion.

Given the foregoing, the simplest possible argument is a statement of a tautology, as in A = A. Unlike all other arguments, this statement is true by definition (i.e. on the basis of AXIOM 1.0). All other arguments are only true by matter of degree, as established above.

SECTION THREE: ON TRANSDUCTION

The simplest (and least effective) form of logical argument is argument by analogy. The Swiss child psychologist Jean Piaget called this form of reasoning transduction (from the Latin trans, meaning “across” and duceré, meaning “to lead”), and showed that it is the first and simplest form of logical analysis exhibited by young children. We may define transduction as follows:

DEFINITION 3.0: Transduction = Argument by analogy alone (i.e. by simple similarity between a premise and a conclusion).

A tautology is the simplest transductive argument, and is the only one that is “true by definition.” As established above, all other arguments are “true only by matter of degree.” But to what degree? How many examples of a particular premise are necessary to establish some degree of confidence? That is, how confident can we be of a conclusion, given the number of supporting premises?

As the discussion of confidence in Section 2 states, in biology at least 95% of the observations that we make when testing a prediction that flows from an hypothesis must be similar to those predicted by the hypothesis. This, in turn, implies that there must be repeated examples of observations such that the 95% confidence level can be reached.

However, in a transductive argument, all that is usually stated is that a single object or process is similar to another object or process. That is, the basic form of a transductive argument is:

Ai => Aa

where:

Ai is an individual object or process

and

Aa is an analogous (i.e. similar, but not identical, and therefore non-tautological) object or process

Since there is only a single example in the premise in such an argument, to state that there is any degree of confidence in the conclusion is very problematic (since it is nonsensical to state that a single example constitutes 95% of anything).

In science, this kind of reasoning is usually referred to as “anecdotal evidence,” and is considered to be invalid for the support of any kind of generalization. For this reason, arguments by analogy are generally not considered valid in science. As we will see, however, they are central to all other forms of argument, but there must be some additional content to such arguments for them to be considered generally valid.

EXAMPLE 3.0: To use an example that can be extended to all four types of logical argument, consider a green apple. Imagine that you have never tasted a green apple before. You do so, and observe that it is sour. What can you conclude at this point?

The only thing that you can conclude as the result of this single observation is that the individual apple that you have tasted is sour. In the formalism introduced above:

Ag => As

where:

Ag = green apple

and

As = sour apple

While this statement is valid for the particular case noted, it cannot be generalized to all green apples (on the basis of a single observation). Another way of saying this is that the validity of generalizing from a single case to an entire category that includes that case is extremely low; so low that it can be considered to be invalid for most intents and purposes.
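
The following sketch (in Python, with names and the minimum-case threshold chosen purely for illustration; none of it is the essay's own formalism) captures the point just made: a single transductive observation licenses a conclusion about that individual only, and any attempt to generalize from it is refused as anecdotal.

observations = [("this green apple", "sour")]   # a single tasting: Ag => As

def conclude_about_individual(observation):
    individual, observed_property = observation
    return f"{individual} is {observed_property}"

def generalize(observation_list, minimum_cases=20):
    # With only one case there is no meaningful confidence level,
    # so no generalization is warranted (anecdotal evidence).
    if len(observation_list) < minimum_cases:
        return None
    return "(all) green apples are sour"   # see the induction sketch below

print(conclude_about_individual(observations[0]))   # this green apple is sour
print(generalize(observations))                     # None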

SECTION FOUR: ON INDUCTION

A more complex form of logical argument is argument by induction. According to the Columbia Encyclopedia, induction (from the Latin in, meaning “into” and duceré, meaning “to lead”) is a form of argument in which multiple premises provide grounds for a conclusion, but do not necessitate it. Induction is contrasted with deduction, in which true premises do necessitate a conclusion.

An important form of induction is the process of reasoning from the particular to the general. The English philosopher and scientist Francis Bacon, in his Novum Organum (1620), elucidated the first formal theory of inductive logic, which he proposed as a logic of scientific discovery, as opposed to deductive logic, the logic of argumentation. The Scottish philosopher David Hume later raised the classic problem of induction (i.e. the question of how repeated observations can ever justify a generalization), and his skepticism has influenced 20th-century philosophers of science who have focused on how to assess the strength of different kinds of inductive argument (see Nelson Goodman and Karl Popper).

We may therefore define induction as follows:

DEFINITION 4.0: Induction = Argument from individual observations to a generalization that applies to all (or most) of the individual observations.

EXAMPLE 4.0: You taste one green apple; it is sour. You taste another green apple; it is also sour. You taste yet another green apple; once again, it is sour. You continue tasting green apples until, at some relatively arbitrary point (which can be stated in formal terms, but which is unnecessary for the current analysis), you formulate a generalization: “(all) green apples are sour.”

In symbolic terms:

A1 + A2 + A3 + …An => As

where:

A1 + A2 + A3 + …An = individual cases of sour green apples

and

As = green apples are sour

As we have already noted, the number of similar observations (i.e. An in the formula, above) has an effect on the validity of any conclusion drawn on the basis of those observations. In general, enough observations must be made that a confidence level of 95% can be reached, either in accepting or rejecting the hypothesis upon which the conclusion is based. In practical terms, conclusions formulated on the basis of induction have a degree of validity that is directly related to the number of similar observations; the more similar observations one makes, the greater the validity of one’s conclusions.
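
A minimal Python sketch of this inductive step, following on from the transduction sketch above: the generalization's degree of validity is read (loosely, as in the text) as the fraction of supporting cases, and no claim at all is made below an arbitrary minimum number of observations. The data, the threshold, and the use of the 95% cutoff here are illustrative assumptions.

tastings = ["sour"] * 24 + ["sweet"]   # 25 green apples tasted: A1 + A2 + ... + An

def inductive_confidence(cases, predicate="sour", minimum_cases=20):
    if len(cases) < minimum_cases:
        return None                    # too few cases to claim any confidence
    return sum(1 for c in cases if c == predicate) / len(cases)

level = inductive_confidence(tastings)
print(level)                           # 0.96
if level is not None and level >= 0.95:
    print("Tentatively accept the generalization: '(all) green apples are sour'")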

IMPLICATION 4.0: Conclusions reached on the basis of induction are necessarily tentative and depend for their validity on the number of similar observations that support such conclusions. In other words:
Inductive reasoning cannot reveal absolute truth, as it is necessarily limited only to degrees of validity.

It is important to note that, although transduction alone is invalid as a basis for logical argument, transduction is nevertheless an absolutely essential part of induction. This is because, before one can formulate a generalization about multiple individual observations, it is necessary that one be able to relate those individual observations to each other. The only way that this can be done is via transduction (i.e. by analogy, or similarity, between the individual cases).

In the example of green apples, before one can conclude that “(all) green apples are sour” one must first conclude that “this green apple and that green apple (and all those other green apples) are similar.” Since transductive arguments are relatively weak (for the reasons discussed above), this seems to present an unresolvable paradox: no matter how many similar repetitions of a particular observation, each repetition depends for its overall validity on a transductive argument that it is “similar” to all other repetitions.

This could be called the “nominalist paradox,” in honor of the philosophical tradition most closely associated with the English cleric and philosopher William of Ockham, of “Ockham’s razor” fame. On the face of it, there seems to be no resolution for this paradox. However, I believe that a solution is entailed by the logic of induction itself. As “similar” repetitions of an observation accumulate, the very fact that there are a significant number of such repetitions provides indirect support for the assertion that the repetitions are necessarily (rather than accidentally) “similar.” That is, there is some “law-like” property that is causing the repetitions to be similar to each other, rather than such similarities being the result of random accident.

SECTION FIVE: ON DEDUCTION

A much older form of logical argument than induction is argument by deduction. According to the Columbia Encyclopedia, deduction (from the Latin de, meaning “out of” and duceré, meaning “to lead”) is a form of argument in which individual cases are derived from (and validated by) a generalization that subsumes all such cases. Unlike inductive argument, in which no amount of individual cases can prove a generalization based upon them to be “absolutely true,” the conclusion of a deductive inference is necessitated by the premises. That is, the conclusions (i.e. the individual cases) can’t be false if the premise (i.e. the generalization) is true, provided that they follow logically from it.

Deduction may be contrasted with induction, in which the premises suggest, but do not necessitate a conclusion. The ancient Greek philosopher Aristotle first laid out a systematic analysis of deductive argumentation in the Organon. As noted above, Francis Bacon elucidated the formal theory of inductive logic, which he proposed as the logic of scientific discovery.

Both processes, however, are used constantly in scientific research. By observation of events (i.e. induction) and from principles already known (i.e. deduction), new hypotheses are formulated; the hypotheses are tested by applications; as the results of the tests satisfy the conditions of the hypotheses, laws are arrived at (i.e. by induction again); from these laws future results may be determined by deduction.

We may therefore define deduction as follows:

DEFINITION 5.0: Deduction = Argument from a generalization to an individual case, where the generalization applies to all such individual cases.

EXAMPLE 5.0: You assume that all green apples are sour. You are confronted with a particular green apple. You conclude that, since this is a green apple and green apples are sour, then “this green apple is sour.”

In symbolic terms:

As => Ai

where:

As = all green apples are sour

Ai = any individual case of a green apple

As noted above, the conclusions of deductive arguments are necessarily true if the premise (i.e. the generalization) is true. However, it is not clear how such generalizations are themselves validated. In the scientific tradition, the only valid source of such generalizations is induction, and so (contrary to the Aristotelian tradition), deductive arguments are no more valid than the inductive arguments by which their major premises are validated.

IMPLICATION 5.0: Conclusions reached on the basis of deduction are, like conclusions reached on the basis of induction, necessarily tentative and depend for their validity on the number of similar observations upon which their major premises are based. In other words:
Deductive reasoning, like inductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which its major premise is based.

Hence, despite the fact that induction and deduction “argue in opposite directions,” we come to the conclusion that, in terms of natural science, the validity of both is ultimately dependent upon the number and degree of similarity of the observations that are used to infer generalizations. Therefore, unlike the case in purely formal logic (in which the validity of inductive inferences is always conditional, whereas the validity of deductive inferences is not), there is an underlying unity in the source of validity in the natural sciences:
All arguments in the natural sciences are validated by inductive inference.
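
Continuing the same illustrative Python sketch: a deductive step applies an inductively supported generalization to a new individual case, and, as argued above, the conclusion can be no more secure than the inductive support for its major premise. The dictionary layout and the confidence figure are assumptions of the sketch, not part of the essay's formalism.

generalization = {"claim": "(all) green apples are sour", "confidence": 0.96}

def deduce(individual, rule):
    # If the rule covers this individual, the conclusion inherits the
    # rule's (inductively established) degree of validity.
    if individual["kind"] == "green apple":
        return {"claim": f"{individual['name']} is sour",
                "confidence": rule["confidence"]}
    return None

apple = {"name": "this green apple", "kind": "green apple"}
print(deduce(apple, generalization))
# {'claim': 'this green apple is sour', 'confidence': 0.96}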

SECTION SIX: ON ABDUCTION

A somewhat newer form of logical argument is argument by abduction. According to the Columbia Encyclopedia, abduction (from the Latin ab, meaning “away” and duceré, meaning “to lead”) is the process of reasoning from individual cases to the best explanation for those cases. In other words, it is a reasoning process that starts from a set of facts and derives their most likely explanation from an already validated generalization that explains them. In simple terms, the new observation(s) is/are "abducted" into the already existing generalization.

The American philosopher Charles Sanders Peirce (last name pronounced like "purse") introduced the concept of abduction into modern logic. In his works before 1900, he generally used the term abduction to mean “the use of a known rule to explain an observation,” e.g., “if it rains, the grass is wet” is a known rule used to explain why the grass is wet:

Known Rule: “If it rains, the grass is wet.”

Observation: “The grass is wet.”

Conclusion: “The grass is wet because it has rained.”

Peirce later used the term abduction to mean “creating new rules to explain new observations,” emphasizing that abduction is the only logical process that actually creates new knowledge. He described the process of science as a combination of abduction, deduction, and induction, stressing that new knowledge is only created by abduction.

This is contrary to the common use of abduction in the social sciences and in artificial intelligence, where Peirce's older meaning is used. In his later writings, Peirce argued that the actual process of generating a new rule is not hampered by traditional rules of logic; rather, he held that humans have an innate ability to make correct logical inferences, an ability he explained by the evolutionary advantage it gives.

We may therefore define abduction as follows (using Peirce's original formulation):

DEFINITION 6.0: Abduction = Argument that validates a set of individual cases via an explanation that cites the similarities between the set of individual cases and an already validated generalization.

EXAMPLE 6.0: You have a green fruit, which is not an apple. You already have a tested generalization about green apples that states that green apples are sour. You infer that, since the fruit you have in hand is green and resembles a green apple, it is probably sour (i.e. it is analogous to green apples, which you have already validated as sour).

In symbolic terms:

(Fg = Ag) + (Ag = As) => Fg = Fs

where:

Fg = a green fruit

Ag = green apple

As = sour green apple

and

Fs = a sour green fruit

In the foregoing example, it is clear why Peirce asserted that abduction is the only way to produce new knowledge (i.e. knowledge that is not strictly derived from existing observations or generalizations). The new conclusion (“this new green fruit is sour”) is derived by analogy to an already existing generalization about green apples. Notice that, once again, the key to formulating an argument by abduction is the inference of an analogy between the green fruit (the taste of which is currently unknown) and green apples (which we already know, by induction, are sour).
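
A minimal sketch of this abductive step, in the same illustrative Python vein: the new case is related to an already validated generalization by similarity, and the conclusion's confidence is discounted by the strength of the analogy. The similarity score and the simple multiplication rule are my own assumptions; they are not Peirce's formalism and not part of the essay.

rule = {"about": "green apples", "claim": "is sour", "confidence": 0.96}

def abduce(new_item, rule, similarity):
    # similarity in [0, 1]: how closely the new item resembles the class
    # of cases on which the rule was inductively validated.
    return {"claim": f"{new_item} {rule['claim']}",
            "confidence": rule["confidence"] * similarity}

print(abduce("this unfamiliar green fruit", rule, similarity=0.8))
# {'claim': 'this unfamiliar green fruit is sour', 'confidence': 0.768}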

IMPLICATION 6.0: Conclusions reached on the basis of abduction, like conclusions reached on the basis of induction and deduction, are ultimately based on analogy (i.e. transduction). That is, a new generalization is formulated in which an existing analogy is generalized to include a larger set of cases.

Again, since transduction, like induction and deduction, is only validated by repetition of similar cases (see above), abduction is ultimately just as limited as the other three forms of argument:
Abductive reasoning, like inductive and deductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which it is premised.

SECTION SEVEN: ON CONSILIENCE

The newest form of logical argument is argument by consilience. According to Wikipedia, consilience (from the Latin con, meaning “with” and saliré, meaning “to jump”: literally "to jump together") is the process of reasoning from several similar generalizations to a generalization that covers them all. In other words, it is a reasoning process that starts from several inductive generalizations and derives a "covering" generalization that is both validated by and strengthens them all.

The English philosopher and scientist William Whewell (pronounced like "hewel") introduced the concept of consilience into the philosophy of science. In his book, The Philosophy of the Inductive Sciences, published in 1840, Whewell defined the term consilience by saying “The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.”

The concept of consilience has more recently been applied to science in general and evolutionary biology in particular by the American evolutionary biologist Edward O. Wilson. In his book, Consilience: The Unity of Knowledge, published in 1998, Wilson reintroduced the term and applied it to the modern evolutionary synthesis. His main point was that multiple lines of evidence and inference all point to evolution by natural selection as the most valid explanation for the origin of evolutionary adaptations and new phylogenetic taxa (e.g. species) as the result of descent with modification (Darwin's term for "evolution").

To extend the example for abduction given above, if the grass is wet (and rain is known to make the grass wet), the road is wet (and rain is known to make the road wet), and the car in the driveway is wet (and rain is known to make the car in the driveway wet), then rain can make everything outdoors wet, including objects whose wetness is not yet verified to be the result of rain.

Independent Observation: “The grass is wet.”

Already validated generalization: "Rain makes grass wet."

Independent Observation: “The road is wet.”

Already validated generalization: "Rain makes roads wet."

Independent Observation: “The car in the driveway is wet.”

Already validated generalization: "Rain makes cars in driveways wet."

Conclusion: “Rain makes everything outdoors wet.”

One can immediately generate an application of this new generalization to new observations:

New observation: "The picnic table in the back yard is wet."

New generalization: “Rain makes everything outdoors wet.”

Conclusion: "The picnic table in the back yard is wet because it has rained."

We may therefore define consilience as follows:

DEFINITION 7.0: Consilience = Argument that validates a new generalization about a set of already validated generalizations, based on similarities among those generalizations.

EXAMPLE 7.0: You have a green peach which, when you taste it, is sour. You already have a generalization about green apples that states that green apples are sour and a generalization about green oranges that states that green oranges are sour. You infer that, since the peach you have in hand is green and sour (like green apples and green oranges), all green fruits are probably sour. You may then apply this new generalization to all new green fruits whose taste is currently unknown.

In symbolic terms:

(Ag = As) + (Og = Os) + (Pg = Ps) => Fg = Fs

where:

Ag = green apples

As = sour apples

Og = green oranges

Os = sour oranges

Pg = green peaches

Ps = sour peaches

Fg = green fruit

Fs = sour fruit

Given the foregoing example, it should be clear that consilience, like abduction (according to Peirce), is another way to produce new knowledge. The new generalization (“all green fruits are sour”) is a new conclusion, derived from (but not strictly reducible to) its premises. In essence, inferences based on consilience are "meta-inferences," in that they involve the formulation of new generalizations based on already existing generalizations.
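
To round out the illustrative Python sketches above, here is a consilient step: several independently validated generalizations that share a pattern are gathered into a covering generalization. Treating the covering claim's confidence as the minimum of its supports is my own conservative convention for the sketch, not something asserted in the essay.

supports = [
    {"claim": "green apples are sour",  "confidence": 0.96},
    {"claim": "green oranges are sour", "confidence": 0.95},
    {"claim": "green peaches are sour", "confidence": 0.97},
]

def consilience(generalizations, covering_claim):
    return {"claim": covering_claim,
            "supported_by": [g["claim"] for g in generalizations],
            "confidence": min(g["confidence"] for g in generalizations)}

covering = consilience(supports, "(all) green fruits are sour")
print(covering["claim"], covering["confidence"])   # (all) green fruits are sour 0.95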

IMPLICATION 7.0: Conclusions reached on the basis of consilience, like conclusions reached on the basis of induction, deduction, and abduction, are ultimately based on analogy (i.e. transduction). That is, a new generalization is formulated in which existing generalizations are generalized to include all of them, and can then be applied to new, similar cases.

Again, since consilience, like induction, deduction, and abduction, is only validated by repetition of similar cases, consilience is ultimately just as limited as the other four forms of argument:
Consilient reasoning, like inductive, deductive, and abductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which it is premised.

However, there is an increasing degree of confidence involved in the five forms of logical argument described above. Specifically, simple transduction produces the smallest degree of confidence, induction somewhat more (depending on the number of individual cases used to validate a generalization), deduction more so (since generalizations are ultimately based on induction), abduction even more (because a new set of observations is related to an already existing generalization, validated by induction), and consilience most of all (because new generalizations are formulated by induction from sets of already validated generalizations, themselves validated by induction).

CONCLUSIONS:

Transduction relates a single premise to a single conclusion, and is therefore the weakest form of logical validation.

Induction validates generalizations only via repetition of similar cases, the validity of which is strengthened by repeated transduction of similar cases.

Deduction validates individual cases based on generalizations, but is limited by the induction required to formulate such generalizations and by the transduction necessary to relate individual cases to each other and to the generalizations within which they are subsumed.

Abduction validates new generalizations via analogy between the new generalization and an already validated generalization; however, it too is limited by the formal limitations of transduction, in this case in the formulation of new generalizations.

Consilience validates a new generalization by showing via analogy that several already validated generalizations together validate the new generalization; once again, consilience is limited by the formal limitations of transduction, in this case in the validation of new generalizations via inferred analogies between existing generalizations.

• Taken together, these five forms of logical reasoning (call them "TIDAC" for short) represent five different but related means of validating statements, listed in order of increasing confidence.

• The validity of all forms of argument is therefore ultimately limited by the same thing: the logical limitations of transduction (i.e. argument by analogy).

• Therefore, there is (and can be) no ultimate certainty in any description or analysis of nature insofar as such descriptions or analyses are based on transduction, induction, deduction, abduction, and/or consilience.

• All we have (and can ever have) is relative degrees of confidence, based on repeated observations of similar objects and processes.

• Therefore, we can be most confident about those generalizations for which we have the most evidence.

• Based on the foregoing analysis, generalizations formulated via simple analogy (transduction) are the weakest and generalizations formulated via consilience are the strongest.

Comments, criticisms, and suggestions are warmly welcomed!

--Allen
