Saturday, June 17, 2006

Identity, Analogy, and Logical Argument in Science (Updated)

AUTHOR: Allen MacNeill

SOURCE: Original essay

COMMENTARY: That's up to you...
"...analogy may be a deceitful guide."
- Charles Darwin, Origin of Species

The description and analysis of the functions of analogy in logical reasoning that follow are, in my opinion, not yet complete. I have been working on them for several years (about 25 years, all told), but I have yet to be completely satisfied with them. I am hoping, therefore, that by making them public here (and eventually elsewhere) they can be clarified to everyone’s satisfaction.


To begin with, let us define an analogy as “a similarity between separate (but perhaps related) objects and/or processes”. As we will see, this definition may require refinement (and may ultimately rest on premises that cannot be proven - that is, axioms - rather than formal proof). But for now, let it be this:

DEFINITION 1.0: Analogy = Similarity between separate objects and/or processes (from the Greek ana, meaning “according to,” and logos, meaning “ratio” or “proportion.”)

AXIOM 1.0: The only perfect analogy to a thing is the thing itself.

COMMENTARY 1.0: This is essentially a statement of the logical validity of tautology (from the Greek tauto, a contraction of to auto, meaning “the same,” and logos, meaning “word” or “information”). As Ayn Rand (and, according to her, Aristotle) asserted:

AXIOM 1.1: A = A

From this essentially unprovable axiom, the following corollary may be derived:

COROLLARY 1.1: All analogies that are not identities are necessarily imperfect.

AXIOM 2.0: Only perfect analogies are true.

COROLLARY 2.1: Only identities (i.e. tautologies, or "perfect" analogies) are true.

COROLLARY 2.2: Since only tautologies are prima facie "true", this implies that all analogical statements (except tautologies) are false to some degree. This leads us to:

AXIOM 3.0: All imperfect analogies are false to some degree.

AXIOM 3.1: A ≠ not-A

COROLLARY 3.1: Since all non-tautological analogies are false to some degree, all arguments based on non-tautological analogies are also false to the same degree.

COMMENTARY 2.0: The validity of any logical argument that is not based on a tautology is a matter of degree, with some arguments resting on less false analogies than others.

CONCLUSION 1: As we will see in the next sections, all forms of logical argument (i.e. transduction, induction, deduction, abduction, and consilience) necessarily rely upon non-tautological analogies. Therefore, to summarize:
All forms of logical argument (except for tautologies) are false to some degree.

Our task, therefore, is not to determine if non-tautological logical arguments are true or false, but rather to determine the degree to which they are false (and therefore the degree to which they are also true), and to then use this determination as the basis for establishing confidence in the validity of our conclusions.


Based on the foregoing, let us define validity as “the degree to which a logical statement is free of reliance on false analogies.” Therefore, the closer an analogy is to a tautology, the more valid that analogy is.

DEFINITION 2.0: Validity = The degree to which a logical statement is free of reliance on false analogies.

COMMENTARY: Given the foregoing, it should be clear at this point that (with the exception of tautologies):
There is no such thing as absolute truth; there are only degrees of validity.

In biology, it is traditional to determine the validity of an hypothesis by calculating confidence levels using statistical analyses. According to these analyses, if a hypothesis is supported by at least 95% of the data (that is, if the similarity between the observed data and the values predicted by the hypothesis being tested is at least 95%), then the hypothesis is considered to be valid. In the context of the definitions, axioms, and corollaries developed in the previous section, this means that valid hypotheses in biology may be thought of as being at least 95% tautological (and therefore less than 5% false).
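The essay's informal notion of confidence, the fraction of observations matching a hypothesis's prediction, can be made concrete in a toy sketch. (Real statistical hypothesis testing uses p-values and confidence intervals rather than a raw match percentage; the function names and the 0.95 threshold here simply encode the essay's own framing and are illustrative only.)

```python
def confidence(observations, prediction):
    """Fraction of observations equal to the predicted value."""
    if not observations:
        raise ValueError("need at least one observation")
    matches = sum(1 for obs in observations if obs == prediction)
    return matches / len(observations)

def is_valid(observations, prediction, threshold=0.95):
    """Accept the hypothesis if at least `threshold` of observations match."""
    return confidence(observations, prediction) >= threshold

# 19 of 20 tasted green apples are sour: exactly the 95% cutoff
data = ["sour"] * 19 + ["sweet"]
print(confidence(data, "sour"))   # 0.95
print(is_valid(data, "sour"))     # True
```

On this reading, a hypothesis's "degree of tautology" is just its match fraction, so validity is a continuous quantity rather than a true/false verdict.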

DEFINITION 2.1: Confidence = The degree to which an observed phenomenon conforms to (i.e. is similar to) a hypothetical prediction of that phenomenon.

This means that, in biology:
Validity (i.e. truth) is, by definition, a matter of degree.

Following long tradition, an argument (from the Latin argueré, meaning “to make clear”) is considered to be a statement in which a premise (or premises, if more than one, from the Latin prae, meaning “before” and mitteré, meaning “to place”) is related to a conclusion (i.e. the end of the argument). There are four kinds of argument, based on the means by which a premise (or premises) are related to a conclusion: transduction, induction, deduction, and abduction, which will be considered in order in the following sections.

DEFINITION 2.2: Argument = A statement of a relationship between a premise (or premises) and a conclusion.

Given the foregoing, the simplest possible argument is a statement of a tautology, as in A = A. Unlike all other arguments, this statement is true by definition (i.e. on the basis of AXIOM 1.0). All other arguments are only true by matter of degree, as established above.


The simplest (and least effective) form of logical argument is argument by analogy. The Swiss child psychologist Jean Piaget called this form of reasoning transduction (from the Latin trans, meaning “across” and duceré, meaning “to lead”), and showed that it is the first and simplest form of logical analysis exhibited by young children. We may define transduction as follows:

DEFINITION 3.0: Transduction = Argument by analogy alone (i.e. by simple similarity between a premise and a conclusion).

A tautology is the simplest transductive argument, and is the only one that is “true by definition.” As established above, all other arguments are “true only by matter of degree.” But to what degree? How many examples of a particular premise are necessary to establish some degree of confidence? That is, how confident can we be of a conclusion, given the number of supporting premises?

As the discussion of confidence in Section 2 states, in biology at least 95% of the observations that we make when testing a prediction that flows from an hypothesis must be similar to those predicted by the hypothesis. This, in turn, implies that there must be repeated examples of observations such that the 95% confidence level can be reached.

However, in a transductive argument, all that is usually stated is that a single object or process is similar to another object or process. That is, the basic form of a transductive argument is:

Ai => Aa


Ai is an individual object or process


Aa is an analogous (i.e. similar, but not identical, and therefore non-tautological) object or process

Since there is only a single example in the premise in such an argument, to state that there is any degree of confidence in the conclusion is very problematic (since it is nonsensical to state that a single example constitutes 95% of anything).

In science, this kind of reasoning is usually referred to as “anecdotal evidence,” and is considered to be invalid for the support of any kind of generalization. For this reason, arguments by analogy are generally not considered valid in science. As we will see, however, they are central to all other forms of argument, but there must be some additional content to such arguments for them to be considered generally valid.

EXAMPLE 3.0: To use an example that can be extended to all four types of logical argument, consider a green apple. Imagine that you have never tasted a green apple before. You do so, and observe that it is sour. What can you conclude at this point?

The only thing that you can conclude as the result of this single observation is that the individual apple that you have tasted is sour. In the formalism introduced above:

Ag => As


Ag = green apple


As = sour apple

While this statement is valid for the particular case noted, it cannot be generalized to all green apples (on the basis of a single observation). Another way of saying this is that the validity of generalizing from a single case to an entire category that includes that case is extremely low; so low that it can be considered to be invalid for most intents and purposes.
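The point that a single case cannot ground a generalization can be put in code: with one matching observation the naive "fraction of supporting data" is trivially 100%, yet it says nothing about the class as a whole. Requiring a minimum number of independent similar cases is one illustrative (not canonical) way to encode this; the threshold of 20 below is an arbitrary assumption chosen so that one exception would just break a 95% bar.

```python
def can_generalize(n_similar_cases, min_cases=20):
    """Entertain a generalization only once enough independent
    similar cases have accumulated (threshold is arbitrary here)."""
    return n_similar_cases >= min_cases

print(can_generalize(1))    # False: one sour green apple proves nothing
print(can_generalize(25))   # True: repetition begins to carry weight
```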


A more complex form of logical argument is argument by induction. According to the Columbia Encyclopedia, induction (from the Latin in, meaning “into” and duceré, meaning “to lead”) is a form of argument in which multiple premises provide grounds for a conclusion, but do not necessitate it. Induction is contrasted with deduction, in which true premises do necessitate a conclusion.

An important form of induction is the process of reasoning from the particular to the general. The English philosopher and scientist Francis Bacon, in his Novum Organum (1620), elucidated the first formal theory of inductive logic, which he proposed as a logic of scientific discovery, as opposed to deductive logic, the logic of argumentation. The Scottish philosopher David Hume, who famously questioned the justification of induction, has influenced 20th-century philosophers of science who have focused on the question of how to assess the strength of different kinds of inductive argument (see Nelson Goodman and Karl Popper).

We may therefore define induction as follows:

DEFINITION 4.0: Induction = Argument from individual observations to a generalization that applies to all (or most) of the individual observations.

EXAMPLE 4.0: You taste one green apple; it is sour. You taste another green apple; it is also sour. You taste yet another green apple; once again, it is sour. You continue tasting green apples until, at some relatively arbitrary point (which can be stated in formal terms, but which is unnecessary for the current analysis), you formulate a generalization: “(all) green apples are sour.”

In symbolic terms:

A1 + A2 + A3 + …An => As


A1 + A2 + A3 + …An = individual cases of sour green apples


As = green apples are sour

As we have already noted, the number of similar observations (i.e. An in the formula, above) has an effect on the validity of any conclusion drawn on the basis of those observations. In general, enough observations must be made that a confidence level of 95% can be reached, either in accepting or rejecting the hypothesis upon which the conclusion is based. In practical terms, conclusions formulated on the basis of induction have a degree of validity that is directly related to the number of similar observations; the more similar observations one makes, the greater the validity of one’s conclusions.
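The claim that inductive validity scales with the number of similar observations can be illustrated numerically. The essay gives no formula, so as one purely illustrative choice the sketch below uses Laplace's rule of succession, (s + 1) / (n + 2), to turn a run of n consistent observations into an estimated probability that the next case will be similar.

```python
def inductive_confidence(n_consistent):
    """Laplace's rule of succession: estimated probability that the
    next case matches, after n_consistent matching observations
    and no exceptions."""
    return (n_consistent + 1) / (n_consistent + 2)

# Confidence in "green apples are sour" after tasting n sour green apples:
for n in (1, 10, 100):
    print(n, round(inductive_confidence(n), 3))

# The estimate rises toward 1.0 as consistent observations accumulate,
# but never reaches it -- matching the essay's point that induction
# yields degrees of validity, never absolute truth.
```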

IMPLICATION 4.0: Conclusions reached on the basis of induction are necessarily tentative and depend for their validity on the number of similar observations that support such conclusions. In other words:
Inductive reasoning cannot reveal absolute truth, as it is necessarily limited only to degrees of validity.

It is important to note that, although transduction alone is invalid as a basis for logical argument, transduction is nevertheless an absolutely essential part of induction. This is because, before one can formulate a generalization about multiple individual observations, it is necessary that one be able to relate those individual observations to each other. The only way that this can be done is via transduction (i.e. by analogy, or similarity, between the individual cases).

In the example of green apples, before one can conclude that “(all) green apples are sour” one must first conclude that “this green apple and that green apple (and all those other green apples) are similar.” Since transductive arguments are relatively weak (for the reasons discussed above), this seems to present an unresolvable paradox: no matter how many similar repetitions of a particular observation one makes, each repetition depends for its overall validity on a transductive argument that it is “similar” to all the other repetitions.

This could be called the “nominalist paradox,” in honor of the philosophical tradition founded by the English cleric and philosopher William of Ockham, of “Ockham’s razor” fame. On the face of it, there seems to be no resolution for this paradox. However, I believe that a solution is entailed by the logic of induction itself. As the number of “similar” repetitions of an observation accumulates, the very fact that there are a significant number of such repetitions provides indirect support for the assertion that the repetitions are necessarily (rather than accidentally) “similar.” That is, there is some “law-like” property that is causing the repetitions to be similar to each other, rather than such similarities being the result of random accident.


A much older form of logical argument than induction is argument by deduction. According to the Columbia Encyclopedia, deduction (from the Latin de, meaning “out of” and duceré, meaning “to lead”) is a form of argument in which individual cases are derived from (and validated by) a generalization that subsumes all such cases. Unlike inductive argument, in which no amount of individual cases can prove a generalization based upon them to be “absolutely true,” the conclusion of a deductive inference is necessitated by the premises. That is, the conclusions (i.e. the individual cases) can’t be false if the premise (i.e. the generalization) is true, provided that they follow logically from it.

Deduction may be contrasted with induction, in which the premises suggest, but do not necessitate a conclusion. The ancient Greek philosopher Aristotle first laid out a systematic analysis of deductive argumentation in the Organon. As noted above, Francis Bacon elucidated the formal theory of inductive logic, which he proposed as the logic of scientific discovery.

Both processes, however, are used constantly in scientific research. By observation of events (i.e. induction) and from principles already known (i.e. deduction), new hypotheses are formulated; the hypotheses are tested by applications; as the results of the tests satisfy the conditions of the hypotheses, laws are arrived at (i.e. by induction again); from these laws future results may be determined by deduction.

We may therefore define deduction as follows:

DEFINITION 5.0: Deduction = Argument from a generalization to an individual case, where the generalization applies to all such individual cases.

EXAMPLE 5.0: You assume that all green apples are sour. You are confronted with a particular green apple. You conclude that, since this is a green apple and green apples are sour, then “this green apple is sour.”

In symbolic terms:

As => Ai


As = all green apples are sour

Ai = any individual case of a green apple

As noted above, the conclusions of deductive arguments are necessarily true if the premise (i.e. the generalization) is true. However, it is not clear how such generalizations are themselves validated. In the scientific tradition, the only valid source of such generalizations is induction, and so (contrary to the Aristotelian tradition), deductive arguments are no more valid than the inductive arguments by which their major premises are validated.
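The mechanical character of the deductive step in EXAMPLE 5.0 can be shown in a minimal sketch: once the major premise is granted, the conclusion follows automatically for any individual case. The dict fields and function names below are invented for illustration; the sketch deliberately says nothing about how the major premise itself was validated, which is exactly the essay's point.

```python
def green_apples_are_sour(apple):
    """Major premise: every green apple is sour (itself only as
    valid as the induction that produced it)."""
    return "sour" if apple["color"] == "green" else "taste unknown"

def deduce(generalization, individual):
    """Deduction: apply the generalization to the individual case."""
    return generalization(individual)

this_apple = {"color": "green"}
print(deduce(green_apples_are_sour, this_apple))            # sour
print(deduce(green_apples_are_sour, {"color": "red"}))      # taste unknown
```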

IMPLICATION 5.0: Conclusions reached on the basis of deduction are, like conclusions reached on the basis of induction, necessarily tentative and depend for their validity on the number of similar observations upon which their major premises are based. In other words:
Deductive reasoning, like inductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which its major premise is based.

Hence, despite the fact that induction and deduction “argue in opposite directions,” we come to the conclusion that, in terms of natural science, the validity of both is ultimately dependent upon the number and degree of similarity of the observations that are used to infer generalizations. Therefore, unlike the case in purely formal logic (in which the validity of inductive inferences is always conditional, whereas the validity of deductive inferences is not), there is an underlying unity in the source of validity in the natural sciences:
All arguments in the natural sciences are validated by inductive inference.


A somewhat newer form of logical argument is argument by abduction. According to the Columbia Encyclopedia, abduction (from the Latin ab, meaning “away” and duceré, meaning “to lead”) is the process of reasoning from individual cases to the best explanation for those cases. In other words, it is a reasoning process that starts from a set of facts and derives their most likely explanation from an already validated generalization that explains them. In simple terms, the new observation(s) is/are "abducted" into the already existing generalization.

The American philosopher Charles Sanders Peirce (last name pronounced like "purse") introduced the concept of abduction into modern logic. In his works before 1900, he generally used the term abduction to mean “the use of a known rule to explain an observation,” e.g., “if it rains, the grass is wet” is a known rule used to explain why the grass is wet:

Known Rule: “If it rains, the grass is wet.”

Observation: “The grass is wet.”

Conclusion: “The grass is wet because it has rained.”

Peirce later used the term abduction to mean “creating new rules to explain new observations,” emphasizing that abduction is the only logical process that actually creates new knowledge. He described the process of science as a combination of abduction, deduction, and induction, stressing that new knowledge is only created by abduction.

This is contrary to the common use of abduction in the social sciences and in artificial intelligence, where Peirce's older meaning is used. Contrary to this usage, Peirce stated in his later writings that the actual process of generating a new rule is not hampered by traditional rules of logic. Rather, he pointed out that humans have an innate ability to correctly do logical inference. Possessing this ability is explained by the evolutionary advantage it gives.

We may therefore define abduction as follows (using Peirce's original formulation):

DEFINITION 6.0: Abduction = Argument that validates a set of individual cases via an explanation that cites the similarities between the set of individual cases and an already validated generalization.

EXAMPLE 6.0: You have a green fruit, which is not an apple. You already have a tested generalization about green apples that states that green apples are sour. You infer that, since the fruit you have in hand is green and resembles a green apple, then (by analogy to the case of green apples) it is probably sour (i.e. it is analogous to green apples, which you have already validated as sour).

In symbolic terms:

(Fg = Ag) + (Ag = As) => Fg = Fs


Fg = a green fruit

Ag = green apple

As = sour green apple


Fs = a sour green fruit

In the foregoing example, it is clear why Peirce asserted that abduction is the only way to produce new knowledge (i.e. knowledge that is not strictly derived from existing observations or generalizations). The new generalization (“this new green fruit is sour”) is a new conclusion, derived by analogy to an already existing generalization about green apples. Notice that, once again, the key to formulating an argument by abduction is the inference of an analogy between the green fruit (the taste of which is currently unknown) and green apples (which we already know, by induction, are sour).
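The abductive pattern in EXAMPLE 6.0 can be sketched as similarity-based transfer: a new case of unknown taste is "abducted" into an existing generalization by analogy. "Similarity" below is just shared attribute values; the attributes, names, and similarity rule are all illustrative assumptions, not part of the essay's formalism.

```python
def similar(case_a, case_b, attrs):
    """Transductive step: the two cases match on every listed attribute."""
    return all(case_a[k] == case_b[k] for k in attrs)

def abduct(new_case, known_case, known_conclusion, attrs):
    """If the new case resembles the known case, tentatively extend
    the known conclusion to it; otherwise conclude nothing."""
    return known_conclusion if similar(new_case, known_case, attrs) else None

green_apple = {"color": "green", "category": "fruit"}
green_fruit = {"color": "green", "category": "fruit"}   # taste unknown
print(abduct(green_fruit, green_apple, "probably sour",
             ["color", "category"]))                     # probably sour
```

Note that the entire inferential weight rests on the `similar` call, which is the transductive (and therefore fallible) step the essay identifies.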

IMPLICATION 6.0: Conclusions reached on the basis of abduction, like conclusions reached on the basis of induction and deduction, are ultimately based on analogy (i.e. transduction). That is, a new generalization is formulated in which an existing analogy is generalized to include a larger set of cases.

Again, since transduction, like induction and deduction, is validated only by repetition of similar cases (see above), abduction is ultimately just as limited as the other three forms of argument:
Abductive reasoning, like inductive and deductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which it is premised.


The newest form of logical argument is argument by consilience. According to Wikipedia, consilience (from the Latin con, meaning “with” and saliré, meaning “to jump”: literally "to jump together") is the process of reasoning from several similar generalizations to a generalization that covers them all. In other words, it is a reasoning process that starts from several inductive generalizations and derives a "covering" generalization that is both validated by and strengthens them all.

The English philosopher and scientist William Whewell (pronounced like "hewel") introduced the concept of consilience into the philosophy of science. In his book, The Philosophy of the Inductive Sciences, published in 1840, Whewell defined the term consilience by saying “The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.”

The concept of consilience has more recently been applied to science in general and evolutionary biology in particular by the American evolutionary biologist Edward O. Wilson. In his book, Consilience: The Unity of Knowledge, published in 1998, Wilson reintroduced the term and applied it to the modern evolutionary synthesis. His main point was that multiple lines of evidence and inference all point to evolution by natural selection as the most valid explanation for the origin of evolutionary adaptations and new phylogenetic taxa (e.g. species) as the result of descent with modification (Darwin's term for "evolution").

To extend the example for abduction given above, if the grass is wet (and rain is known to make the grass wet), the road is wet (and rain is known to make the road wet), and the car in the driveway is wet (and rain is known to make the car in the driveway wet), then rain can make everything outdoors wet, including objects whose wetness is not yet verified to be the result of rain.

Independent Observation: “The grass is wet.”

Already validated generalization: "Rain makes grass wet."

Independent Observation: “The road is wet.”

Already validated generalization: "Rain makes roads wet."

Independent Observation: “The car in the driveway is wet.”

Already validated generalization: "Rain makes cars in driveways wet."

Conclusion: “Rain makes everything outdoors wet.”

One can immediately generate an application of this new generalization to new observations:

New observation: "The picnic table in the back yard is wet."

New generalization: “Rain makes everything outdoors wet.”

Conclusion: "The picnic table in the back yard is wet because it has rained."
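The consilience pattern in the rain example can be sketched as follows: several independently validated generalizations share the same form ("rain makes X wet") and "jump together" into a covering generalization, which can then be applied to a new case. The data structures and the string-matching test for a shared pattern are illustrative assumptions only.

```python
validated = [
    "rain makes grass wet",
    "rain makes roads wet",
    "rain makes cars in driveways wet",
]

def consilience(generalizations, shared_pattern, covering):
    """Propose the covering generalization only if every already
    validated generalization exhibits the shared pattern."""
    if all(g.startswith(shared_pattern) for g in generalizations):
        return covering
    return None

covering = consilience(validated, "rain makes",
                       "rain makes everything outdoors wet")
print(covering)   # rain makes everything outdoors wet

# The new covering generalization then explains a new observation:
if covering is not None:
    print("the picnic table in the back yard is wet because it has rained")
```

The step from the individual generalizations to the covering one is itself an analogy between generalizations, which is why the essay treats consilience as a "meta-inference".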

We may therefore define consilience as follows:

DEFINITION 7.0: Consilience = Argument that validates a new generalization about a set of already validated generalizations, based on similarities between the set of already validated generalizations.

EXAMPLE 7.0: You have a green peach, which, when you taste it, is sour. You already have a generalization about green apples that states that green apples are sour, and a generalization about green oranges that states that green oranges are sour. You infer that, since the peach you have in hand is also green and sour, all green fruits are probably sour. You may then apply this new generalization to all new green fruits whose taste is currently unknown.

In symbolic terms:

(Ag = As) + (Og = Os) + (Pg = Ps) => Fg = Fs


Ag = green apples

As = sour apples

Og = green oranges

Os = sour oranges

Pg = green peaches

Ps = sour peaches

Fg = green fruit

Fs = sour fruit

Given the foregoing example, it should be clear that consilience, like abduction (according to Peirce), is another way to produce new knowledge. The new generalization (“all green fruits are sour”) is a new conclusion, derived from (but not strictly reducible to) its premises. In essence, inferences based on consilience are "meta-inferences", in that they involve the formulation of new generalizations based on already existing generalizations.

IMPLICATION 7.0: Conclusions reached on the basis of consilience, like conclusions reached on the basis of induction, deduction, and abduction, are ultimately based on analogy (i.e. transduction). That is, a new generalization is formulated in which existing generalizations are generalized to include all of them, and can then be applied to new, similar cases.

Again, since consilience, like induction, deduction, and abduction, is validated only by repetition of similar cases, it is ultimately just as limited as the other forms of argument:
Consilient reasoning, like inductive, deductive, and abductive reasoning, cannot reveal absolute truth about natural processes, as it is necessarily limited by the degree of validity upon which it is premised.

However, there is an increasing degree of confidence involved in the five forms of logical argument described above. Specifically, simple transduction produces the smallest degree of confidence, induction somewhat more (depending on the number of individual cases used to validate a generalization), deduction more so (since generalizations are ultimately based on induction), abduction even more (because a new set of observations is related to an already existing generalization, validated by induction), and consilience most of all (because new generalizations are formulated by induction from sets of already validated generalizations, themselves validated by induction).


Transduction relates a single premise to a single conclusion, and is therefore the weakest form of logical validation.

Induction validates generalizations only via repetition of similar cases, the validity of which is strengthened by repeated transduction of similar cases.

Deduction validates individual cases based on generalizations, but is limited by the induction required to formulate such generalizations and by the transduction necessary to relate individual cases to each other and to the generalizations within which they are subsumed.

Abduction validates new generalizations via analogy between the new generalization and an already validated generalization; however, it too is limited by the formal limitations of transduction, in this case in the formulation of new generalizations.

Consilience validates a new generalization by showing via analogy that several already validated generalizations together validate the new generalization; once again, consilience is limited by the formal limitations of transduction, in this case in the validation of new generalizations via inferred analogies between existing generalizations.

• Taken together, these five forms of logical reasoning (call them "TIDAC" for short) represent five different but related means of validating statements, listed in order of increasing confidence.

• The validity of all forms of argument is therefore ultimately limited by the same thing: the logical limitations of transduction (i.e. argument by analogy).

• Therefore, there is (and can be) no ultimate certainty in any description or analysis of nature insofar as such descriptions or analyses are based on transduction, induction, deduction, abduction, and/or consilience.

• All we have (and can ever have) is relative degrees of confidence, based on repeated observations of similar objects and processes.

• Therefore, we can be most confident about those generalizations for which we have the most evidence.

• Based on the foregoing analysis, generalizations formulated via simple analogy (transduction) are the weakest and generalizations formulated via consilience are the strongest.

Comments, criticisms, and suggestions are warmly welcomed!




At 6/19/2006 11:33:00 AM, Blogger Allen MacNeill said...

In a comment at Design Paradigm, "Bilbo" wrote:

"I’m not sure I followed you on abduction. At first, I thought you were saying it was an argument to the best explanation (e.g., the best explanation for the observed movements of the planets is the heliocentric theory, with elliptical orbits of the planets). But then you seemed to suggest that it was a way to make a more generalized argument (e.g., from green apples being sour, to green fruit being sour). Those seem to be different types of arguments."

To which I replied:

Indeed, and there lies the "incompleteness" that I alluded to in the beginning of the essay. Peirce (turns out I spelled his name wrong) formulated two different (but related) versions of what he called "abduction," which correspond to the two different versions you alluded to in your comment. His earlier version was just as you describe it: "argument to the best explanation", in which a set of observations that are initially apparently unrelated are shown to be related by virtue of an already existing generalization that applies to them all.

However, in his later writings Peirce introduced the concept of "new generalizations", which do not exist until a set of existing observations are "unified" under a more general "covering law" (to use Hempel's terminology) that applies to all of them.

It is my impression (which I admit is not formally worked out...yet) that the difference between these two versions of "abduction" is that in the first version the "covering law" is already known, and simply "abducts" the disparate observations by subsuming them under a more general statement that explains them all, whereas in his second version the "covering law" is essentially "induced" de novo.

This is why Peirce argued that the second (i.e. later) version of "abduction" is the only form of logic that can actually produce "new" knowledge. However, I would argue that Peirce's second version is reducible to induction, as implied by the treatment in the Wikipedia article linked in my essay.

I would go further than this: it seems to me (and again I have no formal way of "proving" this) that induction is the only form of logic that can produce genuinely new generalizations, and that Peirce's second version of "abduction" is simply a kind of "meta-induction" in which the new generalization that subsumes (i.e. "abducts") the disparate observations is "induced" by the "realization" (and I mean that literally) that a more general covering law unites all of them.

I would like to know if any readers of this blog have any knowledge or opinions that might help clarify this point.

At 6/21/2006 03:04:00 PM, Anonymous Anonymous said...

secondary abduction is, just like you said, meta-induction. I'm failing to see a substantive difference in the process itself; merely different starting products. I have it in numbers and geometry in my head but can't put it into words just yet.

therefore it would follow that since the mechanism itself is identical, unless there was a difference in the starting and ending products, abduction couldn't have a property that induction didn't (i.e ability to create 'new' knowledge).

Since a generalization typically contains less information than a specific observation, I don't think it can be said that abduction really does anything differently from induction. A generalization of a generalization is still a generalization, even if you call it a covering law. Just as the derivative of e^x is still e^x

To use the green apple example: this green apple is sour --> all green apples are sour --> all green fruit is sour. The first step would be induction, the second abduction. Yet our 'covering law' was based on the same green apple.

after I figure out what the heck I'm trying to say I'll post back here. I don't think that was clear.

on a slightly unrelated note, I have no problem with the idea that there are only differing degrees of falseness. In an inherently probabilistic universe (which I'm pretty sure is what we have), there can be no such thing as a function with 100% fidelity.

p.s. would you be able to contribute a piece or two at Conservatives Against Intelligent Design?

(I really don't care how sporadically. I just want as many names as possible in that 'contributors' sidebar. If it looks like a one man show it defeats the whole purpose)

At 6/21/2006 03:05:00 PM, Anonymous Anonymous said...

aw crap, wrong link Conservatives Against Intelligent Design


At 7/15/2006 09:56:00 PM, Blogger Zero said...

• >All we have (and can ever have) is relative degrees of confidence, based on repeated observations of similar objects and processes.

Comments, criticisms, and suggestions are warmly welcomed!<

Allen, who set the acceptable standard in confidence level of science at 95 %? Why not 94 or 96? How about 99.99999999999999999?

Science has never found two snowflakes alike, although 100% have, by chance, symmetry, like every living thing.
Science has never found two grains of sand alike nor do any have, by chance, symmetry.
"IMO", in 100 % of my observations, chaos is natural. Order is mind made. So I accept that as truth.
Btw, did you know, two sour apples make a pair?


