More on Parsimony in Biology
AUTHOR: Robert Skipper
SOURCE: Cladistic Parsimony and Ockham's Razor
COMMENTARY: Allen MacNeill
Robert Skipper has a report on his participation in the Southern Society for Philosophy and Psychology conference last weekend. He delivered a paper on Cladistic Parsimony and Ockham's Razor, a subject about which both he and I have blogged in the past. In a previous post, Skipper comes to the following (admittedly tentative) conclusions:
At the moment, my thinking is that cladistic parsimony is a special case of simplicity (if it is a case at all). But I won't make the case for that here....One thing I think we can say, from the examples, is that when we're in a situation in which we must choose among competing hypotheses or theories, and empirical evidence isn't definitive, use simplicity to make the choice.
Each of [the authors cited] urges us to run with the simplest model among the relevant alternatives unless we're forced to abandon that model for a more complicated one. What does the forcing is empirical evidence. Indeed, none of the biologists I quoted above said anything to the effect that simplicity is truth-indicative. Burnet in fact said that because the clonal theory is simplest, it's probably false! So, simplicity doesn't indicate the truth of some hypothesis or model or theory. Rather, it's a strategy that directs us toward the truth....At least we can say we've eliminated some fruitless paths of inquiry.
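For concreteness, cladistic parsimony scores competing phylogenetic trees by the minimum number of character-state changes each tree requires, and the Fitch algorithm computes that count for a single character. Here is a minimal sketch; the tree shapes, taxa, and states are hypothetical, chosen only to illustrate why one tree can be "simpler" than another:

```python
# Fitch parsimony: count the minimum number of state changes a character
# requires on a given tree. Trees are nested tuples of taxon names;
# `states` maps each taxon to its observed character state.

def fitch_score(tree, states):
    changes = 0

    def post_order(node):
        nonlocal changes
        if isinstance(node, str):            # leaf: its observed state
            return {states[node]}
        left, right = (post_order(child) for child in node)
        common = left & right
        if common:                           # children agree: no change needed
            return common
        changes += 1                         # disagreement costs one change
        return left | right

    post_order(tree)
    return changes

# Hypothetical taxa A-D sharing a binary character:
states = {"A": 0, "B": 0, "C": 1, "D": 1}
tree1 = (("A", "B"), ("C", "D"))   # groups like states together
tree2 = (("A", "C"), ("B", "D"))   # splits them across the tree

print(fitch_score(tree1, states))  # 1 change
print(fitch_score(tree2, states))  # 2 changes
```

Parsimony, on this criterion, prefers `tree1`: it explains the same data with fewer evolutionary changes, which is exactly the sense of "simplest" at issue in the quoted passage.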
I think it would be helpful to consider two possibilities vis-a-vis the application of parsimony in science:
1) Parsimony is merely "useful" in the sense that it reduces the complexity of hypotheses to a level at which they are empirically testable. When I teach my students about how science is usually done, I give them the "hypothetico-deductive" model first, and then point out that this model doesn't give you criteria for formulating testable hypotheses; it only gives you a method for testing them once they have been formulated. To formulate testable hypotheses requires an additional step: one must "mentally" test one's hypothesis to determine if:
• it's empirically testable, and
• the empirical test that one is considering can distinguish between it and alternative hypotheses
If the answers to these two questions are both "yes," then one is ready to actually run the experiment/make the observations to test the predictions that flow from the hypothesis.
In this view, parsimony is simply "useful" in that it is more likely (on average) to yield testable hypotheses, ones whose empirical results can unambiguously validate or falsify one's predictions.
2) Parsimony might actually be an intrinsic feature of "natural causation" itself. In evolutionary psychology (my field, BTW) there is a concept known as "computational overload (CO)." Basically, CO is used as an argument against the "blank slate" hypothesis for human cognition and motivation. That is, if the mind is a "blank slate" (i.e. relies entirely on "trial and error/reinforcement" algorithms), CO rapidly overwhelms even the largest and fastest computer imaginable. Therefore, EPs like me assume that the brain is modularized, and that each module has a fairly stringent "perceptual filter" that limits inputs as a way of minimizing CO (such filters and modules having evolved by natural selection).*
The same concept could be applied to nature, and especially biology. Biological systems are fiendishly complex, much more so than physical or chemical systems. This complexity, if not minimized in some way, would result in biological systems "seizing up" as the result of CO (where "computation" is interpreted broadly as the binary and higher-level interactions between multiple influences, some competing and some complementary).
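A back-of-the-envelope calculation shows why unfiltered "trial and error" seizes up so quickly. The numbers below are purely illustrative, not a model of any real cognitive or biological system: a learner that must consider every possible Boolean mapping of its inputs faces double-exponential growth, while a perceptual filter that admits only a few inputs keeps the search space tiny.

```python
# Illustration of computational overload (CO): the number of distinct
# Boolean functions of n binary inputs is 2^(2^n), so an unfiltered
# "blank slate" learner's hypothesis space explodes double-exponentially.

def hypothesis_space(n_inputs):
    """Number of distinct Boolean functions of n binary inputs."""
    return 2 ** (2 ** n_inputs)

for n in (2, 4, 6):
    print(n, hypothesis_space(n))
# 2 -> 16
# 4 -> 65,536
# 6 -> 2^64, roughly 1.8e19 candidate mappings

# A stringent perceptual filter that passes only 2 of the 6 inputs
# leaves just 16 candidates -- a tractable search.
```

The point is not the specific numbers but the shape of the curve: any mechanism that prunes inputs before "computation" begins buys an enormous reduction in the space that trial-and-error must search.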
Natural selection, in other words, has resulted in the evolution of biological systems in which "parsimony" has been encoded into the interactive structure of living organisms and systems of living organisms themselves, as a way of minimizing CO and maximizing effective interaction with one's environment.
Indeed, I would be tempted to argue that (1) may even be a consequence of (2), as our minds themselves are already adapted to minimizing CO, and therefore are predisposed to parsimonious solutions to problems in general, and therefore also scientific problems. In our interactions with nature as in our science, therefore, "good enough for now" is "good enough for all".
*I also suspect that this phenomenon is the basis for the "Fundamental Attribution Error (FAE)" in humans, as a predisposition for making FAEs would be selected for as long as the results of doing so were at least as often adaptive as deleterious (i.e. a tendency toward "false positives" in making FAEs would simply be a kind of "worst-case analysis," which is almost always adaptive, especially in a dangerous world such as ours, in which even paranoids have real enemies). ;-)