Thursday, November 10, 2011



Alternate proof of the Schroeder-Bernstein theorem


PlanetMath's standard proof

Wikipedia's standard proof

Draft 2. The following proof of the Schroeder-Bernstein theorem is, as far as I know, new as of April 2006. The standard proof can be found at the links above. Notation: "card A" denotes the cardinal number of a set A; "<=" means "less than or equal to"; "~" means "equinumerous to."
Schroeder-Bernstein theorem: For sets A and B, card A <= card B and card B <= card A if, and only if, card A = card B.

Proof:

Remark: We grant that the transitivity of subsets is established from axioms. That is, A ⊆ B ⊆ C implies A ⊆ C. Further, for nonempty sets, we accept that if A ⊂ B, then B ⊂ A is false. This follows from the definition of subset: X ⊆ Y means that, for all x, x ∈ X implies x ∈ Y. But if A is a proper subset of B, then, for some b, b ∈ B and b ∉ A. Hence B does not fit the definition of a subset of A.

We rewrite card A <= card B and card B <= card A thus:

A ~ C ⊆ B and B ~ D ⊆ A

If

Suppose A ~ C = B. Then we are done. Likewise for B ~ D = A. So we test the situation for nonempty sets with: A ~ C ⊂ B and B ~ D ⊂ A. (We note that if C ~ D, then A ~ B, spoiling our interim hypothesis. So we assume C is not equinumerous to D.)

We now form the sets A ∪ C and B ∪ D, permitting us to write (A ∪ C) ⊂ (A ∪ B) and (B ∪ D) ⊂ (A ∪ B).

We now have two options:

i) (A ∪ C) ⊂ (A ∪ B) ⊂ (B ∪ D) ⊂ (A ∪ B)

This is a contradiction, since A ∪ B cannot be a proper subset of itself.

ii) (A ∪ C) ⊂ (A ∪ B) ⊃ (B ∪ D) ⊂ (A ∪ B)

Also a contradiction, since B ∪ D cannot be a proper subset of A ∪ B and also properly contain A ∪ B.

Only if

We let B = C and D = A.


This proof, like the standard modern proof, does not rely on the Axiom of Choice.


In search of a blind watchmaker


Richard Dawkins' web site
Wikipedia article on Dawkins
Wikipedia article on Francis Crick
Abstract of David Layzer's two-tiered adaptation
Joshua Mitteldorf's home page
Do dice play God? A book review

A discussion of The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design by Richard Dawkins.
First posted Oct. 5, 2010, and revised as of Oct. 8, 2010.
Please notify me of errors or other matters at "krypto78...at...gmail...dot...com"
By PAUL CONANT

Surely it is quite unfair to review a popular science book published years ago. Writers are wont to have their views evolve over time.1 Yet in the case of Richard Dawkins' The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (W.W. Norton 1986), a discussion of the mathematical concepts seems warranted, because books by this eminent biologist have been so influential and the "blind watchmaker" paradigm is accepted by a great many people, including a number of scientists.

Dawkins' continuing importance can be gauged by the fact that his most recent book, The God Delusion (Houghton Mifflin 2006), was a best seller, and by the links above. In fact, Watchmaker, also a best seller, was re-issued in 2006.

I do not wish to disparage anyone's religious or irreligious beliefs, but I do think it important to point out that non-mathematical readers should beware the idea that Dawkins has made a strong case that the "evidence of evolution reveals a universe without design."

There is little doubt that some of Dawkins' conjectures and ideas in Watchmaker are quite reasonable. However, many readers are likely to think that he has made a mathematical case that justifies the theory(ies) of evolution, in particular the "modern synthesis" that combines the concepts of passive natural selection and genetic mutation.

Dawkins wrote his apologia back in the eighties when computers were becoming more powerful and accessible, and when PCs were beginning to capture the public fancy. So it is understandable that, in this period of burgeoning interest in computer-driven chaos, fractals and cellular automata, he might have been quite enthusiastic about his algorithmic discoveries.

However, interesting computer programs may not be quite as enlightening as at first they seem.

Cumulative selection

Let us take Dawkins' argument about "cumulative selection," in which he uses computer programs as analogs of evolution. In the case of the phrase, "METHINKS IT IS LIKE A WEASEL," the probability -- using 26 capital letters and a space -- of coming up with such a sequence randomly is 27^-28 (an astonishingly remote 8.3 x 10^-41). However, that is also the probability for any random string of that length, he notes, and we might add that, for most probability distributions, when n is large, any distinct probability approaches 0.

Such a string would be fantastically unlikely to occur in "single step evolution," he writes. Instead, Dawkins employs cumulative selection, which begins with a random 28-character string and then "breeds from" this phrase. "It duplicates it repeatedly, but with a certain chance of random error -- 'mutation' -- in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL."

Three experiments evolved the precise sentence in 43, 64 and 41 steps, he wrote.
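For concreteness, here is a minimal Python sketch of such a cumulative-selection program. Dawkins does not give his mutation rate or brood size, so the brood of 100 copies and the 5 percent per-character error chance below are my own assumptions:

    import random
    import string

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = string.ascii_uppercase + " "   # 26 capitals plus the space

    def score(s):
        # Number of positions at which s matches the target phrase.
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        # Duplicate s with a certain chance of random error in the copying.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        # Breed a brood of mutants; keep the one that most resembles the target.
        parent = max((mutate(parent) for _ in range(100)), key=score)
        generations += 1
    print(generations)

Under these assumed parameters the sketch typically converges in some tens of generations, at least comparable to the step counts Dawkins reports, though the exact figures depend on the parameters chosen.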

Dawkins' basic point is that an extraordinarily unlikely string is not so unlikely via "cumulative selection."

Once he has the readers' attention, he concedes that his notion of natural selection precludes a long-range target and then goes on to talk about "biomorph" computer visualizations (to be discussed below).

Yet it should be obvious that Dawkins' "methinks" argument applies specifically to evolution once the mechanisms of evolution are at hand. So the fact that he has been able to design a program which behaves like a neural network, really doesn't say much about anything. He has achieved a proof of principle that was not all that interesting, although I suppose it would answer a strict creationist, which was perhaps his basic aim.

But which types of string are closer to the mean? Which ones occur most often? If we were to subdivide chemical constructs into various sets, the most complex ones -- which as far as we know are lifeforms -- would be farthest from the mean. (Dawkins, in his desire to appeal to the lay reader, avoids statistics theory other than by supplying an occasional quote from R.A. Fisher.)

Let us, like Dawkins, use a heuristic analog.2 Suppose we take the set of all grammatical English sentences of 28 characters. The variable is an English word rather than a Latin letter or space. What would be the probability of any 28-character English sentence appearing randomly?

My own sampling of a dictionary found that words with eight letters appear with the highest probability of 21%. So assuming the English lexicon to contain 500,000 words, we obtain about 105,000 words of length 8.

Now let us do a Fermi-style rough estimate. For the moment ignoring spaces, we'll posit word lengths of 2 to 9 as covering virtually all combinations. That is, we'll pretend there are sentences composed of only two-letter words, only three-letter words and so on up to nine letters. Further, we shall put an upper bound of 10^5 on the set of words of any relevant length (dropping the extra 5,000 eight-letter words as negligible for our purposes).

This leads to a total number of combinations of (10^5)^2 + 10^8 + ... + 10^14, which approximates 10^14.

We have not considered spaces nor (directly) combinations of words of various lengths. It seems overwhelmingly likely that any increases would be canceled by the stricture that sentences be grammatical, something we haven't modeled. But, even if the number of combinations were an absurd 10 orders of magnitude higher, the area under the part of some typical probability curve that covers all grammatical English sentences of length 28 would take up a miniscule percentage of a tail.

Analogously, to follow Dawkins, we would suspect that the probability is likewise remote for random occurrence of any information structure as complex as a lifeform.

To reiterate, the entire set of English sentences of 28 characters is to be found far out in the tail of some probability distribution. Of course, we haven't specified which distribution because we have not precisely defined what is meant by "level of complexity." This is also an important omission by Dawkins.

We haven't really done much other than to underscore the lack of precision of Dawkins' analogy.

Dawkins then goes on to talk about his "biomorph" program, in which his algorithm recursively alters the pixel set, aided by his occasional selecting out of unwanted forms. He found that some algorithms eventually evolved insect-like forms, and thought this a better analogy to evolution, there having been no long-term goal. However, the fact that "visually interesting" forms show up with certain algorithms again says little. In fact, the remoteness of the probability of insect-like forms evolving was disclosed when he spent much labor trying to repeat the experiment because he had lost the exact initial conditions and parameters for his algorithm. (And, as a matter of fact, he had become an intelligent designer with a goal of finding a particular set of results.)

Again, what Dawkins has really done is use a computer to give his claims some razzle dazzle. But on inspection, the math is not terribly significant.

It is evident, however, that he hoped to counter Fred Hoyle's point that the probability of life organizing itself was equivalent to a tornado blowing through a junkyard and assembling from the scraps a fully functioning 747 jetliner, Hoyle having made this point not only with respect to the origin of life, but also with respect to evolution by natural selection.

So before discussing the origin issue, let us turn to the modern synthesis.

The modern synthesis

I have not read the work of R.A. Fisher and others who established the modern synthesis merging natural selection with genetic mutation, and so my comments should be read in this light.

Dawkins argues that, although most mutations are either neutral or harmful, there are enough progeny per generation to ensure that an adaptive mutation proliferates. And it is certainly true that, if we look at artificial selection -- as with dog breeding -- a desirable trait can proliferate in very short time periods, and there is no particular reason to doubt that if a population of dogs remained isolated on some island for tens of thousands of years that it would diverge into a new species, distinct from the many wolf sub-species.

But Dawkins is of the opinion that neutral mutations that persist because they do no harm are likely to be responsible for increased complexity. After all, relatively simple life forms are enormously successful at persisting.

And, as Stephen Wolfram points out (A New Kind of Science, Wolfram Media 2002), any realistic population size at a particular generation is extremely unlikely to produce a useful mutation because the ratio of useful mutations to possible ones is some very low number. So Wolfram also believes neutral mutations drive complexity.

We have here two issues:

1. If complexity is indeed a result of neutral mutations alone, increases in complexity aren't driven by selection and don't tend to proliferate.

2. Why is any species at all extant? It is generally assumed that natural selection winnows out the lucky few, but does this idea suffice for passive filtering?

Though Dawkins is correct when he says that a particular mutation may be rather probable by being conditioned by the state of the organism (previous mutation), we must consider the entire chain of mutations represented by a species.

If we consider each species as representing a chain of mutations from the primeval organism, then we have a chain of conditional probability. A few probabilities may be high, but most are extremely low. Conditional probabilities can be graphed as trees of branching probabilities, so that a chain of mutation would be represented by one of these paths. We simply multiply each branch probability to get the total probability per path.

As a simple example, a 100-step conditional probability path with 10 probabilities of 0.9, 60 of 0.7 and 30 of 0.5 yields a cumulative probability of 1.65 x 10^-19.
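The arithmetic is easily checked; in Python (my own illustration, not anything from Watchmaker):

    # 100-step chain: 10 branches at 0.9, 60 at 0.7, 30 at 0.5
    p = 0.9**10 * 0.7**60 * 0.5**30
    print(p)   # about 1.65e-19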

In other words, the more mutations and ancestral species attributed to an extant species, the less likely it is to exist via passive natural selection. The actual numbers are so remote as to make natural selection by passive filtering virtually impossible, though perhaps we might conjecture some nonlinear effect going on among species that tends to overcome this problem.

Dawkins' algorithm demonstrating cumulative evolution fails to account for this difficulty. Though he realizes a better computer program would have modeled lifeform competition and adaptation to environmental factors, Dawkins says such a feat was beyond his capacities. However, had he programmed in low probabilities for "positive mutations," cumulative evolution would have been very hard to demonstrate.

Our second problem is what led Hoyle to revive the panspermia conjecture, in which life and proto-life forms are thought to travel through space and spark earth's biosphere. His thinking was that spaceborne lifeforms rain down through the atmosphere and give new jolts to the degrading information structures of earth life. (The panspermia notion has received much serious attention in recent years, though Hoyle's conjectures remain outside the mainstream.)

From what I can gather, one of Dawkins' aims was to counter Hoyle's sharp criticisms. But Dawkins' vigorous defense of passive natural selection does not seem to square with the probabilities, a point made decades previously by J.B.S. Haldane.

Without entering into the intelligent design argument, we can suggest that the implausible probabilities might be addressed by a neo-Lamarckian mechanism of negative feedback adaptations. Perhaps a stress signal on a particular organ is received by a parent and the signal transmitted to the next generation. But the offspring's genes are only acted upon if the other parent transmits the signal. In other words, the offspring embryo would not strengthen an organ unless a particular stress signal reached a threshold.

If that be so, passive natural selection would still play a role, particularly with respect to body parts that lose their role as essential for survival.

Dawkins said Lamarckianism had been roundly disproved, but since the time he wrote the book molecular biology has shown the possibility of reversal of genetic information (retroviruses and reverse transcription). However, my real point here is not about Lamarckianism but about Dawkins' misleading mathematics and reasoning.


Joshua Mitteldorf, an evolutionary biologist with a physics background and a Dawkins critic, points out that an idea proposed more than 30 years ago by David Layzer is just recently beginning to gain ground as a response to the cumulative probabilities issue. Roughly I would style Layzer's proposal a form of neo-Lamarckianism. The citation3 is found at the bottom of this essay and the link is posted above.


On origins

Dawkins concedes that the primeval cell presents a difficult problem, the problem of the arch. If one is building an arch, one cannot build it incrementally stone by stone because at some point, a keystone must be inserted and this requires that the proto-arch be supported until the keystone is inserted. The complete arch cannot evolve incrementally. This of course is the essential point made by the few scientists who support intelligent design.

Dawkins essentially has no answer. He says that a previous lifeform, possibly silicon-based, could have acted as "scaffolding" for current lifeforms, the scaffolding having since vanished. Clearly, this simply pushes the problem back. Is he saying that the problem of the arch wouldn't apply to the previous incarnation of "life" (or something lifelike)?

Some might argue that there is a possible answer in the concept of phase shift, in which, at a threshold energy, a disorderly system suddenly becomes more orderly. However, this idea is left unaddressed in Watchmaker. I would suggest that we would need a sequence of phase shifts that would have a very low cumulative probability, though I hasten to add that I have insufficient data for a well-informed assessment.

Cosmic probabilities

Is the probability of life in the cosmos very high, as some think? Dawkins argues that it can't be all that high, at least for intelligent life, otherwise we would have picked up signals. I'm not sure this is valid reasoning, but I do accept his notion that if there are a billion life-prone planets in the cosmos and the probability of life emerging is a billion to one, then it is virtually certain to have originated somewhere in the cosmos.

Though Dawkins seems to have not accounted for the fact that much of the cosmos is forever beyond the range of any possible detection as well as the fact that time gets to be a tricky issue on cosmic scales, let us, for the sake of argument, grant that the population of planets extends to any time and anywhere, meaning it is possible life came and went elsewhere or hasn't arisen yet, but will, elsewhere.

Such a situation might answer the point made by Peter Ward and Donald Brownlee in Rare Earth: Why Complex Life Is Uncommon in the Universe (Springer 2000) that the geophysics undergirding the biosphere represents a highly complex system (and the authors make efforts to quantify the level of complexity), meaning that the probability of another such system is extremely remote. (Though the book was written before numerous discoveries concerning extrasolar planets, thus far their essential point has not been disproved. And the possibility of non-carbon-based life is not terribly likely because carbon valences permit high levels of complexity in their compounds.)

Now some may respond that it seems terrifically implausible that our planet just happens to be the one where the, say, one-in-a-billion event occurred. However, the fact that we are here to ask the question is perhaps sufficient answer to that worry. If it had to happen somewhere, here is as good a place as any. A more serious concern is the probability that intelligent life arises in the cosmos.

The formation of multicellular organisms is perhaps the essential "phase shift" required, in that central processors are needed to organize their activities. But what is the probability of this level of complexity? Obviously, in our case, the probability is one, but, otherwise, the numbers are unavailable, mostly because of the lack of a mathematically precise definition of "level of complexity" as applied to lifeforms.

Nevertheless, probabilities tend to point in the direction of cosmically absurd: there aren't anywhere near enough atoms -- let alone planets -- to make such probabilities workable. Supposing complexity to result from neutral mutations, probability of multicellular life would be far, far lower than for unicellular forms whose speciation is driven by natural selection. Also, what is the survival advantage of self-awareness, which most would consider an essential component of human-like intelligence?

Hoyle's most recent idea was that probabilities were increased by proto-life in comets that eventually reached earth. But, despite enormous efforts to resolve the arch problem (or the "jumbo jet problem"), in my estimate he did not do so.

(Interestingly, Dawkins argues that people are attracted to the idea of intelligent design because modern engineers continually improve machinery designs, giving a seemingly striking analogy to evolution. Something that he doesn't seem to really appreciate is that every lifeform may be characterized as a negative-feedback controlled machine, which converts energy into work and obeys the second law of thermodynamics. That's quite an "arch.")

The intelligent design proponents, however, face a difficulty when relying on the arch analogy: the possibility of undecidability. As the work of Goedel, Church, Turing and Post shows, some theorems cannot be proved by tracking back to axioms. They are undecidable. If we had a complete physical description of the primeval cell, we could encode that description as a "theorem." But that doesn't mean we could track back to the axioms to determine how it emerged. If the "theorem" were undecidable, we would know it to be "true" (having the cell description in all detail), but we might be forever frustrated in trying to determine how it came to exist.

In other words, a probabilistic argument is not necessarily applicable.

The problem of sentience

Watchmaker does not examine the issue of emergence of human intelligence, other than as a matter of level of complexity.

Hoyle noted in The Intelligent Universe (Holt, Rinehart and Winston 1984) that over a century ago, Alfred Russel Wallace was perplexed by the observation that "the outstanding talents of man... simply cannot be explained in terms of natural selection."

Hoyle quotes the Japanese biologist S. Ohno:

"Did the genome (genetic material) of our cave-dwelling predecessors contain a set or sets of genes which enable modern man to compose music of infinite complexity and write novels with profound meaning? One is compelled to give an affirmative answer...It looks as though the early Homo was already provided with the intellectual potential which was in great excess of what was needed to cope with the environment of his time."

Hoyle proposes in Intelligent that viruses are responsible for evolution, accounting for mounting complexity over time. However, this seems hard to square with the point just made that such complexity doesn't seem to occur as a result of passive natural winnowing and so there would be no selective "force" favoring its proliferation.

At any rate, I suppose that we may assume that Dawkins in Watchmaker saw the complexity inherent in human intelligence as most likely to be a consequence of neutral mutations.

Another issue not addressed by Dawkins (or Hoyle for that matter) is the question of self-awareness. Usually the mechanists see self-awareness as an epiphenomenon of a highly complex program (a notion Roger Penrose struggled to come to terms with in The Emperor's New Mind (Oxford 1989) and Shadows of the Mind (Oxford 1994)).

But let us think of robots. Isn't it possible in principle to design robots that replicate themselves and maintain homeostasis until they replicate? Isn't it possible in principle to build in programs meant to increase the probability of successful replication as environmental factors shift?

In fact, isn't it possible in principle to design a robot that emulates human behaviors quite well? (Certain babysitter robots are even now posing ethics concerns as to an infant's bonding with them.)

And yet there seems to be no necessity for self-awareness in such designs. Similarly, what would be the survival advantage of self-awareness for a species?

I don't suggest that some biologists haven't proposed interesting ideas for answering such questions. My point is that Watchmaker omits much, making the computer razzle dazzle that much more irrelevant.

Conclusion

In his autobiographical What Mad Pursuit (Basic Books 1988) written when he was about 70, Nobelist Francis Crick expresses enthusiasm for Dawkins' argument against intelligent design, citing with admiration the "methinks" program.

Crick, who trained as a physicist and was also a panspermia advocate (see link above), doesn't seem to have noticed the difference in issues here. If we are talking about an analog of the origin of life (one-step arrival at the "methinks" sentence), then we must go with a distinct probability of 8.3 x 10^-41. If we are talking about an analog of some evolutionary algorithm, then we can be convinced that complex results can occur with application of simple iterative rules (though, again, the probabilities don't favor passive natural selection).

One can only suppose that Crick, so anxious to uphold his lifelong vision of atheism, leaped on Dawkins' argument without sufficient criticality. On the other hand, one must accept that there is a possibility his analytic powers had waned.

At any rate, it seems fair to say that the theory of evolution is far from being a clear-cut theory, in the manner of Einstein's theory of relativity. There are a number of difficulties and a great deal of disagreement as to how the evolutionary process works. This doesn't mean there is no such process, but it does mean one should listen to mechanists like Dawkins with care.

******************

1. In a 1996 introduction to Watchmaker, Dawkins wrote that "I can find no major thesis in these chapters that I would withdraw, nothing to justify the catharsis of a good recant."

2. My analogy was inadequately posed in previous drafts. Hopefully, it makes more sense now.

3. David Layzer, "Genetic Variation and Progressive Evolution," The American Naturalist, Vol. 115, No. 6 (Jun. 1980), pp. 809-826. Published by The University of Chicago Press for The American Society of Naturalists. Stable URL: http://www.jstor.org/stable/2460802
Note: An early draft contained a ridiculous mathematical error that does not affect the argument but was very embarrassing. Naturally, I didn't think of it until after I was walking outdoors miles from an internet terminal. It has now been put right.



Do dice play God?
A discussion of Irreligion


A discussion of Irreligion: a mathematician explains why the arguments for God just don't add up (Hill and Wang division of Farrar, Straus and Giroux 2008)
Please contact Conant at krypto...at...gmail...dot....com to
report errors or make comments.
Thank you.
Relevant links found at bottom of page.
Posted Nov 9, 2010. Minor revision posted Sept. 7, 2012.
A previous version of this discussion is found on Angelfire. 
 
By PAUL CONANT
John Allen Paulos has done a service by compiling the various purported proofs of the existence of a (monotheistic) God and then shooting them down in his book Irreligion: a mathematician explains why the arguments for God just don't add up.

Paulos, a Temple University mathematician who writes a column for ABC News, would be the first to admit that he has not disproved the existence of God. But he is quite skeptical of such existence, and I suppose much of the impetus for his book comes from the intelligent design versus accidental evolution controversy (1).

Really, this review isn't exactly kosher, because I am going to cede most of the ground. My thinking is that if one could use logico-mathematical methods to prove God's existence, this would be tantamount to being able to see God, or to plumb the depths of God. Supposing there is such a God, is he likely to permit his creatures, without special permission, to go so deep?

This review might also be thought rather unfair because Paulos is writing for the general reader and thus walks a fine line on how much mathematics to use. Still, he is expert at describing the general import of certain mathematical ideas, such as Gregory Chaitin's retooling of Kurt Goedel's undecidability theorem and its application to arguments about what a human can grasp about a "higher power."

Many of Paulos' counterarguments essentially arise from a Laplacian philosophy wherein Newtonian mechanics and statistical randomness rule all and are all. The world of phenomena, of appearances, is everything. There is nothing beyond. As long as we agree with those assumptions, we're liable to agree with Paulos. 

Just because...
Yet a caveat: though mathematics is remarkably effective at describing physical relations, mathematical abstractions are not themselves the essence of being (though even on this point there is a Platonic dispute), but are typically devices used for prediction. The deepest essence of being may well be beyond mathematical or scientific description -- perhaps, in fact, beyond human ken (as Paulos implies, albeit mechanistically, when discussing Chaitin and Goedel) (2).

Paulos' response to the First Cause problem is to question whether postulating a highly complex Creator provides a real solution. All we have done is push back the problem, he is saying. But here we must wonder whether phenomenal, Laplacian reality is all there is. Why shouldn't there be something deeper that doesn't conform to the notion of God as gigantic robot?

But of course it is the concept of randomness that is the nub of Paulos' book, and this concept is at root philosophical, and a rather thorny bit of philosophy it is at that. The topic of randomness certainly has some wrinkles that are worth examining with respect to the intelligent design controversy.

One of Paulos' main points is that merely because some postulated event has a terribly small probability doesn't mean that event hasn't or can't happen. There is a terribly small probability that you will be struck by lightning this year. But every year, someone is nevertheless stricken. Why not you?

In fact, zero probability doesn't mean impossible. Many probability distributions closely follow the normal curve, on which each distinct point probability is exactly zero.

Paulos applies this line of reasoning to the probabilities for the origin of life, which the astrophysicist Fred Hoyle once likened to the chance of a tornado whipping through a junkyard and leaving a fully assembled jumbo jet in its wake. (Nick Lane in Life Ascending: The Ten Great Inventions of Evolution (W.W. Norton 2009) relates some interesting speculations about life self-organizing around undersea hydrothermal vents. So perhaps the probabilities aren't so remote after all, but, really, we don't know.) 

Shake it up, baby
What is the probability of a specific permutation of heads and tails in, say, 20 fair coin tosses? This is usually given as 0.5^20, or about one chance in a million. What is the probability of 18 heads followed by 2 tails? The same, according to one outlook.

Now that probability holds if we take all permutations, shake them up in a hat and then draw one. All permutations in that case are equiprobable (4).

However, intuitively it is hard to accept that 18 heads followed by 2 tails is just as probable as any other ordering. In fact, there are various statistical methods for challenging that idea (5).

One, which is quite useful, is the runs test, which determines the probability that a particular sequence falls within the random area of the related normal curve. A runs test of 18H followed by 2T gives a z score of 3.71, which isn't ridiculously high, but implies that the ordering did not occur randomly with a confidence of 0.999.

Now compare that score with this permutation: HH TTT H TT H TT HH T HH TTT H. A runs test z score gives 0.046, which is very near the normal mean.
To recap: the probability of drawing a number with 18 ones (or heads) followed by 2 zeros (or tails) from a hat full of all 20-digit strings is on the order of 10^-6. The probability that that sequence is random is on the order of 10^-4. For comparison, we can be highly confident the second sequence is, absent further information, random. (I actually took it from irrational root digit strings.)

Again, those permutations with high runs test z scores are considered to be almost certainly non-random (3).
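For readers who wish to check these figures, a runs test of this kind (the Wald-Wolfowitz test, with the usual normal approximation for the number of runs) takes only a few lines of Python. The sketch below is my own; it reproduces, to rounding, the z scores quoted above, the sign merely recording whether there are too few or too many runs.

    from math import sqrt

    def runs_z(seq):
        # z score for the observed number of runs in a two-valued sequence,
        # under the normal approximation to the runs distribution.
        kinds = sorted(set(seq))
        n1 = sum(1 for x in seq if x == kinds[0])
        n2 = len(seq) - n1
        runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
        mean = 2 * n1 * n2 / (n1 + n2) + 1
        var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
               / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
        return (runs - mean) / sqrt(var)

    print(runs_z("H" * 18 + "T" * 2))        # about -3.70
    print(runs_z("HHTTTHTTHTTHHTHHTTTH"))    # about 0.046

(As footnote 3 below cautions, the approximation is shaky when either count falls below about 8, so the first score should be taken as indicative rather than exact.)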

At the risk of flogging a dead horse, let us review Paulos' example of a very well-shuffled deck of ordinary playing cards. The probability of any particular permutation is about one in 10^68, as he rightly notes. But suppose we mark each card's face with a number, ordering the deck from 1 to 52. When the well-shuffled deck is turned over one card at a time, we find that the cards come out in exact sequential order. Yes, that might be random luck. Yet the runs test z score is a very large 7.563, which implies effectively 0 probability of randomness as compared to a typical sequence. (We would feel certain that the deck had been ordered by intelligent design.)

Does not compute
The intelligent design proponents, in my view, are trying to get at this particular point. That is, some probabilities fall, even with a lot of time, into the nonrandom area. I can't say whether they are correct about that view when it comes to the origin of life. But I would comment that when probabilities fall far out in a tail, statisticians will say that the probability of non-random influence is significantly high. They will say this if they are seeking either mechanical bias or human influence. But if human influence is out of the question, and we are not talking about mechanical bias, then some scientists dismiss the non-randomness argument simply because they don't like it.

Another issue raised by Paulos is the fact that some of Stephen Wolfram's cellular automata yield "complex" outputs. (I am currently going through Wolfram's A New Kind of Science (Wolfram Media 2002) carefully, and there are many issues worth discussing, which I'll do, hopefully, at a later date.)

Like mathematician Eric Schechter (see link below), Paulos sees cellular automaton complexity as giving plausibility to the notion that life could have resulted when some molecules knocked together in a certain way. Wolfram's Rule 110 is equivalent to a universal Turing machine, and this shows that a simple algorithm could emulate any computer program, Paulos points out.
Paulos might have added that there is a countable infinity of computer programs. Each such program is computed according to the initial conditions of the Rule 110 automaton. Those conditions are the length of the starter cell block and the colors (black or white) of each cell.

So, a relevant issue is, if one feeds a randomly selected initial state into a UTM, what is the probability it will spit out a highly ordered (or complex or non-random) string versus a random string. Runs test scores would show the obvious: so-called complex strings will fall way out under a normal curve tail. 

Grammar tool
I have run across quite a few ways of gauging complexity, but, barring an exact molecular approach, it seems to me the concept of a grammatical string is relevant.

Any cell, including the first, may be described as a machine. It transforms energy and does work (as in W = (1/2)mv^2). Hence it may be described with a series of logic gates. These logic gates can be combined in many ways, but most permutations won't work (the jumbo jet effect).

For example, if we have a string of length 20 in which 8 positions are filled by a given symbol, there are C(20,8) = 125,970 different arrangements. But how likely is it that a random arrangement will be grammatical?

Let's consider a toy grammar with the symbols a,b,c. Our only grammatical rule is that b may not immediately follow a.

So for the first three steps, abc and cab are illegal and the other four possibilities are legal. This gives a (1/3) probability of error on the first step.
In this case, the probability of error at every third step is not independent of the previous probability as can be seen by the permutations:
 abc  bca  acb  bac  cba  cab
That is, for example, bca followed by bac gives an illegal ordering. So the probability of error increases with n.

However, suppose we hold the probability of error at (1/3). In that case the probability of a legal string where n = 30 is less than (2/3)^10, or about 1.73%. Even if the string can tolerate noise, the error probabilities rise rapidly. Suppose a string of 80 can tolerate 20 percent of its digits wrong. In that case our exponent becomes (80/3)(0.8) ≈ 21.333, and the probability of success is (2/3)^21.333, or about 0.000175.
And this is a toy model. The actual probabilities for long grammatical strings are found far out under a normal curve tail. 
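A brute-force enumeration bears out the trend; the following sketch (mine, not anything in Irreligion) counts the legal fraction of all strings over {a, b, c}:

    from itertools import product

    def legal(s):
        # Toy grammar: 'b' may not immediately follow 'a'.
        return "ab" not in s

    for n in (3, 6, 9, 12):
        ok = sum(1 for p in product("abc", repeat=n) if legal("".join(p)))
        print(n, ok / 3**n)

The legal fraction decays geometrically, shrinking by a factor of very nearly 2/3 per three characters as n grows, which is the behavior the (2/3) estimate above is meant to capture.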

This is to inform you
A point that arises in such discussions concerns entropy (the tendency toward decrease of order) and the related idea of information, which is sometimes thought of as the surprisal value of a digit string. Sometimes a pattern such as HHHH... is considered to have low information because we can easily calculate the nth value (assuming we are using some algorithm to obtain the string). So the Chaitin-Kolmogorov complexity is low, or that is, the information is low. On the other hand a string that by some measure is effectively random is considered here to be highly informative because the observer has almost no chance of knowing the string in detail in advance.

However, we can also take the opposite tack. Using runs testing, most digit strings (multi-value strings can often be transformed, for test purposes, to bi-value strings) are found under the bulge in the runs test bell curve and represent probable randomness. So it is unsurprising to encounter such a string. It is far more surprising to come across a string with far "too few" or far "too many" runs. These highly ordered strings would then be considered to have high information value.

This distinction may help address Wolfram's attempt to cope with "highly complex" automata. By these, he means those with irregular, random-like structures running through periodic "backgrounds." If a sufficiently long runs test were done on such automata, we would obtain, I suggest, z scores in the high but not outlandish range. The z score would give a gauge of complexity.

We might distinguish complicatedness from complexity by saying that a random-like permutation of our grammatical symbols is merely complicated, but a grammatical permutation, possibly adjusted for noise, is complex. (We see, by the way, that grammatical strings require conditional probabilities.) 

A jungle out there
Paulos' defense of the theory of evolution is precise as far as it goes but does not acknowledge the various controversies on speciation among biologists, paleontologists and others.

Let us look at one of his counterarguments:

The creationist argument "goes roughly as follows: A very long sequence of individually improbable mutations must occur in order for a species or a biological process to evolve. If we assume these are independent events, then the probability that all of them will occur in the right order is the product of their respective probabilities" and hence a speciation probability is miniscule. "This line of argument," says Paulos, "is deeply flawed."

He writes: "Note that there are always a fantastically huge number of evolutionary paths that might be taken by an organism (or a process), but there is only one that actually will be taken. So, if, after the fact, we observe the particular evolutionary path actually taken and then calculate the a priori probability of its having been taken, we will get the miniscule probability that creationists mistakenly attach to the process as a whole."

Though we have dealt with this argument in terms of probability of the original biological cell, we must also consider its application to evolution via mutation. We can consider mutations to follow conditional probabilities. And though a particular mutation may be rather probable by being conditioned by the state of the organism (previous mutation and current environment), we must consider the entire chain of mutations represented by an extant species.

If we consider each species as representing a chain of mutations from the primeval organism, then we have for each a chain of conditional probability. A few probabilities may be high, but most are extremely low. Conditional probabilities can be graphed as trees of branching probabilities, so that a chain of mutation would be represented by one of these paths. We simply multiply each branch probability to get the total probability per path.

As a simple example, a 100-step conditional probability path with 10 probabilities of 0.9, 60 of 0.7 and 30 of 0.5 yields a cumulative probability of 1.65 x 10^-19. In other words, the more mutations and ancestral species attributed to an extant species, the less likely that species is to exist via passive natural selection. The actual numbers are so remote as to make natural selection by passive filtering virtually impossible, though perhaps we might conjecture some nonlinear effect going on among species that tends to overcome this problem.

Think of it this way: During an organism's lifetime, there is a fantastically large number of possible mutations. What is the probability that the organism will happen upon one that is beneficial? That event would, if we are talking only about passive natural selection, be found under a probability distribution tail (whether normal, Poisson or other). The probability of even a few useful mutations occurring over 3.5 billion years isn't all that great (though I don't know a good estimate).

A 'botific vision
Let us, for example, consider Wolfram's cellular automata, which he puts into four qualitative classes of complexity. One of Wolfram's findings is that adding complexity to an already complex system does little or nothing to increase the complexity, though randomized initial conditions might speed the trend toward a random-like output (a fact which, we acknowledge, could be relevant to evolution theory).

Now suppose we take some cellular automata and, at every nth or so step, halt the program and revise the initial conditions slightly or greatly, based on a cell block between cell n and cell n+m. What is the likelihood of increasing complexity to the extent that a Turing machine is devised? Or suppose an automaton is already a Turing machine. What is the probability that it remains one or that a more complex-output Turing machine results from the mutation?

I haven't calculated the probabilities, but I would suppose they are all out under a tail.

Paulos has elsewhere underscored the importance of Ramsey theory, which has an important role in network theory, in countering the idea that "self-organization" is unlikely. Actually, with sufficient n, "highly organized" networks are very likely (6). Whether this implies sufficient resources for the self-organization of a machine is another matter. True, high n seem to guarantee such a possibility. But the n may be too high to be reasonable.

Darwin on the Lam?
However, it seems passive natural selection has an active accomplice in the extraordinarily subtle genetic machinery. It seems that some form of neo-Lamarckianism is necessary, or at any rate a negative feedback system which tends to damp out minor harmful mutations without ending the lineage altogether (catastrophic mutations usually go nowhere, the offspring most often not getting a chance to mate). 

Matchmaking
It must be acknowledged that in microbiological matters, probabilities need not always follow a routine independence multiplication rule. In cases where random matching is important, we have the number 0.63 turning up quite often.

For example, if one has n addressed envelopes and n identically addressed letters are randomly shuffled and then put in the envelopes, what is the probability that at least one letter arrives at the correct destination? The surprising answer is the alternating sum 1 - 1/2! + 1/3! - ... out to the 1/n! term. For n greater than 10 the probability converges near 63%.

That is, we don't calculate, say, 11^-11 (about 3.5x10^-12), but find that our series approximates very closely 1 - e^-1 = 0.63.
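The convergence is easy to verify exactly; a short sketch of my own:

    from math import factorial

    def p_at_least_one(n):
        # P(some letter reaches its own envelope) = 1 - 1/2! + 1/3! - ...
        return sum((-1) ** (k + 1) / factorial(k) for k in range(1, n + 1))

    for n in (3, 5, 10, 30):
        print(n, p_at_least_one(n))   # approaches 1 - 1/e, about 0.632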

Similarly, suppose one has eight distinct pairs of socks randomly strewn in a drawer and thoughtlessly pulls out six one by one. What is the probability of at least one matching pair?

The first sock has no match. The probability the second will fail to match the first is 14/15. The probability for the third failing to match is 12/14 and so on until the sixth sock. Multiplying all these probabilities to get the probability of no match at all yields 32/143. Hence the probability of at least one match is 1 - 32/143 or about 78%.
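Again the arithmetic checks out; an exact computation (my sketch) with Python's Fraction:

    from fractions import Fraction

    # 8 pairs = 16 socks; draw 6. Multiply the no-match probabilities:
    # the k-th sock drawn must avoid the mates of the k-1 already in hand.
    p_no_match = Fraction(1)
    for k in range(2, 7):             # socks 2 through 6
        in_drawer = 16 - (k - 1)      # socks left before this draw
        p_no_match *= Fraction(in_drawer - (k - 1), in_drawer)

    print(p_no_match)                 # 32/143
    print(float(1 - p_no_match))      # about 0.776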

These are minor points, perhaps, but they should be acknowledged when considering probabilities in an evolutionary context.

And so
It may be that the ins and outs of evolution arguments were beyond the scope of Irreligion, but I don't think Paulos has entirely refuted the skeptics in this matter (7).

Nevertheless, the book is a succinct reference work and deserves a place on one's bookshelf.

1. Paulos finds himself disconcerted by the "overbearing religiosity of so many humorless people."
Whenever one upholds an unpopular idea, one can expect all sorts of objections from all sorts of
people, not all of them well mannered or well informed. Comes with the territory. Unfortunately,
I think this backlash may have blinded him to the many kind, cheerful and non-judgmental
Christians and other religious types in his vicinity.  Some people, unable to persuade Paulos of
God's existence, end the conversation with "I'll pray for you..." I can well imagine that he
senses that the pride of the other person is motivating a put-down. Some of these souls might try
not letting the left hand know what the right hand is doing.

2. Paulos recounts this amusing fable:
The great mathematician Euler was called to court to debate the necessity of God's existence with
a well-known atheist. Euler opens with: "Sir, (a + bn)/n = x. Hence, God exists. Reply."
Flabbergasted, his mathematically illiterate opponent walked away, speechless. Yet, is this joke
as silly as it at first seems? After all, one might say that the mental activity of mathematics
is so profound (even if the specific equation is trivial) that  the existence of a Great Mind is
implied.

3. We should caution that the runs test, which works for n_1 and n_2 each at least equal to 8, fails for the
pattern HH TT HH TT... This failure seems to be an artifact of the runs test assumption that a
usual number of runs is about n/2. I suggest that we simply say that the probability of that
pattern is less than or equal to H T H T H T..., a pattern whose z score rises rapidly with n.
Other patterns such as HHH TTT HHH... also climb away from the randomness area slowly with n.
With these cautions, however, the runs test gives striking results.

4. Thanks to John Paulos for pointing out an embarrassing misstatement in a previous draft. I
somehow mangled the probabilities during the editing. By the way, my tendency to write flubs
when I actually know better is a real problem for me and a reason I need attentive readers to
help me out.

5. I also muddled this section. Josh Mitteldorf's sharp eyes forced a rewrite.

6. Paulos in a column writes: 'A more profound version of this line of thought can be traced
back to British mathematician Frank Ramsey, who proved a strange theorem. It stated that if you
have a sufficiently large set of geometric points and every pair of them is connected by either
a red line or a green line (but not by both), then no matter how you color the lines, there will
always be a large subset of the original set with a special property. Either every pair of the
subset's members will be connected by a red line or every pair of the subset's members will be
connected by a green line.  If, for example, you want to be certain of having at least three
points all connected by red lines or at least three points all connected by green lines, you will
need at least six points. (The answer is not as obvious as it may seem, but the proof isn't
difficult.)  For you to be certain that you will have four points, every pair of which is
connected by a red line, or four points, every pair of which is connected by a green line,
you will need 18 points, and for you to be certain that there will be five points with this
property, you will need -- it's not known exactly -- between 43 and 55. With enough points,
you will inevitably find unicolored islands of order as big as you want, no matter how you color
the lines.'

7. Paulos, interestingly, tells of how he lost a great deal of money by an ill-advised enthusiasm
for WorldCom stock in A Mathematician Plays the Stock Market (Basic Books, 2003). The expert
probabilist and statistician found himself under a delusion which his own background should have
fortified him against. (The book, by the way, is full of penetrating insights about
probability and the market.) One wonders whether Paulos might also be suffering from another
delusion: that probabilities favor atheism.

Wikipedia article on Chaitin-Kolmogorov complexity
In search of a blind watchmaker (by Paul Conant)
Wikipedia article on runs test
Eric Schechter on Wolfram vs intelligent design
On Hilbert's sixth problem (by Paul Conant)
The scientific embrace of atheism (by David Berlinski) 
John Allen Paulos' home page

The knowledge delusion

First published Thursday, November 3, 2011



Reflections on The God Delusion (Houghton Mifflin 2006) by the evolutionary biologist Richard Dawkins.



Essay by PAUL CONANT

Preliminary remarks:
Our discussion focuses on the first four chapters of Dawkins' book, wherein he makes his case for the remoteness of the probability that a monolithic creator and controller god exists.

Alas, it is already November 2011, some five years after publication of Delusion. Such a lag is typical of me, as I prefer to discuss ideas at my leisure. This lag isn't quite as outrageous as the timing of my paper on Dawkins' The Blind Watchmaker, which I posted about a quarter century after the book first appeared.

I find that I have been quite hard on Dawkins, or, actually, on his reasoning. Even so, I have nothing but high regard for him as a fellow sojourner on spaceship Earth. Doubtless I have been unfair in not highlighting positive passages in Delusion, of which there are some (1). Despite my desire for objectivity, it is clear that much of the disagreement is rooted in my personal beliefs (see the link Zion below).

Summary:
Dawkins applies probabilistic reasoning to etiological foundations, without defining probability or randomness. He disdains Bayesian subjectivism without realizing that that must be the ground on which he is standing. In fact, nearly everything he writes on probability indicates a severe lack of rigor. This lack of rigor compromises his other points.

Richard Dawkins argues that he is no proponent of simplistic "scientism" and yet there is no sign in Delusion's first four chapters that in fact he isn't a victim of what might be termed the "scientism delusion." But, as Dawkins does not define scientism, he has plenty of wiggle room.

From what I can gather, those under the spell of "scientism" hold the, often unstated, assumption that the universe and its components can be understood as an engineering problem, or set of engineering problems. Perhaps there is much left to learn, goes the thinking, but it's all a matter of filling in the engineering details. (http://en.wikipedia.org/wiki/Scientism).

Though the notion of a Laplacian cosmos that requires no god to act now and then to keep things stable is officially passe, many scientists seem to be under the impression that the model basically holds, needing only a bit of tweaking to account for the effects of relativity and of quantum fluctuations.

Doubtless Dawkins is correct in his assertion that many American scientists and professionals are closet atheists, with quite a few espousing the "religion" of Einstein, who appreciated the elegance of the phenomenal universe but had no belief in a personal god (2).

Interestingly, Einstein had a severe difficulty with physical, phenomenal reality, objecting strenuously to the "probabilistic" requirement of quantum physics, famously asserting that "god" (i.e., the cosmos) "does not play dice." He agreed with Erwin Schroedinger that Schroedinger's imagined cat strongly implies the absurdity of "acausal" quantum behavior (3). It turns out that Einstein was wrong, with statistical experiments in the 1980s demonstrating that "acausality" -- within constraints -- is fundamental to quantum actions.

Many physicists have decided to avoid the quantum interpretation minefield, discretion being the better part of valor. Even so, Einstein was correct in his refusal to play down this problem, recognizing that modern science can't easily dispense with classical causality. We speak of energy in terms of vector sums of energy transfers (notice the circularity), but no one has a good handle on what the "it" behind that abstraction is.

A partly subjective reality at a fundamental level is anathema to someone like Einstein -- so disagreeable, in fact, that one can ponder whether the great scientist deep down suspected that such a possibility threatened his reasoning in denying a need for a personal god. Be that as it may, one can understand that a biologist might not be familiar with how nettlesome the quantum interpretation problem really is, but Dawkins has gone beyond his professional remit and taken on the roles of philosopher and etiologist. True, he rejects the label of philosopher, but his basic argument has been borrowed from the atheist philosopher Bertrand Russell.

Dawkins recapitulates Russell thus: "The designer hypothesis immediately raises the question of who designed the designer."

Further: "A designer God cannot be used to explain organized complexity because a God capable of designing anything would have to be complex enough to demand the same kind of explanation... God presents an infinite regress from which we cannot escape."

Dawkins' a priori assumption is that "anything of sufficient complexity to design anything, comes into existence only as the end product of an extended process of gradual evolution."

If there is a great designer, "the designer himself must be the end product of some kind of cumulative escalator or crane, perhaps a version of Darwinism in its own universe."

Dawkins has no truck with the idea that an omnipotent, omniscient (and seemingly paradoxical) god might not be explicable in engineering terms. Even if such a being can't be so described, why is he/she needed? Occam's razor and all that.

Dawkins does not bother with the results of Kurt Goedel and their implications for Hilbert's sixth problem: whether the laws of physics can ever be -- from a human standpoint -- both complete and consistent. Dawkins of course is rather typical of those scientists who pay little heed to that result or who have tried to minimize its importance in physics. A striking exception is the mathematical physicist Roger Penrose, who saw that Goedel's result was profoundly important (though mathematicians have questioned Penrose's interpretation).

A way to intuitively think of Goedel's conundrum is via the Gestalt effect: the whole is greater than the sum of its parts. But few of the profound issues of phenomenology make their way into Dawkins' thesis. Had the biologist reflected more on Penrose's The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics (Oxford 1989), perhaps he would not have plunged in where Penrose so carefully trod.

Penrose has referred to himself, according to a Wikipedia article, as an atheist. In the film A Brief History of Time, the physicist said, "I think I would say that the universe has a purpose, it's not somehow just there by chance ... some people, I think, take the view that the universe is just there and it runs along -- it's a bit like it just sort of computes, and we happen somehow by accident to find ourselves in this thing. But I don't think that's a very fruitful or helpful way of looking at the universe, I think that there is something much deeper about it."

By contrast, we get no such ambiguity or subtlety from Dawkins. Yet, if one deploys one's prestige as a scientist to discuss the underpinnings of reality, more than superficialities are required. The unstated, a priori assumption is, essentially, a Laplacian billiard ball universe and that's it, Jack.

Dawkins embellishes the Russellian rejoinder with the language of probability: What is the probability of a superbeing, capable of listening to millions of prayers simultaneously, existing? This follows his scorning of Stephen D. Unwin's The Probability of God (Crown Forum 2003), which cites Bayesian methods to obtain a high probability of god's existence.
http://www.stephenunwin.com/

Dawkins is uninterested in Unwin's subjective prior probabilities, all the while being utterly unaware that his own probability assessment is altogether subjective. Heedless of the philosophical underpinnings of probability theory, he doesn't realize that by assigning a probability of "remote" at the extremes of etiology, he is engaging in a subtle form of circular reasoning.

The reader deserves more than an easy putdown of Unwin in any discussion of probabilities. Dawkins doesn't acknowledge that Bayesian statistics is a thriving school of research that seeks to find ways to as much as possible "objectify" the subjective assessments of knowledgeable persons. There has been strong controversy concerning Bayesian versus classical statistics, and there is a reason for that controversy: it gets at foundational matters of etiology. Nothing on this from Dawkins.

Without a Bayesian approach, Dawkins is left with a frequency interpretation of probability (law of large numbers and so forth). But we have very little -- in fact Dawkins would say zero -- information about the existence or non-existence of a sequence of all powerful gods or pre-cosmoses. Hence, there are no frequencies to analyze. Hence, use of a probability argument is in vain.

Dawkins elsewhere says (4) that he has read the great statistician Ronald Fisher, but one wonders whether he appreciates the meaning of statistical analysis. Fisher, who also opposed the use of Bayesian premises, is no solace when it comes to frequency-based probabilities. Take Fisher's combined probability test, a technique for data fusion or "meta-analysis" (analysis of analyses): What are the several different tests of probability that might be combined to assess the probability of god?
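For readers unfamiliar with Fisher's method, here is a minimal sketch of how the combined probability test works; the three p-values below are invented placeholders, not data from any actual study:

# Fisher's combined probability test: given k independent p-values,
# the statistic X = -2 * sum(ln p_i) follows a chi-square distribution
# with 2k degrees of freedom under the joint null hypothesis.
import math
from scipy.stats import chi2

def fisher_combined(p_values):
    """Combine independent p-values via Fisher's method."""
    x = -2.0 * sum(math.log(p) for p in p_values)
    df = 2 * len(p_values)
    return chi2.sf(x, df)  # survival function gives the combined p-value

# Three hypothetical p-values from independent tests (placeholders):
print(fisher_combined([0.04, 0.10, 0.30]))  # prints roughly 0.04

The point is not the arithmetic but the prerequisite: the method needs several well-defined, independent tests to combine -- and it is entirely unclear what those independent tests would even be for the existence of god.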

Dawkins is quick to brush off William A. Dembski, the intelligent design advocate who uses statistical methods to argue that the probability is cosmically remote that life originated in a random manner. And yet Dawkins himself seems to have little or no grasp of the basis of probabilities.

In fact, Dawkins makes no attempt to define randomness -- a definition routinely brushed off in elementary statistics texts, but quite a lapse when one is getting at etiological foundations (5) and using probability as a conceptual, if not mathematical, tool.

But, to reiterate, the issue goes yet deeper. If, at the extremes, causation is not nearly so clear-cut as one might naively imagine, then at those extremes probabilistic estimates may well be inappropriate.

Curiously, Russell discovered Russell's paradox, which was ousted from set theory by fiat (axiom). Then along came Goedel who proved that axiomatic set theory (a successor to the theory of types propounded by Russell and Alfred North Whitehead in their Principia Mathematica) could not be both complete and consistent. That is, Goedel jammed Russell's paradox right down the old master's throat, and it hurt. It hurt because Goedel's result makes a mockery of the fond Russellian illusion of the universe as giant computerized robot. How does a robot plan for and build itself? Algorithmically, it is impossible. Dawkins handles this conundrum, it seems, by confounding the "great explanatory power" of natural selection -- wherein lifeform robots are controlled by robotic DNA (selfish genes) -- with the origin of the cosmos.

But the biologist, so focused on this foundational issue of etiology, manages to avert his eyes from the Goedelian "frame problem." And yet even atheistic physicists sense that the cosmos isn't simplistically causal when they describe the overarching reality as a "spacetime block." In other words, we humans are faced with some higher or other reality -- a transcendent "force" -- in which we operate and which, using standard mathematical logic, is not fully describable. This point is important. Technically, perhaps, we might add an axiom so that we can "describe" this transcendent (topological?) entity, but that just pushes the problem back and we would then need another axiom to get at the next higher entity.

Otherwise, Dawkins' idea that this higher-dimensional "force" or entity must itself have been constructed faces the Goedelian problem that such construction would evidently imply a Turing algorithm, which, if we want completeness and consistency, requires an infinite regress of axioms. That is, Dawkins' argument doesn't work because of the limits on knowledge discovered by Goedel and Alan Turing. This entity is perforce beyond human ken.

One may say that it can hardly be expected that a biologist would be familiar with such arcana of logic and philosophy. But then said biologist should beware superficial approaches to foundational matters (6).

At this juncture, you may be thinking: "Well, that's all very well, but that doesn't prove the existence of god." But here is the issue: One may say that this higher reality or "power" or entity is dead something (if it's energy, it's some kind of unknown ultra-energy) or is a superbeing, a god of some sort. Because this transcendent entity is inherently unknowable in rationalistic terms, the best someone in Dawkins' shoes might say is that there is a 50/50 chance that the entity is intelligent. I hasten to add that probabilistic arguments as to the existence of god are not very convincing (7).

A probability estimate's job is to mask out variables on the assumption that with enough trials these unknowns tend to cancel out. Implicitly, then, one is assuming that a god has decided not to influence the outcome (8). At one time, in fact, men drew lots in order to let god decide an outcome. (One of the reasons that some see gambling as sinful is because it dishonors god and enthrones Lady Randomness.)

Curiously, Dawkins pans the "argument from incredulity" proffered by some anti-Darwinians, but his clearly-it's-absurdly-improbable case against a higher intelligence is itself an argument from incredulity, being based on his subjective expert estimate.

Dawkins' underlying assumption is that mechanistic hypotheses of causality are valid at the extremes, an assumption common to modern naive rationalism.

Another important oversight concerns the biologist's Dawkins-centrism. "Your reality, if too different from mine, is quite likely to be delusional. My reality is obviously logically correct, as anyone can plainly see." This attitude is quite interesting in that Dawkins elsewhere very effectively conveys important information about how the brain constructs reality and how easily people might suffer from delusions, such as being convinced that they are in regular communication with god.

True, Dawkins jokingly mentions one thinker who posits a Matrix-style virtual reality for humanity and notes that he can see no way to disprove such a scenario. But plainly Dawkins rejects the possibility that his perception and belief system, with its particular limits, might be delusional.

In Dawkins' defense, we must concede that the full ramifications of quantum puzzlements have yet to sink into the scientific establishment, which -- aside from a distaste for learning that, like Wile E. Coyote, they are standing on thin air -- has a legitimate fear of being overrun by New Agers, occultists and flying saucer buffs. Yet, by skirting this matter, Dawkins does not address the greatest etiological conundrum of the 20th century which, one would think, might well have major implications in the existence-of-god controversy.

Dawkins is also rather cavalier about probabilities concerning the origin of life, attacking the late Fred Hoyle's "jumbo jet" analogy without coming to grips with what was bothering Hoyle and without even mentioning that scientists of the caliber of Francis Crick and Joshua Lederberg were troubled by origin-of-life probabilities long before Michael J. Behe and Dembski touted the intelligent design hypothesis.

Astrophysicist Hoyle, whose steady state theory of the universe was eventually trumped by George Gamow's big bang theory, said on several occasions that the probability of life assembling itself from some primordial ooze was equivalent to the probability that a tornado churning through a junkyard would leave a fully functioning Boeing 747 in its wake. Hoyle's atheism was shaken by this and other improbabilities, spurring him toward various panspermia (terrestrial life began elsewhere) conjectures. In the scenarios outlined by Hoyle and Chandra Wickramasinghe, microbial life or proto-life wafted down through the atmosphere from outer space, perhaps coming from "organic" interstellar dust or from comets.

One scenario had viruses every now and again floating down from space and, besides setting off the occasional pandemic, enriching the genetic structure of life on earth in such a way as to account for increasing complexity. Hoyle was not specifically arguing against natural selection, but was concerned about what he saw as statistical troubles with the process. (He wasn't the only one worried about that; there is a long tradition of scientists trying to come up with ways to make mutation theory properly synthesize with Darwinism.)

Dawkins laughs off Hoyle's puzzlement about mutational probabilities without any discussion of the reasons for Hoyle's skepticism or the proposed solutions.

There are various ideas about why natural selection is robust enough to, thus far, prevent life from petering out (9). In my essay Do dice play God? (link above), I touch on some of the difficulties and propose a neo-Lamarckian mechanism as part of a possible solution, and at some point I hope to write more about the principles that drive natural selection. At any rate, I realize that Dawkins may have felt that he had dealt with this subject elsewhere, but his four-chapter thesis omits too much. A longer, more thoughtful book -- after the fashion of Penrose's The Emperor's New Mind -- is, I would say, called for when heading into such deep waters.

Hoyle's qualms, of course, were quite unwelcome in some quarters and may have resulted in the Nobel prize committee bypassing him. And yet, though the space virus idea isn't held in much esteem, panspermia is no longer considered a disrespectable notion, especially as more and more extrasolar planets are identified. Hoyle's use of panspermia conjectures was meant to account for the probability issues he saw associated with the origin and continuation of life. (Just because life originates does not imply that it is resilient enough not to peter out after X generations.)

Hoyle, in his own way, was deploying panspermia hypotheses in order to deal with a form of the anthropic principle. If life originated as a prebiotic substance found across wide swaths of space, probabilities might become reasonable. It was the Nobelist Joshua Lederberg who made the acute observation that interstellar dust particles were about the size of organic molecules. Though this correlation has not panned out, that doesn't make Hoyle a nitwit for following up.

In fact, Lederberg was converted to the panspermia hypothesis by yet another atheist (and Marxist), J.B.S. Haldane, a statistician who was one of the chief architects of the "modern synthesis" merging Mendelism with Darwinism.

No word on any of this from Dawkins, who dispatches Hoyle with a parting shot that Hoyle (one can hear the implied chortle) believed that archaeopteryx was a forgery, after the manner of Piltdown man. The biologist declines to tell his readers about the background of that controversy and the fact that Hoyle and a group of noted scientists reached this conclusion after careful examination of the fossil evidence. Whether or not Hoyle and his colleagues were correct, the fact remains that he undertook a serious scientific investigation of the matter.

http://www.chebucto.ns.ca/Environment/NHR/archaeopteryx.html

Another committed atheist, Francis Crick, co-discoverer of the double-helical structure of DNA, was even wilder than Hoyle in proposing a panspermia idea in order to account for probability issues. He suggested in a 1970s paper and in his book Life Itself: Its Origin and Nature (Simon & Schuster 1981) that an alien civilization had sent microbial life via rocketship to Earth in its long-ago past, perhaps as part of a program of seeding the galaxy. Why did the physicist-turned-biologist propose such a scenario? Because the DNA helices of all earthly life twist in the same direction. That seemed staggeringly unlikely to Crick, who thought we should find some DNA screws turning left and some right.

I don't bring this up to argue with Crick, but to underscore that Dawkins plays Quick-Draw McGraw with serious people without discussing the context. I.e., his book comes across as propagandistic, rather than fair-minded. It might be contrasted with John Allen Paulos' book Irreligion (see Do dice play god? above), which tries to play fair and which doesn't make duffer logico-mathematical blunders (10).

Though Crick and Hoyle were outliers in modern panspermia conjecturing, the concept is respectable enough for NASA to take seriously.

The cheap shot method can be seen in how Dawkins deals with Carl Jung's claim of an inner knowledge of god's existence. Jung's assertion is derided with a snappy one-liner: Jung also believed that objects on his bookshelf could explode spontaneously. That takes care of Jung! -- irrespective of the many brilliant insights contained in his writings, however controversial. (Disclaimer: I am neither a Jungian nor a New Ager.)

Granted that Jung was talking about what he took to be a paranormal event and granted that Jung is an easy target for statistically minded mechanists and granted that Jung seems to have made his share of missteps, we make three points:

1. There was always the possibility that the exploding object was the result of some anomalous, but natural, event.

2. A parade of distinguished British scientists has expressed strong interest in paranormal matters, among them officers of paranormal study societies. Brian Josephson, who received a Nobel prize for the quantum physics behind the Josephson junction, speaks up for the reality of mental telepathy (for which he has been ostracized by the "billiard ball" school of scientists).

3. If Dawkins is trying to debunk the supernatural using logical analysis, then it is not legitimate to use belief in the supernatural to discredit a claim favoring the supernatural (11).

Getting back to Dawkins' use of probabilities, the biologist contends with the origin-of-life issue by invoking the anthropic principle and the principle of mediocrity, along with a verbal variant of Drake's equation.
http://en.wikipedia.org/wiki/Drake_equation
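For reference (Dawkins gives only a verbal rendering), the usual symbolic form of Drake's equation estimates the number N of communicative civilizations in the galaxy as a product of successively conditioned factors:

$$N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L$$

where R* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of habitable planets per such star, f_l, f_i and f_c the fractions on which life, intelligence and detectable communication arise, and L the lifetime of a communicating civilization. Every factor past the first two is, at present, a subjective estimate -- which is precisely the difficulty with verbal variants as well.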

The mediocrity principle says that astronomical evidence shows that we live on a random speck of dust on a random dustball blowing around in a (random?) mega dust storm.

The anthropic principle says that, if there is nothing special about Earth, isn't it interesting how Earth travels about the sun in a "Goldilocks zone" ideally suited for carbon-based life, and how the planetary dynamics, such as tectonic shift, seem to be just what is needed for life to thrive (as discussed in Rare Earth: Why Complex Life is Uncommon in the Universe by Peter D. Ward and Donald Brownlee (Springer Verlag 2000))? Even further, isn't it amazing that the seemingly arbitrary constants of nature are so exactly calibrated as to permit life to exist, when a slight difference in one of those constants -- the fine structure constant, for example -- would forbid galaxies from ever forming? This all seems outrageously fortuitous.

Let us examine each of Dawkins' arguments.

Suppose, he says, that the probability of life originating on Earth is a billion to one or even a billion billion to one (10^-9 and 10^-18). If there are that many Earth-like planets in the cosmos, the probability is virtually one that life will arise spontaneously. We just happen to be the lucky winner of the cosmic lottery, which is perfectly logical thus far.
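A quick check of the arithmetic behind that claim, taking p as the per-planet probability of life originating and N as the number of Earth-like planets:

$$P(\text{life arises somewhere}) = 1 - (1 - p)^N \approx 1 - e^{-Np}.$$

With p = 10^-18 and N = 10^18 planets this gives 1 - e^{-1}, or about 0.63; with ten times as many planets it exceeds 0.9999. So the lottery argument does work -- provided the supply of planets really does keep pace with the improbability.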

Crick, as far as I know, is the only scientist to point out that we can only include the older sectors of the cosmos, in which heavy metals have had time to coalesce from the gases left over from supernovae -- i.e., second generation stars and planets (by the way, Hoyle was the originator of this solution to the heavy metals problem). Yet still, we may concede that there may be enough para-Earths to answer the probabilities posed by Dawkins.

Though Dawkins is careful to say that he is no expert on the origin of life, his probabilities, even if given for the sake of argument, are simply Bayesian "expert estimates." But it is quite conceivable that those probabilities are far too high (though I candidly concede it is very difficult to assign any probability or probability distribution to this matter).

Consider that unicellular life, with the genes on the DNA (or RNA) acting as the "brain," exploits proteins as the cellular workhorses in a great many ways. We know that sometimes several different proteins can fill the same job, but that caveat doesn't much help what could be a mind-boggling probability issue.

Suppose that, in some primordial ooze or on some undersea volcanic slope, a prebiotic form has fallen together chemically and, in order to cross the threshold to lifeform, requires one more protein to activate. A protein is a molecule that takes on a specific shape, carrying specific electrochemical properties, after a chain of amino acids folds up. Protein molecules fit into each other and other constituents of life like lock and key (though on occasion more than one key fits the same lock).

The amino acids used by terrestrial life can, it turns out, be shuffled in many different ways to yield many different proteins. How many ways? About 10^60, which dwarfs the number of stars in the observable universe (roughly 10^24) by some 36 orders of magnitude! And the probability of such a spark-of-life event might be in that ball park. If one considers the predecessor protein link-ups as independent events and multiplies those probabilities, we would come up with numbers even more absurd.
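One way to see where a figure like 10^60 can come from. The chain length of 46 below is purely illustrative -- chosen because it happens to yield about 10^60; real proteins vary widely in length:

# With a 20-letter amino-acid alphabet, a chain of n residues admits
# 20**n distinct sequences. A chain of only ~46 residues already
# yields on the order of 10^60 possibilities.
import math

n = 46                      # illustrative chain length, not a measured value
sequences = 20 ** n
print(f"20^{n} is about 10^{math.log10(sequences):.0f}")   # ~10^60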

But, Dawkins has a way out, though he loses the thread here. His way out is that a number of physicists have posited, for various reasons, some immense -- even infinite -- number of "parallel" universes, which have no or very weak contact with this one and are hence undetectable. This could handily account for our universe having the Goldilocks fine structure constant and, though he doesn't specify this, might well provide enough suns in those universes that have galaxies to account for even immensely improbable events.

I say Dawkins loses the thread because he scoffs at religious people who see the anthropic probabilities as favoring their position concerning god's existence without, he says, realizing that the anthropic principle is meant to remove god from the picture. What Dawkins himself doesn't realize is that he mixes apples and oranges here. The anthropic issue raises a disturbing question, which some religious people see as in their favor. Some scientists then seize on the possibility of a "multiverse" to cope with that issue.

But now what about Occam's razor? Well, says Dawkins, that principle doesn't quite work here. To paraphrase Sherlock Holmes: once one eliminates all reasonable explanations, the remaining explanation, no matter how absurd it sounds, must be correct.

And yet what is Dawkins' basis for the proposition that a host of undetectable universes is more probable than some intelligent higher power? There's the rub. He is, no doubt unwittingly, making an a priori assumption that any "natural" explanation is more reasonable than a supernatural "explanation." Probabilities really have nothing to do with his assumption.

But perhaps we have labored in vain over the "multiverse" argument, for at one point we are told that a "God capable of calculating the Goldilocks values" of nature's constants would have to be "at least as improbable" as the finely tuned constants of nature, "and that's very improbable indeed." So at bottom, all we have is a Bayesian expert prior estimate.

Well, say you, perhaps a Wolfram-style algorithmic complexity argument can save the day. Such an argument might be applicable to biological natural selection, granted. But what selected natural selection? A general Turing machine can compute anything computable, including numerous "highly complex" outputs programmed by easy-to-write inputs. But what probability does one assign to a general Turing machine spontaneously arising, say, in some electronic computer network? Wolfram found that "interesting" cellular automata were rare. Even rarer would be a complex cellular automaton that accidentally emerged from random inputs.
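To make the Wolfram reference concrete, here is a minimal sketch of an elementary cellular automaton. Rule 110 is one of the rare "interesting" rules Wolfram identified (it has since been shown Turing-complete); the program itself is trivial -- the point is that of the 256 possible rules, only a handful behave this way:

# An elementary cellular automaton: each cell's next state is read off
# from the rule number, using its three-cell neighborhood as a bit index.
def step(cells, rule=110):
    n = len(cells)
    nxt = []
    for i in range(n):
        # Pack left, center, right neighbors into a 3-bit index (0-7).
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((rule >> idx) & 1)   # that bit of the rule is the new state
    return nxt

cells = [0] * 40 + [1] + [0] * 40       # one live cell in the middle
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)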

I don't say that such a scenario is impossible, but to assume that it simply must be so is little more than hand-waving.

Dawkins tackles the problem of the outrageously high information values associated with complex life forms by conceding that a species, disconnected from information about causality, has only a remote probability of occurrence by random chance. But, he counters, there is in fact a non-random process at work: natural selection.

I suppose he would regard it a quibble if one were to mention that mutations occur randomly, and perhaps so it is. However, it is not quibbling to question how the powerful process of natural selection first appeared on the scene. In other words, the information values associated with the simplest known form (least number of genes) of microbial life are many orders of magnitude greater than the information values associated with background chemicals -- which was Hoyle's point in making the jumbo jet analogy.

And then there is the probability of life thriving. Just because it emerges, there is no guarantee that it would be robust enough not to peter out in a few generations (9).

Dawkins dispenses with proponents of intelligent design, such as biologist Michael J. Behe, author of Darwin’s Black Box: The Biochemical Challenge to Evolution (The Free Press 1996), by resorting to the conjecture that a system may exist after its "scaffolding" has vanished. This conjecture is fair, but, at this point, the nature of the scaffolding, if any, is unknown. Dawkins can't give a hint of the scaffolding's constituents because, thus far, no widely accepted hypothesis has emerged. Natural selection is a consequence of an acutely complex mechanism. The "scaffolding" is indeed a "black box" (it's there, we are told, but no one can say what's inside).

Though it cannot be said that intelligent design advocate Behe has proved "irreducible complexity," the fact is that the magnitude of organic complexity has even prompted atheist scientists to look far afield for plausible explanations.

Biologists, Dawkins writes, have had their consciousnesses raised by natural selection's "power to tame improbability," and yet that power has very little to do with the issues of the origins of life or of the universe and hence does not bolster his case against god. I suppose that if one waxes mystical about natural selection -- making it a mysterious, ultra-abstract principle -- then perhaps Dawkins makes sense. Otherwise, he's amazingly naive.


Relevant links:

In search of a blind watchmaker
http://www.angelfire.com/az3/nfold/watch.html

Do dice play God?
http://www.angelfire.com/az3/nfold/dice.html

Toward a signal model of perception
http://www.angelfire.com/ult/znewz1/qball.html

On Hilbert's sixth problem
http://kryptograff.blogspot.com/2007/06/on-hilberts-sixth-problem.html

The world of null-H
http://kryptograff.blogspot.com/2007/06/world-of-null-h.html

The universe cannot be modeled as a Turing machine
http://www.angelfire.com/az3/nfold/turing.html

Biological observer-participation and Wheeler's 'law without law'
by Brian D. Josephson
http://arxiv.org/abs/1108.4860


Footnotes

1. We don't claim that none of his criticisms are worth anything. Plenty of religious people, Martin Luther included, would heartily agree with some of his complaints, which, however, are only tangentially relevant to his main argument. Anyone can agree that vast amounts of cruelty have occurred in the name of god. Yet it doesn't appear that Dawkins has squarely faced the fact of the genocidal rampages committed under the banner of godlessness (Mao, Pol Pot, Stalin).

What drives mass violence is of course an important question. As an evolutionary biologist, Dawkins would say that such behavior is a consequence of natural selection, a point underscored by the ingrained propensity of certain simian troops to war on members of the same species. No doubt Dawkins would concede that the bellicosity of those primates had nothing to do with beliefs in some god.

So it seems that Dawkins may be placing too much emphasis on beliefs in god as a source of violent strife, though we should grant that it seems perplexing as to why a god would permit such strife.

Still, it appears that the author of Climbing Mount Improbable (W.W. Norton 1996) has confounded correlation with causation.


2. Properly, this footnote, like the previous one, does not affect Dawkins' case against god's existence, which is why these remarks are placed here rather than in the main text.

In a serious lapse, Dawkins writes that "there is something to be said" for treating Buddhism and Confucianism not as religions but as ethical systems. It may be granted that Buddhism is atheistic in the sense of denying a personal, monolithic god. But, from the perspective of a materialist like Dawkins, Buddhism certainly purveys numerous supernaturalistic ideas, with followers espousing ethical beliefs rooted in a supernatural cosmic order -- which one would think qualifies Buddhism as a religion.

True, Dawkins' chief target is the all-powerful god of Judaism, Christianity and Islam (Zoroastrianism too), with little focus on pantheism, henotheism or supernatural atheism. Yet a scientist of his standing ought to be held to an exacting standard.


3. As well as conclusively proving that quantum effects can be scaled up to the "macro world."
4. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (W.W. Norton 1986).

5. The same might be said of Dembski.

6. A fine, but significant, point: Dawkins, along with many others, believes that Zeno's chief paradox has been resolved by the mathematics of bounded infinite series. However, quantum physics requires that potential energy be quantized. So height H above ground is measurable discontinuously in a finite number of lower heights. So a rock dropped from H to ground must first reach H', the next discrete height down. How does the rock in static state A at H reach static state B at H'? That question has no answer, other than to say something like "a quantum jump occurs." So Zeno makes a sly comeback.
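For the record, the standard resolution the biologist relies on is the convergent geometric series:

$$\sum_{n=1}^{\infty} \left(\frac{1}{2}\right)^n = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1,$$

so that infinitely many sub-distances add up to a finite distance covered in finite time. The quantum objection above is that this continuum picture may not survive quantization of the intervening states.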

This little point is significant because it gets down to the fundamentals of causality, something that Dawkins leaves unexamined.
7. After the triumphs of his famous theorems, Goedel stirred up more trouble by finding a solution to Einstein's general relativity field equations which, in Goedel's estimation, demonstrated that time (and hence naive causality) is an illusion. A rotating universe, he found, could contain closed time loops such that if a rocket traveled far enough into space it would eventually reach its own past, apparently looping through spacetime forever. Einstein dismissed his friend's solution as inconsistent with physical reality.

Before agreeing with Einstein that the solution is preposterous, consider the fact that many physicists believe that there is a huge number of "parallel," though undetectable, universes.

And we can leave the door ajar, ever so slightly, to Dawkins' thought that a higher power fashioning the universe would itself be the result of an evolutionary process. Suppose that far in our future an advanced race builds a spaceship bearing a machine that resets the constants of nature as it travels, thus establishing the conditions for the upcoming big bang in our past such that galaxies, and we, are formed. Of course, we are then faced with the question: where did the information come from?
8. Unless one assumes another god who is exactly contrary to the first, or perhaps a group of gods whose influences tend to cancel.

9. Consider a child born with super-potent intelligence and strength. What are the probabilities that the traits continue?

A. If the child matures and mates successfully, the positive selection pressure from one generation to the next is faced with a countervailing tendency toward dilution. It could take many, many generations before that trait (gene set) becomes dominant, and in the meantime, especially in the earlier generations, extinction of the trait is a distinct possibility. (See the simulation sketch after point B below.)

B. In social animals, very powerful individual advantages come linked to a very powerful disadvantage: the tendency of the group to reject as alien anything too different. Think of the historical tendency of white mobs to lynch physically superior black males, or of the early 19th century practice of Australian tribesmen of killing mixed-race offspring born to their women.
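As promised under point A, here is a toy branching-process simulation of that extinction risk. Everything in it is an illustrative assumption, not a measurement: each carrier of the trait is supposed to leave a Poisson-distributed number of carrier offspring with mean 1+s, where s is the selective advantage. (Haldane's classical result is that for small s the trait's ultimate fixation probability is only about 2s.)

# Toy branching process: a single carrier of an advantageous trait.
# Each carrier leaves Poisson(1 + s) carrier offspring per generation.
import math
import random

def poisson(lam):
    # Knuth's method for sampling a Poisson-distributed integer.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < limit:
            return k
        k += 1

def trait_survives(s=0.05, generations=60, cap=5000):
    carriers = 1
    for _ in range(generations):
        carriers = sum(poisson(1.0 + s) for _ in range(carriers))
        if carriers == 0:
            return False               # the trait has gone extinct
        carriers = min(carriers, cap)  # cap keeps the toy model cheap
    return True

trials = 2000
survived = sum(trait_survives() for _ in range(trials))
print(f"survival rate ~ {survived / trials:.3f} (Haldane's ~2s would be 0.10)")

Typical runs give survival rates near 0.1 even though the trait confers a 5 percent advantage -- that is, roughly nine times out of ten the trait dies out before it can spread, which is the dilution problem point A describes.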


10. I have also made more than my share of those.

11. Colin J. Humphreys, a Cambridge materials science professor with an interest in biblical mysteries, takes issue with one of Dawkins' barbs. He quotes Dawkins as saying, "The only difference between the Da Vinci Code and the gospels is that the gospels are ancient fiction while The Da Vinci Code is modern fiction."

Humphreys responds that in his book The Mystery of the Last Supper: Reconstructing the Final Days of Jesus (Cambridge University Press, 2011), he has "taken what the biblical scholar F.F. Bruce called 'the thorniest problem in the New Testament,' the date and nature of the last supper" and shown that despite the complexity of the problem "the gospels are in substantial agreement."

Humphreys did extensive research and proposes that Jesus and his disciples ate the last supper on the Wednesday before the crucifixion, not the Thursday before.




Draft 04. Minor editing on Oct. 1, 2013.
