Herbert Yardley: king of the whistleblowers
Herbert O. Yardley is America's archetypal spook whistleblower. He successfully modernized America's code-breaking power as an Army Signal Corps lieutenant during and shortly after World War I. But the powers that be decided to force him out of his well-paid post as a high-caliber code-cracker.
Herbert Yardley's NSA biography
http://www.nsa.gov/public_info/_files/cryptologic_spectrum/many_lives.pdf
As an NSA history says, Yardley, "with no civil service status or retirement benefits, found himself unemployed just as the stock market was collapsing and the Great Depression beginning. He left Queens and returned to his hometown of Worthington, Indiana, where he began writing what was to become the most famous book in the history of cryptology. There had never been anything like it. In today's terms, it was as if an NSA employee had publicly revealed the complete communications intelligence operations of the Agency for the past twelve years -- all its techniques and major successes, its organizational structure and budget -- and had, for good measure, included actual intercepts, decrypts, and translations of the communications not only of our adversaries but of our allies as well.
"The American Black Chamber created a sensation when it appeared on 1 June 1931, preceded by excerpts in the Saturday Evening Post, the leading magazine of its time. The State Department, in the best tradition of 'Mission: Impossible,' promptly disavowed any knowledge of Yardley's activities."
Government officials, though angry, decided to do nothing. According to some accounts, Yardley then went to work for the Japanese. The Canadians hired him briefly at the onset of World War II, but British intelligence insisted on his ouster.
Yardley went on to write a successful book on poker strategies.
Wednesday, August 28, 2013
Note on probability and periodicity
Draft 1
Please let me know of errors. My email address is conant78@gmail.com
By PAUL CONANT
We consider a binary string that, we assume, began specifically at the first observation.
If that string appears to follow a periodic pattern, a question often asked is whether the string was nonrandomly generated -- that is, whether the bit selections fail to be independent with probability 1/2 each.
One approach is the runs test, which compares the number of runs in a string of length n against the (approximately normal) distribution of run counts. This is a very useful test, but it fails for
00110011
which has the mean number of runs for n = 8 but which one might suspect is not as likely to be random as would be an aperiodic string.
So what we want to know is the number of periodic strings, which we calculate as 2(1 + 2C2 + 4C2). That subset's members are all distinct permutations, each of which we take to occur with equal probability, based on the provisional assumption that each bit is selected independently with probability 1/2.
So let us consider this test for nonrandomness on a string with length n, where n is a composite.
Let n = 8
and the string is
00110011
On length 8 we have the factors 1, 2, 4, 8. Now a string composed of all 0s or all 1s is certainly periodic. So we use the factor 1, along with factors 2 and 4. But we do not consider the factor 8 because a period of length 8 with no repetitions gives us no information about the probability of periodicity (there is no obvious periodic pattern).
So then the cardinality of the set of periods is:
2[1 + 2C2 + 4C2] = 16
which we divide by 2^8:
16/2^8 = 1/16.
So our reasoning is that the probability of happening upon a randomly generated periodic 8-bit string is 1/16, or 6.25%, in contrast to happening upon an 8-bit string of a specific permutation agreed upon in advance, which is 2^-8 = 1/256, or 0.39%. The probability of happening upon an aperiodic bit string is of course 15/16, or 93.75%. This all seems reasonable: p(specific permutation) < p(periodicity, when the bit length is composite) < p(aperiodicity on that same string length).
So, upon observing such a periodic string, we argue that the probability of nonrandom influence is 93.75%.
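As a check, one can enumerate all 2^8 strings by brute force and count those that consist of a repeated block whose length is a proper divisor of 8. Here is a minimal Python sketch (mine, not part of the original argument):

from itertools import product

n = 8
count = 0
for bits in product("01", repeat=n):
    s = "".join(bits)
    # s is periodic if a block of length 1, 2 or 4 tiles it exactly
    if any(s == s[:d] * (n // d) for d in (1, 2, 4)):
        count += 1
print(count, count / 2**n)  # prints: 16 0.0625, i.e., 1/16

The count of 16 agrees with the figure above.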
The general formula for strings with remainder 0 (i.e., composite n) is
[1 + (A_1)C2 + ... + (A_m)C2]/2^(n-1)
where A_1, ..., A_m are the aliquot factors of n greater than 1.
Let's check n = 9.
[1 + 3C2]/2^8 = 1/64
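The general formula is easy to evaluate mechanically. Below is a hypothetical Python helper (the function name is mine); aliquots holds the aliquot factors of n greater than 1:

from math import comb

def period_probability(n, aliquots):
    # [1 + sum of C(A, 2) over aliquot factors A > 1] / 2^(n-1)
    return (1 + sum(comb(a, 2) for a in aliquots)) / 2 ** (n - 1)

print(period_probability(8, [2, 4]))      # 0.0625   = 1/16
print(period_probability(9, [3]))         # 0.015625 = 1/64
print(period_probability(12, [2, 3, 4]))  # 0.00537..., the 12-bit case treated below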
A caveat: sometimes one permutation corresponds to more than one period. It will be found that this occurs only when the number of bits equals p^m, where p is some prime and m is a positive integer. We check the case of 8 bits. Here we find that
00000000, 01010101, 00110011
and their mirror images are the only strings that have one period superposed on another. That means we might wish to subtract 3 from our set of periodic strings, giving [2(1 + 2C2 + 4C2) - 3]/2^8 = 13/2^8, or about 0.051. However, as n increases we will be able to neglect this adjustment.
We have been discussing exact periodicity. Often, however, we are confronted by partial periodicity, such as this:
00100100100
So what we want to know is the probability that this is part of string 001001001001
which we obtain by (1/64)(1/2) = 1/128 = 0.0078;
similarly for 0010010010
where we calculate (1/64)(1/4) = 1/256 = 0.0039. This represents the probability that the string is part of a periodic string of length 12.
This probability is distinct from the probability of happening upon a periodic 12-bit string, which is:
(1 + 2C2 + 3C2 + 4C2)/2^11 = 11/2^11 = 0.00537.
Important points:
1. The periodicity probabilities change in accordance with the primes, which are not distributed smoothly.
2. As bit length n tends to infinity, the numerator 2(1 + the sum of combinations of aliquot factors) tends to 0 relative to the denominator 2^n. This means that, with n sufficiently large, the difference between the probability of periodicity and the probability of a specific permutation is small enough that the two may be viewed as identical.
Point 2 permits us to look at a specific string of bit length n >> 5, see that it is periodic or "near" periodic, and assign it a probability of about 2^-n. This is important because we are able to discern the probability of dependence by use of a number that is traditionally used only to predict a specific bit string.
A nicety here is that the ratio of primes to composites diminishes as bit length goes to infinity. For a prime bit length, there are (trivial cases aside) only aperiodic strings. That is, we have pC2/2^n. In the case of 11 bits, we have 55/2048 = 0.0269. So as n increases, the probability that a randomly selected string length admits periodicity goes up. This consideration does not affect the basic idea we have given.
(The formula of periodicity -- with no repetitions of the period -- for a prime is simply 1/2^(p-1).)
Of course periodicity isn't the only sort of pattern. One can use various algorithms -- say all 0's except at the (n^2)th bit -- to make patterns.
A simple pattern is mirror imaging, in which the string on either side of a midpoint or mid-space is a mirror of the other; that is, the bit order is reversed.
To wit:
001001.100100
How many mirror pairs are there? Answer: 2^6. So the probability of happening upon a mirror pair is 2^6/2^12 = 1/64 = 0.015625.
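A brute-force count over all 12-bit strings (a sketch of mine, taking "mirror" to mean that the right half is the left half in reverse order, as in the example above) confirms the figure:

from itertools import product

n, half = 12, 6
mirrors = sum(
    1 for bits in product("01", repeat=n)
    if bits[:half] == bits[half:][::-1]  # right half mirrors the left
)
print(mirrors, mirrors / 2**n)  # prints: 64 0.015625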
Thursday, November 3, 2011
The knowledge delusion
Reflections on The God Delusion (Houghton Mifflin 2006) by the evolutionary biologist Richard Dawkins.
Preliminary remarks:
Our discussion focuses on the first four chapters of Dawkins' book, wherein he makes his case for the remoteness of the probability that a monolithic creator and controller god exists.
Alas, it is already November 2011, some five years after publication of Delusion. Such a lag is typical of me, as I prefer to discuss ideas at my leisure. This lag isn't quite as outrageous as the timing of my paper on Dawkins' The Blind Watchmaker, which I posted about a quarter century after the book first appeared.
I find that I have been quite hard on Dawkins, or, actually, on his reasoning. Even so, I have nothing but high regard for him as a fellow sojourner on spaceship Earth. Doubtless I have been unfair in not highlighting positive passages in Delusion, of which there are some (1). Despite my desire for objectivity, it is clear that much of the disagreement is rooted in my personal beliefs (see the link Zion below).
[Apologies for the helter-skelter end note system. However, there should be little real difficulty.]
Summary:
Dawkins applies probabilistic reasoning to etiological foundations, without defining probability or randomness. He disdains Bayesian subjectivism without realizing that that must be the ground on which he is standing. In fact, nearly everything he writes on probability indicates a severe lack of rigor. This lack of rigor compromises his other points.
Relevant links listed at bottom of page.
By PAUL CONANT
Richard Dawkins argues that he is no proponent of simplistic "scientism" and yet there is no sign in Delusion's first four chapters that in fact he isn't a victim of what might be termed the "scientism delusion." But, as Dawkins does not define scientism, he has plenty of wiggle room.
From what I can gather, those under the spell of "scientism" hold the, often unstated, assumption that the universe and its components can be understood as an engineering problem, or set of engineering problems. Perhaps there is much left to learn, goes the thinking, but it's all a matter of filling in the engineering details. (http://en.wikipedia.org/wiki/Scientism).
Though the notion of a Laplacian cosmos that requires no god to, every now and then, act to keep things stable is officially passe, nevertheless many scientists seem to be under the impression that the model basically holds, though needing a bit of tweaking to account for the effects of relativity and of quantum fluctuations.
Doubtless Dawkins is correct in his assertion that many American scientists and professionals are closet atheists, with quite a few espousing the "religion" of Einstein, who appreciated the elegance of the phenomenal universe but had no belief in a personal god (2).
Interestingly, Einstein had a severe difficulty with physical, phenomenal reality, objecting strenuously to the "probabilistic" requirement of quantum physics, famously asserting that "god" (i.e., the cosmos) "does not play dice." He agreed with Erwin Schroedinger that Schroedinger's imagined cat strongly implies the absurdity of "acausal" quantum behavior (3). It turns out that Einstein was wrong, with statistical experiments in the 1980s demonstrating that "acausality" -- within constraints -- is fundamental to quantum actions.
Many physicists have decided to avoid the quantum interpretation minefield, discretion being the better part of valor. Even so, Einstein was correct in his refusal to play down this problem, recognizing that modern science can't easily dispense with classical causality. We speak of energy in terms of vector sums of energy transfers (notice the circularity), but no one has a good handle on what the "it" is behind that abstraction.
A partly subjective reality at a fundamental level is anathema to someone like Einstein -- so disagreeable, in fact, that one can ponder whether the great scientist deep down suspected that such a possibility threatened his reasoning in denying a need for a personal god. Be that as it may, one can understand that a biologist might not be familiar with how nettlesome the quantum interpretation problem really is, but Dawkins has gone beyond his professional remit and taken on the roles of philosopher and etiologist. True, he rejects the label of philosopher, but his basic argument has been borrowed from the atheist philosopher Bertrand Russell.
Dawkins recapitulates Russell thus: "The designer hypothesis immediately raises the question of who designed the designer."
Further: "A designer God cannot be used to explain organized complexity because a God capable of designing anything would have to be complex enough to demand the same kind of explanation... God presents an infinite regress from which we cannot escape."
Dawkins' a priori assumption is that "anything of sufficient complexity to design anything, comes into existence only as the end product of an extended process of gradual evolution."
If there is a great designer, "the designer himself must be the end product of some kind of cumulative escalator or crane, perhaps a version of Darwinism in its own universe."
Dawkins has no truck with the idea that an omnipotent, omniscient (and seemingly paradoxical) god might not be explicable in engineering terms. Even if such a being can't be so described, why is he/she needed? Occam's razor and all that.
Dawkins does not bother with the results of Kurt Goedel and their implications for Hilbert's sixth problem: whether the laws of physics can ever be -- from a human standpoint -- both complete and consistent. Dawkins of course is rather typical of those scientists who pay little heed to those results or who have tried to minimize their importance in physics. A striking exception is the mathematical physicist Roger Penrose, who saw that Goedel's result was profoundly important (though mathematicians have questioned Penrose's interpretation).
A way to intuitively think of Goedel's conundrum is via the Gestalt effect: the whole is greater than the sum of its parts. But few of the profound issues of phenomenology make their way into Dawkins' thesis. Had the biologist reflected more on Penrose's The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics (Oxford 1989), perhaps he would not have plunged in where Penrose so carefully trod.
Penrose has referred to himself, according to a Wikipedia article, as an atheist. In the film A Brief History of Time, the physicist said, "I think I would say that the universe has a purpose, it's not somehow just there by chance ... some people, I think, take the view that the universe is just there and it runs along -- it's a bit like it just sort of computes, and we happen somehow by accident to find ourselves in this thing. But I don't think that's a very fruitful or helpful way of looking at the universe, I think that there is something much deeper about it."
By contrast, we get no such ambiguity or subtlety from Dawkins. Yet, if one deploys one's prestige as a scientist to discuss the underpinnings of reality, more than superficialities are required. The unstated, a priori assumption is, essentially, a Laplacian billiard ball universe and that's it, Jack.
Dawkins embellishes the Russellian rejoinder with the language of probability: What is the probability of a superbeing, capable of listening to millions of prayers simultaneously, existing? This follows his scorning of Stephen D. Unwin's The Probability of God (Crown Forum 2003), which cites Bayesian methods to obtain a high probability of god's existence.
http://www.stephenunwin.com/
Dawkins is uninterested in Unwin's subjective prior probabilities, all the while being utterly unaware that his own probability assessment is altogether subjective. Heedless of the philosophical underpinnings of probability theory, he doesn't realize that by assigning a probability of "remote" at the extremes of etiology, he is engaging in a subtle form of circular reasoning.
The reader deserves more than an easy putdown of Unwin in any discussion of probabilities. Dawkins doesn't acknowledge that Bayesian statistics is a thriving school of research that seeks to find ways to as much as possible "objectify" the subjective assessments of knowledgeable persons. There has been strong controversy concerning Bayesian versus classical statistics, and there is a reason for that controversy: it gets at foundational matters of etiology. Nothing on this from Dawkins.
Without a Bayesian approach, Dawkins is left with a frequency interpretation of probability (law of large numbers and so forth). But we have very little -- in fact Dawkins would say zero -- information about the existence or non-existence of a sequence of all powerful gods or pre-cosmoses. Hence, there are no frequencies to analyze. Hence, use of a probability argument is in vain.
Dawkins elsewhere says (4) that he has read the great statistician Ronald Fisher, but one wonders whether he appreciates the meaning of statistical analysis. Fisher, who also opposed the use of Bayesian premises, is no solace when it comes to frequency-based probabilities. Take Fisher's combined probability test, a technique for data fusion or "meta-analysis" (analysis of analyses): What are the several different tests of probability that might be combined to assess the probability of god?
Dawkins is quick to brush off William A. Dembski, the intelligent design advocate who uses statistical methods to argue that the probability is cosmically remote that life originated in a random manner. And yet Dawkins himself seems to have little or no grasp of the basis of probabilities.
In fact, Dawkins makes no attempt to define randomness, a definition routinely brushed off in elementary statistics texts but which represents quite a lapse when getting at etiological foundations (5) and using probability as a conceptual, if not mathematical, tool.
But, to reiterate, the issue goes yet deeper. If, at the extremes, causation is not nearly so clear-cut as one might naively imagine, then at those extremes probabilistic estimates may well be inappropriate.
Curiously, Russell discovered Russell's paradox, which was ousted from set theory by fiat (axiom). Then along came Goedel who proved that axiomatic set theory (a successor to the theory of types propounded by Russell and Alfred North Whitehead in their Principia Mathematica) could not be both complete and consistent. That is, Goedel jammed Russell's paradox right down the old master's throat, and it hurt. It hurt because Goedel's result makes a mockery of the fond Russellian illusion of the universe as giant computerized robot. How does a robot plan for and build itself? Algorithmically, it is impossible. Dawkins handles this conundrum, it seems, by confounding the "great explanatory power" of natural selection -- wherein lifeform robots are controlled by robotic DNA (selfish genes) -- with the origin of the cosmos.
But the biologist, so focused on this foundational issue of etiology, manages to avert his eyes from the Goedelian "frame problem." And yet even atheistic physicists sense that the cosmos isn't simplistically causal when they describe the overarching reality as a "spacetime block." In other words, we humans are faced with some higher or other reality -- a transcendent "force" -- in which we operate and which, using standard mathematical logic, is not fully describable. This point is important. Technically, perhaps, we might add an axiom so that we can "describe" this transcendent (topological?) entity, but that just pushes the problem back and we would then need another axiom to get at the next higher entity.
Otherwise, Dawkins' idea that this higher dimensional "force" or entity should be constructed faces the Goedelian problem that such construction would evidently imply a Turing algorithm, which, if we want completeness and consistency, requires an infinite regress of axioms. That is, Dawkins' argument doesn't work because of the limits on knowledge discovered by Goedel and Alan Turing. This entity is perforce beyond human ken.
One may say that it can hardly be expected that a biologist would be familiar with such arcana of logic and philosophy. But then said biologist should beware superficial approaches to foundational matters (6).
At this juncture, you may be thinking: "Well, that's all very well, but that doesn't prove the existence of god." But here is the issue: One may say that this higher reality or "power" or entity is dead something (if it's energy, it's some kind of unknown ultra-energy) or is a superbeing, a god of some sort. Because this transcendent entity is inherently unknowable in rationalistic terms, the best someone in Dawkins' shoes might say is that there is a 50/50 chance that the entity is intelligent. I hasten to add that probabilistic arguments as to the existence of god are not very convincing (7).
Please see Appendix on a priori probability for further discussion of the issue.
A probability estimate's job is to mask out variables on the assumption that with enough trials these unknowns tend to cancel out. Implicitly, then, one is assuming that a god has decided not to influence the outcome (8). At one time, in fact, men drew lots in order to let god decide an outcome. (One of the reasons that some see gambling as sinful is because it dishonors god and enthrones Lady Randomness.)
Curiously, Dawkins pans the "argument from incredulity" proffered by some anti-Darwinians but his clearly-its-absurdly-improbable case against a higher intelligence is in fact an argument from incredulity, being based on his subjective expert estimate.
Dawkins' underlying assumption is that mechanistic hypotheses of causality are valid at the extremes, an assumption common to modern naive rationalism.
Another important oversight concerns the biologist's Dawkins-centrism. "Your reality, if too different from mine, is quite likely to be delusional. My reality is obviously logically correct, as anyone can plainly see." This attitude is quite interesting in that he very effectively gives some important information about how the brain constructs reality and how easily people might suffer from delusions, such as being convinced that they are in regular communication with god.
True, Dawkins jokingly mentions one thinker who posits a Matrix-style virtual reality for humanity and notes that he can see no way to disprove such a scenario. But plainly Dawkins rejects the possibility that his perception and belief system, with its particular limits, might be delusional.
In Dawkins' defense, we must concede that the full ramifications of quantum puzzlements have yet to sink into the scientific establishment, which -- aside from a distaste for learning that, like Wile E. Coyote, they are standing on thin air -- has a legitimate fear of being overrun by New Agers, occultists and flying saucer buffs. Yet, by skirting this matter, Dawkins does not address the greatest etiological conundrum of the 20th century which, one would think, might well have major implications in the existence-of-god controversy.
Dawkins is also rather cavalier about probabilities concerning the origin of life, attacking the late Fred Hoyle's "jumbo jet" analogy without coming to grips with what was bothering Hoyle and without even mentioning that scientists of the caliber of Francis Crick and Joshua Lederberg were troubled by origin-of-life probabilities long before Michael J. Behe and Dembski touted the intelligent design hypothesis.
Astrophysicist Hoyle, whose steady state theory of the universe was eventually trumped by George Gamow's big bang theory, said on several occasions that the probability of life assembling itself from some primordial ooze was equivalent to the probability that a tornado churning through a junkyard would leave a fully functioning Boeing 747 in its wake. Hoyle's atheism was shaken by this and other improbabilities, spurring him toward various panspermia (terrestrial life began elsewhere) conjectures. In the scenarios outlined by Hoyle and Chandra Wickramasinghe, microbial life or proto-life wafted down through the atmosphere from outer space, perhaps coming from "organic" interstellar dust or from comets.
One scenario had viruses every now and again floating down from space and, besides setting off the occasional pandemic, enriching the genetic structure of life on earth in such a way as to account for increasing complexity. Hoyle was not specifically arguing against natural selection, but was concerned about what he saw as statistical troubles with the process. (He wasn't the only one worried about that; there is a long tradition of scientists trying to come up with ways to make mutation theory properly synthesize with Darwinism.)
Dawkins laughs off Hoyle's puzzlement about mutational probabilities without any discussion of the reasons for Hoyle's skepticism or the proposed solutions.
There are various ideas about why natural selection is robust enough to, thus far, prevent life from petering out (9). In my essay Do dice play God? (link above), I touch on some of the difficulties and propose a neo-Lamarckian mechanism as part of a possible solution, and at some point I hope to write more about the principles that drive natural selection. At any rate, I realize that Dawkins may have felt that he had dealt with this subject elsewhere, but his four-chapter thesis omits too much. A longer, more thoughtful book -- after the fashion of Penrose's The Emperor's New Mind -- is, I would say, called for when heading into such deep waters.
Hoyle's qualms, of course, were quite unwelcome in some quarters and may have resulted in the Nobel prize committee bypassing him. And yet, though the space virus idea isn't held in much esteem, panspermia is no longer considered a disrespectable notion, especially as more and more extrasolar planets are identified. Hoyle's use of panspermia conjectures was meant to account for the probability issues he saw associated with the origin and continuation of life. (Just because life originates does not imply that it is resilient enough not to peter out after X generations.)
Hoyle, in his own way, was deploying panspermia hypotheses in order to deal with a form of the anthropic principle. If life originated as a prebiotic substance found across wide swaths of space, probabilities might become reasonable. It was the Nobelist Joshua Lederberg who made the acute observation that interstellar dust particles were about the size of organic molecules. Though this correlation has not panned out, that doesn't make Hoyle a nitwit for following up.
In fact, Lederberg was converted to the panspermia hypothesis by yet another atheist (and Marxist), J.B.S. Haldane, a statistician who was one of the chief architects of the "modern synthesis" merging Mendelism with Darwinism.
No word on any of this from Dawkins, who dispatches Hoyle with a parting shot that Hoyle (one can hear the implied chortle) believed that archaeopteryx was a forgery, after the manner of Piltdown man. The biologist declines to tell his readers about the background of that controversy and the fact that Hoyle and a group of noted scientists reached this conclusion after careful examination of the fossil evidence. Whether or not Hoyle and his colleagues were correct, the fact remains that he undertook a serious scientific investigation of the matter (9.0).
http://www.chebucto.ns.ca/Environment/NHR/archaeopteryx.html
Another committed atheist, Francis Crick, co-discoverer of the double-helix structure of DNA, was even wilder than Hoyle in proposing a panspermia idea in order to account for probability issues. He suggested in a 1970s paper and in his book Life Itself: Its Origin and Nature (Simon & Schuster 1981) that an alien civilization had sent microbial life via rocketship to Earth in its long-ago past, perhaps as part of a program of seeding the galaxy. Why did the physicist-turned-biologist propose such a scenario? Because the amino acids found in all lifeforms are left-handed; somehow none of the mirror-image right-handed compounds survived, if they were ever incorporated at all. That discovery seemed staggeringly unlikely to Crick (9.1).
I don't bring this up to argue with Crick, but to underscore that Dawkins plays Quick-Draw McGraw with serious people without discussing the context. I.e., his book comes across as propagandistic, rather than fair-minded. It might be contrasted with John Allen Paulos' book Irreligion (see Do dice play god? above), which tries to play fair and which doesn't make duffer logico-mathematical blunders (10).
Though Crick and Hoyle were outliers in modern panspermia conjecturing, the concept is respectable enough for NASA to take seriously.
The cheap shot method can be seen in how Dawkins deals with Carl Jung's claim of an inner knowledge of god's existence. Jung's assertion is derided with a snappy one-liner that Jung also believed that objects on his bookshelf could explode spontaneously. That takes care of Jung! -- irrespective of the many brilliant insights contained in his writings, however controversial. (Disclaimer: I am neither a Jungian nor a New Ager).
Granted that Jung was talking about what he took to be a paranormal event and granted that Jung is an easy target for statistically minded mechanists and granted that Jung seems to have made his share of missteps, we make three points:
1. There was always the possibility that the exploding object occurred as a result of some anomalous, but natural event.
2. A parade of distinguished British scientists have expressed strong interest in paranormal matters, among them officers of paranormal study societies. Brian Josephson, who received a Nobel prize for the quantum physics behind the Josephson junction, speaks up for the reality of mental telepathy (for which he has been ostracized by the "billiard ball" school of scientists).
3. If Dawkins is trying to debunk the supernatural using logical analysis, then it is not legitimate to use belief in the supernatural to discredit a claim favoring the supernatural.
Getting back to Dawkins' use of probabilities, the biologist contends with the origin-of-life issue by invoking the anthropic principle and the principle of mediocrity, along with a verbal variant of Drake's equation http://en.wikipedia.org/wiki/Drake_equation
The mediocrity principle says that astronomical evidence shows that we live on a random speck of dust on a random dustball blowing around in a (random?) mega dust storm.
The anthropic principle says that, if there is nothing special about Earth, isn't it interesting how Earth travels about the sun in a "Goldilocks zone" ideally suited for carbon based life and how the planetary dynamics, such as tectonic shift, seem to be just what is needed for life to thrive (as discussed in the book Rare Earth: Why Complex Life is Uncommon in the Universe by Peter D. Ward and Donald Brownlee (Springer Verlag 2000))? Even further, isn't it amazing that the seemingly arbitrary constants of nature are so exactly calibrated as to permit life to exist, as a slight difference in the index of those constants known as the fine structure constant would forbid galaxies from ever forming? This all seems outrageously fortuitous.
Let us examine each of Dawkins' arguments.
Suppose, he says, that the probability of life originating on Earth is a billion to one or even a billion billion to one (10^-9 and 10^-18). If there are that many Earth-like planets in the cosmos, the probability is virtually one that life will arise spontaneously. We just happen to be the lucky winner of the cosmic lottery, which is perfectly logical thus far.
Crick, as far as I know, is the only scientist to point out that we can only include the older sectors of the cosmos, in which heavy metals have had time to coalesce from the gases left over from supernovae -- i.e., second generation stars and planets (by the way, Hoyle was the originator of this solution to the heavy metals problem). Yet still, we may concede that there may be enough para-Earths to answer the probabilities posed by Dawkins.
Though careful to say that he is no expert on the origin of life, Dawkins' probabilities, even if given for the sake of argument, are simply Bayesian "expert estimates." But, it is quite conceivable that those probabilities are far too high (though I candidly concede it is very difficult to assign any probability or probability distribution to this matter).
Consider that unicellular life, with the genes on the DNA (or RNA) acting as the "brain," exploits proteins as the cellular workhorses in a great many ways. We know that sometimes several different proteins can fill the same job, but that caveat doesn't much help what could be a mind-boggling probability issue.
Suppose that, in some primordial ooze or on some undersea volcanic slope, a prebiotic form has fallen together chemically and, in order to cross the threshold to lifeform, requires one more protein to activate. A protein is the molecule that takes on a specific shape, carrying specific electrochemical properties, after amino acids fold up. Protein molecules fit into each other and other constituents of life like lock and key (though on occasion more than one key fits the same lock).
The amino acids used by terrestrial life can, it turns out, be shuffled in many different ways to yield many different proteins. How many ways? About 10^60, which dwarfs the number of stars in the observable universe (roughly 10^24) by some 36 orders of magnitude! And the probability of such a spark-of-life event might be in that ball park. If one considers the predecessor protein link-ups as independent events and multiplies those probabilities, we would come up with numbers even more absurd.
But, Dawkins has a way out, though he loses the thread here. His way out is that a number of physicists have posited, for various reasons, some immense -- even infinite -- number of "parallel" universes, which have no or very weak contact with this one and are hence undetectable. This could handily account for our universe having the Goldilocks fine structure constant and, though he doesn't specify this, might well provide enough suns in those universes that have galaxies to account for even immensely improbable events.
I say Dawkins loses the thread because he scoffs at religious people who see the anthropic probabilities as favoring their position concerning god's existence without, he says, realizing that the anthropic principle is meant to remove god from the picture. What Dawkins himself doesn't realize is that he mixes apples and oranges here. The anthropic issue raises a disturbing question, which some religious people see as in their favor. Some scientists then seize on the possibility of a "multiverse" to cope with that issue.
But now what about Occam's razor? Well, says Dawkins, that principle doesn't quite work here. To paraphrase Conan Doyle's Sherlock Holmes, once one removes all reasonable explanations, the remaining explanation, no matter how absurd it sounds, must be correct.
And yet what is Dawkins' basis for the proposition that a host of undetectable universes is more probable than some intelligent higher power? There's the rub. He is, no doubt unwittingly, making an a priori assumption that any "natural" explanation is more reasonable than a supernatural "explanation." Probabilities really have nothing to do with his assumption.
But perhaps we have labored in vain over the "multiverse" argument, for at one point we are told that a "God capable of calculating the Goldilocks values" of nature's constants would have to be "at least as improbable" as the finely tuned constants of nature, "and that's very improbable indeed." So at bottom, all we have is a Bayesian expert prior estimate.
Well, say you, perhaps a Wolfram-style algorithmic complexity argument can save the day. Such an argument might be applicable to biological natural selection, granted. But what selected natural selection? A general Turing machine can compute anything computable, including numerous "highly complex" outputs programmed by easy-to-write inputs. But what probability does one assign to a general Turing machine spontaneously arising, say, in some electronic computer network? Wolfram found that "interesting" cellular automata were rare. Even rarer would be a complex cellular automaton that accidentally emerged from random inputs.
I don't say that such a scenario is impossible; rather, to assume that it just must be so is little more than hand-waving.
In fact, we must be very cautious about how we use probabilities concerning emergence of high-information systems. Here is why: A sufficiently rich mix of chemical compounds may well form a negative feedback dynamical system. It would then be tempting to apply a normal probability distribution to such a system, and that distribution very well may yield reasonable results for a while. BUT, if the dynamical system is non-linear -- which most are -- the system could reach a threshold, akin to a chaos point, at which it crosses over into a positive feedback system or into a substantially different negative feedback system.
The closer the system draws to that tipping point, the less the normal distribution applies. In the chaos zone, normal probabilities are generally worthless. Hence to say that thus and such an outcome is highly improbable based on the previous state of the system is to misunderstand how non-linearities can work. This point, it should be conceded, might be a bit too abstruse for Dawkins' readers.
Dawkins tackles the problem of the outrageously high information values associated with complex life forms by conceding that a species, disconnected from information about causality, has only a remote probability of occurrence by random chance. But, he counters, there is in fact a non-random process at work: natural selection.
I suppose he would regard it a quibble if one were to mention that mutations occur randomly, and perhaps so it is. However, it is not quibbling to question how the powerful process of natural selection first appeared on the scene. In other words, the information values associated with the simplest known form (least number of genes) of microbial life is many orders of magnitude greater than the information values associated with background chemicals -- which was Hoyle's point in making the jumbo jet analogy.
And then there is the probability of life thriving. Just because it emerges, there is no guarantee that it would be robust enough not to peter out in a few generations (9).
Dawkins dispenses with proponents of intelligent design, such as biologist Michael J. Behe, author of Darwin's Black Box: The Biochemical Challenge to Evolution (The Free Press 1996), by resort to the conjecture that a system may exist after its "scaffolding" has vanished. This conjecture is fair, but, at this point, the nature of the scaffolding, if any, is unknown. Dawkins can't give a hint of the scaffolding's constituents because, thus far, no widely accepted hypothesis has emerged. Natural selection is a consequence of an acutely complex mechanism. The "scaffolding" is indeed a "black box" (it's there, we are told, but no one can say what's inside).
Though it cannot be said that intelligent design advocate Behe has proved "irreducible complexity," the fact is that the magnitude of organic complexity has even prompted atheist scientists to look far afield for plausible explanations.
Biologists, Dawkins writes, have had their consciousnesses raised by natural selection's "power to tame improbability," and yet that power has very little to do with the issues of the origins of life or of the universe and hence does not bolster his case against god. I suppose that if one waxes mystical about natural selection -- making it a mysterious, ultra-abstract principle -- then perhaps Dawkins makes sense. Otherwise, he's amazingly naive.
Note
It must be acknowledged that in microbiological matters, probabilities need not always follow a routine independence multiplication rule. In cases where random matching is important, we have the number 0.63 turning up quite often.
For example, if one has n addressed envelopes and n identically addressed letters are randomly shuffled and then put in the envelopes, what is the probability that at least one letter arrives at the correct destination? The surprising answer is the sum 1 - 1/2! + 1/3! - ... ± 1/n!. For n greater than 10 the probability converges near 63%.
That is, we don't calculate, say, 11^-11 (about 3.5x10^-12), or some routine binomial combinatorial multiple; rather, our series approximates very closely 1 - e^-1 = 0.632.
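For modest n the 63 percent figure can be verified by exhaustive enumeration; a quick Python sketch (mine), using n = 8:

from itertools import permutations
from math import e, factorial

n = 8
hits = sum(
    1 for perm in permutations(range(n))
    if any(perm[i] == i for i in range(n))  # some letter lands in its own envelope
)
print(hits / factorial(n))  # 0.63212...
print(1 - 1 / e)            # 0.63212..., the limiting value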
Similarly, suppose one has eight distinct pairs of socks randomly strewn in a drawer and thoughtlessly pulls out six one by one. What is the probability of at least one matching pair?
The first sock has no match. The probability the second will fail to match the first is 14/15. The probability for the third failing to match is 12/14 and so on until the sixth sock. Multiplying all these probabilities to get the probability of no match at all yields 32/143. Hence the probability of at least one match is 1 - 32/143 or about 78%.
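A Monte Carlo check of the sock example (a sketch, under the stated assumptions: eight distinct pairs, six socks pulled blindly without replacement):

import random

socks = list(range(8)) * 2  # 16 socks, two of each pair label
trials = 100_000
matched = 0
for _ in range(trials):
    draw = random.sample(socks, 6)  # pull six socks at random
    if len(set(draw)) < 6:          # a repeated label means a matched pair
        matched += 1
print(matched / trials)  # about 0.78, i.e., 1 - 32/143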
These are minor points, perhaps, but they should be acknowledged when considering probabilities in an evolutionary context.
Appendix on a priori probability
Let us digress a bit concerning the controversy over Bayesian inference (7a,7b), which is essentially about how one deploys an a priori probability.
If confronted with an urn about which we know only that it contains some black balls and some white ones and, for some reason, we are compelled to wager whether an initial draw yields a black ball, we might agree that our optimal strategy is to assign a probability of success of 1/2. In fact, we might well agree that -- barring resort to intuition or appeal to a higher power -- this is our only strategy. Of course, we might include the cost aspect in our calculation. A classic example is Pascal's wager on the nonexistence of god: suppose, given a probability of, say, 1/2, one is wrong?
Now suppose we observe, say, 30 draws, with replacement, which we break down into three trials of 10 draws each. In each trial, about 2/3 of the draws are black. Three trials isn't many, but is perhaps enough to convince us that the population proportion of black balls is close to 2/3. We have used frequency analysis to estimate that the independent probability of choosing a black ball is close to 2/3. That is, we have used experience to revise our probability estimate, using "frequentist" reasoning. What is the difference between three trials end-to-end and one trial? This question is central to the Bayesian controversy. Is there a difference in three simultaneous trials of 10 draws each and three run consecutively? These are slippery philosophical points that won't detain us here.
But we need be clear on what the goal is. Are we using an a priori initial probability that influences subsequent probabilities? Or, are we trying to detect bias (including neutral bias of 1/2) based on accumulated evidence?
For example, suppose we skip the direct proportions approach just cited and use, for the case of replacement, the Bayesian conditional probability formula, assigning an a priori probability of b to event B of a black ball withdrawal. That is, p(B | B) = p(B & B)/p(B), so that p(B & B) = p(B | B)p(B) = b^2. For five black balls in succession, we get b^5.
Yes, quite true that we have the case in which the Bayesian formula collapses to the simple multiplication rule for independent events. But our point is that if we apply the Bayesian formula differently to essentially the same scenario, we get a different result, as the following example shows.
Suppose the urn has a finite number N of black and white balls in unknown proportion and suppose n black balls are drawn consecutively from the urn. What is the probability the next ball will be black? According to the Bayesian formula -- applied differently than as above -- the probability is (n+1)/(n+2) (8.0).
Let N = the total number of balls drawn and to be drawn and n = those that have been drawn, with replacement. S_n is the run of consecutive draws observed as black. S_N is the total number of black draws possible, those done and those yet to be done. What is the probability that all draws will yield black given a run of S_n black? That is
what is p[S_N = N | S_n = n]?
But this
= p[S_N = N and S_n = n]/p[S_n = n]
or (1/(N+1))/(1/(n+1)) = (n+1)/(N+1). If N = n + 1, we obtain (n+1)/(n+2).
C.D. Broad, in his derivation for the finite case, according to S.L. Zabell (8.0), reasoned that all ratios j/n are equally likely and discovered that the result is not dependent on N, the population size, but only on the sample size n. Bayes' formula is applied as a recursive summation of factorials, eventually leading to (n+1)/(n+2).
This result was also derived for the infinite case by Laplace and is known as the rule of succession.
Laplace's formula, as given by Zabell (8.0), is
[∫_0^1 p^(r+1) (1-p)^(m-r) dp] / [∫_0^1 p^r (1-p)^(m-r) dp] = (r+1)/(m+2)
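The rule is easy to test by simulation under the uniform prior: draw a bias p uniformly from [0, 1], keep only those runs in which n draws all come up black, and see how often the next draw is also black. A rough Python sketch (mine):

import random

n = 5  # length of the observed all-black run
runs = successes = 0
for _ in range(500_000):
    p = random.random()  # uniform prior on the urn's black-ball proportion
    if all(random.random() < p for _ in range(n)):  # condition on n blacks
        runs += 1
        successes += random.random() < p  # is draw n+1 also black?
print(successes / runs)  # about (n+1)/(n+2) = 6/7 = 0.857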
Laplace's rule of succession contrasts with that of Thomas Bayes, as reported by his literary executor Richard Price. Bayes had considered the case where nothing is known concerning a potential event prior to any relevant trials. Bayes' idea is that all probabilities would then be equally likely.
Given this assumption and told that a black ball has been pulled from an urn n times in unfailing succession, it can be seen that
P[a < p < b] = (n+1) ∫_a^b p^n dp = b^(n+1) - a^(n+1)
In Zabell (8.0), this is known as Price's rule of succession. We see that this rule of succession might (it's a stretch) be of some value in estimating the probability that the sun will rise tomorrow but is worthless in estimating the probability of god's existence.
To recapitulate: If we know there are N black and white balls within and draw, with replacement, n black balls consecutively, there are N-n possible proportions. So one may say that, absent other information, the probability that any particular ratio is correct is 1/(N-n). That is, the distribution of the potential frequencies is uniform on grounds that each frequency is equiprobable.
So this is like asking what is the probability of the probability, a stylization some dislike. So in the finite and infinite cases, a uniform probability distribution seems to be assumed, an assumption that can be controversial -- though in the case of the urn equiprobability has a justification. I am not quite certain that there necessarily is so little information available that equiprobability is the best strategy, as I touch on in "Caution A" below.
Another point is that, once enough evidence from sampling the urn is at hand, we should decide -- using some Bayesian method perhaps -- to test various probability distributions to see how well each fits the data.
Caution A: Consider four draws in succession, all black. If we assume a probability of 1/2, the result is 0.5^4 = 0.0625, which is above the usual 5% level of significance. So are we correct in conjecturing a bias? For low numbers, the effects of random influences would seem to preclude hazarding a probability of much in excess of 1/2. For 0.5^5 = 0.03125, we might be correct to suspect bias. For the range n = 5 to n = 19, I suggest that the correct proportion is likely to be found between 1/2 and 3/4 and that we might use the mean of 0.625 [a note on that topic will go online soon, which will include discussion of an estimation for n >= 20 when we do not accept the notion that all ratios are equiprobable].
Caution B: Another issue is applying the rule of succession to a system in which perhaps too much is unknown. The challenge of Hume as to the probability of the sun rising tomorrow was answered by Laplace with a calculation based on the presumed number of days that the sun had already risen. The calculation generated much derision and did much to damage the Bayesian approach. (However, computer-enhanced Bayesian methods these days enjoy wide acceptance in certain disciplines.)
The issue that arises is the inherent stability of a particular system. An urn has one of a set of ratios of white to black balls. But, a nonlinear dynamic system is problematic for modeling by an urn. Probabilities apply well to uniform, which is to say, for practical purposes, periodic systems. However, quasi-periodic systems may well give a false sense of security, perhaps masking sudden jolts into atypical, possibly chaotic, behavior. Wasn't everyone marrying and giving in marriage and conducting life as usual when in 2004 a tsunami killed 230,000 people in 14 countries bordering the Indian Ocean? (Interestingly, however, Augustus De Morgan proposed a Bayesian-style formula for the probability of the sudden emergence of something utterly unknown, such as a new species (8a)).
That said, one can nevertheless imagine a group of experts, each of whom gives a probability estimate to some event, and taking the average (perhaps weighted via degree of expertise) and arriving at a fairly useful approximate probability. In fact, one can imagine an experiment in which such expert opinion is tested against a frequency model (the event would have to be subject to frequency analysis, of course).
We might go further and say that it is quite plausible that a person well informed about a particular topic might give a viable upper or lower bound probability for a particular set of events, though not knowledgeable about precise frequencies. For example, if I notice that the word "inexorable" has appeared at least once per volume in 16 of the last 20 books I have read, I can reason that, based on previous reading experience, the probability that that particular word would appear in a book is certainly less than 10%. Hence, I can say that the probability of randomness rather than tampering by some capricious entity is, using combinatorial methods, less than one in 5 billion. True, I do not have an exact input value. But my upper bound probability is good enough.
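The arithmetic behind that claim is a binomial tail. A sketch (mine), taking 10% as the assumed per-book upper bound from the example:

from math import comb

p, n, k = 0.10, 20, 16  # per-book probability bound; books read; books containing the word
tail = sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))
print(tail)  # roughly 3e-13, comfortably under one in 5 billion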
We consider the subjectivist vs. objectivist conceptions of probability as follows:
Probability Type I is about degree of belief or uncertainty.
Two pertinent questions about P1 are:
1. How much belief does a person have that an event will happen within some time interval?
2. How much belief does a person have that an event that has occurred did so under the conditions given?
Degree of belief may be given, for example, as an integer on a scale from 0 to 10, which, as it happens, can be pictured as a pie cut into 10 wedges, or as percentages given in tenths of 100. When a person is being fully subjective ("guesstimating," to use a convenient barbarism), one tends to focus on easily visualizable pie portions, such as tenths.
The fact that a subjective assessment can be numbered on a scale leads easily to ratios. That is, if one is "seven pie wedges" sure, it is easy enough to take the number 7 and set it against the complement of three pie wedges. We then may speak as if the odds are 3 to 7 that our belief is wrong.
Of course, such ratios aren't really any better than choosing a number between 0 and 10 for one's degree of belief. This is one reason why such subjective ratios are often criticized as of no import.
Probability Type II then purports to demonstrate an objective method of assigning numbers to one's degree of belief. The argument is that a thoughtful person will agree that what one doesn't know is often modelable as a mixture which contains an amount q and an amount p of something or other -- that is, the urn model. If one assumes that the mixture stays constant for a specified time, then one is entitled to use statistical methods to arrive at some number close to the true ratio. Such ratios are construed to mirror objective reality and so give a plausible reason for one's degree of belief, which can be acutely quantified, permitting tiny values.
P2 requires a classical, essentially mechanist view of phenomenal reality, an assumption that is open to challenge, though there seems little doubt that stochastic studies are good predictors for everyday affairs (though this assertion also is open to question).
1. We don't claim that none of his criticisms are worth anything. Plenty of religious people, Martin Luther included, would heartily agree with some of his complaints, which, however, are only tangentially relevant to his main argument. Anyone can agree that vast amounts of cruelty have occurred in the name of god. Yet it doesn't appear that Dawkins has squarely faced the fact of the genocidal rampages committed under the banner of godlessness (Mao, Pol Pot, Stalin).
What drives mass violence is of course an important question. As an evolutionary biologist, Dawkins would say that such behavior is a consequence of natural selection, a point underscored by the ingrained propensity of certain simian troops to war on members of the same species. No doubt Dawkins would concede that the bellicosity of those primates had nothing to do with beliefs in some god.
So it seems that Dawkins may be placing too much emphasis on beliefs in god as a source of violent strife, though we should grant that it seems perplexing as to why a god would permit such strife.
Still, it appears that the author of Climbing Mount Improbable (W.W. Norton 1996) has confounded correlation with causation.
2. Properly this footnote, like the previous one, does not affect Dawkins' case against god's existence, which is the reason for the placement of these remarks. In a serious lapse, Dawkins has it that "there is something to be said" for treating Buddhism and Confucianism not as religions but as ethical systems. In the case of Buddhism, it may be granted that Buddhism is atheistic in the sense of denying a personal, monolithic god. But, from the perspective of a materialist like Dawkins, Buddhism certainly purveys numerous supernaturalistic ideas, with followers espousing ethical beliefs rooted in a supernatural cosmic order -- which one would think qualifies Buddhism as a religion.
True, Dawkins' chief target is the all-powerful god of Judaism, Christianity and Islam (Zoroastrianism too), with little focus on pantheism, henotheism or supernatural atheism. Yet a scientist of his standing ought to be held to an exacting standard.
3. As well as conclusively proving that quantum effects can be scaled up to the "macro world."
4. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (W.W. Norton 1986).
5. The same might be said of Dembski.
6. A fine, but significant, point: Dawkins, along with many others, believes that Zeno's chief paradox has been resolved by the mathematics of bounded infinite series. However, quantum physics requires that potential energy be quantized. So height H above ground is measurable discontinuously in a finite number of lower heights. So a rock dropped from H to ground must first reach H', the next discrete height down. How does the rock in static state A at H reach static state B at H'? That question has no answer, other than to say something like "a quantum jump occurs." So Zeno makes a sly comeback.
This little point is significant because it gets down to the fundamentals of causality, something that Dawkins leaves unexamined.
7. After the triumphs of his famous theorems, Goedel stirred up more trouble by finding a solution to Einstein's general relativity field equations which, in Goedel's estimation, demonstrated that time (and hence naive causality) is an illusion. A rotating universe, he found, could contain closed time loops such that if a rocket traveled far enough into space it would eventually reach its own past, apparently looping through spacetime forever. Einstein dismissed his friend's solution as inconsistent with physical reality.
Before agreeing with Einstein that the solution is preposterous, consider the fact that many physicists believe that there is a huge number of "parallel," though undetectable, universes.
And we can leave the door ajar, ever so slightly, to Dawkins' thought of a higher power fashioning the universe being a result of an evolutionary process. Suppose that far in our future an advanced race builds a spaceship bearing a machine that resets the constants of nature as it travels, thus establishing the conditions for the upcoming big bang in our past such that galaxies, and we, are formed. Of course, we then are faced with the question: where did the information come from?
7a. An excellent discussion of this controversy is found in Interpreting Probability (Cambridge 2002) by David Howie.
7b. An entertaining popular discussion is found in The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy (Yale 2011) by Sharon Bertsch McGrayne.
8.0 C.D. Broad and others are cited with respect to this result in Symmetry and Its Discontents (Cambridge 2005) by S.L. Zabell.
8a. Zabell offers a proof of De Morgan's formula in Symmetry (above).
8. Unless one assumes another god who is exactly contrary to the first, or perhaps a group of gods whose influences tend to cancel.
9. Consider a child born with super-potent intelligence and strength. What are the probabilities that the traits continue?
A. If the child matures and mates successfully, the positive selection pressure from one generation to the next is faced with a countervailing tendency toward dilution. It could take many, many generations before that trait (gene set) becomes dominant, and in the meantime, especially in the earlier generations, extinction of the trait is a distinct possibility.
B. In social animals, very powerful individual advantages come linked to a very powerful disadvantage: the tendency of the group to reject as alien anything too different. Think of the recent tendency of white mobs to lynch physically superior black males. Or of the early 19th century practice of Australian tribesmen to kill mixed race offspring born to their women.
9.0 In another example of Dawkins' dismissive attitude toward fellow scientists, Dawkins writes:
"Paul Davies' The Mind of God seems to hover somewhere between Einsteinian pantheism and an obscure form of deism -- for which he was rewarded with the Templeton Prize (a very large sum of money given annually by the Templeton Foundation, usually to a scientist who is prepared to say something nice about religion)."
Dawkins goes on to upbraid scientists for taking Templeton money on grounds that they are in danger of introducing bias into their statements.
I have not read The Mind of God: The Scientific Basis for a Rational World (Simon & Schuster 1992), so I cannot comment on its content. On the other hand, it would appear that Dawkins has not read Davies' The Fifth Miracle: the search for the origins and meaning of life (Simon & Schuster 1999), or he might have been a bit more prudent.
Fifth Miracle is, as is usual with Davies, a highly informed tour de force. I have read several books by Davies, a physicist, and have never caught him in duffer errors of the type found in Dawkins' books.
By the way, Robert Shapiro (see footnote 9.1 below) didn't find Hoyle's panspermia work to be first rate, but I have the sense that that assessment may have something to do with the strong conservatism of chemists versus the tradition of informed speculation by astrophysicists. Some of Shapiro's complaints could also be lodged against string theorists.
By the way, the eminent biologist Lynn Margulis also denounced Hoyle's panspermia speculations, but, again, what may have been going on was science culture clash.
Some of the notions of Hoyle and his collaborator, N.C. Wickramasinghe, which seemed so outlandish in the eighties, have gained credibility with new discoveries concerning extremophiles and the potential of space-borne microorganisms.
9.1 This draft corrects a serious misstatement of Crick's point, which occurred because of my faulty memory.
In Origins: a skeptic's guide to the creation of life on earth (Summit/Simon & Schuster 1986), biochemist Robert Shapiro notes that the probability of such a circumstance is in the vicinity of 10^20 to 1.
Shapiro's book gives an excellent survey of origin of life thinking up to the early 1980s.
Shapiro also gives Dawkins a jab over Dawkins' off-the-cuff probability estimate of a billion to one against life emerging.
10. I have also made more than my share of those.
Relevant links:
In search of a blind watchmaker
http://www.angelfire.com/az3/nfold/watch.html
Do dice play God?
http://www.angelfire.com/az3/nfold/dice.html
Toward a signal model of perception
http://www.angelfire.com/ult/znewz1/qball.html
On Hilbert's sixth problem
http://kryptograff.blogspot.com/2007/06/on-hilberts-sixth-problem.html
The world of null-H
http://kryptograff.blogspot.com/2007/06/world-of-null-h.html
The universe cannot be modeled as a Turing machine
http://www.angelfire.com/az3/nfold/turing.html
Drunk and disorderly: the inexorable rise of entropy
http://www.angelfire.com/az3/nfold/entropy.html
Biological observer-participation and Wheeler's 'law without law' by Brian D. Josephson
http://arxiv.org/abs/1108.4860
The mathematics of changing your mind (on Bayesian methods) by John Allen Paulos
http://www.nytimes.com/2011/08/07/books/review/the-theory-that-would-not-die-by-sharon-bertsch-mcgrayne-book-review.html?_r=2&pagewanted=all
Where is Zion?
http://www.angelfire.com/az3/newzone/zion1.html
Other Conant pages
http://conantcensorshipissue.blogspot.com/2011/11/who-is-paul-conant-paul-conants-erdos.html
A Dawkins link
http://users.ox.ac.uk/~dawkins/
Draft 08 [Digression on a priori probability added]
Draft 09 [Correction of bad numbers plugged into a probability example in the digression]
Draft 10 [Digression amplified]
Draft 11 [Digression revised and again amplified]
Draft 12 [Digression example clarified]
Draft 13 [Correction in digression due to comment by Josh Mitteldorf]
Draft 14 [Digression amplified]
Draft 15 [Digression amplified and made into an appendix]