Monday, April 15, 2013
Note on Wolfram's 'principle of computational equivalence'
Stephen Wolfram has discussed his "principle of computational equivalence" extensively in his book A New Kind of Science and elsewhere. Herewith is this writer's understanding of the reasoning behind the PCE:
1. At least one of Wolfram's cellular automata is known to be Turing complete. That is, given the proper input string, such a system can emulate an arbitrary Turing machine. Hence, such a system emulates a universal Turing machine and is called "universal."
2. One very simple algorithm is Wolfram's CA Rule 110, which Matthew Cook has proved to be Turing complete. Wolfram also asserts that another simple cellular automaton algorithm has been shown to be universal or Turing complete.
3. In general, there is no means of checking to see whether an arbitrary algorithm is Turing complete. This follows from Turing's proof that there is no general way to see whether a Turing machine will halt.
4. Hence, it can be argued that very simple algorithms are quite likely to be Turing complete; but because there is no general way to determine this, the position taken isn't really even a conjecture. Only checking one particular case after another would give any indication of the probability that a simple algorithm is universal.
5. Wolfram's principle of computational equivalence appears to reduce to the intuition that the probability is reasonable -- thinking in terms of geochrons -- that simple algorithms yield high information outputs.
Herewith the writer's comments concerning this principle:
1. Universality of a system does not imply that high information outputs are common (recalling that a bona fide Turing computation halts after a finite number of steps). The normal distribution would seem to cover the situation here. One universal system is some algorithm (perhaps a Turing machine) which produces the function f(n) = n+1. We may regard this as universal in the sense that it prints out every Turing machine description number, which could then, notionally, be executed as a subroutine. Nevertheless, as n approaches infinity, the probability of happening on a description number goes to 0. It may be possible to get better efficiency, but even if one does so, many description numbers are for machines that get stuck or produce low-information outputs.
2. The notion that two systems in nature might both be universal, or "computationally equivalent," must be balanced against the point that no natural system can be in fact universal, being limited by energy resources and the entropy of the systems. So it is conceptually possible to have two identical systems, one of which has computation power A, based on energy resource x, and the other of which has computation power B, based on energy resource y. Just think of two clone mainframes, one of which must make do with half the electrical power of the other. The point here is that "computational equivalence" may turn out not to be terribly meaningful in nature. The probability of a high information output may be mildly improved if high computation power is fairly common in nature, but it is not easy to see that such outputs would be rather common.
A mathematician friend commented:
I'd only add that we have very reasonable ideas about "most numbers," but these intuitions depend crucially on ordering of an infinite set. For example, if I say, "Most integers are not divisible by 100", you would probably agree that is a reasonable statement. But in fact it's meaningless. For every number you show me that's not divisible by 100, I'll show you 10 numbers that are divisible by 100. I can write an algorithm for a random number generator that yields a lot more numbers that are divisible by 100 than otherwise. "But," you protest, "not every integer output is equally likely under your random number generator." And I'd have to agree, but I'd add that the same is true for any random number generator. They are all infinitely biased in favor of "small" numbers (where "small" may have a different meaning for each random number generator).
Given an ordering of the integers, it is possible to make sense of statements about the probability of a random integer being thus-and-so. And given an ordering of the cellular automata, it's possible to make sense of the statement that "a large fraction of cellular automata are Turing complete."
My reply:
There are 256 elementary cellular automata in NKS. The most obvious way to order the inputs for each of these is by input bit string, which expresses an integer. That is, the rule operates on a bit string stacked in a pyramid of m rows. It is my thought that one would have to churn an awfully long time before hitting on a "universal." Matthew Cook's proof of the universality of CA Rule 110 is a proof of principle, and gives no specific case.
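For concreteness, here is a minimal Python sketch of how an elementary CA such as Rule 110 is iterated. The wrap-around boundary and single-cell seed are my own illustrative choices, not Wolfram's or Cook's setup.

    # Sketch: iterate an elementary cellular automaton by rule number.
    # Each new cell is the rule's output bit for its (left, center, right) pattern.
    def ca_step(cells, rule=110):
        n = len(cells)
        return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 31 + [1] + [0] * 31   # single seed cell (illustrative choice)
    for _ in range(16):
        print("".join(".#"[c] for c in row))
        row = ca_step(row)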
As far as I know, there exist few strong clues that could be used to improve the probability that a specific CA is universal. Wolfram argues that those automata that show a pseudorandom string against a background "ether" can be expected to show universality (if one only knew the correct input string). However, let us remember that it is routine for functions capable of chaos to yield periodic outputs for many initial values.
So one might need to prove that a set of CA members can only yield periodic outputs before proceeding to assess probabilities of universality.
Perhaps there is a relatively efficient means of forming CA input values that imply a high probability of universality, but I am unaware of it.
Another thought: Suppose we have the set of successive integers in the interval [1,10]. Then the probability that a randomly chosen set member is even is 1/2. However, if we want to talk about an infinite set of integers, in line with my friend's point, the probability of a randomly selected number being even is meaningless (or, actually, 0, unless we invoke the axiom of choice). Suppose we order the set of natural numbers thus: {1,3,5,7,9,2,11,13,15,17,4...}. Under that ordering, evens turn up in any initial segment with frequency near 1/6, not 1/2. So we see that the probability of a specific property depends not only on the ordering, but on an agreement that an observation can only take place for a finite subset.
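A quick Python sketch makes the point concrete, assuming the ordering above continues in blocks of five odds followed by the next even: the running frequency of evens settles near 1/6, not 1/2.

    from itertools import count, islice

    # Reordering of the naturals: 1,3,5,7,9,2,11,13,15,17,4,...
    # (assumed continuation: five odds, then the next even, repeating)
    def reordered():
        odds, evens = count(1, 2), count(2, 2)
        while True:
            for _ in range(5):
                yield next(odds)
            yield next(evens)

    for n in (60, 600, 6000):
        frac = sum(x % 2 == 0 for x in islice(reordered(), n)) / n
        print(n, frac)   # hovers near 1/6, not 1/2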
As my friend points out, perhaps the probability of hitting on a description number doesn't go to 0 with infinity; it depends on the ordering. But, we have not encountered a clever ordering and Wolfram has not presented one.
Monday, September 24, 2012
Freaky facts about 9 and 11
Scroll down to Force 1089 for a fun psychic power trick
It's easy to come up with strange coincidences regarding the numbers 9 and 11. See, for example,
http://www.unexplained-mysteries.com/forum/index.php?showtopic=56447
How seriously you take such peculiarities depends on your philosophical point of view. A typical scientist would respond that such coincidences are fairly likely: if p is the probability of an event occurring in one trial, the probability that it never occurs in n independent trials is (1-p)^n, which sinks toward 0 as n grows, so that "bizarre" coincidences among classically independent events become quite likely given enough opportunities.
But you might also think about Schroedinger's notorious cat, whose live-dead iffy state has yet to be accounted for by Einsteinian classical thinking, as I argue in this longish article:
http://www.angelfire.com/ult/znewz1/qball.html
Elsewhere I give a mathematical explanation of why any integer can be quickly tested to determine whether 9 or 11 is an aliquot divisor.
http://www.angelfire.com/az3/nfold/iJk.html
Here are some fun facts about divisibility by 9 or 11.
# If integers k and j both divide by 9, then the integer formed by stringing k and j together also divides by 9. One can string together as many integers divisible by 9 as one wishes to obtain that result.
Example:
27, 36, 45, 81 all divide by 9
In that case, 27364581 divides by 9 (the quotient being 3040509)
# If k divides by 9, then all the permutations of k's digit string form integers that divide by 9.
Example:
819/9 = 91
891/9 = 99
198/9 = 22
189/9 = 21
918/9 = 102
981/9 = 109
# If an integer does not divide by 9, it is easy to form a new integer that does so by a simple addition of a digit.
This follows from the method of checking for divisibility by 9. To wit, we add all the digits; if the sum exceeds 9, we add the digits of that sum, repeating as many times as necessary to obtain a single digit. The number divides by 9 exactly when that final digit is 9.
Example a.:
72936. 7 + 2 + 9 + 3 + 6 = 27. 2 + 7 = 9
Example b.:
Number chosen by random number generator:
37969. 3 + 7 + 9 + 6 + 9 = 34. 3 + 4 = 7
Hence, all we need do is include a 2 somewhere in the digit string.
372969/9 = 41441
Mystify your friends. Have them pick any string of digits (say 4) and then you silently calculate (it looks better if you don't use a calculator) to see whether the number divides by 9. If so, announce, "This number divides by 9." If not, announce the digit needed to make an integer divisible by 9 (2 in the case above) and then have your friend place that digit anywhere in the integer. Then announce, "This number divides by 9."
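A Python sketch of the trick's arithmetic follows; digit_to_force is a name of my own invention, not a standard function.

    # Sketch of the divisibility-by-9 trick.
    def digital_root(n):
        while n > 9:
            n = sum(int(d) for d in str(n))
        return n

    def digit_to_force(n):
        # Digit that, inserted anywhere in n, makes it divide by 9
        # (0 if it already does).
        r = n % 9
        return 0 if r == 0 else 9 - r

    print(digital_root(72936))    # 9, so 72936 divides by 9
    print(digit_to_force(37969))  # 2, matching the example above
    print(372969 % 9)             # 0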
In the case of 11, doing tricks isn't quite so easy, but possible.
We check whether a number divides by 11 by summing its digits with alternating signs. If the sum is zero, the number divides by 11. If the sum is a multi-digit multiple of 11, such as 11 or 77, taking the alternating sum again zeroes it out.
Let's check 5863.
We sum 5 - 8 + 6 - 3 = 0
So 5863 divides by 11 (5863 = 11*533); but, unlike with 9, we can't scramble 5863's digits any which way and preserve divisibility by 11.
However, we can scramble the positively signed digits among themselves, or the negatively signed digits among themselves, as we please, and find that the number still divides by 11.
6358 = 11*578
We can also string numbers divisible by 11 together and the resulting integer is also divisible by 11.
253 = 11*23, 143 = 11*13
143253 = 11*13023
Now let's test this pseudorandom number:
70517. The alternating sum of digits is 7 - 0 + 5 - 1 + 7 = 18.
We need to offset that 18 with a -18, so any appended digit string whose signed digits sum to -18 will do. The easiest way here is to replicate the integer and append the copy: since the original has an odd number of digits, each positive digit in the copy is paired against its negative.
7051770517/11 = 641070047
Now let's do a pseudorandom 4-digit number:
4556. 4 - 5 + 5 - 6 = -2. Hence 45562 must divide by 11 (obtaining 4142).
Sometimes another trick works.
5894. 5 - 8 + 9 - 4 = 2. So we need a -2, which, in this case can be had by appending 02, ensuring that 2 is found in the negative sum.
Check: 589402/11 = 53582
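These checks are easy to automate. A Python sketch, with the leftmost digit taken as positive:

    # Alternating digit sum, signed from the leftmost digit as positive.
    def alt_sum(n):
        return sum((-1) ** i * int(d) for i, d in enumerate(str(n)))

    for n in (5863, 6358, 70517, 7051770517, 45562, 589402):
        print(n, alt_sum(n), n % 11 == 0)   # 70517 gives 18 and False; the rest divide by 11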
Let's play with 157311.
Positive digits are 1,7,1
Negative digits are 5, 3, 1
Positive permutations are
117, 711, 171
Negative permutations are
531, 513, 315, 351, 153, 135
So integers divisible by 11 are, for example:
137115 = 11*12465
711315 = 11*64665
Sizzlin' symmetry
There's just something about symmetry...
To form a number divisible by both 9 and 11, we play around thus:
Take a number, say 18279, divisible by 9. Note that it has an odd number of digits, meaning that a copy can be appended such that the resulting number 1827918279 pairs each positive digit with its negative, so the alternating sum is 0. Hence 1827918279/11 = 166174389, and 1827918279/9 = 203102031. Note the echo of 18279/9 = 2031.
We can also append the reversal 97281 to get 1827997281/11 = 166181571, and 1827997281/9 = 203110809.
Suppose the string contains an even number of digits. In that case, we can write, say, 18271827 and find it divisible by 9 (the quotient being 2030203). But it won't divide by 11, since each positive digit pairs with a positive clone, and likewise for the negatives. This is resolved by using a 0 for the midpoint.
Thence 182701827/11 = 16609257. And, by the rules given above, 182701827 is divisible by 9, that number being 20300203.
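Here is a Python sketch of the construction; make_99 is my own hypothetical name, and the input is assumed to divide by 9.

    # From a number divisible by 9, build one divisible by both 9 and 11.
    def make_99(k):
        s = str(k)
        # Odd digit count: append a copy. Even digit count: put a 0 at the midpoint.
        t = s + s if len(s) % 2 == 1 else s + "0" + s
        return int(t)

    for k in (18279, 1827):
        m = make_99(k)
        print(m, m % 9 == 0, m % 11 == 0)   # 1827918279 and 182701827, both True True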
Force 1089
An amusing property of the numbers 9 and 11 in base 10 arithmetic is demonstrated by Force 1089, the name of a trick you can use to show off your "mentalist" powers.
"It is important that you clear your mind," you say to your target. "Psychic researchers have found that a bit of simple arithmetic will help your subconscious mind to focus, and create a paranormal atmosphere."
You then instruct the person to choose any three-digit number where the first and last digit differ by at least 2. "Please do not show me the number or your calculations," you instruct, handing the person a calculator to help assure they don't flub the arithmetic.
The target is then told to reverse the digit order and subtract the smaller number from the larger. Take that difference, reverse its digits, and add those two numbers.
You toss a couple of books over to the target and say, "Be careful to conceal your number. Now use the first three digits to find a page in one of the books you choose." Once the page is found, you instruct the person to use the last digit to count words along the top line and stop once reaching that number.
"Now, PLEASE, help me, and concentrate hard on the word you read."
After a few moments, you announce -- of course -- the exact word your target is looking at.
Your secret is that the algorithm always yields the constant 1089, meaning you only have two words to memorize.
These symmetries always, in base 10, produce numbers divisible by 9 and 11 (or 99), even though the starting number itself need not divide by either.
Consider 854-458 = 396. 396+693 = 1089.
Further, 396 = 4*99; 693 = 7*99; 1089 = 9*11^2.
If the first and last digits differ by 0, of course, the algorithm zeros out on the next step. If they differ by 1, the subtraction yields 99, as in 100 - 001 = 99, which turns out to be a constant at this step.
Consider 433-334 = 99.
In the case of a number with 2n digits, we discover that the algorithm always yields numbers divisible by 9. However, for 2n+1 digits, the algorithm always yields a set of numbers divisible by 99. But it does not yield a constant.
(I have not taken the trouble to prove rigorously the material on Force 1089.)
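Short of a proof, a brute-force Python check over every qualifying three-digit choice is easy, and it does turn up 1089 every time:

    # Exhaustive check of Force 1089 over all three-digit numbers whose
    # first and last digits differ by at least 2.
    results = set()
    for n in range(100, 1000):
        a, c = n // 100, n % 10
        if abs(a - c) < 2:
            continue
        d = abs(n - int(str(n)[::-1]))          # subtract the reversal
        total = d + int(str(d).zfill(3)[::-1])  # add the reversed difference
        results.add(total)
    print(results)   # {1089}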
Ah, wonderful symmetry.
Tuesday, July 24, 2012
Mathematics of evolution: a note on Chaitin's model
In his book Proving Darwin: Making Biology Mathematical (Knopf Doubleday 2012), Gregory Chaitin offers a "toy model" to demonstrate that progressive evolution works in principle. The idea is that DNA behaves very much like computer software.
http://pantheon.knopfdoubleday.com/2012/05/08/proving-darwin-by-gregory-chaitin/
Computation of numbers then becomes the work of his evolution system. A mathematician, he finds, can "intelligently design" numbers that get very close to a Busy Beaver number with work growing linearly in n. An exhaustive search of all possible computable numbers under some BB(n) requires exponential work. But, he found, his system of a hill-climbing random walk arrived at numbers close to BB(n) with work on the order of n^2.
Chaitin posits a Turing-style oracle to sieve out the "less fit" (lower) numbers and the dud algorithms (those that get stuck without producing a number). The oracle represents the filtering of natural selection.
Caveat:
His system requires random mutations of relatively high information value. These "algorithmic mutations" alter multi-node sets. Point mutations, he found, were unproductive. Hence, he has not succeeded in demonstrating how the DNA system itself might have evolved.
Chaitin says he was concerned that there existed no mathematical justification for evolution. But, this assertion gives pause. The existence of the universal Turing machine would seem to demonstrate that progressive evolution is possible, though not necessarily highly probable. But granting that Chaitin was focused on probability, we can agree that if a system is of a high enough order, the probability of progressive evolution is strong. So in that respect, one may agree that Darwin has been proved right.
However, there's an old saying among IT people: "Garbage in, garbage out." The probability that random inputs or alterations will yield increased functionality is remote. One cannot say that Darwin has been proved right about the origin of life.
Remark
It must be acknowledged that in microbiological matters, probabilities need not always follow a routine independence multiplication rule. In cases where random matching is important, we have the number 0.63 turning up quite often.
For example, if one has n addressed envelopes and n correspondingly addressed letters are randomly shuffled and then put in the envelopes, what is the probability that at least one letter arrives at the correct destination? The surprising answer is the alternating sum 1 - 1/2! + 1/3! - ... ± 1/n!. For n greater than 10 the probability sits very near 63%.
That is, we don't calculate, say, 11^-11 (about 3.5x10^-12), or some routine binomial combinatorial multiple; instead our series comes very close to 1 - e^-1 = 0.63.
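A Python sketch of the letters-and-envelopes series, showing how quickly it settles near 1 - 1/e:

    from math import factorial

    # P(at least one letter in its correct envelope) for n letters:
    # 1 - 1/2! + 1/3! - ... ± 1/n!, which tends to 1 - 1/e.
    def p_at_least_one(n):
        return sum((-1) ** (k + 1) / factorial(k) for k in range(1, n + 1))

    for n in (3, 5, 10, 20):
        print(n, round(p_at_least_one(n), 6))   # about 0.632 for n >= 10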
Similarly, suppose one has eight distinct pairs of socks randomly strewn in a drawer and thoughtlessly pulls out six one by one. What is the probability of at least one matching pair?
The first sock has no match. The probability the second will fail to match the first is 14/15. The probability for the third failing to match is 12/14 and so on until the sixth sock. Multiplying all these probabilities to get the probability of no match at all yields 32/143. Hence the probability of at least one match is 1 - 32/143 or about 78%.
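The sock computation can be checked with exact fractions; a Python sketch:

    from fractions import Fraction

    # P(no match) when drawing 6 socks from 8 distinct pairs (16 socks):
    # each new sock must avoid the mates of those already drawn.
    p_none = Fraction(1)
    for drawn in range(1, 6):              # socks 2 through 6
        remaining = 16 - drawn
        p_none *= Fraction(remaining - drawn, remaining)   # 14/15, 12/14, ...
    print(p_none, 1 - p_none)              # 32/143, 111/143 (about 78%)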
These are minor points, perhaps, but they should be acknowledged when considering probabilities in an evolutionary context.
Saturday, June 2, 2012
Periodicity 'versus' randomness
A few observations, with no claim to originality
Please also see a more recent post on the probabilities of periodicity
http://kryptograff5.blogspot.com/2013/08/draft-1-please-let-me-know-of-errors.html
We do have a baseline notion of randomness, I suggest. Within constraints, the clicks of a Geiger counter occur, physicists believe, at truly random intervals. An idealized Geiger counter hooked up to an idealized printer might -- assuming some sort of time compression -- print out a noncomputable transcendental. The string's destiny as a noncomputable has probability 1, though, from a human vantage point, one could never be sure the string wasn't destined to be a computable number.
Stephen Wolfram's New Kind of Science, on the other hand, tends to see randomness in terms of pseudorandomness or effective randomness. In my view, an effectively random digit string is one that passes various statistical tests for randomness (most of which are based on the normal approximation of binomial probabilities, which are natural to use with binary strings).
Now, if presented with a finite segment of, say, a binary string, we have no way of knowing, prima facie, whether the string represents a substring of some random string (which we usually expect also to be of finite length). Perhaps the longer string was published by a quantum noise generator. Even so, inspection of the substring may well yield the suspicion that a nonrandom, deterministic process is at work.
Now if one encounters, absent a Turing machine (TM) executor or equivalent, the isolated run 0 0 0 0 0 0 0 0 0 0, one is likely to suspect nonrandomness. Why so? We are doubtless assuming that truly random binary digits occur with a probability of 1/2 and so our belief is that a run of ten zeros is too far from the expectation value of five zeros and five ones. Even so, such a run by itself may simply imply a strong bias for 0, but an otherwise nondeterministic process and so we need to filter out the effect of bias to see whether a process is largely deterministic.
We are much less likely to see the string 0101010101 as anything other than almost surely deterministic, regarding it as a strong candidate for nonrandomness. If we use independence of events (digit on space m) as a criterion, the probability of such a string is 2^(-10), or one chance in 1024 -- the same probability as for any other permutation. Still, such a quantification doesn't tell the whole story. If you predict the specific sequence 1001111010, you have one chance in 1024 of being right, but if instead you predict that some sequence containing six 0s and four 1s will appear, then you'll find that the relevant set contains 210 strings, yielding a probability that you are correct of 210x2^(-10), or better than 20 percent.
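The arithmetic, as a Python sketch:

    from math import comb

    # One specific 10-digit string vs. any string with six 0s and four 1s.
    print(2 ** -10)                             # about 0.001: one specific string
    print(comb(10, 4), comb(10, 4) / 2 ** 10)   # 210 strings, about 0.205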
So why do we regard 0101010101 as having a strong probability of nonrandom generation? Because it is part of a small subset of permutations with what I call symmetry. In this case, the symmetry accords with periodicity, or 10 ≡ 0 (mod 2), to be precise.
Is the sequence 100111010 likely to be the result of a random process? A quick inspection leaves one uncertain. This is because this particular string lacks any obvious symmetry. The first sequence is a member of a small subset of length 10 strings that are periodic or "nearly periodic" (having a small non-zero remainder) or that have other symmetries. Many strings, of course, are the result of deterministic processes and yet display no easily detected symmetry. (We might even consider "pseudo-symmetries" in which "periods" are not constant but are polynomial or exponential; some such strings might escape statistical detection of nonrandomness.)
Consider 01001001001
In this case we may regard the string as a candidate for nonrandomness based on its near periodicity of 11 ≡ 2 (mod 3), which includes three full periods of 010.
Consider
0110101101011
The sequence is too short for a runs test, but we may suspect nonrandomness, because we have the period 01101, giving 13 ≡ 3 (mod 5).
We see here that strings of prime length have no periods of form a ≡ 0 (mod c), and hence the subset of "symmetrical" substrings is smaller than for a string of nearby composite length. So prime-length strings are in general somewhat more likely to look nonrandom.
To get an idea of the smallness of the set of exactly periodic strings, observe that the binary string 101,XXX,XXX -- where the six unknown digits comprise four 1s and two 0s -- admits 6C2 = 15 arrangements of those six digits, only one of which (101,101) yields the periodic string 101101101. By similar reasoning, we see that subsets of near-periodic strings are relatively small, as long as the remainder is small with respect to length n. It might be handy to find some particular ratio m/n -- remainder m over string length n -- that one uses to distinguish a union of the periodic and near-periodic subsets, but I have not bothered to do so.
Aside from periodicity and near-periodicity, we have what I call "flanking symmetry," which occurs for any string length. To wit:
0001000
or
0111110
And then we have mirror symmetry (comma included for clarity):
01101,10110
which is equivalent to two complete periods (no remainder) but with the right sequence reversing the order of the left.
We might try to increase the symmetries by, let's say, alternating mirror periods. But note that
0101,1010,0101 is equivalent to 010110,100101
and so there is no gain in what might loosely be called complexity.
Speaking of complexity, what of patterns such that g(n) is assigned to digit 0 and f(n) to digit 1 as a means of determining run lengths? In that case, as is generally true for decryption attacks, length of sample is important in successful detection of nonrandomness.
We may also note the possibility of a climbing function g(0,m) = run length x, alternating with a substring of constant length y, each of which is composed of pseudorandom (or even random) digits, as in
0,101101,00,111001,000...
In this case, we have required that the pseudorandom strings be bracketed by the digit 1, thus reducing the statistical randomness, of course. And, again, sample size is important.
That is, though 010010001 has a whiff of nonrandomness, when one sees 010010000100001000001, one strongly suspects two functions. To wit, f(x) = k = 1 for runs of 1s and g(x) = next positive integer for runs of 0s. Though a human observer swiftly cognizes the pattern in the second string, a runs test would reveal a z score pointing to nonrandomness.
So let us use the periodicity test to estimate the probability of a deterministic process thus: For n = 20, we have the four aliquot factors, 2, 4, 5, 10. The permutations of length 2 are 00, 01 and their mirrors 11, 10, for 4 strings of period 2. For factor 4, we have 4C2 = 6, yielding, with mirrors, 12 strings of period 4. For factor 5, we have 5C2 = 10, yielding, with mirrors, 20 strings of period 5. For factor 10, we have 10C2 = 45, yielding, with mirrors, 90 strings of period 10. So we arrive at 132 periodic strings out of 2^20 = 1,048,576, giving roughly one chance in 7,944 that a random length-20 string is periodic, if we consider every period to be equiprobable. This assumption, however, may sometimes be doubtful. And the probability of nonrandomness is also affected by other elements of the set of symmetries discussed above. And of course there are standard tests, such as the runs and chi square tests, that must be given due consideration.
Now consider
00000000001111111111
Why does this string strike one as nonrandom? For one thing it is "mirror periodic," with a period of 2. However, one can also discern its apparent nonrandomness using a runs test, which yields a high z score. The advantage of a runs test is that it is often effective on aperiodic strings (though this string doesn't so qualify). Similarly, a goodness of fit test can be used on aperiodic strings to detect the likeliness of human tweaking. And one might, depending on context, apply Benford's law (see http://mathworld.wolfram.com/BenfordsLaw.html ) to certain aperiodic strings.
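Here is a Python sketch of such a runs test -- the Wald-Wolfowitz version, my choice of formulation -- applied to this string and to the alternating string discussed earlier:

    from math import sqrt

    # Wald-Wolfowitz runs test: z score for the number of runs in a binary string.
    def runs_z(s):
        n1, n2 = s.count("0"), s.count("1")
        n = n1 + n2
        runs = 1 + sum(1 for a, b in zip(s, s[1:]) if a != b)
        mu = 2 * n1 * n2 / n + 1
        var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
        return (runs - mu) / sqrt(var)

    print(runs_z("00000000001111111111"))   # about -4.1: far too few runs
    print(runs_z("0101010101"))             # about +2.7: too many runs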
So it is important to realize that though a small set of symmetrical strings of length n exists whose members are often construed as indicative of nonrandom action, there exists another small set of aperiodic strings of the same length whose members are considered to reveal traits of nonrandom action.
For example, a runs test of sufficiently large n would disclose the nonrandom behavior of Liouville's transcendental number.
Worthy of note is yet another conception of randomness, which is encapsulated by the question: how often does the number 31 appear in an infinite random digit string? That is, if an infinite digit string is formed by a truly random process, then a substring that might occur once in a billion digit spaces, would have probability 1 of recurring infinitely many times.
That is, in base 10, "31" reaches a 95% probability of a first occurrence at the 17,974,385th space. In base 2, "31" is expressed "11111," and a 95% probability of a first occurrence is reached at the 1,441st space. Similarly, in an infinite string, we have probability 1 that a run of 10^(100^100) zeros will recur not just once, but infinitely often. Yet if one encountered a relatively short run of, say, 10^10 zeros, one would be convinced of bias and argue that the string doesn't pass statistical randomness tests.
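Such first-occurrence thresholds can be estimated empirically. A Monte Carlo Python sketch (the trial count and percentile are arbitrary choices of mine):

    import random

    # Position at which a pattern first appears in a stream of random digits.
    def first_occurrence(pattern, base=10):
        window, pos = "", 0
        while True:
            window = (window + str(random.randrange(base)))[-len(pattern):]
            pos += 1
            if window == pattern:
                return pos

    trials = sorted(first_occurrence("31") for _ in range(2000))
    print(trials[int(0.95 * len(trials)) - 1])   # empirical 95th-percentile position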
The idea that such strings could occur randomly offends our intuition, which is to say the memory of our frequency ratios based on everyday empirical experience. However, if you were to envision an infinity of generators of truly random binary strings lined up in parallel, with strings stretching from where you stand to the horizon, there is probability 0 that you happen upon such a stupendous run as you wander along the row of generators. (Obviously, probability 1 or 0 doesn't mean absolute certainty. There is probability 1 that Goldbach's conjecture is true, and yet perhaps there is some case over infinity where it doesn't hold.)
Concerning whether "31" recurs infinitely often, one mathematician wrote: "The answer lies in an explicit definition of the term 'infinite random digit string.' Any algorithm for producing the digit string will at once tell you your answer and also raise the objection that the digit string isn't really random. A physical system for producing the digits (say, from output of a device that measures quantum noise) turns this from a mathematical into an experimental question."
Or, perhaps, a philosophical question. We know, according to Cantorian set theory -- as modified by Zermelo, Fraenkel and von Neumann (and including the axiom of choice) -- that there is a nondenumerable infinity of noncomputable numbers. So one of these noncomputables, R0 -- which could only be "reached" by some eternal randomization process -- does contain a (countable) infinity of 31s. But this means there is also a number Ra that contains the same infinite string as R0 except for lacking all instances of the substring denoted 31. Of course, one doesn't know with certainty that a randomization process yields a noncomputable number. R0 might be noncomputable and Ra computable. Even so, such a procedure might yield a noncomputable Ra. So we see that there is some noncomputable number where the string "31" never shows up.
If we regard Zermelo-Fraenkel set theory as our foundation, it is pretty hard to regard an element of the noncomputables as anything but a random number.
The next question to ponder is whether some pseudorandom aperiodic string mimics randomness well enough so that we could assume probability 1 for an infinity of 31s. Plenty of people believe that the pi decimal extension will eventually yield virtually every finite substring an infinity of times. And yet it is not even known whether this belief falls into the category of undecidable theorems. And even if pi could be shown to be "universal," there is no general way of determining whether an arbitrary program is universal.
No discussion of randomness can readily ignore the topic of Bayesian inference, which, despite a history of controversy, is used by philosophers of science to justify the empirical approach to establishment of scientific truth. Recurrence of well defined events is viewed as weight of evidence, with probability modified as evidence accumulates. Such thinking is the basis of "rules of succession," so that a sequence of, say, m = 5 consecutive 1s would imply a probability of (m+1)/(m+2) = 6/7 -- 86% -- that the next digit will also be a 1.
In this case, a uniform a priori distribution is assumed, as is dependence. So here, Bayesian inference is conjecturing some deterministic process that yields a high degree of bias, assumptions which impose strong constraints on "background" randomness. (Interestingly, Alan Turing used Bayesian methods in his code-breaking work.)
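The rule of succession can be checked numerically: under a uniform prior on the bias, the posterior mean after five straight 1s comes out to 6/7. A Python sketch using a plain Riemann sum:

    # Posterior mean of the bias p after observing five 1s, uniform prior on [0,1].
    # Posterior is proportional to p^5; the mean is the ratio of two integrals.
    N = 100000
    xs = [(i + 0.5) / N for i in range(N)]
    num = sum(x ** 6 for x in xs)   # integral of p * p^5
    den = sum(x ** 5 for x in xs)   # integral of p^5
    print(num / den)                # about 0.857 = 6/7 = (m+1)/(m+2) with m = 5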
A longer discussion of Bayesian ideas is found in the appendix of my essay, The Knowledge delusion, found at http://kryptograff5.blogspot.com/2011/11/draft-03-knowledge-delusion-essay-by.html
A few observations, with no claim to originality
Please also see a more recent post on the probabilities of periodicity
http://kryptograff5.blogspot.com/2013/08/draft-1-please-let-me-know-of-errors.html
We do have a baseline notion of randomness, I suggest. Within constraints, the clicks of a Geiger counter occur, physicists believe, at truly random intervals. An idealized Geiger counter hooked up to an idealized printer might -- assuming some sort of time compression -- print out a noncomputable transcendental. The string's destiny as a noncomputable has probability 1, though, from a human vantage point, once could never be sure the string wasn't destined to be a computable number.
Stephen Wolfram's New Kind of Science, on the other hand, tends to see randomness in terms of pseudorandomness or effective randomness. In my view, an effectively random digit string is one that passes various statistical tests for randomness (most of which are based on the normal approximation of binomial probabilities, which are natural to use with binary strings).
Now, if presented with a finite segment of, say, a binary string, we have no way of knowing, prima facie, whether the string represents a substring of some random string (which we usually expect also to be of finite length). Perhaps the longer string was published by a quantum noise generator. Even so, inspection of the substring may well yield the suspicion that a nonrandom, deterministic process is at work.
Now if one encounters, absent a Turing machine (TM) executor or equivalent, the isolated run 0 0 0 0 0 0 0 0 0 0, one is likely to suspect nonrandomness. Why so? We are doubtless assuming that truly random binary digits occur with a probability of 1/2 and so our belief is that a run of ten zeros is too far from the expectation value of five zeros and five ones. Even so, such a run by itself may simply imply a strong bias for 0, but an otherwise nondeterministic process and so we need to filter out the effect of bias to see whether a process is largely deterministic.
We are much less likely to see the string 0101010101 as anything other than almost surely deterministic, regarding it as a strong candidate for nonrandomness. If we use independence of event (digit on space m) as a criterion, the probability of such a string is 2^(-10), or one chance in 1024 -- the same probability as for any other permutation. Still, such a quantification doesn't tell the whole story. If you predict the sequence 10011110101, you have one chance in 1024 of being right, but if instead you predict some sequence that contains six 0s and four 1s, then you'll find that the relevant set contains 210 strings, yielding a probability that you are correct of 210x2^(-10), or better than 20 percent.
So why do we regard 0101010101 as having a strong probability of nonrandom generation? Because it is part of a small subset of permutations with what I call symmetry. In this case, the symmetry accords with periodicity, or 10 ≡ 0 (mod 2), to be precise.
Is the sequence 100111010 likely to be the result of a random process. A quick inspection leaves one uncertain. This is because this particular string lacks any obvious symmetry. The first sequence is a member of a small subset of length 10 strings that are periodic or "nearly periodic" (having a small non-zero remainder) or that have other symmetries. Many strings, of course, are the result of deterministic processes and yet display no easily detected symmetry. (We might even consider "pseudo-symmetries" in which "periods" are not constant but are polynomial or exponential; some such strings might escape statistical detection of nonrandomness.)
Consider 01001001001
In this case we may regard the string as a candidate for nonrandomness based on its near periodicity of 10 ≡ 1 (mod 3), which includes three full periods.
Consider
0110101101011
The sequence is too short for a runs test, but we may suspect nonrandomness, because we have the period 01101, giving 13 ≡ 3 (mod 5).
We see here that strings of prime length have no periods of form a ≡ 0 (mod c), and hence the subset of "symmetrical" substrings is lower than for a nearby composite. So prime length strings are in general somewhat more likely to look nonrandom.
To get an idea of the smallness of the set of exactly periodic strings, observe that the binary string 101,XXX,XXX has 15 permutations of the last six digits, only one of which yields the periodic string 101101. By similar reasoning, we see that subsets of near-periodic strings are relatively small, as long as the remainder is small with respect to length n. It might be handy to find some particular ratio m/n -- remainder n over string length n -- that one uses to distinguish a union of a periodic and near-periodic subsets, but I have not bothered to do so.
Aside from periodicity and near-periodicity, we have what I call "flanking symmetry," which occurs for any string length. To wit:
0001000
or
0111110
And then we have mirror symmetry (comma included for clarity):
01101,10110
which is equivalent to two complete periods (no remainder) but with the right sequence reversing the order of the left.
We might try to increase the symmetries by, let's say, alternating mirror periods. But note that
0101,1010,0101 is equivalent to 010110,100101
and so there is no gain in what might loosely be called complexity.
Speaking of complexity, what of patterns such that g(n) is assigned to digit 0 and f(n) to digit 1 as a means of determining run lengths? In that case, as is generally true for decryption attacks, length of sample is important in successful detection of nonrandomness.
We may also note the possibility of a climbing function g(0,m) = run length x alternating with a substring of constant length y, each of which is composed ofpsuedorandom (or even random) digits, as in
0,101101,00,111001,000...
In this case, we have required that the pseudorandom strings be bracketed by the digit 1, thus reducing the statistical randomness, of course. And, again, sample size is important.
That is, though 010010001 has a whiff of nonrandomness, when one sees 010010000100001000001, one strongly suspects two functions. To wit, f(x) = k = 1 for runs of 1s and g(x) = next positive integer for runs of 0s. Though a human observer swiftly cognizes the pattern on the second string, a runs test would reveal a z score pointing to nonrandmoness.
So let us use the periodicity test to estimate probability of a deterministic process thus: For n = 20, we have the four aliquot factors, 2, 4, 5, 10. The permutations of length 2 are 00, 01 and their mirrors 11, 10, for 4 strings of period 2. For factor 4, we have 4C2 = 6, yielding, with mirrors, 12 strings of period 4. For factor 5, we have 5C2 =10, yielding, with mirrors, 20 strings of period 5. For factor 10, we have 10C2 = 45, yielding, with mirrors, 90 strings of period 10. So we arrive at 132 periodic strings, which gives a probability of one chance in 138,412,032 if we consider every period to be equiprobable. This concession however may sometimes be doubtful. And, the probability of nonrandomness is also affected by other elements of the set of symmetries discussed above. And of course there are standard tests, such as the runs and chi square tests that must be given due consideration.
Now consider
00000000001111111111
Why does this string strike one as nonrandom? For one thing it is "mirror periodic," with a period of 2. However, one can also discern its apparent nonrandomness using a runs test, which yields a high z score. The advantage of a runs test is that it is often effective on aperiodic strings (though this string doesn't so qualify). Similarly, a goodness of fit test can be used on aperiodic strings to detect the likeliness of human tweaking. And one might, depending on context, apply Benford's law (see http://mathworld.wolfram.com/BenfordsLaw.html ) to certain aperiodic strings.
So it is important to realize that though a small set of symmetrical strings of length n exists whose members are often construed as indicative of nonrandom action, there exists another small set of aperiodic strings of the same length whose members are considered to reveal traits of nonrandom action.
For example, a runs test of sufficiently large n would disclose the nonrandom behavior of Liouville's transcendental number.
Worthy of note is yet another conception of randomness, which is encapsulated by the question: how often does the number 31 appear in an infinite random digit string? That is, if an infinite digit string is formed by a truly random process, then a substring that might occur once in a billion digit spaces, would have probability 1 of recurring infinitely many times.
That is, in base 10, "31" reaches a 95% probability of a first occurrence at the 17,974,385th space. In base 2, "31" is expressed "111111" and a 95% probability of a first occurrence is reached at the 1441th space. Similarly, in an infinite string, we have probability 1 that a run of 10^(100^100) zeros will recur not just one, but infinitely often. Yet if one encountered a relatively short run of, say, 10^10 zeros, one would be convinced of bias and argue that the string doesn't pass statistical randomness tests.
The idea that such strings could occur randomly offends our intuition, which is to say the memory of our frequency ratios based on everyday empirical experience. However, if you were to envisions an infinity of generators of truly random binary strings lined up in parallel with strings stretching from where you stand to the horizon, there is prbability 0 that you happen upon such a stupendous run as you wander along the row of generators. (Obviously, probability 1 or 0 doesn't mean absolute certainty. There is probability 1 that Goldbach's conjecture is true, and yet perhaps there is some case over infinity where it doesn't hold.
Concerning whether "31" recurs infinitely often, one mathematician wrote: "The answer lies in an explicit definition of the term 'infinite random digit string.' Any algorithm for producing the digit string will at once tell you your answer and also raise the objection that the digit string isn't really random. A physical system for producing the digits (say, from output of a device that measures quantum noise) turns this from a mathematical into an experimental question."
Or, perhaps, a philosophical question. We know, according to Cantorian set theory -- as modified by Zermelo, Fraenkel and von Neumann (and including the axiom of choice) -- that there is a nondenumerable infinity of noncomputable numbers. So one of these noncomputables, R0 -- which could only be "reached" by some eternal randomization process -- does contain a (countable) infinity of 31s. But this means there is also a number Ra that contains the same infinite string as R0 except that it lacks all instances of the substring denoted 31. Of course, one doesn't know with certainty that a randomization process yields a noncomputable number: R0 might be noncomputable and Ra computable. Even so, such a procedure might yield a noncomputable Ra. So we see that there is some noncomputable number in which the string "31" never shows up.
If we regard Zermelo-Fraenkel set theory as our foundation, it is pretty hard to regard an element of the noncomputables as anything but a random number.
The next question to ponder is whether some pseudorandom aperiodic string mimics randomness well enough so that we could assume probability 1 for an infinity of 31s. Plenty of people believe that the pi decimal extension will eventually yield virtually every finite substring an infinity of times. And yet it is not even known whether this belief falls into the category of undecidable theorems. And even if pi could be shown to be "universal," there is no general way of determining whether an arbitrary program is universal.
No discussion of randomness can readily ignore the topic of Bayesian inference, which, despite a history of controversy, is used by philosophers of science to justify the empirical approach to the establishment of scientific truth. Recurrence of well defined events is viewed as weight of evidence, with probability modified as evidence accumulates. Such thinking is the basis of "rules of succession," whereby a sequence of, say, m = 5 consecutive 1s implies a probability of (m+1)/(m+2) = 6/7 -- about 86% -- that the next digit will also be a 1.
In this case, a uniform a priori distribution is assumed, as is dependence among the observations. So here, Bayesian inference conjectures some deterministic process that yields a high degree of bias -- assumptions which impose strong constraints on "background" randomness. (Interestingly, Alan Turing used Bayesian methods in his code-breaking work.)
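A minimal sketch of this rule of succession, assuming the uniform prior just mentioned and the standard Beta-Bernoulli updating (the function name is mine, for illustration only):

from fractions import Fraction

def next_one_probability(ones: int, trials: int) -> Fraction:
    # Laplace's rule of succession: posterior predictive P(next digit = 1)
    # after observing `ones` 1s in `trials` digits, uniform prior on the bias.
    return Fraction(ones + 1, trials + 2)

print(next_one_probability(5, 5))  # 6/7, about 86%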
A longer discussion of Bayesian ideas is found in the appendix of my essay, The Knowledge delusion, found at http://kryptograff5.blogspot.com/2011/11/draft-03-knowledge-delusion-essay-by.html
Monday, May 7, 2012
In death's borderland
By PAUL CONANT
----------------------------------------------------------------
Time: 12:30 a.m. to 6:30 p.m. Sept. 12, 2001.
Place: In the Manhattan buffer zone uptown from the twin towers catastrophe.
----------------------------------------------------------------
When I first arrived in the buffer zone between 14th and Houston streets, surrealistic scenes greeted me.
Police cars in motion covered with ash and dust; a convoy of giant earth movers filled with skyscraper rubble; emergency rescue vehicles on unspecified missions.
No one was afoot except for me and a few drunks, addicts and homeless persons.
At the key intersection of Houston and 6th Av. (also known as the Avenue of the Americas) I shared a bench with a homeless woman, watching as emergency vehicles came and went, convoys of dump trucks were deployed and city buses ferried police, firefighters, volunteers and construction workers in and out of the death zone.
I wandered up and down East Houston, noting the trucks laden with scaffolding parked and ready to roll. I stood on a footbridge over FDR Drive watching streams of emergency vehicles, some marked, some not, some with lights flashing and sirens blaring, some not, streaming in and out of Houston Street or heading around the FDR curve to approach the disaster from the toe of this forever-changed island.
In my wanderings, I frequently came across homeless men sleeping fitfully on sidewalks and loading docks, in jarring contrast to the more than 10,000 dead buried a few blocks away. Yet, I noticed around 4 a.m. that most residences in the buffer zone had all lights out, so I presumed that many New Yorkers must have simply gone to bed.
All I could think when watching the emergency activities was that New York should be glad of such efficiency and cool-headedness in response to this outrage.
Once dawn came, I saw groups of professionals hustling off toward West Street (aka the West Side Highway), apparently on their way to work.
Overheard snatches of conversation:
"We all know somebody who is dead," said a woman striding along with two men.
"We had a very, very close friend who was on the 92d floor," says a bearded man into a cellphone.
Cellphones of course were ubiquitous. People at Houston and 6th were using them to report details of what they were seeing. One man sitting on the by-now packed bench was reading his notes in French, most probably to an editor at the other end of his cellphone.
Channel 3 News from Hartford encamped at the intersection at about 5 a.m. and all day long conducted TV interviews with New Yorkers who had been at or near the catastrophe site.
By 10 a.m., the pace was picking up, as more and more New Yorkers ventured out, looking for newspapers (none delivered in the buffer zone), visiting neighbors and just plain looking around.
But it was eerie. A perfect summery day. The residents of the buffer zone were perhaps defiantly nonchalant. Those in the streets showed no trace of fear, spoke animatedly to one another and played with their children, doing their best to enjoy a very bad day. And they somehow were succeeding.
If the point was to terrorize the New Yorkers, I can tell you they were not at all terrorized. New Yorkers were well behaved. A few onlookers were a bit of a pain at the key intersections, but when one considers the number of people in New York, things went very well. And the harried police handled the onlookers good-naturedly.
Among the contrasts:
Throngs of curious Manhattanites near the death zone acting as if it was a nice day off (and that feeling of confidence was quite clearly contagious); yet every once in a while silent rescue workers, individually and in small groups, would trudge past the police checkpoint and walk uptown. You knew who they were even if they were not in uniform. Their footgear was covered in ash. They trudged, stonefaced, staring straight ahead, overcome with exhaustion, both physical and emotional.
As a Vietnam veteran, I could identify with them somewhat. Once past the checkpoint, many in the crowds failed to notice them, but of course that didn't really matter.
Over on West Street, crowds made a point of cheering and applauding the rescue workers as they drove to and from the death zone. Many news trucks lined West Street. It gave one of the clearest views in Manhattan of the 'hole' in the skyline.
I couldn't help but wonder: is it wise to put so many people in one place? Do we really need skyscrapers anyway in this new economic era of computer teleconferencing?
I recall seeing a family walking their children up the bikepath, their little girl racing along gaily, clutching her dolly -- completely oblivious to the tower of smoke billowing up behind her.
Similar scenes played out in Washington Square Park where parents supervised toddlers laughing and giggling under the turtle sprinkler.
Looking downtown, three crosses atop churches abutting the park stood out in stark relief against the heavy pall of smoke.
I heard a helicopter whipping high overhead and was watching it for a while before I realized: I had heard it because there was no traffic making the usual Manhattan cacophony around the park.
There just wasn't enough noise in the air.
Saturday, March 17, 2012
Sinister brownouts of red troubles
(First published ca. 2001)
British authorities are weighing bringing charges against about 10 persons suspected of having been agents of the East German secret police, the Stasi, according to Stephen Grey and John Goetz of London's Sunday Times.
They were identified as the result of the cracking of a Stasi computer code by a computer buff who once lived under communist rule. However, not only British spies were exposed: agents operating in the United States and elsewhere were evidently uncovered as well. Reputedly, the CIA has tried to thwart exposure of the red agents, whether in America or elsewhere, claiming it would compromise some secret operations.
The Sunday Times tells of concerns that the CIA is covering up for a nest of high-level traitors. It will be recalled that three ex-East German agents, including a lawyer working at the top rung of the Pentagon, have been convicted. Vladimir Putin, president of Russia, was Moscow's representative to the Stasi when the newly exposed agents were active. The question is, did this red network remain active, perhaps under Russian control, after the collapse of East Germany? Putin may have been advised that the code was uncrackable.
On another Putin matter, John Sweeney of Britain's Observer links Putin and security service allies to the Moscow bombings that were used as a reason for the Chechnya war.
The most interesting bomb was the one that didn't go off, the incident being bizarrely transformed into a "training exercise." News agencies and media serving America have virtually ignored the Stasi story, as if FBI interest -- or lack of it -- in communist networks in our government is ho-hum news in America.
And, investigative news reports on the Moscow bombings are difficult to come across, particularly in the United States. Other topics on ConantNews pages include China's nuclear espionage offensive and MI6's battle to censor the press, along with a discussion of the mathematics of Florida's presidential election. Links between pages have proved inadequate.
China's nuke spy war
May 20, 2001--The Chinese espionage uproar took a new turn when the Washington Post disclosed that the Pentagon and the CIA were blocking publication of a U.S. nuclear scientist's memoirs of his visits to Chinese nuclear arms facilities.
Danny B. Stillman, a retired Los Alamos scientist and intelligence analyst, made many visits to the Chinese nuclear program between 1990 and 1999 and simply asked fellow scientists what they were doing, wrote the Post's Steve Coll. Stillman felt that the Chinese were able to make strides in nuclear arms not because of espionage but because computers aided their task.
Rep. Curt Weldon, a member of the Cox committee which probed U.S. nuclear security issues, targeted Clinton's decision to permit the sale of some 700 supercomputers to the Chinese. Weldon demanded a copy of Stillman's book from federal officials, along with supporting materials. You can read the Post story or Weldon's statement by hitting ConantNews features and then hitting the links 'Scientist fights gag order' or 'Clinton faulted on supercomputers.' This report might have gone online sooner had not the Washington Post's email news alert system been down when the Stillman and Weldon stories emerged.
Computer problems prevented me from adding a page with links to those stories and, rather than waste more time playing games, I leave you the addresses: Stillman story: www.washingtonpost.com/wp-dyn/articles/A29474-2001May15.html Weldon story: www.fas.org/sgp/congress/2001/h051601.html
PRESIDENT'S MEN CITE NUKE SPY WAR
The Associated Press's online articles about reported Chinese espionage and Los Alamos security woes include links to the congressional Cox report but not to a key White House report. The report of the President's Foreign Intelligence Advisory Board on nuclear security at Los Alamos portrays decades of incredible security negligence at the Los Alamos National Laboratory, where nuclear weapons are designed. This negligence continued in the face of repeated warnings from a variety of investigations, the report says.
The report, with the input of the FBI, CIA and other security arms, asserts that China has mounted a massive and highly successful spy war against our nuclear secrets. The report's appendix contains an eyebrow-raising chronology and damage assessment.
The New York Times, in its Feb. 4 and 5 editions, made good on its pledge to take a thorough look at the Wen Ho Lee affair. Times reporters noted that U.S. policy promoted fraternization of Chinese and American nuclear scientists, including those involved in the weapons program. In this climate, disinterest in security was rampant, it seems. Knowing the aggressiveness of Chinese intelligence, America would be foolish not to assume that the Chinese took full advantage of such neglect. Now the unpleasant question arises as to how many agents the communists have insinuated into America's weapons establishment. The Times did not address that question.