Monday, February 13, 2023

Psyops against the press

[March 28, 2002] -- Some years ago I rounded up a sheaf of documents on, of all things, UFOs.

The CIA, under Director Bill Casey, tried to blow me off when I sought documents released under the Freedom of Information Act. But eventually, the agency sent me a packet of 900 pages -- even though, unknown to me, quite a few more documents had already been released. The CIA package came damaged, with a tear in it. I realized that this gave the agency an 'out' in the event anything was missing, but I decided to accept it anyway.

The spooks knew I wasn't looking for little green men. They knew I was looking for spooks.

The mass of data compiled from government sources and elsewhere -- I looked at disreputable stuff with caution -- permitted me to write a report that strongly suggested that federal operatives had had a longstanding Cold War policy of promoting and using the flying saucer craze for psychological warfare against -- the American public. My guess is that the justification was that if the KGB played head games against Americans, then the CIA had to protect America, even if that meant counter-head games against Americans.

It appeared that my report didn't land anywhere, though I have discovered that other work of mine was published without my learning of it until years later. Though the report seemingly wasn't published, it fell into the hands of experienced journalists, and hence I suspect it may have enjoyed an underground existence. I hope so, because I no longer have a copy. (If you have one, please send it to me by email or surface mail; my address is at the first Conant page link above.)

During the research phase, I called up a colleague, Ted Morello, a longtime UN correspondent, who had covered some flying saucer nonsense early in his career. He told me he had written a memoir on the entire episode years earlier and promised to bring me a copy at a newspaper office where we both worked off and on as copy readers.

But, in my presence, when Morello searched his rucksack for the memoir, he couldn't find it, though he was absolutely convinced he had brought it with him. Later, calling from home, he said he had found the report after all and would mail me a copy.

The envelope containing the report had a return address sticker on it. The name was printed as 'Ted Morrello.'

Though I had felt sure his name had one 'r' in it, I couldn't imagine anyone would be so petty as to falsify anything of that sort. Upshot: An error was introduced into my report, which served to make me look like a sloppy journalist and so tended to discredit me.

Sometime later I happened to see his name in his handwriting in the copy desk per-diem log. One 'r.'

I rushed home and went to my files to retrieve the envelope. His report was still there, but the envelope could not be found.

Proof, of course, is missing. And this all occurred many moons back. In fact, this episode occurred just before the Iran-contra affair blew up. Yet, during that scandal and in the years since, I have noticed nothing but trivial changes in anti-reporter tactics by the Department of Dirty Tricks. Control of information and sandbagging is what this bunch does for a living. You can't expect them to act in any other way. From their perspective, they're being professional, no matter what the consequences to democracy.

I remember calling up a spokesman at the Reagan White House and complaining about the excessive spook activity around me. His reaction was: If there's a national security concern, these boys shouldn't be on your case. It's the FBI's responsibility. Though he sounded appalled, he didn't react as if I were a crank. As I say, this phone call predated the Iran-contra explosion by only a few weeks.

Many incidents of the sort described make me suspicious about hit counter problems (see Conant letter above). After all, the number of hits on an essay on a 'controlled subject' is political information that could sway decisions of lawmakers and others concerning various three-letter agencies.

Gone from the net:
Conant to Reporters Committee for Freedom of the Press
[No longer on Google]
Bush deploys 'line item veto' to aid CIA by Paul Conant
[No longer on Google]
National Missile Defense: Serious technical problems by Paul Conant
[No longer on Google]
UN correspondent Ted Morello
[UN has no record]
Morello targets freedom of the press
[UN has no record]

Shadow bans: An old game

While looking for some of my old news reporter clippings, I happened on an old blog post -- one of many like it -- that tells of social media brownouts that aided the feds. The Twitter Files established what I've been saying for years.

Control of American political speech was clearly a Deep State, oligarchical imperative long before Donald Trump's 2016 victory brought even more obvious clampdowns on what could be widely discussed.

By the way, I noticed today that Google, Bing and other search engines have been scrubbed of links to samples of my newspaper work that had been reachable in the past. In addition, Google has made my internet writings increasingly difficult to find.
Saturday, June 22, 2013

POLITICAL CONTROL OF FACEBOOK POSTS
My NSA posts aren't showing up in my general Facebook news feed, it appears.

Evidently, a Facebook program is blocking posts with words like nsa, cyber, spy, data and so forth.

Perhaps Babylon the Great is hoping to prove that it isn't dead. [Sorry. Lame joke.]

My Facebook address is

[I'm not on Facebook as of Feb. 13, 2023.]

The site is public.

There probably isn't another "Paul Roger Conant" Facebook user, so you should -- barring interference -- be able to reach my page with that name. [No longer true.]
Here is a link to an old Blacklisted Journalist report:
http://www.blacklistedjournalist.com/column104e.html

Saturday, February 12, 2022

How did the twin towers fall?
Questions remain

NIST's 9/11 reports show:

  • Fireproofing scenario is puzzling
  • Theory supported by scant evidence
  • 'Mystery blasts' glossed over
  • Inquiry considered a 'small bomb'
  • WTC7 collapse baffles probers
  • NIST adds brief note on 'controlled demolition'
  • Also: Physics prof challenges NIST theory

NIST web site
Defense contractor aids stalled WTC7 probe
NOVA special on collapses, with views of MIT Professor Thomas Eagar who offers yet another theory
Weapons that affect cognitive functions
Weldon: 9/11 hijackers 'known' in 2000 (NY Times report)
GSN's report on 'Able Danger'
Weldon's 'Meet the Press' remarks in June 2005
Jim Hoffman's criticism of the NIST report
WMD's: debunking the myths (presidential commission conclusions)
Ex-FBI chief blasts 9/11 probe
Professors doubt 'official' exit poll story
Further criticism of 'exit poll' line
Collapse time data: Omissions and disparities
About Znewz1
Weapons lab scientist tries to debunk 9/11 skeptics
Chomsky tries to derail new 9/11 investigations
Scientists clash over 9/11 collapses
The symmetry problem in the WTC collapses

This page may be reproduced or published without cost. It is requested that you credit Znewz1.


Citations refer to NIST reports. In many cases, page numbers are given. However, some NIST reports use two systems of page numbers. So, it may be necessary to check the alternate page number in order to find a reference.

To search NIST reports, reach NIST's search engine through the media link and view the text version with the keyword 'NCSTAR' (and other keywords) highlighted.

This page went on-line in July 2005 and has been updated since.


Copyright 2005

By PAUL CONANT

Jetliners crashing into World Trade Center towers blew off much of the fireproofing on floor supports and girders but left intact most of the fireproofing on floor undersides, according to the official theory.

Investigators for the National Institute of Standards and Technology were unable to come up with a 'credible' collapse scenario without resorting to the seemingly contradictory assumptions. This difficulty is not apparent in the main report [NCSTAR 1] but only emerged after extensive examination of the multitude of supporting reports.

A review of the NIST investigation of the World Trade Center disaster also shows that probers were puzzled by numerous 'pressure pulses' of smoke that preceded each collapse but were reluctant to speculate on the origin of the blasts.

And, of the small amount of steel preserved as evidence, very little shows signs of the temperatures needed to critically weaken it. Photographic evidence likewise fails to corroborate the government's conjecture that, on the fire floors, a superhot upper layer of fiery gases critically weakened girders and enough floor connectors to trigger a collapse sequence. For such a thing to happen, the fireproofing had to have been stripped from core columns and floor assembly components but not from the underside of the metal floor deck directly above, the reports -- which seem contradictory on a number of points -- show.

The NIST's main report fails to account for the fact that ceiling tile, though falling, would still have intercepted much of the high-velocity rubble before it reached the floor assembly fireproofing. [Tile and debris velocities are discussed in a footnote below.]

The NIST shelved a report on the collapse of the 47-story WTC7, which has baffled investigators, until after the twin towers report's release. The twin towers report was made final in September 2005 [see footnote below] and some editorial changes were made.

However, it is unreasonable to assess probabilities of failure without including the WTC7 collapse. The NIST fails to offer a probability analysis for the collapse of three skyscrapers. [This reporter's comments on probabilities are found in a footnote below.]

John Young, a veteran architect and founder of the activist site Cryptome.org, believes that the towers were structurally unsound and shook to pieces following the impact of the jets. He says that coverup of design flaws is a routine result of building disaster investigations. [Young's remarks can be found in a footnote below.]

Glenn P. Corbett, a fire safety expert and NIST adviser, said in October 2005 that the overall NIST effort was unsound and that in future the agency should be disqualified from forensic investigations, according to an Associated Press report.

"Instead of a gumshoe job that left no stone unturned, I believe the investigations were treated more like research projects in which they would wait for information to flow to them," Corbett, a fire science professor at John Jay College in New York, is reported to have said. An email request for further comment went unanswered.

In November 2005, Steven E. Jones, a Brigham Young University physics professor, challenged the government theory and argued that the physics pointed to the likelihood of a bomb plot. [In the fall of 2006, Jones was pressured to retire as a result of his paper, which is no longer easily accessible on the internet.] He timed the mysterious pressure pulses in WTC7 and found that they had occurred too closely together to have been the result of interior collapses (more detail below). No response was received to an email query for further comment.

The NIST inserted a note in NCSTAR 1 saying that probers had found no evidence of controlled demolition of the skyscrapers. The draft NCSTAR 1 report ignored the alternative hypothesis altogether and there is no evidence in the various studies that the NIST ever examined that possibility. In fact, there is no indication in NIST supporting data that experts on controlled demolition were consulted to see whether such a scenario had any credibility.


'Able Danger' ignored

The point that government 9/11 reports are unreliable was reinforced by the furor that erupted over a claim by Rep. Curt Weldon, R-Pa., that a supersecret Pentagon 'data mining' operation called 'Able Danger' had in 1999 or 2000 identified Mohamed Atta and three other future 9/11 hijackers as suspected members of an al Qaeda cell operating in the United States. Lee Hamilton, a co-chairman of the 9/11 commission, said that though the panel had heard something about the operation, Atta's name hadn't come up. However, a 9/11 panel spokesman then said that Atta's name had been mentioned during a briefing to a staffer but that the panel decided against mentioning the information because of credibility problems. Still, the panel's reports take pains to debunk numerous theories and innuendos that came to the attention of staffers.

At any rate, the commission never mentioned the operation or the hijack suspects. Defense Secretary Donald Rumsfeld denied in August 2005 ever having heard of 'Able Danger,' raising the question of why the Pentagon did not vigorously check into Weldon's comments about Able Danger that were broadcast in June 2005. In addition, the New York Times reports that an observer from the Pentagon sat in on a commission briefing about 'Able Danger.'

As of Aug. 11, the White House was staying silent on the subject, even though Weldon said he had in October 2001 told White House national security aide Stephen J. Hadley of the operation.

However, in October 2005, Rep. Dan Burton, R-Ind., told the Times that he and Weldon had met with Hadley on Sept. 25, 2001, when Hadley was shown a chart containing pre-attack information collected by Able Danger on suspected al Qaeda operatives. Weldon said he gave Hadley the chart. At this point, a Hadley spokesman confirmed that Hadley had seen such a chart but that it could not be found after a search of White House files.

In November 2005, former FBI Director Louis J. Freeh assailed the 9/11 commission over its handling of the Able Danger matter. He noted that the FBI would normally pounce on tips of that type in order to thwart terrorist attacks. In January 2007, the Senate Intelligence Committee said that its investigators had been unable to substantiate that the Pentagon had pinpointed Atta long before Sept. 11, 2001.


Following are discussions of the government theory, the scantiness of supporting evidence, the large and small pre-collapse blasts, the NIST's lack of clarity and the WTC7 collapse problems.

THE GOVERNMENT THEORY

The government theory is that the jet impacts and ensuing fires combined to trigger the collapse of the towers, with the fires causing most weakening in World Trade Center 1 and the impact doing most of the damage in WTC2, which was struck second but collapsed first.

And the fact that inward bowing of the east face of WTC2 occurred 18.5 minutes after impact while such bowing did not occur for WTC1 until just before collapse tends to support that possibility.

Essentially, the NIST decided that core column shortening strained superheated floor links, which failed, causing the floor assemblies to sag and drag the exterior walls inward, though it also seems to favor the idea that wrecked core columns toppled, pulling floors down with them. However, this idea is problematic since an excessive number of core columns would have to have been superheated, contrary to fire analyses.

Though a number of wide-flange columns were found to have been milled from substandard steel [NCSTAR 1-3, p107], the NIST does not believe that this issue was a highly significant factor in the collapses.

The NIST's assumptions and computer models showed that key structural responses leading to collapse were:

*Floor sagging caused by failure of thermally weakened truss members, resulting in pull-in forces.

*Downward displacement of the core [the reinforced interior structure of the tower] due to jet impact and shortening of core columns due to increased load and heat effects.

The high-temperature gases primarily heated floor trusses and the bottom face of the concrete and metal floor slabs through convection and the top face through radiation. As the floor system heat increased, web diagonals buckled and truss seats failed, allowing floor assemblies to sag, the NIST conjectured [NCSTAR 1-6D, p31, NCSTAR 1-6, p319; for a similar description, see NCSTAR 1-6, page 286].

The computer models covered floor trusses, core beams, perimeter and core columns and concrete floor slabs [NCSTAR 1-5G].

However, tests showed that steel coated with spray-on concrete-and-fiber insulation held up with no significant weakening, even under intense, prolonged heating [NIST 1-5B]. Thus, the NIST decided that fireproofing must have been dislodged by the jetliner crashes. Various possibilities were examined, with probers eventually settling on 'Case B' -- which required fireproofing shorn off core columns and a wide swath of floor trusses, but not from the underside of the adjacent floor deck.

In the main report, a Case B condition is that the 'soffit remains' in order to provide a hot enough upper layer of gases to critically damage the floor assembly parts [NCSTAR 1, p124 (or alternate p178)]. ('Soffit' is a builder's word that refers to a covering of a floor underside or to an enclosure between a floor underside and a ceiling.)

However, one must go to NCSTAR 1-5B (p110 ff) to learn that the soffit refers to 48-inch (1.2-meter) Marinite fireproofing boards used in experiments to determine heat flows that would critically damage floor assembly parts. (When this reporter checked, the 'text' version of NCSTAR 1-5B was unavailable, making the word 'soffit' difficult to locate in that report.)

NCSTAR 1-5's executive summary asserts that in Cases B and D, which are WTC1 and WTC2 scenarios leading to collapse, 'a more severe representation was to leave a 1.2m soffit that would maintain a hot upper layer on each fire floor.'

The summary adds, 'This produced a fire of longer duration near the core columns and the attached core membranes.'

It then becomes apparent that the Case B condition means that the soffit represents the fireproofing on the deck underside. Without insulation, the floor deck steel would transmit the high temperatures upward, possibly crumbling the lightweight concrete slab, with the heat then radiating into the air of the floor above, NIST data indicate.

A photo of a floor assembly test unit shows fireproofing coating the deck underside [NCSTAR 1-6, p49], implying that such protection was standard for both towers. Also, a none-too-clear 1993 photo of a WTC floor assembly shows at least some spray-on insulation (or SFRM) on the deck underside [NCSTAR 1-6, p26]. For a description of floor assemblies, see NCSTAR 1-6, p65ff.

Another NIST report found that, for Case B, it was necessary to have partially opened walls in order to have enough oxygen for the fires while still trapping gases in order to raise temperatures [NCSTAR 1-5G, p179], but fails to make clear that the closed walls in the scenario are ceiling firewalls. [Also see NCSTAR 1-5F, p105.]

In order for fireproofing to be stripped off, debris must have struck the steel components with accelerations of 40 g's or more (40 x 32 ft/s^2, or 40 x 9.8 m/s^2), the NIST found [NCSTAR 1-6A, p110, p162], with debris velocities on the order of 350 mph [NCSTAR 1, p117]. The NIST pointed out that the non-fire-rated tile hanging from ceilings would likely have fallen when the buildings were jarred by the impacts, a suggestion that accords with survivor observations. The ceiling tiles would have fallen at a rate of less than 1 g (less than 32 ft/s^2, or 9.8 m/s^2), meaning many wouldn't have fallen far before being struck by the spray of debris.

In addition, experiments showed that it took substantially more force to blast SFRM off a narrow-diameter bar than off flat-surface steel [NCSTAR 1-6A, p102]. Yet the government claims the SFRM was dislodged from the narrow-diameter web truss diagonal bars but not the planar decks.

Hence the government theory requires that at least some of the ceiling tiles be pulverized by the debris and become part of the debris spray, which would then strip the floor supports of insulation but not the floor deck [see below for a brief discussion of the velocities]. This reporter could find no detailed discussion of such a scenario.

Probers also suggest SFRM could have been shaken off steel elements as the building vibrated from the jet collision but decided against using the idea in their models. But, even if that suggestion were valid, the question would remain as to why the SFRM didn't also rattle loose from the floor decks.

The contention that the floor decks retained their insulation is supported by the 'cold spot' across floors 80, 81 and 82 of WTC2. Photos show what appear to be diagonally draped floor slabs showing through windows. Infrared images showed that this area didn't heat up, though it was surrounded by high temperatures [NCSTAR 1-5, p30].

Saying there was insufficient data to explain the cold spot [NIST 1-5, p36], the government probers managed not to draw attention to the point that undamaged floor deck fireproofing tended to imply undamaged floor support fireproofing.

HEAT ISSUES

Probers found that WTC steels tend to lose structural integrity if exposed to temperatures of 500°C or more for a period of at least 18 minutes. Generally, a temperature of 650°C is held to be critical.

The fuel available for the fires was estimated from the jet debris, jet fuel presumed not to have burned in the initial fireballs and from typical workstation contents. In the main report and elsewhere, NIST probers seem quite confident in their estimate of 4 pounds per square foot of combustibles on the fire floors, though NCSTAR 1-5's executive summary says that, for the relevant case, probers required 5 pounds per square foot in order to generate enough heat for each floor [NCSTAR 1-5, page xliv].

NIST is careful to debunk the myth that jet fuel raised temperatures to steel-bending levels. Its tests found that jet fuel accelerated burning of workstation contents but had little effect on temperatures [NCSTAR 1, p180].

There is very little physical evidence to support the government's theory. Of the 200,000 tons of steel from the twin towers, the NIST ended up with a paltry 236 pieces of steel for use as evidence.

However, a large number of still and video photos greatly assisted the investigation. Yet the photographic evidence is weak when it comes to the NIST hypothesis, with investigators admitting that the trade center fires, despite variability, were mostly low-heat fires [NCSTAR 1-5, p16], though they argue that smoke may have obscured evidence of the extent and heat of the fires in the two towers.

Except for one case, fires behind windows on WTC1's east face lasted 6 to 16 minutes. One window on floor 92 shows a fire lasting 28 minutes, long enough to have the potential to warp steel [NCSTAR 1-3, p58].

The NIST scenarios would have oxygen-fed but air-cooled fires near blown windows, with the hottest gases gathering near the core floor supports and the insulated floor deck acting as a horizontal fire wall.

Also, fires at most sites on WTC1's west and north faces appeared to be of low intensity, though flames were seen on occasion belching from north and south face windows [NCSTAR 1-5, p19]. Though fires blazed across 90 percent of WTC1 floors 96, 97 and 98, they tended to die out as they went, sweeping a floor in a somewhat circular fashion dictated by still-standing interior walls [NCSTAR 1-5, p19] and by the availability of oxygen as fire pressures blew out windows.

Fires generally did not spread through WTC2 [NCSTAR 1-5, p29].

Probers theorized that about 20 percent of floor units on WTC1's 97th and 98th floors failed because of thermal weakening of vertical supports [NCSTAR 1-6, p289]. Yet, 90 percent of the 31 core floor truss connectors (core seats) recovered were intact, though probers say much damage could still have occurred [NCSTAR 1-5, p130,131].

Examination of steel columns known to come from the fire floors proved virtually fruitless.

Twenty-six columns were identified by code numbers as coming from WTC1 fire floors. Yet only one showed clearcut evidence of temperatures in excess of 250°C [NCSTAR 1-3C, Appendix E, p448ff].

Both analysis of paint cracking and microscopic differences in steel surfaces showed very little indication that WTC1 fire-floor steels were exposed to sustained temperatures of 650°C or more [NCSTAR 1, p86, NCSTAR 1-3, p101].

For example, the two WTC1 core columns subjected to paint analysis showed no sign of temperatures above 250°C [NCSTAR 1, p86].

Among the perimeter panels that failed, fire played a negligible role [NCSTAR 1-3, p70].

The NIST says that its sample of columns, which includes only a few core columns, doesn't provide statistically meaningful data [NCSTAR 1-6, p86].

Additionally, the NIST's Case B fire simulation shows more than 50 percent of columns on fire floors sustaining temperatures above 250°C for various time periods [NCSTAR 1-5, p112 ff], but the sample of perimeter and core columns recovered -- if generalized statistically -- would indicate well below 50 percent of fire-floor columns reaching such temperatures. [See NCSTAR 1-3's appendix E, p448ff.]

After an outcry over the FBI's failure to obtain an order to preserve the steel evidence, volunteer experts and a professor with a National Science Foundation grant combed salvage yards and tagged steel pieces deemed to be 'structurally significant' -- suggesting that the experts did not run across core columns that showed fire damage or that piqued their interest.

Probers, citing scantiness of data, found that they faced 'challenges' in simulating WTC2 fires, forcing them to concentrate the fuel load and volatility estimates onto two floors,[citation to come] even though the aircraft impact analysis distributed these variables more widely [NCSTAR 1-2, p62].

MYSTERY BLASTS

'Pressure pulses' expelled anomalous puffs of smoke -- or possibly dust -- from both towers at various times prior to collapse, with many more blasts observed at WTC2 than WTC1 [NCSTAR 1-6, p154] -- though this difference may be because photographic evidence for WTC1 diminished sharply after WTC2 collapsed.

NIST probers still have no explanation for what they admit were highly unusual 'correlated puffs of smoke' that came from the towers. They conjecture that another cluster of major blasts was due to ignition of pools of jet fuel, but they do not vouch for that notion.

In WTC2, at least '65 occurrences' of smaller smoke puffs were recorded along with seven large blasts of smoke and flame that lasted about one minute each. However, the lesser puffs were considered to represent forces that were 'much too small to affect the tower's structural components' [NCSTAR 1-5, p37,38] -- though other NIST probers conclude that 'numerous puffs of smoke may indicate internal changes in architectural or structural features' [NCSTAR 1-6, p166]. Basically, the blasts are ignored in assessment of the probable collapse sequences.

No response was received to an email query sent to Richard G. Gann, who led the NCSTAR 1-5 research team and who edited the main report, asking for the reasoning used to conclude that the forces behind the smaller puffs were insufficient to damage steel components [letter appears below].

At 10:18:43 a.m., smoke suddenly billowed from floors 92 through 98 on the north and west faces of WTC1. At 10:22:59, inward bowing of the south wall occurred and at 10:28:20 the building collapsed [NCSTAR 1-6, p154]. The NCSTAR 1-5 team guessed that the event was triggered by settling of the core or collapse of floors [NCSTAR 1-5, p17]. Elsewhere, the NIST's chief investigator, Shyam Sunder, had said that it would take failure of a large number of floors to initiate collapse, though later his probers put the lower limit at 3 consecutive floors (the number of apparent tilted slabs seen through WTC2 windows).

The main report says the large bursts of smoke emitted from WTC2's 79th and 80th floors between 9:30 and 9:34 a.m. may have been caused by ignited jet fuel or by shifting floor slabs [NCSTAR 1, p43]. WTC2 fell at 9:59.

Also, at 10:21:15 a.m., 7 minutes before collapse, an intense burst of fiery light, lasting 3 seconds, appeared through WTC1's 98th floor windows [NCSTAR 1-5, p17].

NIST scientists estimated that well over half of each jet's fuel pooled inside the buildings rather than being immediately burned up in the fireballs that erupted upon impact [NCSTAR 1, p24, p42]. Two conditions favored rapid fuel burning, though another condition works against it. The jet fuel was all in the wings [NCSTAR 1, p104], which could be expected to disintegrate completely on impact, and the affected floors were mostly large open areas with few firewalls [NCSTAR 1, p57]. On the other hand, there would be some delay for used-up oxygen to be replenished as air rushed in through the hole created by the jet.

New York City firefighters reported that jet fuel had flowed into elevator shafts and elsewhere [NCSTAR 1, p163]. Even so, there seems to be a difficulty with the jet fuel distributions and the WTC2 blasts, which may explain why probers expressed restrained confidence in the jet fuel conjecture.

When reviewing the reports, one must of course be careful to distinguish blasts of smoke or dust that occur very near or at the times of collapse, which would be expected, from the other blasts.

However, physicist Jones noticed that in WTC7 a sequence of downward pulses occurred 0.2 seconds apart. Using the free-fall equation, y = (1/2)gt^2, he found that it requires at least 0.6 seconds for a floor to drop and strike another. The NIST gave scanty data on the pressure pulses.
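As a rough numerical sketch of that timing argument (the drop heights used here are my assumptions for illustration, not figures from Jones or the NIST):

    # Free-fall time t = sqrt(2y/g) for a few assumed drop heights,
    # compared with the observed 0.2-second spacing of the pulses.
    import math

    g = 32.0  # ft/s^2
    for drop_ft in (6.0, 12.5):  # assumed drop heights, for illustration only
        t = math.sqrt(2 * drop_ft / g)
        print(f"a {drop_ft} ft free fall takes about {t:.2f} s")
    # Output: about 0.61 s for 6 ft and 0.88 s for 12.5 ft -- both well above
    # 0.2 s, which by the same equation corresponds to a drop of only about 0.6 ft.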

The NIST's computer simulations included a scenario in which a tower is sabotaged with a 'small bomb,' numerous arson fires and a wrecked sprinkler system. The simulation left the building standing [NCSTAR 1, p144].

However, the NIST did not publicly consider the possibility that the towers in fact may have been sabotaged with high-energy explosives. No scenario in which larger bombs are attached to core columns and smaller ones attached to floor connectors is considered -- though such a simulation would have been closer to observations than the 'small bomb' simulation. Of course, such a possibility requires that the Port Authority police security system have been compromised so that radio-linked explosives could be planted on many floors, the plotters having no way of knowing for sure where the planes would hit.

Because 'available information, as extensive as it was, was neither complete nor of assured precision,' the NIST 'took steps to ensure that the conclusions of the effort were credible explanations for how the buildings collapsed' [NCSTAR 1, p141].

Does this mean a sabotage scenario was ruled out in advance?

Architect Young says he photographed the Ground Zero rubble pile two weeks after collapse and saw no steel with signs of melting, an indicator of use of explosives [See his full remarks below]. However, the NIST does not report that anti-terror probers had seen no signs of molten steel but simply keeps silent. On the other hand, Jones cites reports from expert eyewitnesses in credible publications to the contrary. They saw pools of molten metal at Ground Zero.

Jones says that the molten metal is "consistent with the use of a high-temperature thermite reaction, used to cut or demolish steel," adding, "The end products of the thermite reaction are aluminum oxide and molten iron."

Jones cites a number of other factors that convince him that the towers were demolished with explosives, including the point that the WTC7 pressure pulses occurred both sequentially downward -- which led him to the timing problem -- and sequentially upward, meaning interior collapse could not have been the cause.

"Delayed firing is used to help direct and control the direction of fall" of big buildings, according to Gary H. Hemphill's book Blasting Operations (McGraw-Hill, 1981). Hemphill, an expert in industrial use of explosives, noted that a sequential blasting machine in use in the 1980s could be set to fire at intervals of 10 milliseconds to 200 milliseconds.

The NIST points out that it was not permitted to impute blame to individuals or institutions [NCSTAR 1, page xxxi]. Would conclusions pointing to a conspiracy within the federal government be construed as imputing blame to an institution?

The NIST's initial unwillingness to discuss a 'large bomb' sabotage scenario can only add to concerns over the federal government's failure to control the evidence. Less than 5 percent of major steel elements from the main fire floors of WTC1 were held for analysis, with virtually no steel held from WTC2 or WTC7 -- which also collapsed under mysterious circumstances.

Why the FBI -- whose forensics expertise helped crack the 1993 World Trade Center bombing and, reportedly, the 1995 Oklahoma City bombing -- would not think to sift the rubble and seek an order to preserve steel component evidence is puzzling. In addition, it is notable that then-Mayor Rudolph Giuliani, a former federal prosecutor, was evidently initially unaware of the failure to preserve evidence.

And the NIST cites no reports from the FBI, the CIA or the Pentagon about on-site analysis by their investigators, who presumably would have checked rubble for telltale signs of bomb explosives. The CIA and the Pentagon had offices in WTC7. The 9/11 commission, which sought documents from federal investigative agencies, relied on the NIST research for the technical issues concerning the collapse of the towers.

Failure to control the evidence is all the more disturbing in light of the fact that -- to cite a Federal Emergency Management Agency study of the disaster -- 'many knowledgeable structural engineers' were astonished by the collapses.

In fact, NIST adviser Corbett had previously called for a major probe to replace FEMA's inquiry, which he decried as poorly funded and lacking subpoena power. The journal Fire Engineering, with which Corbett is affiliated, first raised the alarm in January 2002 about the disposal of the steel evidence.

Corbett was listed by Popular Mechanics as one of the experts it consulted for its March 2005 article: "9/11: debunking the myths" which backed the official position. In a CNN interview about the PM piece, he said that there was insufficient evidence to prove the towers had been blown by explosives, but he also said it is understandable that the issue lacks closure "because there's still a lot of unanswered questions."

A LACK OF CLARITY

Much of the NIST's experimental work appears to be of high quality, but the need to create a dubious narrative seems to have clouded professional standards.

Readers of the NIST's main report are not helped to understand that the '1.2m soffit' mentioned in a table and accompanying paragraph [NCSTAR 1, p124] implies that the NIST's 'credible' collapse model requires that the floor deck underside fireproofing remain intact, in contrast to nearby floor joist and girder fireproofing. In fact, the NIST is unclear about whether fireproofing remained on joists in its computer simulation.

In addition, the various scenarios or 'cases' cited show variations among the reports and it is not easy to discern whether apparent inconsistencies are important.

Here are some instances:

* NCSTAR 1-5, p105, gives the most influential variable in the scenarios as density of combustibles, with 5 lb/ft^2 cited as necessary, though elsewhere 4 lb/ft^2 is indicated for the same scenario [NCSTAR 1, p76].

* According to NCSTAR 1, p142, six scenarios, or cases, were considered for WTC1 and WTC2. But the two base cases were tossed out as not conforming to observations. Remaining were Cases A and B for WTC1 and Cases C and D for WTC2, which gave two levels of assumed pre-collapse damage for each building. Modelers found that only Cases B and D led to collapse.

But NCSTAR 1-5G, p179, tells the reader that fire simulation results for Case B were 'not qualitatively different' from those for Case A, which posits the less-severe pre-collapse conditions. This is surprising in that the main report says Case A was disregarded once it was clear it wouldn't trigger collapse [NCSTAR 1].

* NCSTAR 1-6D, p31, distinguishes two sets of cases -- one for aircraft damage estimates and one for fire damage estimates -- with the aircraft damage cases labeled Ai, Ci and so on.

* NCSTAR 1-6, p224, relates that simulations for WTC1 'used Case B impact damage and temperature histories' and for WTC2 'used Case D impact damage and temperature histories, as described in previous chapters,' where earlier scenarios dubbed Ai and so forth are discussed [NCSTAR 1-6, p121].

Still, NCSTAR 1-5G, p179, discloses that the researchers did not use Case B aircraft damage to get Case B fire damage. That is, less severe aircraft damage was needed to preserve key areas of fireproofing.

Also, the base case simulation 'provided a much better match to the observed damage' than did the worst case, NCSTAR 1-2, p48, relates.

At any rate, readers of the main report are not well informed as to the essential meanings of the various cases.

UNWARRANTED CONFIDENCE?

The NIST's narrative of twin tower events is simply a conjecture, but a casual reader who missed the disclaimers might think its 'facts' have mostly been proved.

NIST, though warning that the destruction of records in the collapse and the failure to preserve the steel left holes in the input data used for their computer simulations, asserts that it was 'able to gather sufficient evidence and documentation to reach firm findings and recommendations' [NCSTAR 1, p19]. However, the air of confidence rests on selective presentation of evidence and issues in part 2 of the main report [NCSTAR 1].

For example, the reader sees the table that refers to 'soffit' in NCSTAR 1, p142, but is left with little further information in the main report.

A reader desiring further guidance is directed to NCSTAR 1's appendix B, which lists some supporting reports. However, the main report is not properly footnoted or indexed, making it difficult to closely examine assertions.

In addition, NCSTAR 1-6 notes that the NIST's analytical methods strained the limits of structural engineering experience and training [NCSTAR 1-6, p9] and NCSTAR 1 says each step of the simulations 'stretched the state of technology and tested the limits of software tools' [NCSTAR 1, page xlii].

THE THIRD COLLAPSE

At 5:20 p.m., some seven hours after WTC1 fell, the 47-story WTC7 collapsed almost straight down, meaning collapse must have begun on a lower floor.

The NIST's principal analysis of that collapse has been 'decoupled' from twin towers analyses and postponed; the agency says staff workload necessitated the separation.

In a March 2005 Popular Mechanics article titled, '9/11: debunking the myths,' lead investigator Sunder is quoted as saying that new evidence indicates that WTC7 showed severe structural damage following the tower collapses and that this weakening, abetted by a longterm fire, was the agency's working hypothesis.

Some 10 lower stories, or about 25 percent of the building vertically, were 'scooped out,' he is reported to have said. (A Federal Emergency Management Agency, or FEMA, report also cited such damage, but did not view it as compelling.)

Yet NCSTAR 1-3, p114, says that NIST made no effort to check high-strain or impact properties of the type of steel used in WTC7 because 'WTC7 did not suffer any high strain rate events.'

Sunder said that an oddball design implied that failure of even one column on a lower floor might trigger collapse and suggested that a fuel-oil-fed fire contributed critical weakening.

However, NCSTAR 1-1J found that the standard safeguards for the building's several fuel oil systems would likely have blocked a longterm fuel-oil-fed fire, an idea first mentioned by skeptical FEMA probers.

The most likely source of the leaking fuel oil would have been the Salomon Brothers system, NCSTAR 1-1J says, with probers citing two possibilities: a fuel spill from a 250-gallon 'day tank' on the fifth floor or fuel continually pumped up from an underground tank. But they suggest failsafes should have worked.

FEMA probers have said 250 gallons couldn't yield enough heat to inflict critical damage.

The NIST's contracted probers, Raymond A. Grill and Duane A. Johnson, say it is barely conceivable that an electrical malfunction caused pumps to keep bringing up fuel from a 6,000-gallon tank buried underground. But they are puzzled as to the source of the electricity. Power to the building would have been shut off the morning of Sept. 11, though the building's emergency generators were powered by fuel oil.

The electrical schematics for the fuel system are missing, along with building maintenance records that might have yielded clues to the electrical system. Grill and Johnson succeeded in finding much other WTC7 documentation, however.

In the May 2002 FEMA report, investigators wrote: 'Although the total diesel fuel on the premises contained massive potential energy, the best hypothesis' for fire-fed building collapse 'has only a low probability of occurrence.' They demanded further inquiry as to how key supports could have given way.

In addition, the fuel oil had to pool in a mechanical room where possibly a truss was not firesheathed, they said.

In general, however, the FEMA report is not nearly so pointed. That report was edited by Theresa P. McAllister, who handled much of NIST's collapse analyses. She coauthored a lengthy report, NCSTAR 1-6, on the collapse scenarios that makes no mention of soffit.

It has been reported that Larry A. Silverstein, the real estate man who ran the trade center, was quoted in a PBS report as saying that he gave the go-ahead to the 'er-Fire Department' to 'pull' the building. A search of the PBS site for the interview proved fruitless, but Silverstein has put out a statement saying the FEMA report determined that fire was responsible for WTC7's collapse.

There is no record of steel-frame buildings over 10 stories high collapsing as a result of fire, probers say. The FEMA inquiry points out that in the 1990s the British Steel and Building Research Establishment fire-tested an eight-story steel structure, leaving secondary beams unsheathed by fireproofing. The building remained upright at the end of all six experiments.

The public comment period for the twin towers draft report ended Aug. 4, 2005, with the final version issued in September 2005.

During the public comment phase of the twin towers report, the NIST web site did not make clear that the principal WTC7 report had been omitted. Since then, the NIST has posted a sketchy document dated April 2005 that has a series of photos and a limited discussion with little supporting data. It contains a large disclaimer saying the agency had found no evidence of destruction by controlled demolition, missiles or bombs, but does not substantiate that assertion. [See 'NIST reports vanish' below]

Even so, the NIST has issued a set of findings and recommendations for building safety improvements without bringing in WTC7 data.


Please report errors to the email address below. Paul Conant's telephone number: 732-947-0749
Tile and debris velocities

We assume typical debris velocity is 350 mph and hold that number constant (though in reality the velocities vary with time).

We neglect the velocity of the compression wave triggered by impact, which is roughly 20,000 ft/s (about 6,000 m/s) through steel.

A WTC floor was 209 feet on a side. So debris reaching the far wall from the point of initial impact does so in about 0.41s. We are safe to neglect diagonal distance, which is only slightly greater than the horizontal here, even though debris that could strike the upper floor assembly would mostly be at angles to the horizontal.

We now find that in 0.41s, a tile can have dropped no more than (1/2)(32)(0.41)^2 = 2.7 feet. Even then, the probability is strong that only one edge will have dropped that far, so that the tile would still intercept much debris before it reached floor joists.

However, the government theory requires most of the damage to steel connectors that link floors and core columns nearer the initial impact wall. So we use 100 feet for the horizontal (angled debris path lengths can also be neglected here). We have 0.2s for the debris to travel 100 feet, with ceiling tile dropping no more than 7 inches. Again, it is likely that only one edge would have dropped that far, meaning the tile would intercept debris.

On the other hand, it is quite possible that more debris might follow the intercepted debris and still reach the floor supports. So it is not implausible that some floor system areas were stripped of fireproofing -- but further consideration seems necessary for the assumptions that stripped from each fire floor were a 40-foot swath of floor supports [NCSTAR 1, p21] or perhaps a volume above 2/3 of a floor area [NCSTAR 1-6, p121].
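For readers who want to check the arithmetic, here is a minimal sketch of the footnote's calculations, using the same assumed 350 mph debris speed and the two horizontal runs discussed above:

    # Debris travel time over an assumed horizontal run, and the free-fall drop
    # of a ceiling tile in that time (figures taken from the footnote above).
    v_debris = 350 * 5280 / 3600   # 350 mph in ft/s, roughly 513 ft/s
    g = 32.0                       # ft/s^2
    for run_ft in (209.0, 100.0):  # far wall; and the 100-ft case used above
        t = run_ft / v_debris
        drop = 0.5 * g * t ** 2
        print(f"{run_ft:.0f} ft: travel time {t:.2f} s, tile drop {drop:.1f} ft")
    # Output: about 0.41 s and a 2.7 ft drop for 209 ft; about 0.19 s and a
    # 0.6 ft (roughly 7 in.) drop for 100 ft -- matching the figures above.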


NIST reports vanish

On its website, the NIST says that the main WTC7 report has been deferred until October 2005.

It is unclear whether the postponed report is the one titled 'Structural analysis of the response of World Trade Center 7 to debris damage and fire' which is cited in prefatory material to various NCSTAR reports but is nowhere to be found on the NIST website. The NIST awarded Ramon Gilsanz and his New York engineering firm a contract to do computer simulations of the WTC7 collapse. Omitted from the NIST website is NCSTAR 1-6F, the report by Gilsanz and nine others.

Also omitted was NCSTAR 1-6G: "Analysis of Sept. 11, 2001 seismogram data" by W. Kim. Won-Young Kim of Lamont-Doherty Earth Observatory has done previous analyses of 9/11 seismographic data.

No explanation is given for the seismic report's deletion, though it likely contains information concerning WTC7's collapse.


Bloomberg ignores the probabilities

It is not straightforward to estimate the single-event probability of the collapse of WTC1 or WTC2. We know that beforehand the probability of collapse for one tower would have been considered quite low, below 5%. Still, it is quite credible for a one-time fluke to occur.

We are aware now that the NIST gives two divergent scenarios for the collapse of each of the twin towers: for WTC1, the main cause is given as fire with a lesser contribution from load-structure problems; for WTC2 the main cause is given as load-structure problems with a lesser contribution from fire.

Hence one might construe that the collapses of WTC1 and WTC2 are effectively independent events.

As for WTC7, the event is only tenuously causally connected to the other collapses, and is easily regarded as independent of the other two.

Probabilities for independent events may be multiplied.

So, let's be very generous and estimate an 80% probability of collapse for WTC1 and likewise for WTC2. Knowing that government experts saw the probability of collapse for WTC7 as very low, we'll be generous and assign a 50% probability to that event.

In that case, the probability that all three buildings would collapse is 0.8^2 x 0.5 = 0.32, or 32%, implying a 68% probability of conspiracy.

However, a slightly more realistic collapse probability for WTC7 is 25%. Keeping the tower probabilities at 80%, the new calculation gives an 84% probability of conspiracy.

Now suppose we are extremely generous and consider the collapse of the twin towers as a single event with a 95% probability. We multiply 0.95 x 0.5, getting 47.5%, which implies a 52.5% probability of conspiracy.

If we still consider the collapse of the twin towers as a single event but plug in a more realistic probability of 70% and plug in a more realistic 10% (still too high) for WTC7, we have a probability of conspiracy of 93%.
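The arithmetic behind all of these figures is simply multiplication of assumed independent probabilities, with the 'probability of conspiracy' taken as 1 minus the product. A minimal sketch, using the illustrative collapse probabilities assumed above (they are guesses, not measured values):

    # Multiply assumed independent collapse probabilities; the implied
    # 'conspiracy' figure in the text is 1 minus that product.
    scenarios = {
        "WTC1 0.8, WTC2 0.8, WTC7 0.5":       (0.8, 0.8, 0.5),
        "WTC1 0.8, WTC2 0.8, WTC7 0.25":      (0.8, 0.8, 0.25),
        "towers as one event 0.95, WTC7 0.5": (0.95, 0.5),
        "towers as one event 0.7, WTC7 0.1":  (0.7, 0.1),
    }
    for label, probs in scenarios.items():
        p_all = 1.0
        for p in probs:
            p_all *= p
        print(f"{label}: P(all collapse) = {p_all:.3f}, implied 'conspiracy' = {1 - p_all:.3f}")
    # Output: 0.320/0.680, 0.160/0.840, 0.475/0.525, 0.070/0.930 -- the 68%,
    # 84%, 52.5% and 93% figures cited above.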

Michael Bloomberg, the mayor of New York and owner of a major financial news service, has raised no hue and cry over the problem of probability. Seemingly the many analysts at his beck and call failed to make him aware of such improbabilities.


Conant's NIST query

Date: Mon, 18 Jul 2005 09:44:25-0700 (PDT)

From: 'P.R. Conant' (prconant@yahoo.com)

Subject: Regarding Case B

To: rgannns@nist.gov, inquiries@nist.gov, director@nist.gov

Dear Dr. Ganns [sic],

I have reviewed the NCSTAR reports and can locate nothing of substance regarding the soffit accounted for in the Case B model [because the NIST search engine was defective].

Specifically, can you tell me what the soffit was composed of and precisely what its location was in relation to the floor system. Also, what does '1.2m' mean in the phrase '1.2m soffit'? Is this a side length or vertical length from the metal deck?

NIST's main report also uses the table with the 'soffit remains' note. Yet after reading NCSTAR 1-6, I was unable to be sure whether the 'soffit remains' scenario was used in the final computer models that describe the sequences for WTCI and II.

At another point in your report [NCSTAR 1-5], your team says that the pressure pulses from the anomalous puffs of smoke did not imply a force sufficient to cause structural damage. What reasoning was used to substantiate this assertion?

I am taking the liberty of sending copies of this email to two other NIST addresses.

I may quote from this email and your response for future publication.

Thank you.

Best regards,

Paul Conant

732 947 0749


Architect counters Conant but doubts government

Date: Wed, 27 Jul 2005 19:57:11-0700

To: Paul Conant [and others]

From: 'John Young' (jya@pipeline.org)

Re: A troubling WTC fireproofing scenario

Paul,

I appreciate your forwarding these comments on the WTC collapse, but I'll offer counter points to these latest.

Fireproofing on the floor supports was probably dislodged by vibration passing through the slender rods and top and bottom chords of the bar joists because the floor plate would have been more resistant due to its comparatively greater mass and solidity. I have seen fireproofing come off such bar joists in non-impacted structures due to the lesser quality application often given to such joists due to their hard to reach upper surfaces, compared, say, to the easily sprayed deck underside. Fireproofing inspectors regularly ask for corrective work on such joists for this reason.

The 'smoke pulses' were probably dust pulses indicating the early stages of collapse before the whole floor succumbed. Various parts of the structure are weaker than others, and these give way according to their weakness, say, for example, gypsum board partitions being crushed explosively by the descending much heavier floor parts. This kind of dust can be seen on ordinary demolition jobs when the big hammers and balls cave-in walls onto light-weight interior elements.

I looked at the collapse heap about two weeks after the attack (on October 3, 2001), and took several dozen photos. I was looking for evidence of what caused the hard to believe collapse. By then a lot of steel had been carted away, and apparently sold to recyclers overseas, a hard to understand dispersal of evidence. Still, from what I could see from shelf angles which supported the bar joists, the breakage was clean and not due to melting. No doubt I could see only what was on top of the heap, but many of the giant columns were still in the pile jutting out of it, and the exterior lattice frame was visible in large sections.

No melting could be seen in any of the structural parts. This is not to say that there was no evidence buried under the pile.

My supposition is that the collapse was due in large part to separation of the floor support joists from their supporting columns, from impact and from subsequent swaying beyond design limits, along with oscillation uniquely fostered by the tubular design of the towers -- this oscillation would not be so prominent in ordinary framed-steel structures.

Basically, the towers shook themselves apart once the initial impact set off vibration, oscillation and separation of the various components, with the floor joist-to-column connection being the weakest part of the system.

Two 3/4-inch bolts at each end of the bar joist attached the joist to the shelf angle, with the angle welded or bolted to the column (as noted above, many of these shelf angles remained attached to columns on October 3). These are very weak joints with almost no capacity to resist lateral loading, and certainly not violent sway and oscillation.

Note that the exterior tube remained intact the longest, with the floor structures collapsing inside. This tube was whiplashing the floor structures due to it keeping its integrity as a lattice, and heaving the floors against the stiffer interior core.

The floors collapsed onto one another until they created a force strong enough to blow out the enclosing lattice, and during this sequence a lot of explosive dust and debris would have been created.

WTC7 is indeed a special case, and its delayed investigation is due to the building not being owned by the Port Authority. Silverstein and his insurers are under no obligation to make the investigation public, and no doubt have many liability reasons to keep it confidential.

Court cases may eventually reveal part of the story, but it is very common for building collapse suits to be sealed by agreement of all parties. I have tried to get several such cases unsealed so the public can learn more about unsafe conditions, but so far none have been unsealed, and it is likely that a lot about WTC will never be revealed as well -- so remain skeptical of whatever is officially released due to the long-standing practice of concealing building hazards but releasing only the information that fits conventional wisdom and sustains the inflated value of urban high-rise property.

We design professionals are quite complicit in this unsavory practice.


Conant responds to Young

Your idea that the fireproofing vibrated off the joists but not the decks is not addressed in the NIST analyses. NIST mentioned the vibration possibility but decided against using it because probers felt it was too iffy.

As for bar fireproofing being less reliable because of application problems, the NIST does mention that point, but goes on to say that in recent years all fireproofing in the floor assemblies had been upgraded to a greater thickness and reapplied.

Regarding the 'correlated' smaller blasts of smoke or dust, it was NIST investigators who described them as very unusual.

Your notion that the towers essentially shook themselves apart is of interest in light of the fact that the fires were largely of low heat. However, NIST analyzed the sway of WTC2 after impact and found that its period of oscillation was within what would be expected for a very windy day (there was no wind on Sept. 11). Of course, you still could be right.

As for WTC7, the NIST says its researchers were simply too busy with the twin towers to complete their WTC7 work.

Both FEMA and NIST investigators are clearly highly disconcerted by that collapse.

With respect to the possibility that government probers were shielding builders, the report goes to great lengths to accentuate the positive, particularly underscoring that most steel components met or exceeded specifications and that the use of closely-packed columns tended to provide redundant load-carriers, so that failure of a few would not compromise the building.

The probers note that two documents from the 1960s reported that each tower was designed to withstand the impact of a Boeing 707 flying at 600 mph (thus carrying significantly more potential energy than the planes that actually hit), but could find no detailed calculations. The NIST probers also cited an MIT study that found that the Sept. 11 impacts were enough to topple the towers without fire but said that their analysis showed the towers should stand; the probers suggested that MIT had not taken into account the fact that the floor slabs would absorb much of the initial impact.

The possibility that the government probers were responding to a perceived need to protect Rockefeller family prestige cannot be altogether dismissed, considering that the World Trade Center was the pet project of David Rockefeller, the powerful New York banker.

Still, it seems unlikely that scientists would risk being part of a coverup related to such a cataclysmic event without being told that 'national security' required the evasions.

Email: paulrc3@yahoo.com

 

Wednesday, January 26, 2022

 Herbert Yardley: king of the whistleblowers

Herbert O. Yardley is America's archetypical spook whistleblower. He had successfully modernized America's code-breaking power as an Army Signal Corps lieutenant during and shortly after World War I. But the powers that be decided to force him out of his well-paid post as a high-caliber code-cracker.

Herbert Yardley's NSA biography
http://www.nsa.gov/public_info/_files/cryptologic_spectrum/many_lives.pdf

As an NSA history says, Yardley, "with no civil service status or retirement benefits, found himself unemployed just as the stock market was collapsing and the Great Depression beginning. He left Queens and returned to his hometown of Worthington, Indiana, where he began writing what was to become the most famous book in the history of cryptology. There had never been anything like it. In today's terms, it was as if an NSA employee had publicly revealed the complete communications intelligence operations of the Agency for the past twelve years -- all its techniques and major successes, its organizational structure and budget -- and had, for good measure, included actual intercepts, decrypts, and translations of the communications not only of our adversaries but of our allies as well.

"The American Black Chamber created a sensation when it appeared on 1 June 1931, preceded by excerpts in the Saturday Evening Post, the leading magazine of its time. The State Department, in the best tradition of 'Mission: Impossible,' promptly disavowed any knowledge of Yardley's activities."

Government officials, though angry, decided to do nothing. According to some accounts, Yardley then went to work for the Japanese. The Canadians hired him briefly during World War II, but British intelligence insisted on his ouster.

Yardley went on to write a successful book on poker strategies.

Wednesday, August 28, 2013

Note on probability and periodicity

Draft 1

Please let me know of errors. My email address is conant78@gmail.com



By PAUL CONANT

We consider a binary string that, we assume, began specifically at the first observation.

If that string appears to follow a periodic pattern, a question often asked is whether that string was nonrandomly generated -- that is, whether the probabilities for bit selection are not, in fact, independent.

One approach is the runs test, which compares the observed number of runs with the number expected for a random string of length n (the distribution of run counts is approximately normal). This is a very useful test, but it fails for

00110011

which has about the expected number of runs for n = 8 but which one might suspect is not as likely to be random as an aperiodic string would be.
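For concreteness, here is a minimal Python sketch (my own, not part of the original note) of the standard Wald-Wolfowitz form of the runs test applied to that string. The observed 4 runs sit close to the expected 5, with |z| well under 1, so the test raises no alarm -- which is the point.

import math

def runs_test(bits):
    # Count runs, then compare with the Wald-Wolfowitz expectation and
    # variance implied by the observed counts of 0s and 1s.
    runs = 1 + sum(bits[i] != bits[i - 1] for i in range(1, len(bits)))
    n1, n2 = bits.count('1'), bits.count('0')
    n = n1 + n2
    mean = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    z = (runs - mean) / math.sqrt(var)
    return runs, mean, z

print(runs_test('00110011'))   # (4, 5.0, z of about -0.76)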

So what we want to know is the number of periodic strings, which we calculate as 2(1 + 2C2 + 4C2). The members of that subset are all distinct permutations, each of which we take to occur with equal probability, under the provisional assumption that the bits are independent, each with probability 1/2.

So let us consider this test for nonrandomness on a string with length n, where n is a composite.

Let n = 8

and the string is

00110011

On length 8 we have the factors 1, 2, 4, 8. Now a string composed of all 0s or all 1s is certainly periodic. So we use the factor 1, along with factors 2 and 4. But we do not consider the factor 8 because a period of length 8 with no repetitions gives us no information about the probability of periodicity (there is no obvious periodic pattern).

So then the cardinality of the set of periodic strings is:

2[1 + 2C2 + 4C2] = 16

which we divide by 2^8, or

16/2^8 = 1/16.

So our reasoning is that the probability of happening upon a randomly generated periodic 8-bit string is 1/16, or 6.25%, in contrast to happening upon an 8-bit string of a specific permutation agreed upon in advance, which is 2^-8 = 1/256, or 0.39%. The probability of happening upon an aperiodic bit string is of course 15/16, or 93.75%. This all seems reasonable: p(specific permutation) < p(periodicity when the bit length is composite) < p(aperiodicity on that same string length).

So we argue that the probability of nonrandom influence, given that we have observed a periodic string, is 93.75%.

The general formula, where A_1, ..., A_m are the proper factors of the composite length n that are greater than 1, is

[1 + (A_1)C2 + ... + (A_m)C2]/2^(n-1)

Let's check n = 9.

[1 + 3C2]/2^8 = 1/64
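As a cross-check, a brute-force count (a sketch of my own, taking 'periodic' to mean that the string repeats a block whose length is a proper divisor of n) returns the same totals for these two cases: 16 of 256 strings for n = 8 and 8 of 512 for n = 9.

from itertools import product

def is_periodic(bits, n):
    # True if the string repeats a block whose length is a proper divisor of n.
    return any(n % d == 0 and all(bits[i] == bits[i + d] for i in range(n - d))
               for d in range(1, n))

for n in (8, 9):
    count = sum(is_periodic(s, n) for s in product('01', repeat=n))
    print(n, count, count / 2 ** n)   # 8 16 0.0625, then 9 8 0.015625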

A caveat: sometimes one permutation corresponds to more than one period. It will be found that that only occurs when the number of bits equals p^m, where p is some prime and m is a positive integer. We check the case of 8 bits. Here we find that

0000,0000    0101,0101    0011,0011

and their mirror images are the only strings that have one period superposed on another. That means we might wish to subtract 3 from our set of periodic strings, giving [2(1 + 2C2 + 4C2) - 3]/2^8 = 13/2^8, or about 0.051. However, as n increases we will be able to neglect this adjustment.

We have been discussing exact periodicity. Often, however, we are confronted by partial periodicity, such as this:

00100100100

So what we want to know is the probability that this is part of string 001001001001

which we obtain by 1/64(1/2) = 1/128 = 0.0078;

similarly for 0010010010

where we calculate 1/64(1/4) = 1/256 =  0.0039. This represents the probability that the string is part of a periodic string of length 12.

This probability is distinct from the probability of happening upon a periodic 12-bit string, which is:

(1 + 2C2 + 3C2 + 4C2)/2^11 = 11/2^11 = 0.00537.

Important points:

1. The periodicity probabilities change in accordance with the primes, which are not distributed smoothly.

2. As bit length n tends to infinity, the numerator 2(1 + the sum of the combinations over the aliquot factors) becomes negligible relative to the denominator 2^n. This means that with n sufficiently large, the probability of periodicity and the probability of a specific permutation are close enough to be viewed as identical.

Point 2 permits us to look at a specific string of bit length n >> 5, see that it is periodic or "near" periodic, and assign it a probability of about 2^-n. This is important because we are able to discern the probability of dependence by use of a number that is traditionally only used to predict a specific bit string.

A nicety here is that the ratio of primes to composites diminishes as bit length goes to infinity. For a prime, there are only aperiodic strings. That is, we have pC2/2^n. In the case of 11 bits, we have 55/2048 = 0.0269. So as n increases, the probability that a randomly selected number could be periodic goes up. This consideration does not affect the basic idea we have given.

(The formula of periodicity -- with no repetitions of the period -- for a prime is simply 1/2^(p-1).)
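A quick numeric check of the prime case (same brute-force notion of periodicity as in the sketch above: for prime p only the all-0 and all-1 strings repeat a proper-divisor block):

p = 7
# Two qualifying strings out of 2^p, so the probability is 2/2^p = 1/2^(p-1).
print(2 / 2 ** p, 1 / 2 ** (p - 1))   # both print 0.015625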

Of course periodicity isn't the only sort of pattern. One can use various algorithms -- say all 0's except at the (n^2)th bit -- to make patterns.

A simple pattern is mirror imaging, in which the string on either side of a midpoint or mid-space is a mirror of the other; that is, bits are reversed.

To wit:

001001.100100

How many mirror pairs are there? Answer: 2^6. So the probability of happening upon a mirror pair is 2^6/2^12 = 1/64 = 0.015625.
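A brute-force tally (again a sketch of my own, defining a mirror pair as a string whose second half is the reversal of its first half) confirms the count for n = 12: 64 such strings out of 4,096.

from itertools import product

n = 12
count = sum(s[n // 2:] == s[:n // 2][::-1] for s in product('01', repeat=n))
print(count, count / 2 ** n)   # 64 0.015625
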

Thursday, November 3, 2011

The knowledge delusion

Reflections on The God Delusion (Houghton Mifflin 2006) by the evolutionary biologist Richard Dawkins.

Preliminary remarks:
Our discussion focuses on the first four chapters of Dawkins' book, wherein he makes his case for the remoteness of the probability that a monolithic creator and controller god exists.

Alas, it is already November 2011, some five years after publication of 
Delusion. Such a lag is typical of me, as I prefer to discuss ideas at my leisure. This lag isn't quite as outrageous as the timing of my paper on Dawkins' The Blind Watchmaker, which I posted about a quarter century after the book first appeared.

I find that I have been quite hard on Dawkins, or, actually, on his reasoning. Even so, I have nothing but high regard for him as a fellow sojourner on spaceship Earth. Doubtless I have been unfair in not highlighting positive passages in
 Delusion, of which there are some (1). Despite my desire for objectivity, it is clear that much of the disagreement is rooted in my personal beliefs (see the link Zion below).

[Apologies for the helter-skelter end note system. However, there should be little real difficulty.]


Summary:
Dawkins applies probabilistic reasoning to etiological foundations, without defining probability or randomness. He disdains Bayesian subjectivism without realizing that that must be the ground on which he is standing. In fact, nearly everything he writes on probability indicates a severe lack of rigor. This lack of rigor compromises his other points.

Relevant links listed at bottom of page.


By PAUL CONANT

Richard Dawkins argues that he is no proponent of simplistic "scientism" and yet there is no sign in Delusion's first four chapters that in fact he isn't a victim of what might be termed the "scientism delusion." But, as Dawkins does not define scientism, he has plenty of wiggle room.

From what I can gather, those under the spell of "scientism" hold the, often unstated, assumption that the universe and its components can be understood as an engineering problem, or set of engineering problems. Perhaps there is much left to learn, goes the thinking, but it's all a matter of filling in the engineering details. (http://en.wikipedia.org/wiki/Scientism).

Though the notion of a Laplacian cosmos -- one that requires no god to step in every now and then to keep things stable -- is officially passe, many scientists seem to be under the impression that the model basically holds, needing only a bit of tweaking to account for the effects of relativity and of quantum fluctuations.

Doubtless Dawkins is correct in his assertion that many American scientists and professionals are closet atheists, with quite a few espousing the "religion" of Einstein, who appreciated the elegance of the phenomenal universe but had no belief in a personal god (2).

Interestingly, Einstein had a severe difficulty with physical, phenomenal reality, objecting strenuously to the "probabilistic" requirement of quantum physics, famously asserting that "god" (i.e., the cosmos) "does not play dice." He agreed with Erwin Schroedinger that Schroedinger's imagined cat strongly implies the absurdity of "acausal" quantum behavior (3). It turns out that Einstein was wrong, with statistical experiments in the 1980s demonstrating that "acausality" -- within constraints -- is fundamental to quantum actions.

Many physicists have decided to avoid the quantum interpretation minefield, discretion being the better part of valor. Even so, Einstein was correct in his refusal to play down this problem, recognizing that modern science can't easily dispense with classical causality. We speak of energy in terms of vector sums of energy transfers (notice the circularity), but no one has a good handle on what the "it" behind that abstraction really is.

A partly subjective reality at a fundamental level is anathema to someone like Einstein -- so disagreeable, in fact, that one can ponder whether the great scientist deep down suspected that such a possibility threatened his reasoning in denying a need for a personal god. Be that as it may, one can understand that a biologist might not be familiar with how nettlesome the quantum interpretation problem really is, but Dawkins has gone beyond his professional remit and taken on the roles of philosopher and etiologist. True, he rejects the label of philosopher, but his basic argument has been borrowed from the atheist philosopher Bertrand Russell.

Dawkins recapitulates Russell thus: "The designer hypothesis immediately raises the question of who designed the designer."

Further: "A designer God cannot be used to explain organized complexity because a God capable of designing anything would have to be complex enough to demand the same kind of explanation... God presents an infinite regress from which we cannot escape."

Dawkins' a priori assumption is that "anything of sufficient complexity to design anything, comes into existence only as the end product of an extended process of gradual evolution."

If there is a great designer, "the designer himself must be the end product of some kind of cumulative escalator or crane, perhaps a version of Darwinism in its own universe."

Dawkins has no truck with the idea that an omnipotent, omniscient (and seemingly paradoxical) god might not be explicable in engineering terms. Even if such a being can't be so described, why is he/she needed? Occam's razor and all that.

Dawkins does not bother with the results of Kurt Goedel and their implications for Hilbert's sixth problem: whether the laws of physics can ever be -- from a human standpoint -- both complete and consistent. Dawkins of course is rather typical of those scientists who pay little heed to that result or who have tried to minimize its importance in physics. A striking exception is the mathematical physicist Roger Penrose, who saw that Goedel's result was profoundly important (though mathematicians have questioned Penrose's interpretation).

A way to intuitively think of Goedel's conundrum is via the Gestalt effect: the whole is greater than the sum of its parts. But few of the profound issues of phenomenology make their way into Dawkins' thesis. Had the biologist reflected more on Penrose's The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics (Oxford 1989), perhaps he would not have plunged in where Penrose so carefully trod.

Penrose has referred to himself, 
according to a Wikipedia article, as an atheist. In the film A Brief History of Time, the physicist said, "I think I would say that the universe has a purpose, it's not somehow just there by chance ... some people, I think, take the view that the universe is just there and it runs along -- it's a bit like it just sort of computes, and we happen somehow by accident to find ourselves in this thing. But I don't think that's a very fruitful or helpful way of looking at the universe, I think that there is something much deeper about it."

By contrast, we get no such ambiguity or subtlety from Dawkins. Yet, if one deploys one's prestige as a scientist to discuss the underpinnings of reality, more than superficialities are required. The unstated, a priori assumption is, essentially, a Laplacian billiard ball universe and that's it, Jack.

Dawkins embellishes the Russellian rejoinder with the language of probability: What is the probability of a superbeing, capable of listening to millions of prayers simultaneously, existing? This follows his scorning of Stephen D. Unwin's The Probability of God (Crown Forum 2003), which cites Bayesian methods to obtain a high probability of god's existence.
http://www.stephenunwin.com/

Dawkins is uninterested in Unwin's subjective prior probabilities, all the while being utterly unaware that his own probability assessment is altogether subjective. Heedless of the philosophical underpinnings of probability theory, he doesn't realize that by assigning a probability of "remote" at the extremes of etiology, he is engaging in a subtle form of circular reasoning.

The reader deserves more than an easy putdown of Unwin in any discussion of probabilities. Dawkins doesn't acknowledge that Bayesian statistics is a thriving school of research that seeks to find ways to as much as possible "objectify" the subjective assessments of knowledgeable persons. There has been strong controversy concerning Bayesian versus classical statistics, and there is a reason for that controversy: it gets at foundational matters of etiology. Nothing on this from Dawkins.

Without a Bayesian approach, Dawkins is left with a frequency interpretation of probability (law of large numbers and so forth). But we have very little -- in fact Dawkins would say zero -- information about the existence or non-existence of a sequence of all powerful gods or pre-cosmoses. Hence, there are no frequencies to analyze. Hence, use of a probability argument is in vain.

Dawkins elsewhere says (4) that he has read the great statistician Ronald Fisher, but one wonders whether he appreciates the meaning of statistical analysis. Fisher, who also opposed the use of Bayesian premises, is no solace when it comes to frequency-based probabilities. Take Fisher's combined probability test, a technique for data fusion or "meta-analysis" (analysis of analyses): What are the several different tests of probability that might be combined to assess the probability of god?

Dawkins is quick to brush off William A. Dembski, the intelligent design advocate who uses statistical methods to argue that the probability is cosmically remote that life originated in a random manner. And yet Dawkins himself seems to have little or no grasp of the basis of probabilities.

In fact, Dawkins makes no attempt to define randomness, a definition routinely brushed off in elementary statistics texts but which represents quite a lapse when getting at etiological foundations (5) and using probability as a conceptual, if not mathematical, tool.

But, to reiterate, the issue goes yet deeper. If, at the extremes, causation is not nearly so clear-cut as one might naively imagine, then at those extremes probabilistic estimates may well be inappropriate.

Curiously, Russell discovered Russell's paradox, which was ousted from set theory by fiat (axiom). Then along came Goedel who proved that axiomatic set theory (a successor to the theory of types propounded by Russell and Alfred North Whitehead in their Principia Mathematica) could not be both complete and consistent. That is, Goedel jammed Russell's paradox right down the old master's throat, and it hurt. It hurt because Goedel's result makes a mockery of the fond Russellian illusion of the universe as giant computerized robot. How does a robot plan for and build itself? Algorithmically, it is impossible. Dawkins handles this conundrum, it seems, by confounding the "great explanatory power" of natural selection -- wherein lifeform robots are controlled by robotic DNA (selfish genes) -- with the origin of the cosmos.

But the biologist, so focused on this foundational issue of etiology, manages to avert his eyes from the Goedelian "frame problem." And yet even atheistic physicists sense that the cosmos isn't simplistically causal when they describe the overarching reality as a "spacetime block." In other words, we humans are faced with some higher or other reality -- a transcendent "force" -- in which we operate and which, using standard mathematical logic, is not fully describable. This point is important. Technically, perhaps, we might add an axiom so that we can "describe" this transcendent (topological?) entity, but that just pushes the problem back and we would then need another axiom to get at the next higher entity.

Otherwise, Dawkins' idea that this higher dimensional "force" or entity should be constructed faces the Goedelian problem that such construction would evidently imply a Turing algorithm, which, if we want completeness and consistency, requires an infinite regress of axioms. That is, Dawkins' argument doesn't work because of the limits on knowledge discovered by Goedel and Alan Turing. This entity is perforce beyond human ken.

One may say that it can hardly be expected that a biologist would be familiar with such arcana of logic and philosophy. But then said biologist should beware superficial approaches to foundational matters (6).

At this juncture, you may be thinking: "Well, that's all very well, but that doesn't prove the existence of god." But here is the issue: One may say that this higher reality or "power" or entity is dead something (if it's energy, it's some kind of unknown ultra-energy) or is a superbeing, a god of some sort. Because this transcendent entity is inherently unknowable in rationalistic terms, the best someone in Dawkins' shoes might say is that there is a 50/50 chance that the entity is intelligent. I hasten to add that probabilistic arguments as to the existence of god are not very convincing (7).

Please see Appendix on a priori probability for further discussion of the issue.

A probability estimate's job is to mask out variables on the assumption that with enough trials these unknowns tend to cancel out. Implicitly, then, one is assuming that a god has decided not to influence the outcome (8). At one time, in fact, men drew lots in order to let god decide an outcome. (One of the reasons that some see gambling as sinful is because it dishonors god and enthrones Lady Randomness.)

Curiously, Dawkins pans the "argument from incredulity" proffered by some anti-Darwinians but his clearly-its-absurdly-improbable case against a higher intelligence is in fact an argument from incredulity, being based on his subjective expert estimate.

Dawkins' underlying assumption is that mechanistic hypotheses of causality are valid at the extremes, an assumption common to modern naive rationalism.

Another important oversight concerns the biologist's Dawkins-centrism. "Your reality, if too different from mine, is quite likely to be delusional. My reality is obviously logically correct, as anyone can plainly see." This attitude is quite interesting in that he very effectively gives some important information about how the brain constructs reality and how easily people might suffer from delusions, such as being convinced that they are in regular communication with god.

True, Dawkins jokingly mentions one thinker who posits a Matrix-style virtual reality for humanity and notes that he can see no way to disprove such a scenario. But plainly Dawkins rejects the possibility that his perception and belief system, with its particular limits, might be delusional.

In Dawkins' defense, we must concede that the full ramifications of quantum puzzlements have yet to sink into the scientific establishment, which -- aside from a distaste for learning that, like Wile E. Coyote, they are standing on thin air -- has a legitimate fear of being overrun by New Agers, occultists and flying saucer buffs. Yet, by skirting this matter, Dawkins does not address the greatest etiological conundrum of the 20th century which, one would think, might well have major implications in the existence-of-god controversy.

Dawkins is also rather cavalier 
about probabilities concerning the origin of life, attacking the late Fred Hoyle's "jumbo jet" analogy without coming to grips with what was bothering Hoyle and without even mentioning that scientists of the caliber of Francis Crick and Joshua Lederberg were troubled by origin-of-life probabilities long before Michael J. Behe and Dembski touted the intelligent design hypothesis.

Astrophysicist Hoyle, whose steady state theory of the universe was eventually trumped by George Gamow's big bang theory, said on several occasions that the probability of life assembling itself from some primordial ooze was equivalent to the probability that a tornado churning through a junkyard would leave a fully functioning Boeing 747 in its wake. Hoyle's atheism was shaken by this and other improbabilities, spurring him toward various panspermia (terrestrial life began elsewhere) conjectures. In the scenarios outlined by Hoyle and Chandra Wickramasinghe, microbial life or proto-life wafted down through the atmosphere from outer space, perhaps coming from "organic" interstellar dust or from comets.

One scenario had viruses every now and again floating down from space and, besides setting off the occasional pandemic, enriching the genetic structure of life on earth in such a way as to account for increasing complexity. Hoyle was not specifically arguing against natural selection, but was concerned about what he saw as statistical troubles with the process. (He wasn't the only one worried about that; there is a long tradition of scientists trying to come up with ways to make mutation theory properly synthesize with Darwinism.)

Dawkins laughs off Hoyle's puzzlement about mutational probabilities without any discussion of the reasons for Hoyle's skepticism or the proposed solutions.

There are various ideas about why natural selection is robust enough to, thus far, prevent life from petering out (9). In my essay Do dice play God? (link above), I touch on some of the difficulties and propose a neo-Lamarckian mechanism as part of a possible solution, and at some point I hope to write more about the principles that drive natural selection. At any rate, I realize that Dawkins may have felt that he had dealt with this subject elsewhere, but his four-chapter thesis omits too much. A longer, more thoughtful book -- after the fashion of Penrose's The Emperor's New Mind -- is, I would say, called for when heading into such deep waters.

Hoyle's qualms, of course, were quite unwelcome in some quarters and may have resulted in the Nobel prize committee bypassing him. And yet, though the space virus idea isn't held in much esteem, panspermia is no longer considered a disrespectable notion, especially as more and more extrasolar planets are identified. Hoyle's use of panspermia conjectures was meant to account for the probability issues he saw associated with the origin and continuation of life. (Just because life originates does not imply that it is resilient enough not to peter out after X generations.)

Hoyle, in his own way, was deploying panspermia hypotheses in order to deal with a form of the anthropic principle. If life originated as a prebiotic substance found across wide swaths of space, probabilities might become reasonable. It was the Nobelist Joshua Lederberg who made the acute observation that interstellar dust particles were about the size of organic molecules. Though this correlation has not panned out, that doesn't make Hoyle a nitwit for following up.

In fact, Lederberg was converted to the panspermia hypothesis by yet another atheist (and Marxist), J.B.S. Haldane, a statistician who was one of the chief architects of the "modern synthesis" merging Mendelism with Darwinism.

No word on any of this from Dawkins, who dispatches Hoyle with a parting shot that Hoyle (one can hear the implied chortle) believed that archaeopteryx was a forgery, after the manner of Piltdown man. The biologist declines to tell his readers about the background of that controversy and the fact that Hoyle and a group of noted scientists reached this conclusion after careful examination of the fossil evidence. Whether or not Hoyle and his colleagues were correct, the fact remains that he undertook a serious scientific investigation of the matter. (9.0)

http://www.chebucto.ns.ca/Environment/NHR/archaeopteryx.html

Another committed atheist, Francis Crick, co-discoverer of the doubly helical structure of DNA, was even wilder than Hoyle in proposing a panspermia idea in order to account for probability issues. He suggested in a 1970s paper and in his book Life Itself: Its Origin and Nature (Simon & Schuster 1981) that an alien civilization had sent microbial life via rocketship to Earth in its long-ago past, perhaps as part of a program of seeding the galaxy. Why did the physicist-turned-biologist propose such a scenario? Because the amino acids found in all lifeforms are left-handed; somehow none of the mirror-image right-handed compounds survived, if they were ever incorporated at all. That discovery seemed staggeringly unlikely to Crick (9.1).

I don't bring this up to argue with Crick, but to underscore that Dawkins plays Quick-Draw McGraw with serious people without discussing the context. I.e., his book comes across as propagandistic, rather than fair-minded. It might be contrasted with John Allen Paulos' book Irreligion (see Do dice play god? above), which tries to play fair and which doesn't make duffer logico-mathematical blunders (10).

Though Crick and Hoyle were outliers in modern panspermia conjecturing, the concept is respectable enough for NASA to take seriously.

The cheap shot method can be seen in how Dawkins deals with Carl Jung's claim of an inner knowledge of god's existence. Jung's assertion is derided with a snappy one-liner that Jung also believed that objects on his bookshelf could explode spontaneously. That takes care of Jung! -- irrespective of the many brilliant insights contained in his writings, however controversial. (Disclaimer: I am neither a Jungian nor a New Ager).

Granted that Jung was talking about what he took to be a paranormal event and granted that Jung is an easy target for statistically minded mechanists and granted that Jung seems to have made his share of missteps, we make three points:

1. There was always the possibility that the exploding object occurred as a result of some anomalous, but natural event.

2. A parade of distinguished British scientists has expressed strong interest in paranormal matters, among them officers of paranormal study societies. Brian Josephson, a Briton who received a Nobel prize for the quantum physics behind the Josephson junction, speaks up for the reality of mental telepathy (for which he has been ostracized by the "billiard ball" school of scientists).

3. If Dawkins is trying to debunk the supernatural using logical analysis, then it is not legitimate to use belief in the supernatural to discredit a claim favoring the supernatural.

Getting back to Dawkins' use of probabilities, the biologist contends with the origin-of-life issue by invoking the anthropic principle and the principle of mediocrity, along with a verbal variant of Drake's equation http://en.wikipedia.org/wiki/Drake_equation

The mediocrity principle says that astronomical evidence shows that we live on a random speck of dust on a random dustball blowing around in a (random?) mega dust storm.

The anthropic principle says that, if there is nothing special about Earth, isn't it interesting how Earth travels about the sun in a "Goldilocks zone" ideally suited for carbon based life and how the planetary dynamics, such as tectonic shift, seem to be just what is needed for life to thrive (as discussed in the book Rare Earth: Why Complex Life is Uncommon in the Universe by Peter D. Ward and Donald Brownlee (Springer Verlag 2000))? Even further, isn't it amazing that the seemingly arbitrary constants of nature are so exactly calibrated as to permit life to exist, as a slight difference in the index of those constants known as the fine structure constant would forbid galaxies from ever forming? This all seems outrageously fortuitous.

Let us examine each of Dawkins' arguments.

Suppose, he says, that the probability of life originating on Earth is a billion to one or even a billion billion to one (10^-9 and 10^-18). If there are that many Earth-like planets in the cosmos, the probability is virtually one that life will arise spontaneously. We just happen to be the lucky winner of the cosmic lottery, which is perfectly logical thus far.

Crick, as far as I know, is the only scientist to point out that we can only include the older sectors of the cosmos, in which heavy metals have had time to coalesce from the gases left over from supernovae -- i.e., second generation stars and planets (by the way, Hoyle was the originator of this solution to the heavy metals problem). Yet still, we may concede that there may be enough para-Earths to answer the probabilities posed by Dawkins.

Though careful to say that he is no expert on the origin of life, Dawkins' probabilities, even if given for the sake of argument, are simply Bayesian "expert estimates." But, it is quite conceivable that those probabilities are far too high (though I candidly concede it is very difficult to assign any probability or probability distribution to this matter).

Consider that unicellular life, with the genes on the DNA (or RNA) acting as the "brain," exploits proteins as the cellular workhorses in a great many ways. We know that sometimes several different proteins can fill the same job, but that caveat doesn't much help what could be a mind-boggling probability issue.

Suppose that, in some primordial ooze or on some undersea volcanic slope, a prebiotic form has fallen together chemically and, in order to cross the threshold to lifeform, requires one more protein to activate. A protein is a molecule that takes on a specific shape, carrying specific electrochemical properties, after its chain of amino acids folds up. Protein molecules fit into each other and other constituents of life like lock and key (though on occasion more than one key fits the same lock).

The amino acids used by terrestrial life can, it turns out, be shuffled in many different ways to yield many different proteins. How many ways? About 10^60, which exceeds the number of stars in the observable universe by at least 36 orders of magnitude! And the probability of such a spark-of-life event might be in that ball park. If one considers the predecessor protein link-ups as independent events and multiplies those probabilities, we would come up with numbers even more absurd.
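To give a sense of where a figure like 10^60 can come from -- the chain length used below is an assumption of mine, chosen purely for illustration and not a claim from the text -- 20 amino-acid choices at each of 46 positions already yields about 7 x 10^59 distinct sequences.

# Illustration only: 20 amino-acid choices at each of 46 chain positions.
# The chain length of 46 is an assumed figure, not a biological claim.
combos = 20 ** 46
print(combos)             # about 7 x 10^59
print(len(str(combos)))   # 60 digits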

But, Dawkins has a way out, though he loses the thread here. His way out is that a number of physicists have posited, for various reasons, some immense -- even infinite -- number of "parallel" universes, which have no or very weak contact with this one and are hence undetectable. This could handily account for our universe having the Goldilocks fine structure constant and, though he doesn't specify this, might well provide enough suns in those universes that have galaxies to account for even immensely improbable events.

I say Dawkins loses the thread because he scoffs at religious people who see the anthropic probabilities as favoring their position concerning god's existence without, he says, realizing that the anthropic principle is meant to remove god from the picture. What Dawkins himself doesn't realize is that he mixes apples and oranges here. The anthropic issue raises a disturbing question, which some religious people see as in their favor. Some scientists then seize on the possibility of a "multiverse" to cope with that issue.

But now what about Occam's razor? Well, says Dawkins, that principle doesn't quite work here. To paraphrase Sherlock Holmes, once one eliminates all the reasonable explanations, the remaining explanation, no matter how absurd it sounds, must be correct.

And yet what is Dawkins' basis for the proposition that a host of undetectable universes is more probable than some intelligent higher power? There's the rub. He is, no doubt unwittingly, making an a priori assumption that any "natural" explanation is more reasonable than a supernatural "explanation." Probabilities really have nothing to do with his assumption.

But perhaps we have labored in vain over the "multiverse" argument, for at one point we are told that a "God capable of calculating the Goldilocks values" of nature's constants would have to be "at least as improbable" as the finely tuned constants of nature, "and that's very improbable indeed." So at bottom, all we have is a Bayesian expert prior estimate.

Well, say you, perhaps a Wolfram-style
 algorithmic complexity argument can save the day. Such an argument might be applicable to biological natural selection, granted. But what selected natural selection? A general Turing machine can compute anything computable, including numerous "highly complex" outputs programmed by easy-to-write inputs. But what probability does one assign to a general Turing machine spontaneously arising, say, in some electronic computer network? Wolfram found that "interesting" cellular automata were rare. Even rarer would be a complex cellular automaton that accidentally emerged from random inputs.

I don't say that such a scenario is impossible, but rather that to assume it just must be so is little more than hand-waving.

In fact, we must be very cautious about how we use probabilities concerning emergence of high-information systems. Here is why: A sufficiently rich mix of chemical compounds may well form a negative feedback dynamical system. It would then be tempting to apply a normal probability distribution to such a system, and that distribution very well may yield reasonable results for a while. BUT, if the dynamical system is non-linear -- which most are -- the system could reach a threshold, akin to a chaos point, at which it crosses over into a positive feedback system or into a substantially different negative feedback system.

The closer the system draws to that tipping point, the less the normal distribution applies. In the chaos zone, normal probabilities are generally worthless. Hence to say that thus and such an outcome is highly improbable based on the previous state of the system is to misunderstand how non-linearities can work. This point, it should be conceded, might be a bit too abstruse for Dawkins' readers.
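To illustrate the tipping-point idea -- a sketch of my own using the textbook logistic map, not a model drawn from the book under review -- the same simple update rule gives tame, nearly constant output at one parameter setting and erratic, wide-ranging output past the chaotic threshold, so statistics fitted to the tame regime say nothing useful about the other.

import statistics

def logistic_orbit(r, x0=0.2, steps=2000, burn=200):
    # Iterate x -> r*x*(1-x) and return the orbit after a burn-in period.
    x, orbit = x0, []
    for i in range(steps):
        x = r * x * (1 - x)
        if i >= burn:
            orbit.append(x)
    return orbit

for r in (2.9, 3.5, 3.9):   # settles to a point, then a cycle, then chaos
    o = logistic_orbit(r)
    print(r, round(statistics.mean(o), 3), round(statistics.pstdev(o), 3))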

Dawkins tackles the problem of the outrageously high information values associated with complex life forms by conceding that a species, disconnected from information about causality, has only a remote probability of occurrence by random chance. But, he counters, there is in fact a non-random process at work: natural selection.

I suppose he would regard it a quibble if one were to mention that mutations occur randomly, and perhaps so it is. However, it is not quibbling to question how the powerful process of natural selection first appeared on the scene. In other words, the information values associated with the simplest known form (least number of genes) of microbial life is many orders of magnitude greater than the information values associated with background chemicals -- which was Hoyle's point in making the jumbo jet analogy.

And then there is the probability of life thriving. Just because it emerges, there is no guarantee that it would be robust enough not to peter out in a few generations (9).

Dawkins dispenses with proponents of intelligent design, such as biologist Michael J. Behe, author of Darwin’s Black Box: The Biochemical Challenge to Evolution (The Free Press 1996), by resort to the conjecture that a system may exist after its "scaffolding" has vanished. This conjecture is fair, but, at this point, the nature of the scaffolding, if any, is unknown. Dawkins can't give a hint of the scaffolding's constituents because, thus far, no widely accepted hypothesis has emerged. Natural selection is a consequence of an acutely complex mechanism. The "scaffolding" is indeed a "black box" (it's there, we are told, but no one can say what's inside).

Though it cannot be said that intelligent design advocate Behe has proved "irreducible complexity," the fact is that the magnitude of organic complexity has even prompted atheist scientists to look far afield for plausible explanations.

Biologists, Dawkins writes, have had their consciousnesses raised by natural selection's "power to tame improbability" and yet that power has very little to do with the issues of the origins of life or of the universe and hence does not bolster his case against god. I suppose that if one waxes mystical about natural selection -- making it a mysterious, ultra-abstract principle, then perhaps Dawkins makes sense. Otherwise, he's amazingly naive.

Note
It must be acknowledged that in microbiological matters, probabilities need not always follow a routine independence multiplication rule. In cases where random matching is important, we have the number 0.63 turning up quite often.

For example, if one has n addressed envelopes and the n corresponding letters are randomly shuffled and then put in the envelopes, what is the probability that at least one letter arrives at the correct destination? The surprising answer is the alternating sum 1 - 1/2! + 1/3! - 1/4! + ... (out to the 1/n! term). For n greater than 10 the probability converges near 63%.

That is, we don't calculate, say, 11^-11 (about 3.5x10^-12), or some routine binomial combinatorial multiple; rather, the series approximates very closely 1 - e^-1 = 0.63.
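A short check of the letters-and-envelopes figure (a sketch; the function names and trial count are mine): the exact alternating-series value and a simulation both land near 0.632 for n = 11.

import math, random

def exact_match_prob(n):
    # P(at least one letter lands in its own envelope) = 1 - D_n/n!
    return 1 - sum((-1) ** k / math.factorial(k) for k in range(n + 1))

def simulated_match_prob(n, trials=200_000):
    hits = 0
    for _ in range(trials):
        perm = random.sample(range(n), n)        # a random stuffing of envelopes
        hits += any(slot == letter for slot, letter in enumerate(perm))
    return hits / trials

print(exact_match_prob(11), simulated_match_prob(11))   # both near 0.632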

Similarly, suppose one has eight distinct pairs of socks randomly strewn in a drawer and thoughtlessly pulls out six one by one. What is the probability of at least one matching pair?

The first sock has no match. The probability the second will fail to match the first is 14/15. The probability for the third failing to match is 12/14 and so on until the sixth sock. Multiplying all these probabilities to get the probability of no match at all yields 32/143. Hence the probability of at least one match is 1 - 32/143 or about 78%.
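A simulation of the sock drawer (another sketch of mine; the trial count is arbitrary) agrees with the 1 - 32/143, or roughly 78 percent, figure.

import random

def sock_match_prob(pairs=8, drawn=6, trials=200_000):
    socks = [i // 2 for i in range(2 * pairs)]    # two socks share each pair label
    hits = 0
    for _ in range(trials):
        sample = random.sample(socks, drawn)
        hits += len(set(sample)) < drawn          # a repeated label means a matched pair
    return hits / trials

print(sock_match_prob(), 1 - 32 / 143)   # both near 0.776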

These are minor points, perhaps, but they should be acknowledged when considering probabilities in an evolutionary context.


Appendix on a priori probability
Let us digress a bit concerning the controversy over Bayesian inference (7a,7b), which is essentially about how one deploys an a priori probability.

If confronted with an urn about which we know only that it contains some black balls and some white ones and, for some reason, we are compelled to wager whether an initial draw yields a black ball, we might agree that our optimal strategy is to assign a probability of success of 1/2. In fact, we might well agree that -- barring resort to intuition or appeal to a higher power -- this is our only strategy. Of course, we might include the cost aspect in our calculation. A classic example is Pascal's wager on the nonexistence of god. Suppose, given a probability of, say, 1/2, one turns out to be wrong?

Now suppose we observe, say, 30 draws, with replacement, which we break down into three trials of 10 draws each. In each trial, about 2/3 of the draws are black. Three trials isn't many, but it is perhaps enough to convince us that blacks make up close to two-thirds of the population. We have used frequency analysis to estimate that the independent probability of choosing a black ball is close to 2/3. That is, we have used experience to revise our probability estimate, using "frequentist" reasoning. What is the difference between three trials end-to-end and one trial? This question is central to the Bayesian controversy. Is there a difference between three simultaneous trials of 10 draws each and three run consecutively? These are slippery philosophical points that won't detain us here.

But we need be clear on what the goal is. Are we using an a priori initial probability that influences subsequent probabilities? Or, are we trying to detect bias (including neutral bias of 1/2) based on accumulated evidence?

For example, suppose we skip the direct proportions approach just cited and use, for the case of replacement, the Bayesian conditional probability formula, assigning an a priori probability of b to event B, a black ball withdrawal. That is, p(B | B) = p(B & B)/p(B), so that, with independent draws, p(b & b) = p(b | b)p(b) = b·b = b^2. For five black balls in succession, we get b^5.

Yes, quite true that we have the case in which the Bayesian formula collapses to the simple multiplication rule for independent events. But our point is that if we apply the Bayesian formula differently to essentially the same scenario, we get a different result, as the following example shows.

Suppose the urn has a finite number N of black and white balls in unknown proportion and suppose n black balls are drawn consecutively from the urn. What is the probability the next ball will be black? According to the Bayesian formula -- applied differently than as above -- the probability is (n+1)/(n+2) (8.0).

Let N = the total number of balls drawn and to be drawn and n = those that have been drawn, with replacement. S_n is the run of consecutive draws observed as black. S_N is the total number of black draws possible, those done and those yet to be done. What is the probability that all draws will yield black given a run of S_n black? That is

what is p[S_N = N | S_n = n]?

But this

= p[S_N = N and S_n = n]/p[S_n = n]

or (1/(N+1))/(1/(n+1)) = (n+1)/(N+1). If N = n+1, we obtain (n+1)/(n+2).

C.D. Broad, in his derivation for the finite case, according to S.L. Zabell (8.0), reasoned that all ratios j/n are equally likely and discovered that the result is not dependent on N, the population size, but only on the sample size n. Bayes' formula is applied as a recursive summation of factorials, eventually leading to (n+1)/(n+2).


This result was also derived for the infinite case by Laplace and is known as the rule of succession.

Laplace's formula, as given by Zabell (8.0), is

[∫_0^1 p^(r+1) (1-p)^(m-r) dp] / [∫_0^1 p^r (1-p)^(m-r) dp] = (r+1)/(m+2)

Laplace's rule of succession contrasts with that of Thomas Bayes, as reported by his intellectual executor Richard Price. Bayes had considered the case where nothing is known concerning a potential event prior to any relevant trials. Bayes' idea is that all probabilities would then be equally likely.

Given this assumption and told that a black ball has been pulled from an urn n times in unfailing succession, it can be seen that

P[a < p < b] = (n+1) ∫_a^b p^n dp = b^(n+1) - a^(n+1)

In Zabell (8.0), this is known as Price's rule of succession. We see that this rule of succession of course might (it's a stretch) be of some value in estimating the probability that the sun will rise tomorrow but is worthless in estimating the probability of god's existence.
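Both rules can be checked numerically under the uniform-prior assumption stated above (a Monte Carlo sketch of my own; the interval 0.9 to 1.0 and the trial count are arbitrary choices): condition on n straight black draws, then see how often the next draw is black and how often the true p fell in the interval.

import random

def succession_check(n, a=0.9, b=1.0, trials=500_000):
    # Draw p uniformly, keep only the runs of n straight blacks, then record
    # (1) whether the next draw is black and (2) whether p lies in (a, b).
    kept = next_black = in_interval = 0
    for _ in range(trials):
        p = random.random()
        if all(random.random() < p for _ in range(n)):
            kept += 1
            next_black += random.random() < p
            in_interval += a < p < b
    return next_black / kept, in_interval / kept

n = 5
print(succession_check(n))                                   # near the values below
print((n + 1) / (n + 2), 1.0 ** (n + 1) - 0.9 ** (n + 1))    # 0.857..., 0.468...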

To recapitulate: If we know there are N black and white balls within and draw, with replacement, n black balls consecutively, there are N-n possible proportions. So one may say that, absent other information, the probability that any particular ratio is correct is 1/(N-n). That is, the distribution of the potential frequencies is uniform on grounds that each frequency is equiprobable.

So this is like asking what is the probability of the probability, a stylization some dislike. So in the finite and infinite cases, a uniform probability distribution seems to be assumed, an assumption that can be controversial -- though in the case of the urn equiprobability has a justification. I am not quite certain that there necessarily is so little information available that equiprobability is the best strategy, as I touch on in "Caution A" below.

Another point is that, once enough evidence from sampling the urn is at hand, we should decide -- using some Bayesian method perhaps -- to test various probability distributions to see how well each fits the data.

Caution A: Consider four draws in succession, all black. If we assume a probability of 1/2, the result is 0.5^4 = 0.0625, which is above the usual 5% level of significance. So are we correct in conjecturing a bias? For low numbers, the effects of random influences would seem to preclude hazarding a probability of much in excess of 1/2. For 0.5^5 = 0.03125, we might be correct to suspect bias. For the range n = 5 to n = 19, I suggest that the correct proportion is likely to be found between 1/2 and 3/4 and that we might use the mean of 0.625 [a note on that topic will go online soon, which will include discussion of an estimation for n >= 20 when we do not accept the notion that all ratios are equiprobable].

Caution B: Another issue is applying the rule of succession to a system in which perhaps too much is unknown. The challenge of Hume as to the probability of the sun rising tomorrow was answered by Laplace with a calculation based on the presumed number of days that the sun had already risen. The calculation generated much derision and did much to damage the Bayesian approach. (However, computer-enhanced Bayesian methods these days enjoy wide acceptance in certain disciplines.)

The issue that arises is the inherent stability of a particular system. An urn has one of a set of ratios of white to black balls. But, a nonlinear dynamic system is problematic for modeling by an urn. Probabilities apply well to uniform, which is to say, for practical purposes, periodic systems. However, quasi-periodic systems may well give a false sense of security, perhaps masking sudden jolts into atypical, possibly chaotic, behavior. Wasn't everyone marrying and giving in marriage and conducting life as usual when in 2004 a tsunami killed 230,000 people in 14 countries bordering the Indian Ocean? (Interestingly, however, Augustus De Morgan proposed a Bayesian-style formula for the probability of the sudden emergence of something utterly unknown, such as a new species (8a)).

That said, one can nevertheless imagine a group of experts, each of whom gives a probability estimate to some event, and taking the average (perhaps weighted via degree of expertise) and arriving at a fairly useful approximate probability. In fact, one can imagine an experiment in which such expert opinion is tested against a frequency model (the event would have to be subject to frequency analysis, of course).

We might go further and say that it is quite plausible that a person well informed about a particular topic might give a viable upper or lower bound probability for a particular set of events, though not knowledgeable about precise frequencies. For example, if I notice that the word "inexorable" has appeared at least once per volume in 16 of the last 20 books I have read, I can reason that, based on previous reading experience, the probability that that particular word would appear in a book is certainly less than 10%. Hence, I can say that the probability of randomness rather than tampering by some capricious entity is, using combinatorial methods, less than one in 5 billion. True, I do not have an exact input value. But my upper bound probability is good enough.
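The arithmetic behind that bound can be spelled out (a sketch; 0.10 is the essay's upper-bound estimate for the per-book chance, and independence across books is the assumption behind the "combinatorial methods"): the binomial tail for 16 or more appearances in 20 books is on the order of 3 x 10^-13, far below one in 5 billion.

from math import comb

# Probability of 16 or more "hits" in 20 books if each book independently
# contains the word with probability at most 0.10.
p_tail = sum(comb(20, k) * 0.10 ** k * 0.90 ** (20 - k) for k in range(16, 21))
print(p_tail, p_tail < 1 / 5_000_000_000)   # about 3.2e-13, True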

We consider the subjectivist vs. objectivist conceptions of probability as follows:

Probability Type I 
is about degree of belief or uncertainty.

Two pertinent questions about P1 are:

1. How much belief does a person have that an event will happen within some time interval?

2. How much belief does a person have that an event that has occurred did so under the conditions given?

Degree of belief may be given, for example, as an integer on a scale from 0 to 10, which, as it happens, can be pictured as a pie cut into 10 wedges, or as percentages in steps of 10. When a person is being fully subjective ("guesstimating," to use a convenient barbarism), one tends to focus on easily visualizable pie portions, such as tenths.

The fact that a subjective assessment can be numbered on a scale leads easily to ratios. That is, if one is "seven pie wedges" sure, it is easy enough to take the number 7 and set it against the complement of three pie wedges. We then may speak as if the odds are 3 to 7 that our belief is wrong.

Of course, such ratios aren't really any better than choosing a number between 0 and 10 for one's degree of belief. This is one reason why such subjective ratios are often criticized as of no import.

Probability Type II
 then purports to demonstrate an objective method of assigning numbers to one's degree of belief. The argument is that a thoughtful person will agree that what one doesn't know is often modelable as a mixture which contains an amount q and an amount p of something or other -- that is, the urn model. If one assumes that the mixture stays constant for a specified time, then one is entitled to use statistical methods to arrive at some number close to the true ratio. Such ratios are construed to mirror objective reality and so give a plausible reason for one's degree of belief, which can be acutely quantified, permitting tiny values.

P2 requires a classical, essentially mechanist view of phenomenal reality, an assumption that is open to challenge, though there seems little doubt that stochastic studies are good predictors for everyday affairs (though this assertion also is open to question).


1. We don't claim that none of his criticisms are worth anything. Plenty of religious people, Martin Luther included, would heartily agree with some of his complaints, which, however, are only tangentially relevant to his main argument. Anyone can agree that vast amounts of cruelty have occurred in the name of god. Yet, it doesn't appear that Dawkins has squarely faced the fact of the genocidal rampages committed under the banner of godlessness (Mao, Pol Pot, Stalin).

What drives mass violence is of course an important question. As an evolutionary biologist, Dawkins would say that such behavior is a consequence of natural selection, a point underscored by the ingrained propensity of certain simian troops to war on members of the same species. No doubt Dawkins would concede that the bellicosity of those primates had nothing to do with beliefs in some god.

So it seems that Dawkins may be placing too much emphasis on beliefs in god as a source of violent strife, though we should grant that it seems perplexing as to why a god would permit such strife.

Still, it appears that the author of Climbing Mount Improbable (W.W. Norton 1996) has confounded correlation with causation.


2. Properly this footnote, like the previous one, does not affect Dawkins' case against god's existence, which is the reason for the placement of these remarks.
In a serious lapse, Dawkins writes that "there is something to be said" for treating Buddhism and Confucianism not as religions but as ethical systems. In the case of Buddhism, it may be granted that it is atheistic in the sense of denying a personal, monolithic god. But, from the perspective of a materialist like Dawkins, Buddhism certainly purveys numerous supernaturalistic ideas, with followers espousing ethical beliefs rooted in a supernatural cosmic order -- which one would think qualifies Buddhism as a religion.

True, Dawkins' chief target is the all-powerful god of Judaism, Christianity and Islam (Zoroastrianism too), with little focus on pantheism, henotheism or supernatural atheism. Yet a scientist of his standing ought to be held to an exacting standard.


3. As well as conclusively proving that quantum effects can be scaled up to the "macro world."
4. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (W.W. Norton 1986).

5. The same might be said of Dembski.

6. A fine, but significant, point: Dawkins, along with many others, believes that Zeno's chief paradox has been resolved by the mathematics of bounded infinite series. However, quantum physics requires that potential energy be quantized. So height H above ground is measurable discontinuously in a finite number of lower heights. So a rock dropped from H to ground must first reach H', the next discrete height down. How does the rock in static state A at H reach static state B at H'? That question has no answer, other than to say something like "a quantum jump occurs." So Zeno makes a sly comeback.

This little point is significant because it gets down to the fundamentals of causality, something that Dawkins leaves unexamined.
7. After the triumphs of his famous theorems, Goedel stirred up more trouble by finding a solution to Einstein's general relativity field equations which, in Goedel's estimation, demonstrated that time (and hence naive causality) is an illusion. A rotating universe, he found, could contain closed time loops such that if a rocket traveled far enough into space it would eventually reach its own past, apparently looping through spacetime forever. Einstein dismissed his friend's solution as inconsistent with physical reality.

Before agreeing with Einstein that the solution is preposterous, consider the fact that many physicists believe that there is a huge number of "parallel," though undetectable, universes.

And we can leave the door ajar, ever so slightly, to Dawkins' thought of a higher power fashioning the universe being a result of an evolutionary process. Suppose that far in our future an advanced race builds a spaceship bearing a machine that resets the constants of nature as it travels, thus establishing the conditions for the upcoming big bang in our past such that galaxies, and we, are formed. Of course, we then are faced with the question: where did the information come from?

7a. An excellent discussion of this controversy is found in Interpreting Probability (Cambridge 2002) by David Howie.

7.b An entertaining popular discussion is found in The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy (Yale 2011) by 
Sharon Bertsch McGrayne.

8.0 
C.D. Broad and others are cited with respect to this result in Symmetry and Its Discontents (Cambridge 2005) by S.L. Zabell.


8.a Zabell offers a proof of De Morgan's formula in Symmetry (above).

8. Unless one assumes another god who is exactly contrary to the first, or perhaps a group of gods whose influences tend to cancel.

9. Consider a child born with super-potent intelligence and strength. What are the probabilities that the traits continue?

A. If the child matures and mates successfully, the positive selection pressure from one generation to the next faces a countervailing tendency toward dilution. It could take many, many generations before that trait (gene set) becomes dominant, and in the meantime, especially in the earlier generations, extinction of the trait is a distinct possibility (see the sketch after point B).

B. In social animals, very powerful individual advantages come linked to a very powerful disadvantage: the tendency of the group to reject as alien anything too different. Think of the historical tendency of white mobs to lynch physically superior black males, or of the early 19th century practice among Australian tribesmen of killing mixed-race offspring born to their women.
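
To illustrate the extinction risk raised in point A, here is a minimal Monte Carlo sketch of my own (not part of the original argument; the offspring model, the parameter values and names such as lineage_survives are assumptions made purely for illustration). It treats carriers of the trait as a simple branching process with a 5 percent reproductive advantage and asks how often the lineage nonetheless dies out by chance:

import random

def lineage_survives(s=0.05, generations=300, established=1000):
    """Toy branching process: each carrier leaves 0, 1 or 2 carrier
    offspring (two independent chances, each succeeding with probability
    (1 + s) / 2), so the expected number of carrier offspring is 1 + s."""
    p = (1.0 + s) / 2.0
    carriers = 1                       # one newborn with the trait
    for _ in range(generations):
        if carriers == 0:
            return False               # trait extinct
        if carriers >= established:
            return True                # trait effectively established
        carriers = sum((random.random() < p) + (random.random() < p)
                       for _ in range(carriers))
    return carriers > 0

if __name__ == "__main__":
    trials = 2000
    survived = sum(lineage_survives() for _ in range(trials))
    print(f"lineage survived in {survived} of {trials} runs")

In this toy model the favored lineage dies out in roughly four runs out of five, even with its built-in 5 percent advantage. The model ignores mating, dominance and dilution -- the other half of point A -- so it captures only the chance-extinction part of the argument.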


9.0 In another example of his dismissive attitude toward fellow scientists, Dawkins writes:

"Paul Davies' The Mind of God seems to hover somewhere between Einsteinian pantheism and an obscure form of deism -- for which he was rewarded with the Templeton Prize (a very large sum of money given annually by the Templeton Foundation, usually to a scientist who is prepared to say something nice about religion)."

Dawkins goes on to upbraid scientists for taking Templeton money on grounds that they are in danger of introducing bias into their statements.

I have not read The Mind of God: The Scientific Basis for a Rational World (Simon & Schuster 1992), so I cannot comment on its content. On the other hand, it would appear that Dawkins has not read Davies' The Fifth Miracle: The Search for the Origin and Meaning of Life (Simon & Schuster 1999), or he might have been a bit more prudent.

Fifth Miracle is, as is usual with Davies, a highly informed tour de force. I have read several books by Davies, a physicist, and have never caught him in duffer errors of the type found in Dawkins' books.

By the way, Robert Shapiro (see footnote 9.1 below) didn't find Hoyle's panspermia work to be first rate, but I have the sense that that assessment may have something to do with the strong conservatism of chemists versus the tradition of informed speculation among astrophysicists. Some of Shapiro's complaints could also be lodged against string theorists.

Similarly, the noted biologist Lynn Margulis denounced Hoyle's panspermia speculations, but, again, what may have been going on was a science culture clash.

Some of the notions of Hoyle and his collaborator, N.C. Wickramasinghe, which seemed so outlandish in the eighties, have gained credibility with new discoveries concerning extremophiles and the potential of space-borne microorganisms.

9.1 This draft corrects a serious misstatement of Crick's point, which occurred because of my faulty memory.

In Origins: a skeptic's guide to the creation of life on earth (Summit/Simon & Schuster 1986), biochemist Robert Shapiro notes that the probability of such a circumstance is in the vicinity of 10^20 to 1.

Shapiro's book gives an excellent survey of origin of life thinking up to the early 1980s.

Shapiro also gives Dawkins a jab over Dawkins' off-the-cuff probability estimate of a billion to one against life emerging.

10. I have also made more than my share of those.

Relevant links:

In search of a blind watchmaker
http://www.angelfire.com/az3/nfold/watch.html
Do dice play God?
http://www.angelfire.com/az3/nfold/dice.html
Toward a signal model of perception
http://www.angelfire.com/ult/znewz1/qball.html
On Hilbert's sixth problem
http://kryptograff.blogspot.com/2007/06/on-hilberts-sixth-problem.html

The world of null-H
http://kryptograff.blogspot.com/2007/06/world-of-null-h.html
The universe cannot be modeled as a Turing machine
http://www.angelfire.com/az3/nfold/turing.html
Drunk and disorderly: the inexorable rise of entropy
http://www.angelfire.com/az3/nfold/entropy.html

Biological observer-participation and Wheeler's 'law without law'
by Brian D. Josephson
http://arxiv.org/abs/1108.4860

The mathematics of changing your mind (on Bayesian methods)
by John Allen Paulos
http://www.nytimes.com/2011/08/07/books/review/the-theory-that-would-not-die-by-sharon-bertsch-mcgrayne-book-review.html?_r=2&pagewanted=all

Where is Zion?
http://www.angelfire.com/az3/newzone/zion1.html


Other Conant pages
http://conantcensorshipissue.blogspot.com/2011/11/who-is-paul-conant-paul-conants-erdos.html
A Dawkins link
http://users.ox.ac.uk/~dawkins/
Draft 08 [Digression on a priori probability added]
Draft 09 [Correction of bad numbers plugged into a probability example in the digression]
Draft 10 [Digression amplified]
Draft 11 [Digression revised and again amplified]
Draft 12 [Digression example clarified]
Draft 13 [Correction in digression due to comment by Josh Mitteldorf]
Draft 14 [Digression amplified]
Draft 15 [Digression amplified and made into an appendix]




