Thursday, November 28, 2013

AI, Star Wars and quantum computing

This essay was first published ca. 2000 and modified a few times thereafter.


By PAUL CONANT
As proposed, the National Missile Defense has, at best, little bang for the buck. Every phase of interception, using collision as the kill mechanism, has serious drawbacks.

However, a system combining boost-phase interceptors with high-powered midcourse laser beams is not implausible, though the expense of an effective system may turn out to be unacceptable. Such a system has been found unpromising in a July 2003 report from a 12-member panel of the American Physical Society. NMD is not designed to be effective against Russia, which can fire enough ICBMs to simply overwhelm the battery of interceptors. Though some have argued that the system might work against Chinese missiles, this seems unlikely, since the Chinese are well able to deploy effective decoys and other 'penetration aids.'

The assumption that more primitive states will be unable to deploy such countermeasures is highly unconvincing. A test intercept, rated successful by the Pentagon, was widely criticized for using a decoy heat source that would not be used in a real-world situation. NMD is not capable of preventing delivery of biowar toxins, such as smallpox and anthrax, because the warhead, when entering space, can divide into numerous small bomblets invisible to interceptor detectors.

NMD's apparent purpose is to prevent fanatical elements of 'rogue' states -- Iraq, Iran and North Korea -- from successful nuclear warhead launches. North Korea has a missile that can reach Alaska and Hawaii and is presumed to be developing one that might reach the lower 48. It is known to have diverted nuclear materials for bomb research. Neither Iraq nor Iran has the capability to hit American cities, though an unstated purpose of development of NMD might be, as a corollary, to develop weaponry for defense of Israel, as part of a theater missile defense program.

Despite George Bush Senior's enthusiastic statements about the success of the Patriots against Iraq's scuds, the kill rate was apparently very low. NMD might also be justified as a means of countering terrorists who gain control of a missile or two. But, as the events of September 11 show, such a catastrophe needn't happen only in a distrusted nation; it might happen anywhere, raising very hard questions as to where to position interceptors.

The Air Force is reported to have overcome optical turbulence problems with its long-range (250-mile) laser heat ray, which, well clear of a foe's air defenses, would incinerate missiles, either during boost-phase or during free fall. However, the experts with the American Physical Society have noted continuing technical difficulties with an airborne laser.

There has been much debate as to whether spaceborne or airborne lasers are practical because land-based weapons are easier to defend. On the other hand, nothing beats the speed of light and missiles sent to kill a laser can be zapped by laser beam. The question then becomes, how many lasers are required? An important advantage of space-based lasers is low maintenance. Another is that they can fire during a foe's boost phase as well as during midcourse free fall and even during the end course. Also, the heat ray needn't incinerate every missile instantly. Rather, the missile would be heated above its air friction threshold and would burn up like a meteorite. During boost phase, it might be advantageous to hit the missile in the lower atmosphere, though the APS panel said that a laser beam would be ineffective during boost phase against solid-fuel rockets, which are more heat resistant than liquid-fuel rockets.

If the intruder is past boost phase, a laser intercept might be triggered sometime after atmospheric re-entry. Multiple beams would hit decoys, likely balloons, which would either explode harmlessly or overheat and burn up on re-entry.

A penetration aid that could render rocket kill vehicles useless would be a device to jam the KV's end-game detector, which is operated by an onboard computer. That is, the warhead carries a device that sends out a high-amplitude pulse of radio waves to create a power surge in the KV's onboard computer. The jammer might be set to go off some specific time after a surveillance signal (x-band radar) is detected.

The laser's computer will tend to be much farther away from the foe's warhead, and, of course, jamming ability diminishes by the inverse square of the distance. Space-based lasers would be vulnerable to similar lasers fired from the surface or above it, though lower atmosphere optical turbulence can impede strength and accuracy of pulses. Still, laser satellites can be cloaked with stealth technology; that is, a satellite can be shaped to promote the bending of surveillance waves around it and coated with a skin designed to absorb radar and perhaps other surveillance waves.

In addition, small rocket-motor firings can be programmed to occur pseudorandomly in order to make minor orbit changes, thus increasing difficulty of detection.

Light and fast
The exo-atmospheric kill vehicle is designed to hit a warhead near the top of its trajectory in space. The advantage of an EKV (or of an atmospheric kill vehicle) is its light payload, which consists principally of a compact infrared (heat) sensor, coupled with an onboard computer. Because the payload is so light, a booster can drive an EKV to a considerably higher speed than its much heavier target achieves.

The EKV is guided to the vicinity of the target by ground-based x-band radar. The EKV's high-sensitivity heat sensor and onboard computer then orchestrate the 'end-game' intercept. In space, the target is following a simple, largely unperturbed trajectory, so that course projection problems are less than they would be for a conventional aircraft or for a cruise missile taking evasive actions. A salvo of EKV's would be launched at slightly different times in order to reduce likelihood of correlated errors (a miss by one KV won't imply a miss by all).

Initially, the Pentagon plans to deploy 20 interceptors, to be followed by 100 ground-based EKV's in Alaska. However, if North Korea's reach improves, it is reported that North Dakota is better positioned for an interceptor battery. Infrared-sensing satellites would be used to detect launches. Though geosynchronous orbits are envisioned, the 22,000-mile altitude means a 2.5-second data delay.

That couple of seconds adds to the minimum warning and response time between an adversary's launch and the launch of an interceptor. At lower altitudes, more satellites are required to keep inhabited areas under surveillance.

Boost-phase concerns
Boost-phase missile-to-missile interception -- knocking out the adversary missile while its booster is still firing -- is viewed by some experts as the next step in missile defense. The booster's rocket flame is easy for a KV to detect and the missile's course is highly predictable, improving likelihood of a hit.

Boost-phase interception of ICBM's also avoids the decoy problem associated with midcourse interception. The American Physical Society analysts argued that boost-phase interceptors are unlikely to be fast enough to catch missiles using solid fuel. If they are, they would be impractically large, the scientists said. They also said that, even against liquid-fuel missiles, boost-phase KV's would be unable to reach missiles fired from Iran, which is too far from submarine-launch or air-launch areas.

A major concern of boost-phase is protection of the weapon platforms, which tend to be vulnerable to adversary attacks. The KV's must be close enough for interception, meaning ships or aircraft must patrol within a specified region continuously. Spaceborne lasers avoid this problem, though attack by satellite killer missiles, perhaps armed with jamming devices, is a concern.

A foe with ground-based lasers might destroy any airborne or spaceborne weapons, but it is assumed that such systems are beyond the capability of 'rogue' nations. It would seem that if ground-based lasers were effective, the Pentagon would not be seeking to build a battery of EKV's. It has been estimated that a ship-borne Aegis KV, loitering in the North Pacific, could knock out a North Korean missile headed for America in one shot, though weapons expert Richard L. Garwin favors stationing ships in the Japan basin or operating a joint U.S.-Russian missile base on Russian soil near the North Korean border.

Ships in the Persian Gulf would be positioned to fire interceptors at missiles boosting from Iraq or Iran, though distances to Iran's hinterlands could pose problems. Though such ships would be well-defended by batteries of short-range missiles, attack is still plausible. If Saddam rained enough missiles on the ships, they would run out of defensive rockets, thus leaving him free to fire at other targets.

Even if the United States placed boost-phase interceptors on submarines -- a formidable task that would require a new class of subs -- an adversary might 'smoke out' the subs by firing missiles in various directions, tracking the origins of KV launches, raining missiles down on the subs, and then firing a new volley at other targets. These scenarios are very unlikely because of the costs involved, but cannot be totally discounted.

An important use of boost-phase interceptors might be to deter an Indo-Pak nuclear war. India and Pakistan have missiles in the 1,500-mile range capable of carrying nuclear warheads. The political, ethnic and religious passions in the region could well be sufficient to spark such a cataclysm. As we know, a Pakistani nuclear scientist assisted the Taliban and al Qaeda in their nuclear weapon inquiries.

Because such a conflagration would imperil the security of every nation and pose radioactive fallout perils for the entire globe, it seems worthwhile for America to keep a boost-phase anti-missile system primed.

Midcourse hassles
The biggest hurdle to interception of an ICBM near the top of its trajectory in space is the likelihood of effective decoys. A simple countermeasure is to have the warhead deploy reflective balloons in space and set them to rotating ('tumbling'). The sunless side of a balloon is far cooler than the warhead, which has been heated by air friction. But if the balloon tumbles and its surface is made proportionately more reflective than the warhead's, it will give off the same heat signature as the warhead, which is also rotating. In space, where there is no air resistance, the balloons will travel at the same velocity as the warhead. Hence, the KV can't obtain sufficient data to distinguish among the objects.

As MIT's Ted Postol says, faster computers and better detectors are of no use here. 'Getting better information that is irrelevant doesn't help you.' However, General Ronald Kadish, head of the Pentagon's missile defense program, thinks improvements in complexity theory might overcome this obstacle. That issue is discussed below under 'artificial intelligence.'

As for atmospheric midcourse interception, it would appear to be a useful backup option in the event of boost-phase misses. It is necessary, however, that the KV not also be near the top of its potential arc; otherwise relative velocity might be too low, resulting in a light tap rather than a destructive hit. Apparently, Patriot intercepts in the Persian Gulf war suffered from such low impact-energy collisions, though some scuds may have been gently steered away from their original target.

Patriot warheads, however, explosively fragmented near the interception point in order to increase probability of a hit. Hence, greater overall relative velocity was needed.

A not-foolproof countermeasure for short and medium range missiles is use of thrusts higher than anticipated at correspondingly changed launch angles. That is, say, if the Israelis are geared to aim near the top (height H) of scud trajectory A, Saddam might redesign his scuds to go to H+x, where x varies. Because the warhead falls from a higher altitude, the increased acceleration and associated buffeting may increase course projection error, though this may not be a serious issue.

Arms expert Andrew Sessler is persuasive in arguing that buffeting-related error is liable to be insignificant for endcourse ICBM's.

The Israelis, assuming they have been aiming for what they believe is the top of the scud trajectory, would then have to be prepared for a range of maximum altitudes. Their options:
  • Store enough fuel in each interceptor for any plausible altitude, possibly complicating design and rendering a lot of costly equipment obsolete.
  • Add interceptors, each of which varies in thrust, to the defensive system.
  • Aim low, intercepting any comer at a bit below the lowest possible maximum altitude, and tolerate disadvantages, if any.
If a missile is missed in boost phase, try again in midcourse. If missed there, try again at endcourse, on the downward arc. That seems reasonable, but it's not in the cards. The military is unenthusiastic about the endcourse option, perhaps because of costs associated with missile-to-missile systems. However, a laser system should be able to fire at enemy missiles during any phase of flight with little difference in cost.

Artificial intelligence and course-plotting
Gen. Kadish has argued that real-world tests, along with advances in complexity theory, would likely overcome the midcourse decoy scenarios offered by Postol and others. As this reporter understands it, complexity theory focuses on special cases of negative entropy: conditions that yield perceived order out of seemingly random inputs. Perhaps the general has been reading reports from Los Alamos National Laboratory, which has close collegial ties with Murray Gell-Mann's Santa Fe Institute, a private think tank devoted to the study of complexity theory.

Yet it seems unlikely that Gell-Mann and his associates have come up with revolutionary (and presumably classified) mathematical theorems that would allow the government such capability. Revolutionary theorems are hard to come by -- and they surface independently of government research grants. It may be that Kadish was referring to advances in artificial intelligence, a research area no doubt entwined with government secrecy and military funding. AI -- or quasi-AI (which is a less-controversial notion) -- is related to complexity theory because a top issue of complexity theory is the discovery of how life forms, including the perceptual apparatus called the human mind, might evolve from self-replicating parts.

So, if you can devise a program of self-contained automata to interact and build larger systems, you might be able to obtain a program that thinks, or quasi-thinks. Development in World War II of radar-guided anti-aircraft fire contributed importantly to the development of computers (and inspired Norbert Wiener in his philosophy of cybernetics). Aircraft course-prediction systems use various weighting methods (more weight to more recent data, for example), filtration of signal noise and smoothing algorithms (methods of approximating an output curve closely through occasional input values).
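
To make the idea concrete, here is a toy course-prediction routine -- a minimal sketch of my own, not anyone's fire-control code -- that weights recent position fixes more heavily, smooths out noise and projects ahead on a constant-velocity assumption:

import random

def smooth_and_predict(fixes, alpha=0.7, dt=1.0, lookahead=3.0):
    # Exponentially smooth noisy position fixes (newer data weighted more via alpha)
    # and extrapolate ahead on the assumption of roughly constant velocity.
    est_pos, est_vel = fixes[0], 0.0
    for prev, curr in zip(fixes, fixes[1:]):
        raw_vel = (curr - prev) / dt                          # velocity implied by the two latest fixes
        est_vel = alpha * raw_vel + (1 - alpha) * est_vel     # recency-weighted velocity estimate
        est_pos = alpha * curr + (1 - alpha) * (est_pos + est_vel * dt)
    return est_pos + est_vel * lookahead                      # projected position 'lookahead' seconds out

true_track = [5.0 * t for t in range(10)]                     # object moving at 5 units per second
noisy_track = [x + random.gauss(0, 0.5) for x in true_track]
print(round(smooth_and_predict(noisy_track), 1))              # should land near 5 * (9 + 3) = 60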

These calculations are judgmental in nature, the computer using various criteria to guess the next move (and perhaps countermove). We might think of the Deep Blue program, which defeated world chess champion Garry Kasparov, as a variant of an advanced radar detection program. Quasi-AI, like human intelligence, is useful for anticipating a position based on imperfect data. If we consider 'random' to imply 'no datum on which to base a judgment,' then neither a human mind nor a computer program can make a useful prediction.

Of course, we usually mean 'random within constraints,' in which case useful prediction is limited by the constraints. We may view 'chaotic' or 'pseudorandom' to mean outputs that cannot be ascertained without knowing previous output values, as with a recursion function of the X-next type. So if the next move is random within constraints or pseudorandom, we may face an exponential rise in computing work to either curtail the constraints or implement a recursion function.
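
A minimal sketch of such an 'X-next' recursion is the logistic map, with a parameter chosen from the chaotic regime (the seeds and the parameter here are mine, purely for illustration). Each value must be computed from the one before it, and a microscopic change in the seed soon swamps any prediction:

def logistic_next(x, r=3.9):
    # x_next is computed from the current x alone; in the chaotic regime (r near 4)
    # the practical way to reach the nth value is simply to iterate n times.
    return r * x * (1.0 - x)

x, y = 0.123456, 0.123457            # two seeds differing by one part in a million
for _ in range(40):
    x, y = logistic_next(x), logistic_next(y)
print(round(x, 4), round(y, 4))      # after 40 steps the two trajectories bear no resemblance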

In the case of the midcourse decoy scenarios, pattern recognition would have to go beyond course and brightness clues. Pattern recognition can be, like the traveling salesman problem, a computational quagmire, though not necessarily so. If a pattern is composed of independent elements, computational work rises, essentially, linearly with n. But if a pattern is a set of interdependent elements, then computational work can rise factorially with the number of elements. That is, a detector identifying two co-dependent elements has 2! (=2) units of work to do; a detector coping with 10 interdependent elements might have in the vicinity of 3.6 million (10!) units of work to do.

Pattern recognition of the decoy balloons tumbling through space looks to be an iffy task. Detection equipment would need to identify a set of small differences and analyze them, but possible clues are so negligible that it seems recognition of interdependent subsets would be required. (The total number of possible subsets of a set of n elements is 2^n, an exponential quantity.)
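
A quick back-of-the-envelope run of the growth rates just cited -- linear work for independent elements, factorial work for interdependent orderings, and the 2^n count of subsets:

import math

for n in (2, 5, 10, 20):
    independent = n                      # work roughly linear in the number of independent elements
    orderings = math.factorial(n)        # interdependent elements: n! arrangements to consider
    subsets = 2 ** n                     # possible subsets of an n-element set
    print(f"n={n:2d}  linear={independent:<3d}  n!={orderings:<20d}  2^n={subsets}")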

And suppose an 'advance in complexity theory' makes decoy recognition feasible? What countermeasure might yield another computational quagmire? Bottom-up artificial neural networks have proved effective at 'learning' to discriminate among patterns. For example, in 'The Engine of Reason, the Seat of the Soul' (MIT Press, 1995), Paul M. Churchland cites programs in which a vector quantity is assigned to each element of a limited set of faces. The program then uses a process of error reduction until it is able to match a face with a numerical code (name) most or all the time. Such a system requires repeated trials (run by a serial computer).
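
As a rough illustration of the error-reduction idea -- shrunk to a single-layer network trained on invented 'face' vectors, so nothing here pretends to be Churchland's actual setup -- the following sketch nudges its weights after each mistake, over repeated trials, until a degraded input is still matched to the right prototype:

import random

random.seed(0)

def train(samples, labels, epochs=200, rate=0.5):
    # Error reduction by small steps: after each wrong answer, nudge the weights
    # toward the correct output; repeat over many trials.
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            out = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0.0
            err = target - out
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b

def noisy(v):                            # a degraded (noisy) copy of a prototype vector
    return [x + random.gauss(0, 0.15) for x in v]

face_a, face_b = [0.9, 0.1, 0.8, 0.2], [0.2, 0.8, 0.1, 0.9]   # two invented 'face' prototypes
samples = [noisy(face_a) for _ in range(20)] + [noisy(face_b) for _ in range(20)]
labels = [1.0] * 20 + [0.0] * 20

w, b = train(samples, labels)
test = noisy(face_a)                     # a degraded input the network never saw
print(sum(wi * xi for wi, xi in zip(w, test)) + b > 0)        # expected: True (classified as face A)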

The technology's appeal to weapons designers lies in the fact that the program is effective at identifying the correct pattern (target) even with degraded (noisy) data. This technology, while fascinating, is unlikely to help much in the discrimination of Postol-type decoys from warheads. Neither human nor classical machine will do well at such a task because the differences in input data are so minuscule. As of June 12, 2002, the Pentagon, taking advantage of the post-9/11 secrecy craze, had classified data on future tests of decoys, leading some to charge the Rumsfeld contingent with trying to shield NMD from legitimate technical criticism.

Nevertheless, the Pentagon has never directly refuted the point made by Postol and others, but rather relied on the notion that technical breakthroughs will save the day. The possibility that NMD can be made to work only against inferior decoys but never against the best decoys designable is not addressed.

Does quantum computing lurk behind NMD?
In December 2001, the New York Times reported that an IBM research team was about to announce the factoring of the number 15 using quantum computation, an experiment with defense repercussions. No confirmation of this initial report can be found in the usual places, such as science magazines or even at the relevant IBM web site. However, an IBM scientist referred this writer to a Nature article by the IBM researchers (Jan. 4, 2002), which discusses the potential, using lasers and beam splitters, for quantum computing using photon quantum effects.

Nothing about the actual factorization of the number 15 is noted, though the IBM researcher does not deny the accuracy of the Times report. The team reputedly used a quantum device to test and validate Shor's algorithm, which shows a way to use the quantum phenomenon of superposition to do simultaneous factoring. The team factored the number 15 into its primes of 3 and 5. Of course, this is a far cry from factoring hundred-digit numbers, but the validation of such a method sent shock waves around the world.
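
The classical scaffolding around Shor's algorithm is easy to show for N = 15. In the sketch below the period of a^k mod N is found by brute force, which is precisely the step a quantum computer would perform by superposition; the choice a = 7 is mine, for illustration:

from math import gcd

def shor_classical_demo(N=15, a=7):
    # Find the period r of a^k mod N by brute force -- the step Shor's algorithm
    # hands to quantum superposition -- then recover the factors classically.
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    if r % 2:
        return None                      # need an even period; try a different a
    half = pow(a, r // 2, N)
    return r, gcd(half - 1, N), gcd(half + 1, N)

print(shor_classical_demo())             # (4, 3, 5): period 4, factors 3 and 5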

If a classical supercomputer can crack a code by factoring numbers with 10x digits, code-makers simply use primes that, when multiplied together, result in numbers with (10x) + 1 digits. On a classical computer, the work of cracking such a code rises exponentially by digit place.

It seems likely that decoys may present similar computational issues, particularly if the course-plotting program is already souped-up with nonlinear methods. Is there a way out? There remains the intriguing concept of quantum computation, which still appears an elusive quarry, despite its apparent validation in principle. Essentially, the idea behind such a device is that though a quantum particle, once observed, appears to have taken a specific path, we can NEVER predict which path it will take with absolute certainty.

For example, if a photon goes through a symmetrical interferometer, it is said to take one of, say, two paths, each with a probability of perhaps 1/2, but, if left undetected en route, the photon emerges as if it takes one path with probability 1. Some sort of interference causes the photon to have a nonprobabilistic final path. Can this quantum weirdness be harnessed? In principle, yes, as the experimenters proved.
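
The claim can be checked numerically. Below is a minimal model of a balanced interferometer: two 50/50 beam splitters represented by the textbook symmetric 2x2 matrix of complex amplitudes (the port labels are my own). One splitter alone gives a coin flip; the full device, thanks to interference, sends the photon out one port with probability 1:

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

s = 1 / 2 ** 0.5
BS = [[s, s * 1j], [s * 1j, s]]          # symmetric 50/50 beam splitter

one_bs = [abs(BS[i][0]) ** 2 for i in range(2)]        # photon enters port 0, one splitter only
full = mat_mult(BS, BS)                                # splitter followed by recombining splitter
two_bs = [abs(full[i][0]) ** 2 for i in range(2)]

print([round(p, 3) for p in one_bs])     # [0.5, 0.5] -- looks like a coin flip
print([round(p, 3) for p in two_bs])     # [0.0, 1.0] -- interference selects a single output port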

According to the 'many worlds' theory proposed by Hugh Everett (who, incidentally, spent his career as a Pentagon scientist), for each path the photon MIGHT take in our world, it actually DOES take in 'another world' (or 'parallel universe'). Our universe differentiates into two universes at the time a quantum particle, such as a photon, seems to make a 'random' choice.

It is possible for differentiated universes to merge back into a single wave under the right conditions. If somehow these split universes could be merged back into 'our' universe, you might be able to compute classically exponentially hard problems in the blink of an eye. Supposing the 'many worlds' idea holds, just imagine that you could somehow get nearly identical computers in a large set of universes to parallel-process a tough problem -- one universe/computer per route in the traveling salesman problem, for example.

A more usual view is that the photon, before detection, has various possible positions superposed. On detection, the wavelike nature collapses, and the photon's position can be known. In the quantum computation experiment, factors are associated with superposed quantum states called spins. At any rate, though quantum computation might someday prove a boon to code-crackers and AI program designers, it seems at this point unlikely that the code-busting National Security Agency would be very happy at the prospect of the Pentagon squandering such an asset on an easy-to-spy-on weapons system.

For deep insights into the worlds of quantum theory and AI, see The Emperor's New Mind and Shadows of the Mind by Roger Penrose and The Fabric of Reality by David Deutsch. Also see Deutsch's Frontiers article.

Pages of interest:
Conant letter to Reporters Committee
http://angelfire.com/az3/nfold/freepress.html
Psyops against the press
http://angelfire.com/az3/nfold/psyops.html

Monday, November 11, 2013

First published Thursday, July 12, 2007


The prosecutor's fallacy

There are various forms of the "prosecutor's paradox" or "the prosecutor's fallacy," in which probabilities are used to assign guilt to a defendant. But probability is a slippery subject.

For example, a set of circumstances may seem or be highly improbable. But defense attorneys might wish to avail themselves of something else: the more key facts there are in a string of facts, the higher the probability that at least one fact is false. Of course, that probability is difficult to establish unless one knows either the witnesses' rates of observational error or some standard rates of observational error, such as the rate typical of an untrained observer versus an error rate typical of a police officer.

(For a non-rigorous but useful example of likelihood of critical misstatement, please see the post Enrico Fermi and a 9/11 plausibility test. In that post we are testing plausibility which is far different from ironclad guilt or innocence. Also, for a discussion of probabilities of wrongful execution, please search Fatal flaws at Znewz1.blogspot.com.)

Suppose an eyewitness is tested for quick recall and shows a success rate of 96 percent and a 4 percent error rate. If the witness is testifying to 7 things he saw or heard with no prior knowledge concerning these things, the likelihood that the testimony is completely accurate is about 75 percent. So does the 25 percent probability of error constitute reasonable doubt -- especially if no fact can be expunged without forcing a verdict of not guilty? (Of course, this is why the common thread by several witnesses tends to have more accuracy; errors tend to cancel out.)
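
The arithmetic, assuming the seven observations are independent:

p_all_correct = 0.96 ** 7            # seven independent observations, each 96 percent reliable
print(round(p_all_correct, 2))       # about 0.75
print(round(1 - p_all_correct, 2))   # about 0.25 -- the chance at least one detail is wrong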

The prosecutor's paradox is well illustrated by People v. Collins, a case from 1964 in which independent probabilities were incorrectly used, the consequence being reversal of the conviction on appeal.

To summarize, a woman was shoved to the ground and her purse snatched. She and a nearby witness gave a description to Los Angeles police which resulted in the arrest of a white woman and a black man. I do not intend to treat the specifics of this case, but rather just to look at the probability argument.

The prosecutor told the jury that the arrested persons matched the description given to police so closely that the probability of their innocence was about 1 in 12 million.

The prosecutor gave these probabilities:

Yellow auto, 1/10; mustached man, 1/4; woman with ponytail, 1/10; woman with blonde hair, 1/3; black man with beard, 1/10; interracial couple in car, 1/1000. With a math professor serving as an expert witness, these probabilities were multiplied together and the result was the astoundingly high probability of "guilt."

However, the prosecutor did not conduct a comparable test of witness error rate. Suppose the witnesses had an average observational error rate of 5 percent. The probability that at least one fact is wrong is about 26 percent. Even so, if one fact is wrong, the computed probability of a correct match remains very high. Yet, if that fact was essential to the case, then a not guilty verdict is still forced, probability or no.
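
Here are the two calculations side by side -- the prosecutor's multiplication and the witness-error check that was never run (the 5 percent error rate is, again, a supposition):

from fractions import Fraction

traits = [Fraction(1, 10), Fraction(1, 4), Fraction(1, 10),
          Fraction(1, 3), Fraction(1, 10), Fraction(1, 1000)]
product = Fraction(1, 1)
for t in traits:
    product *= t
print(product)                          # 1/12000000 -- the prosecutor's 1 in 12 million

p_some_error = 1 - 0.95 ** len(traits)  # chance at least one of the six reported traits is wrong
print(round(p_some_error, 2))           # about 0.26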

But this is not the only problem with the prosecutor's argument. As the appellate court wrote, there seems to be little or no justification for the cited statistics, several of which appear imprecise. On the other hand, the notion that the reasoning is never useful in a legal matter doesn't tell the whole story.

Among criticisms leveled at the Los Angeles prosecutor's reasoning was that conditional probabilities weren't taken into account. However, I would say that conditional probabilities need not be taken into account if a method is found to randomize the collection of traits or facts and to limit the intrusion of confounding bias.

But also the circumstances of arrest are critical in such a probability assessment. If the couple was stopped in a yellow car within minutes and blocks of the robbery, a probability assessment might make sense (though of course jurors would then use their internalized probability calculators, or "common sense"). However, if the couple is picked up on suspicion miles away and hours later, the probability of a match may still be high. But the probability of error increases with time and distance.

Here we run into the issue of false positives. A test can have a probability of accuracy of 99 percent, and yet the probability that a particular positive is a true match can be very low. Take an example given by mathematician John Allen Paulos. Suppose a terrorist profile program is 99 percent accurate and let's say that 1 in a million Americans is a terrorist. That makes 300 terrorists. The program would be expected to catch 297 of those terrorists. However, the program has an error rate of 1 percent. One percent of 300 million Americans is 3 million people. So a data-mining operation would turn up some 3 million "suspects" who fit the terrorist profile but are innocent nonetheless. So the probability that a positive result identifies a real terrorist is 297 divided by 3 million, or about one in 10,000 -- a very low likelihood.
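
Paulos's numbers, worked through:

population = 300_000_000
terrorists = population // 1_000_000                    # 1 in a million: 300 terrorists
caught = int(terrorists * 0.99)                         # 297 true positives
false_alarms = int((population - terrorists) * 0.01)    # roughly 3 million innocent matches

p_real = caught / (caught + false_alarms)
print(false_alarms, round(1 / p_real))                  # about 3,000,000 and about 10,000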

But data mining isn't the only issue. Consider biometric markers, such as a set of facial features, fingerprints or DNA patterns. The same rule applies. It may be that if a person was involved in a specific crime or other event, the biometric "print" will finger him or her with 99 percent accuracy. Yet context is all important. If that's all the cops have got, it isn't much. Without other information, the odds are still tens of thousands to one that the cops or Border Patrol have the wrong person.

The probabilities change drastically however if the suspect is connected to the crime scene by other evidence. But weighing those probabilities, if they can be weighed, requires a case-by-case approach. Best to beware of some general method.

Turning back to People v. Collins: if the police stopped an interracial couple in a yellow car near the crime scene within a few minutes of the crime, we might be able to come up with a fair probability assessment. It seems certain that statistics were available, or could have been gathered, about hair color, facial hair, car color, hair style, and race. (Presumably the bandits would have had the presence of mind to toss the rifled purse immediately after the robbery.)

So let us grant the probabilities for yellow car at 0.1; woman with ponytail, 0.1; and woman with blonde hair, 0.333. Further, let us replace the "interracial couple in car" event with an event that might be easier to quantify. Instead we estimate the probability of two people of different races being paired. We'd need to know the racial composition of the neighborhood in which they were arrested. Let's suppose it's 60 percent white, 30 percent black, 10 percent other. If we were to check pairs of people in such a neighborhood randomly, the probability of such a pair is 0.6 x 0.3 = 0.18 or 18 percent. Not a big chance, but certainly not negligible either.

Also, we'll replace the two facial hair events with a single event: Man with facial hair, using a 20 percent estimate (obviously, the actual statistic should be easy to obtain from published data or experimentally).

So, the probability that the police stopped the wrong couple near the crime scene shortly after the crime would be 0.1 x 0.1 x 0.333 x 0.18 x 0.2 = about 1.2 x 10^-4, or about 1 chance in 8,300 of a misidentification. Again, this probability requires that all the facts given to police were correct.

But even here, we must beware the possibility of a fluke. Suppose one of the arrestees had an enemy who used lookalikes to carry out the crime near a point where he knew his adversary would be. Things like that happen. So even in a strong case, the use of probabilities is a dicey proposition.

However, suppose the police picked up the pair an hour later. In that situation, probability of guilt may still be high -- but perhaps that probability is based in part on inadmissible evidence. Possibly the cops know the suspects' modus operandi and other traits and so their profiling made sense to them. But if for some reason the suspects' past behavior is inadmissible, then the profile is open to a strong challenge.

Suppose that a test is done of the witnesses and their averaged error rate is used. Suppose they are remarkably keen observers and their rate of observational error is an amazingly low 1 percent. Let us, for the sake of argument, say that 2 million people live within an hour's drive of the crime scene. How many people are there who could be mistakenly identified as fitting the profile of one of the assailants? One percent of 2 million is 20,000. So, absent other evidence, the probability of wrongful prosecution is in the ballpark of 20,000 to 1.

It's possible, of course, that the innocent suspect's arrested partner, male or female, is the guilty one. So one could be an innocent member of a pair while the other member is guilty.

It's possible the crime was by two people who did not normally associate, which again throws off probability analysis. But, let's assume that for some reason the witnesses had reason to believe that the two assailants were well known to each other. We would then look at the number of heterosexual couples among the 2 million. Let's put it at 500,000. Probability is in the vicinity of 5,000 to 1 in favor of wrong identification of the pair. Even supposing 1 in 1000 interracial couples among the 2 million, that's 2000 interracial couples. A one percent error rate turns up roughly 20 couples wrongly identified as suspects.

Things can get complicated here. What about "fluke" couples passing through the area? Any statistics about them would be shaky indeed, tossing all probabilities out the window, even if we were to find two people among the 20 who fit the profile perfectly and went on to multiply the individual probabilities. The astoundingly low probability number may be highly misleading -- because there is no way to know whether the real culprits escaped to San Diego.

If you think that sounds unreasonable, you may be figuring in the notion that police don't arrest suspects at random. But we are only using what is admissible here.

On the other hand, if the profile is exacting enough -- police have enough specific details of which they are confident -- then a probability assessment might work. However, these specific details have to be somehow related to random sampling.
After all, fluke events really happen and are the bane of statistical experiments everywhere. Not all probability distributions conform to the normal curve (bell curve) approximation. Some data sets contain extraordinarily improbable "outliers." These flukes may be improbable, but they are known to occur for the specified form of information.

Also, not all events belong to a set containing ample statistical information. In such cases, an event may intuitively seem wonderfully unlikely, but the data are insufficient to do a statistical analysis. For example, the probability that three World Trade Center buildings -- designed to withstand very high stresses -- would collapse on the same day intuitively seems unlikely. In fact, if we only consider fire as the cause of collapse, we can gather all recorded cases of U.S. skyscraper collapses and all recorded cases of U.S. skyscraper fires. Suppose that in the 20th Century, there were 2,500 skyscraper fires in the United States. Prior to 9/11 essentially none collapsed from top to bottom as a result of fire. So the probability that three trade center buildings would collapse as a result of fire is 2,500^-3, or about one chance in 15.6 billion.

Government scientists escape this harsh number by saying that the buildings collapsed as a result of a combination of structural damage and fire. Since few steel frame buildings have caught fire after being struck by aircraft, the collapses can be considered as flukes and proposed probabilities discounted.

Nevertheless, the NIST found specifically that fire caused the principal structural damage, and not the jet impacts. The buildings were well designed to absorb jet impact stresses, and did so, the NIST found. That leaves fire as the principal cause. So if we ignore the cause of the fires and only look at data concerning fires, regardless of cause, we are back to odds of billions to one in favor of demolition by explosives.

Is this fair? Well, we must separate the proposed causes. If the impacts did not directly contribute significantly to the collapses, as the federal studies indicate (at least for the twin towers), then jet impact is immaterial as a cause and the issue is fire as a cause of collapse. Causes of the fires are ignored. Still, one might claim that fire cause could be a confounding factor, introducing bias into the result. Yet, I suspect such a reservation is without merit.

Another point, however, is that the design of the twin towers was novel, meaning that they might justly be excluded from a set of data about skyscrapers. However, the NIST found that the towers handled the jet impacts well; still, there is a possibility the buildings were well-designed in one respect but poorly designed to withstand fire. Again, the NIST can use the disclaimer of fluke events by saying that there was no experience with fireproofing (reputedly) blown off steel supports prior to 9/11.




What is an algorithm?


This page went online in 2001 and was revised in June 2003.

The word 'algorithm' is all the rage these days and it is easy to give special cases of algorithms, such as Euclid's method of isolating the greatest common divisor.
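
Euclid's method, for instance, fits in a few lines of Python:

def euclid_gcd(a, b):
    # Repeatedly replace the pair by the smaller number and the remainder;
    # the last nonzero remainder is the greatest common divisor.
    while b:
        a, b = b, a % b
    return a

print(euclid_gcd(1071, 462))   # 21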

And what's a computer program without its algorithms? But what is a satisfactory general definition for those of us uninterested in computer programming? I thought it wise to explore the concept of 'algorithm,' which I use in an intuitive sense when justifying alternatives to infinite sets on my page 'When is truth vacuous? Is infinity a bunch of nothing?' [Haven't been able to recover it]

"The notion of algorithm," writes A.G. Hamilton (Logic for Mathematicians, Cambridge, revised 2000) "is an intuitive one, not a mathematically precise one..." which he defines as:

* an explicit effective set of instructions for a computing procedure (not necessarily numerical) which may be used to find the answers to any of a given class of questions.

Other definitions:

* a procedure for solving a usually complicated problem by carrying out a precisely determined sequence of simpler, unambiguous steps. [Academic American Encyclopedia, 1996]

* A plan of how the arithmetic is to be organized to solve a specific class of problems is called an algorithm. The detailed organization of procedures that a machine may take to implement a given algorithm is called a (machine) program. The list of detailed instructions or commands a specific machine employs to carry out a program is called a code. [Encyclopaedia Britannica 1998; 8:828:3b]

* a procedure, or sequence of instructions, used to evaluate a function. We write A(f(x)) to indicate that algorithm A implements, realizes or evaluates f at all arguments x in the domain f(X). [Marvin C. Paull, Algorithm Design: a recursion transformation framework, John Wiley & Sons, 1988]

* any step-by-step procedure to calculate just about anything. [Cornell topologist Jim Conant]

* a finite procedure, written in a fixed symbolic vocabulary, governed by precise instructions, moving in discrete steps, 1, 2, 3..., whose execution requires no insight, cleverness, intuition, intelligence, or perspicacity, and that sooner or later comes to an end. [David Berlinski, The Advent of the Algorithm: the idea that rules the world, Harcourt, 2000] I suppose Berlinski wasn't thinking of procedures that 'reach' a limit at an infinity of steps. So here is my definition, followed by discussion.

* an n-step procedure, where n may be finite or unbounded, for converting an element of a set of input values into an element of a set of output values.

This is rather too vague, so I will get more specific.

But first, back to Berlinski: Berlinski, who studied logic under Alonzo Church, says that the pre-1930s fuzziness of definition was removed by four great 20th century logicians, who each gave a different definition, each of which turned out to be equivalent to the others. Though Berlinski describes the achievements of Church, Kurt Godel, Alan M. Turing and Emil Post in the area of mechanical computability, Berlinski does not tell us precisely what their definitions are.

I think he means that a successful program for a Turing machine or a Post machine is an algorithm. And that Church's lambda conversion system can be used to formulate recursive definitions. How one sets up the recursive definition is an algorithm. Godel's primitive recursive functions are also defined via 'algorithm.' I'm not terribly satisfied with Berlinski here but let's grant that he is trying to get across some rarefied stuff to a bunch of laypeople (me included).

In order to fully understand the concept of 'algorithm,' one needs a course in mathematical logic. It is important in order to get at such concepts as calculability and decidability. In logician Herbert B. Enderton's interpretation, Church's thesis says that recursiveness is the correct formalization of our informal idea of calculability.

Recursion implies that a procedure inevitably yields the same output value on every run. No randomness allowed. Recursive functions, according to Hamilton, are composed of addition, multiplication and projection functions. Suppose we think of an algorithm as the instruction for a Turing machine (a hypothetical device that erases and changes symbols on a tape at every move) to calculate a particular final output value (if the machine is known to halt).

There can be only a finite set of symbols arranged in a 'legal' way (a well-formed formula following specific grammatical rules (another algorithm!)). Now each of these formulas is unique and can hence be encoded as a single number (perhaps a Godel number, but not necessarily).

Now since each formula is a sentence of n symbols, we find that we can make as many legal formulas as we like, since n is unbounded. However, for any n, there is only a finite set of formulas. By this, it is known that the set of algorithmic formulas is denumerable (is one-to-one with the set of integers, but not with the set of irrationals). In this conceptualization, there is really no difference between Turing machine n and set of instructions n.

Another way to think of an algorithm is as a decision tree, which I won't bother to draw. But let's use logic notation to get across the thought. A simple one-step algorithm might be:

P --> Q v R, where P = {a,b}, Q = {q}, R = {r}

P is the set of input values, and Q and R are sets of output values. So an algorithm would specify truth values, such as:

(a --> q) & (a --> ~r)
(b --> r) & (b --> ~q)

But another, shall we say, sub-algorithm might be:

(a --> q) v (a --> r) [which includes a --> q & r]

We can continue the steps indefinitely of course, where every step has its own sets of output values. By this it is clear that a particular path or solution to this decision tree is recursive in the sense of, speaking loosely, f(f(f(x))).
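
For what it's worth, that one-step algorithm amounts to a trivial lookup table -- a sketch only, with the truth values above built directly into the table:

# P = {a, b} is the input set; Q = {q} and R = {r} are the output sets.
step_one = {"a": "q",    # a --> q  (and hence a --> ~r)
            "b": "r"}    # b --> r  (and hence b --> ~q)

def run(value):
    return step_one[value]

print(run("a"), run("b"))   # q r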

However, our decision tree need not be expressed as a function, though perhaps it may be expressed, possibly, as a composite relation. But my son Jim is iffy on restricting the word 'algorithm' to relations. What about mazes? he asks. At any rate, let's look at a generalized algorithm expressed less formally:

Step 0 > select input value
Step 1 > perform operation 1, obtain output value
Step 2 > take the step 1 output as input, perform operation 2, obtain output value
...and so forth.

(For convenience, we number the operations independently of the steps, so that at step x, operation n means the nth operation performed since step 1. Also, note that operation m may or may not equal operation n.)

Now here we introduce 'meta-algorithms.' Notation: Suppose we call the example above algorithm A, which we might specify as A[o].

Conditions prior to step 1 are expressed A(0) or A[o](0), with A(x) an arbitrary step and A(n) the final step, if one exists. A(x) means the xth step, not the xth input or output value. For two algorithms A and B, we may wish to determine how they interact mathematically, if they do. If B is a meta-algorithm on A[x], then B = A[x+1]. A meta-algorithm A[x+1] simply erases operations in A[x] and possibly substitutes operations. From the notation, we see that we can have as many meta-algorithms as desired, analogous to a composite function.

An example of a meta-algorithm: We have, holding c constant, algorithm A[c](n). Add operation m and erase operation k, giving A[c+1](n). We might be able to decide values for any element of the meta-algorithm set specified B[x]. For example, if x is even, use A[c]; if x is odd, use A[c+1]. On the other hand, we might run into 'undecidability,' as in: A[x] where x --> inf.; if x is composite, use A[c]; if x is prime, use A[c+1].

How often does the number 31 recur in an infinite random digit string?
If an infinite digit string is random, then a substring that might occur once in a billion digits would have probability one of occurring infinitely many times. In fact, there is a finite probability for any finite substring, implying probability one that all finite substrings will recur infinitely often.
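
A finite-sample check of the intuition (a million pseudorandom digits; '31' should turn up about once per hundred digits):

import random

random.seed(2022)
digits = "".join(random.choice("0123456789") for _ in range(1_000_000))
count = sum(1 for i in range(len(digits) - 1) if digits[i:i + 2] == "31")
print(count)    # close to 10,000 -- roughly one occurrence of '31' per 100 digits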

The other side of the coin is that probability one doesn't mean absolute certainty. For example, heuristic arguments give probability one that Goldbach's conjecture is true, and yet there might be some case over infinity where it doesn't hold.

Additionally, nearly all the reals are noncomputable. But there is a notional possibility that a program that generates a 0 or 1 randomly over infinity might generate such a noncomputable. There is no reason to suspect that every noncomputable has the property of every finite substring infinitely recurring. Let us assume such a string. We then scan the string and delete the number 31 in every occurrence. The string is still infinitely long -- but someone might object that the modified string is not now random.

BUT, we have proved such a number exists, a number that might have been generated by a random process, whatever the probabilities say.

A mathematician comments: The answer lies in an explicit definition of the term "infinite random digit string." Any algorithm for producing the digit string will at once tell you your answer and also raise the objection that the digit string isn't really random. A physical system for producing the digits (say, from output of a device that measures quantum noise) turns this from a mathematical into an experimental question. [1]

Conant replies: Or a philosophical question. I.e., we know, according to Cantorian set theory -- as modified by Fraenkel, Zermelo, von Neumann (and including the axiom of choice) -- that there is a nondenumerable infinity of noncomputables. So one of these noncomputables R0 -- that could only notionally be reached by some randomization method such as via quantum noise -- does contain a (countable) infinity of 31s. But that means there is also a number R1 that contains the same infinite string as R0 but that lacks all occurrences of 31.

Of course, it isn't necessary that such a randomization process yield a noncomputable. R0 might be a noncomputable and R1 a computable. Nevertheless such a procedure might yield a noncomputable R1.

If we regard Zermelo-Fraenkel set theory as our foundation, it is pretty hard to imagine an element of the noncomputables as anything but a random number. However, if one could randomly select each digit via a quantum noise detector, we would agree that the probability is overwhelming that '31' will recur an infinity of times.

[1] Private communication with a professor 'pen pal.'

Edited version posted March 3, 2013; I will update this page from time to time.
A previous version of this page is found on Angelfire.


Toward a signal model of perception

Contents
  • Overview
  • Philosophy, time and motion
  • The issue of solipsism
  • Information, entropy and perception
  • Interpretations of quantum results
  • Abstraction and causality
  • Multiplexing possibilities
  • Feedback control
  • Goals and behavior
  • Reality construction
  • The importance of narrative continuity
  • Dreams and altered states of consciousness
  • Group reality construction
  • Randomness, probability and coincidence
  • Jung, Koestler and synchronicity
Appendix A: Synchronicity anecdotes
http://paulpages.blogspot.com/2013/04/appendix-anecdotal-accounts-of.html
Appendix B: Synchronicity 'experiments'
http://paulpages.blogspot.com/2011/11/appendix-b-experiments-in-synchronicity.html

By PAUL CONANT
Philosophy is perhaps the best category for this paper, which crosses the boundaries of various specialties. I have attempted to incorporate an interpretation of quantum phenomena into a still rough-hewn signal model of perception. Appendix B has results of experiments in the manufacture of unusual coincidences, which, not fulfilling rigorous experimental standards nor being classically replicable, are meant as food for thought.

There is a type of logic proof that gives a first approximation of the proof followed by one or more versions that hone and amplify the initial sketch. This paper does something of the sort, though no precise result is reached and certainly we are not aiming for any nail-on-the-head proofs. A principal reason for much of the fuzziness stems from the problem of defining terms, so many of which are interdependent or that have axiomatic roots that are hard to fathom. Even so, it is hoped that once the somewhat fuzzy pieces have been read, a larger Gestalt will emerge.

Disclaimer: Though I cite some of the ideas of such persons as Jung and Freud, I wish to make clear that I do not endorse any particular psycho-social theory, nor do I wish to revive old political controversies.

Though issues of psychology enter our discussion, we leave the lion's share of psychodynamics to others. Similarly, we avoid the minutiae of neuroscience. Our aim is to abstract a process, in the same vein as Turing's abstraction of a universal computer. Yet, we bypass most of the mathematics on the basis that most of the mathematical groundings are already well known.

A sketch of our line of thinking: the Schroedinger cat paradox demonstrates that "concrete" reality is far more ephemeral than is usually believed. In this sense Berkeley was right. Similarly, our intuitive sense of linear time biases our opinion as to what constitutes the perceived past and reality. The brain processes signals and manufactures a, for the most part, cohesive narrative which it perceives as "hard reality." But this reality is more dreamlike than is generally understood. Bizarre coincidences, or synchronicities, are a result of what I call phenomenon wave interference.

Philosophy, time and motion
You're stuck with a grotesque and absurd illusion, the idea of time as an ever-rolling stream... There's one thing quite certain in this business: the idea of time as a steady progression from past to future is wrong. I know very well we feel this way about it subjectively. But we're victims of a confidence trick. -- Fred Hoyle (1)

Zeno, later fortified by Bishop Berkeley's criticism of differential calculus, has also told us that there is something distinctly odd about time and motion. Some may believe that Karl Weierstrass's epsilon-delta proofs of mathematical limits have neutralized these issues, but of course the enigmatic nature of time and motion has resurfaced with the theory of relativity and with quantum mechanics.

Physicist Alan Lightman (2) makes the point that the quantum energy limit means that if one raises a swing to a particular height Y, the allowed energies come only in integer multiples of a quantum set by Planck's constant h (strictly, h times the swing's frequency). So our idea that we may raise the swing to any height between 0 and Y is wrong. Yes, the gap between allowed energies corresponds to changes in height of about 10^-33 inch, but we still have the question, what happens to the swing between allowed level n and level n+1? Apparently, it does not exist in an intuitive sense. The swing (or some small region on it) exists in a frozen state (instantaneously) for each allowed height. At the next height, the swing region miraculously appears again. There is no transition between n and n+1!

One is of course reminded of how motion pictures work, with slightly different still frames run together to form a smooth impression of something we call motion.

So Zeno had a point. One never really crosses a distance measured in terms of the real number line. One crosses a Planck distance measured in integers only. But then Newton had a point. The derivative, or fluxion, represents instantaneous time. That is, the duration of the time interval is zero, and this mathematical concept has been to some extent confirmed by quantum mechanics (though for strict rigor, if not in practice, quantum mechanics requires the use of finite difference calculus). Actually, 0 is also an enigmatic concept in quantum theory, whereby we are faced with quantum limits on the definition of time and distance which are, of course, interdependent with the notion of energy.

There are, of course, many more puzzles concerning quantum phenomena and time, such as Alain Aspect's validation of Bell's inequality, a sensational result that has stirred no end of wrangling but which assuredly points to a difficulty with coming to terms with the concepts of time and history.

Suppose we have two detectors A and B where A is closer to the source of an entangled pair than B. The observer looks at B and finds that, say, the result is "spin up," implying that -- ignoring the possibility of error -- detector A must say "spin down." But if B's state is in superposition until observed, then so is A's. Yet once B is looked at, A's result is determined. Yet the particle, according to the classical view, arrived at A before its partner arrived at B. Some have argued that the Einstein-Podolsky-Rosen problem implies instantaneous, faster-than-light messaging between the detectors. But in this scenario, if a message is sent from B to A, the implication is that it went backward in time.
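
One way to see why no ordinary signal or shared instruction sheet will do: quantum mechanics predicts a correlation of -cos(angle between detector settings) for such a pair, and for the standard CHSH settings the combined correlations reach 2 times the square root of 2, beyond the ceiling of 2 that any locally determined scheme can manage. A short check of that arithmetic:

import math

def E(a, b):
    # Quantum prediction for the spin correlation of an entangled singlet pair
    # measured along directions a and b (angles in radians).
    return -math.cos(a - b)

a1, a2, b1, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4   # standard CHSH settings
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(round(abs(S), 3))   # 2.828, i.e. 2*sqrt(2); local hidden-variable models cannot exceed 2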

Following John Archibald Wheeler, we have the scenario of a photon that had, say, a 50-50 probability of taking a clockwise or counterclockwise hyperbolic curve around a gravitationally lensing star. That star might be billions of light-years distant. So did the observation of the photon determine an event that occurred billions of years ago?

Or consider the wrinkles in the cosmic background radiation, which are supposed to have been a consequence of quantum fluctuations more than 13 billion years ago. These erratic fluctuations are credited with the irregularities that eventually led to formation of stars, galaxies and life on earth. Yet such "fluctuations" are a consequence of observation. Had the wrinkles been detected on some other day, a very different observable universe would presumably be seen. Maybe the moon wouldn't be there, after all. And yet, somehow the wrinkles have become "concrete-ized" so that different observers will fairly well agree that they are about the same whenever and wherever observed, much like the moon.

Asked whether the observer influences the observed object, Wheeler replied, "The observer does not influence the past. Instead, by his choice of question, he decides about what feature of the object he shall have the right to make a clear statement." (3)

In his autobiography, Wheeler writes that measurement of the photon "in some sense determined that history" but that a measurement is a mechanical registration not requiring a conscious observer. (4)

This almost seems like a quibble. From the perspective of the observer, only one history becomes available once a question is asked and answered by "collapse of the wave function."

So if an entangled message was not sent via Einsteinian spacetime, what is the correct description? I do not propose a full answer to the question except to say that bilocalism seems to imply a cosmic fabric that is deeper than our usual phenomenal belief system. Nobelist Brian Josephson (Google his home page for link to paper) has argued that quantum bilocalism might well be linked to the bilocalism of paranormal events, an idea that has made him a most unwelcome presence in some scientific quarters.

David Bohm, who struggled to eliminate fundamental randomness from quantum mechanics, nevertheless strongly defended the necessity of nonlocality. "We have not yet found what we would regard as a valid logical reason for dismissing nonlocality. We are therefore led to ask whether there could not be some other kind of reason. It may well be that one of the main reasons that people dislike the concept of nonlocality can be found in the history of science. For in the early period of the development of science there was a long struggle to get free from what may perhaps have been regarded as primitive, supernatural and magical notions in which nonlocality clearly played a key part. Perhaps there has remained a deep fear that the mere consideration of nonlocality might reopen the flood gates for what are felt to be irrational thoughts that lurk barely beneath the surface of modern culture." But, even so, he writes, the "right to enquire freely" must be upheld. (5)

Then we have Einstein's discoveries that time is a function of velocity (in special relativity) or, actually, acceleration (in general relativity) and his overthrow of simultaneity, a necessary idea in Newton's background frame of "equably flowing" time. (Newton, by the way, strongly suspected that there was more to the world than what he described in Principia, and spent much of his life as an alchemist seeking to unravel the mysteries behind the world of phenomena.)

Einstein's friend, Kurt Goedel, found a set of solutions to the differential equations of general relativity in which it was in principle possible to go forward into the past. Einstein, while not disputing Goedel, wondered whether such solutions had any relation to physical reality. But Goedel was convinced that if one solution of Einstein's field equations showed such a result, then there was a disturbing problem with the conception of the lapse of time, even if we don't actually live in such a universe.(6) After all, he noted, whether closed time-like loops exist would only depend on the arrangement of mass in the cosmos. We might add, why should a cosmos such as ours exist with such an anthropic preference?

If both Goedel and Einstein are correct, then should we not be prepared to construe time as a perceptual matter?

This point is underscored by Richard Feynman and others who considered that particles can and do "travel backward" in time. If so, can time be said to exist at all?

Reflecting Hermann Minkowski's device of treating time as an imaginary coordinate (so that a second of time corresponds to i light-second), Stephen Hawking has argued for "imaginary" time. All complex numbers on the plane can be mapped onto a sphere, with the north pole representing the point at infinity on the plane. Might not time behave similarly, whereby the closer one gets to the finite beginning of time, the closer one gets to eternity?

Consider the matter of metabolic rate and awareness. In some sense, a fly is certainly aware of its surroundings. But what is its now? It lives fast and dies soon, by comparison with a human, whose now is much longer. A being who is much larger and slower than a human would presumably regard the human now as amusingly short. And what would a being of cosmic scale regard as now? Might not such a now be immensely long, possibly even eternal?

This said, we should acknowledge that the sense of now is of course limited by the quantum limit on time subdivision and it seems plausible that it is a peculiar function of consciousness. A percept, in the sense of a small process that precedes and includes consciousness, would be a kernel of this now, I suppose.

In fact, neuroscience experiments have determined that direct, unitary perception of an event "now" lasts between 0.02 and 2 seconds. For visual experience, the percept or "now" length is about 0.01 s; for auditory experience the duration is about 0.02 s. A continuous sensory stimulus lasts no more than 1.5 to 2 s.

Admittedly, we have not properly defined now. But the point is that what we call time is closely related to perception, and we are faced with a form of the chicken-or-the-egg problem.

All this is well known, but it in fact runs counter to the intuitive sensibilities of many scientists. They know of these disturbing issues, but most of their bread-and-butter work involves standard "equably flowing" time and they brush aside questions concerning causality as matters of interpretation that can be dispensed with.

Yet this prejudice may hinder alternative ways of discussing physical reality, ways that in fact evoke strong, and occasionally irrational, protest.(7)

The issue of solipsism
"Inconsistency," it has been said, "is the hobgoblin of small minds." But without that concept, logic, mathematics and science in general would not exist. Scientists prize consistency and tend to disdain the self-referencing problems of Russell, Goedel and Turing.

Even so, Goedel's incompleteness theorem tells us that our ability to analyze has limits. But within those limits, scientists have tended to favor linear formulae, such as those of Newton, because they give a good measure of predictability. Whenever feasible, non-linear systems are approximated with linear equations. Yet, as systems become more complex, predictability tends to decline and non-linear feedback reigns. Chaos theory and results from catastrophe theory show that non-linear systems can evolve toward fundamental unpredictability (the noise amplitude equals or exceeds the desired signal), supercomputers notwithstanding.

It is non-controversial that perceived reality is influenced by the brain's processing routines. But to what degree is that reality dependent on the brain? If it is "too much," then can we say that some absolute, equably evolving, external background frame of reality (or information) exists?

Solipsism is the notion that the only certifiable reality is what one's mind entertains, which contrasts with the scientific tradition of categorizing working sets of abstractions that are independent of any one mind. Hence, there is a strong bias among scientists in favor of an absolute background reality. An altered state of consciousness is viewed as a pathology related to mental illness, brain damage, drug use, extreme emotional stress, fatigue or sensory deprivation.

Yet the fact that fear (the root of much mental illness), brain damage or some extreme psychic disturbance can completely derange "reality," might give us some insight into the "normal waking state." In both normal and abnormal cognitive states, the brain is processing and decoding signals via a sophisticated negative feedback control system (though some acute episodes of mental dysfunction are a consequence of the emergence of positive feedback).

To a great extent, students of neuroscience and perception have found that much, if not all, of perceived "reality" is manufactured. If we think of "reality" as a data stream strongly influenced by the interaction of a scanning device with the environment (while conceding, in line with Chomsky, that some of the "core reality" is hard-wired), the distinction between "normal" and "abnormal" perception boils down to the sentient being's success at survival and, perhaps, procreation.

In his book About Time, Paul Davies raises the solipsism issue: "In fact, how can we be sure that the universe wasn't created a hundred years ago, with everything arranged to appear as if it were much older. Or, for that matter, perhaps the world started five minutes ago, and we were all made with consistent memories of our earlier activities [planted?] in our brains. Even more interesting would be if our memories varied a bit, to inflame controversies like the number of gunmen who killed President Kennedy." (8)

This last point is of some interest. We would expect that slightly different reality histories would clash. The question is, is there an "actual reality" against which the conflicting memories can be matched? Or are attempts at forensic inquiry fatally tainted because investigation brings about new realities (or histories)? Essentially, what we would like to know is whether a set of absolute truths -- an equably flowing absolute background reality -- exists and, if so, what form it takes. It would seem that some such absolute system is necessary, but there is no guarantee that it will be the background reality implicitly assumed by most physicists.

As an analogy bearing on Einstein's belief that phenomenal reality suffices, consider a computer terminal screen. Suppose we have some very smart people from a lost Pacific island who, for the first time, encounter a modern computer terminal that is showing a group of videos. The keyboard is missing. They examine the imagery and postulate various rules for the behavior of the phenomena. Fine. But what is the chance they will be able to work out the deeper reality of the electronic system running the program from simply viewing the screen's videos?

Nonlinearity comes into play here. As individuals interact and exchange information, we might expect that their perceptions of "past events" would tend to merge. Still, I am uncomfortable with the idea that reality is so malleable that it is pointless to wonder about government conspiracies. But that's only personal prejudice, of course.

We are dealing with a matter of degree. We cannot rule out that some knowledgeable observers are not in denial about the circumstances of JFK's death or the attacks of 9/11, but are actually living in some other world that somehow interacts with your world and mine. (Even so, I feel quite certain that denial -- repression of unwelcome truths -- is a common psychological phenomenon greatly exploited by political elites.)

Information, entropy and perception
We can accept Shannon's definition of information as the negative of the log of the probability of the detection of some symbol and his associated definition of entropy.
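
As a minimal Python sketch of these definitions -- the four-symbol source and its probabilities below are invented purely for illustration:

import math

# Shannon self-information of a symbol: I(x) = -log2 p(x), in bits.
def self_information(p):
    return -math.log2(p)

# Shannon entropy of a source: H = -sum over x of p(x) * log2 p(x).
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical source: one common symbol, three rarer ones.
probs = [0.5, 0.25, 0.125, 0.125]
print(self_information(0.125))  # 3.0 bits: rarer symbols carry more information
print(entropy(probs))           # 1.75 bits per symbol on average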

An oft-unnoticed implication here is that a message requires some form of control in order to counteract noise, which is to say entropy. A message conveyed through a sequence of signaling systems will degrade over time (as the childhood game of "telephone" should convince us). In other words, if information is to be retained, energy, directed into error-correcting codes and algorithms, must be continually added to the transmission system (though in the ideal, there is a limit to how much error-correction is necessary to obtain a perfectly noiseless channel). This, of course, dovetails nicely with the entropy and conservation laws of thermodynamics.

Sometimes we might like to be sure that some information string is not only of relatively low probability, but reflects what we sometimes loosely call order. The string 010101... may be of low probability and yet constitute a message of low value. A means around this would be to append the information in the error-correction code to the information in the message.

The inclusion of error-correction information then leads us to accept that information is really a mental construct, even if that construct is common to a number of minds. Information requires work, which we account for via the error-correction process. In fact, though work is defined as equivalent to energy (W = K = (1/2)mv^2), my thinking is that the difference between the two is that work carries with it a higher level of information than does energy in general. That is, some of the energy of the system goes into what might be called an error-correcting process. This diversion of energy, coupled with the First Law of Thermodynamics, then accounts for the Second Law, which rules out perpetual motion machines for high-variable systems. (Nevertheless, if we cast work in terms of efficiency, then work of low efficiency implies high entropy. And as for the impossibility of a perpetual motion engine, this applies to a repetition of a high-information state, which is extraordinarily improbable.)

So the tendency toward increased entropy implies that information -- viewed as a transmitted or stored message -- is time-dependent. In fact, the concept is, like energy, an abstraction of a process that to a great extent depends on activities of the brain.

From a classical perspective, Boltzmann entropy, Shannon entropy and the "arrow of time" can be reconciled thus: We consider a dynamical system at some time t_0 to be represented by a single net force vector. In the case of a gas in a sealed container, the vector is close to 0. This vector is the sum of all the constituent force vectors at t_0. (We can also notionally calculate from t_a to t_b.)

For even mildly complicated systems, the information in the net vector V is insufficient to tell us which "path" brought about V. That is, V is the sum of all other force vectors and we cannot know the order of summation. Hence, we say that entropy has increased and reversibility is impossible.
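
A toy numerical illustration of this point, using hypothetical randomly chosen force vectors: the net vector comes out the same no matter the order of summation, so it carries no record of the path that produced it.

import random

# Hypothetical set of constituent force vectors (2-D for simplicity).
vectors = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(1000)]

def net(vs):
    return (sum(v[0] for v in vs), sum(v[1] for v in vs))

shuffled = vectors[:]
random.shuffle(shuffled)

# The net vector V is the same for any ordering (up to floating-point error),
# so V alone cannot tell us which "path" of summations produced it.
print(net(vectors))
print(net(shuffled))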

In quantum terms, the arrow of time -- to wit, irreversibility -- follows from the general non-commutativity of matrices in matrix mechanics. The few cases where AB = BA represent the relatively rare symmetrical systems.
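
A quick numerical check of the non-commutativity claim, using two arbitrary matrices chosen only for illustration:

import numpy as np

# Two arbitrary 2x2 matrices; in general AB does not equal BA.
A = np.array([[0, 1],
              [1, 0]])
B = np.array([[1, 2],
              [3, 4]])

print(A @ B)                         # rows of B swapped: [[3 4], [1 2]]
print(B @ A)                         # columns of B swapped: [[2 1], [4 3]]
print(np.array_equal(A @ B, B @ A))  # False: the order of operations matters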

In addition, broken symmetry occurs at the particle level, whereby violation of geometric parity implies a noncommutative relation.

This viewpoint would appear to accord well with Niels Bohr's position that knowledge of quantum events is limited to the questions we are able to ask, though Bohr was anxious to disentangle the macro-world from quantum weirdness.

Wheeler has said that the cosmos cannot be a giant machine ruled by any pre-established continuum law (please see my paper On Hilbert's sixth problem, found at http://kryptograff.blogspot.com/2007/06/on-hilberts-sixth-problem.html), which would seem to suggest that he does not favor the idea that the universe can be expressed as a sum of units of information.

One of the issues when it comes to perception is that the entropy law implies that mental constructs, or phenomenon signals and memories, fade or fail. Their tendency toward extinction is, we suggest, based on the emotion level attached to these constructs, a point discussed below.

This E-value corresponds to the strength of the error-correction code, even though a strong E-value might induce what is construed to be memory distortion.

The entropy of the brain's programs, such as memory templates that fade out if not reinforced, is an obvious consequence of natural systems. In an imperfectly conducting cable -- or set of circuits -- sine waves of different frequencies travel at slightly different velocities, yielding delay distortion, a smearing out of the wave packet and its form. That is, virtually any signal tends to get noisier and noisier over time simply as a result of its principal media.

Interpretation of quantum results
I well realize that the term interpretation has become a catch-all means of evading the logical implications of material acausality and bilocalism, as if interpretation is a matter to be left to philosophers, mystics, poets and cranks. If one is interested in mere brute-force calculation, then this view will serve. But recall that Einstein was highly interested in interpretation in his 1905 relativity paper. In fact, his interpretation of physical reality so as to exclude an ether is among the things that distinguish his relativity paper from Poincare's 1904 paper and a reason why he, not Poincare, is honored for the breakthrough. The wrong interpretation was impeding scientific insight. In a similar vein, Einstein's theory of gravity reinterprets the physical description of space.

Suppose for the moment that we divorce conscious awareness from the cosmos. Then one might say that the cosmos just is -- no causes, no effects, just an undifferentiated whole. (This still isn't quite right, there being no observer to make this appraisal.) This whole is sometimes called a spacetime block. Without the mind to experience motion, there is no distance and no time. And, if we think in terms of a manifold of greater than 3+t dimensions, the block, taken as a whole, is frozen.

So let us consider the three principal interpretations of quantum reality:
1. Copenhagen. Bohr favored the concept of limits to knowledge. There is no explanation of the counterintuitive results of quantum mechanics and so we must only talk about the results of measurements of quantum phenomena. The "question" strongly influences the result.(9) This approach helped him avoid public discussion of strange implications, such as were brought to the fore by Schroedinger's compelling thought experiment.

2. John Von Neumann's observer centrality. Von Neumann argued that there is a chain of quantum events between the measuring device and the brain and so the "collapse of the wave function" occurs in the central nervous system and hence the observer is the key to quantum phenomena. Schroedinger introduced his cat specifically to underscore the absurdity of this idea, but the experiments of Aspect confirming the violation of Bell's inequalities leave room for puzzlement. [See Anecdotes appendix for a note on Von Neumann's odd superstitions.]

3. Many worlds. There are several variants of Hugh Everett's "many worlds" proposal, which says that the superposition of quantum states implies that there are numerous branching realities, or universes, that evolve for each quantum event. Bohr's reaction to this idea froze Everett out of academic physics. Murray Gell-Mann, who proposed a variant he called "many histories," wished, like Schroedinger, to show the absurdity of observer centrality by pointing out that an ancient crystal could show a track from a quantum particle. Would not this imply that the observer had somehow formulated this implicit history by simply looking at the crystal?
Wheeler has pointed out that a quasar billions of light years out is seen in two positions because of gravitational lensing of an interposing galaxy. If one uses a detector to observe a single photon, would not that imply that the photon's trajectory to the left or right of the quasar was determined by the observer, even though the galaxy is hundreds of millions of light years away?

Interestingly, when Bohr, Max Born and Wolfgang Pauli defended the Copenhagen interpretation and strongly criticized Einstein's "hidden variables" belief (a phrase none used but which succinctly describes Einstein's position), Einstein retorted by pointing out that the Schroedinger thought experiment (Einstein substituted a mechanical recorder for the cat) scaled up quantum weirdness into the macroscopic world, something he thought to be unacceptable though not logically impossible.(10)

[I hasten to point out here that Pauli's position as of 1949 that what is now called the Copenhagen interpretation is completely satisfactory should be taken into account when discussing his apparent sympathy with Carl Jung's notion of synchronicity.]

Consider this scenario:

A human DNA mutation may occur when a high-energy particle collides with chromosomes, causing a rewriting of the stored information. Hence, cosmic rays, nuclear reactor leaks or even sunlight photons can cause mutations. Quantum rules are used to describe all these particles and their detections.

Now suppose a person -- absent any medical or family-history reason -- decides to be tested for a hereditary disorder. According to the Von Neumann postulate, the collapse of the dangerous particle's wave function doesn't occur until he or she reads or hears the result of the test. Prior to that detection, the particle was in a superposed state and cannot be said to have interacted with the chromosome of some ancestor.

I once posed this conundrum to a physicist who had written a popular article and he replied that such an absurdity was why he subscribed to the decoherence arguments propounded by David Lindley. (11) He then begged off further correspondence, telling me that he had recently discovered that he had a hereditary disorder.

Interestingly, Wheeler, who spurns the need for a conscious observer as a determinant of history, nowhere in his autobiography mentions the Schroedinger cat thought experiment.

At this juncture, I would like to address a point of confusion that has entered into the debate. According to some, the rapid decoherence of the waves constituting macro-phenomena implies that Schroedinger's cat is only in a live/dead superposition for a very short, effectively unobservable time, and so there is no real measurement problem for large objects.

Two points:
1. As SQUID experiments and experiments with large molecules have shown, it is possible to obtain superposed states of quantum phenomena in relatively large systems, which are detected by indirect means. Or consider the Bose-Einstein condensates, whereby a group of atoms is superposed and only a single superposed state can be said to exist. So the principle is established that alternate "realities" do indeed coexist even if such systems tend to entangle with the ambient environment and decohere rapidly.

2. Schroedinger was addressing the issue of what is a detector and what is an observation. When does the observation occur? Some have tried to argue that the measurement occurs once the cat dies, or doesn't, or at the moment that the box lid is opened.
However, let us consider a cloud chamber or scintillator track of a quantum particle. What do we see? A sequence of water droplets formed around the ionized atoms, where the least squares method gives the particle's fictional "continuous" trajectory. Each droplet is correlated with a quantum action (movement of a valence electron) that was emitted by an atom that was "close enough" to the transient particle for an energy exchange. Because the atoms are jiggling about, the sequence of blobs is irregular. If it were possible to fire another particle with the same energy and precision (not possible), we would see a different path, because atoms would not reliably intersect in the same places with the particle.

Now consider what happens as we look at a quantum measurement. There is a sequence of intermediate quantum events. So in most cases there is some very large number of quantum paths between the "external" detection and the brain's cognition. So each leg of each path is in superposition at the micro-level. We may then regard each path as in superposition with all the other paths. In other words, our interaction with the measurement requires a set of superpositions of states. Yes, this set of superpositions doesn't last long, but we haven't got rid of the cat conundrum by appeal to "decoherence." There is actually a large set of live states and another large set of dead states that link to the observer's consciousness.

Related to the decoherence viewpoint is the ensemble argument which says that quantum events can only be assessed statistically. One simply does not ask about the cat's state before observation. But if one does wonder about the cat's state when one is not looking, the implication is that all these quantum paths to the cat are untaken and so in superposition. If we choose to think in terms of linear time, then the state of the unobserved cat is bothersome. But, if we accept that time is not some sort of equably flowing river, then perhaps we can accept the implication that phenomenal reality is not so "concrete" as one might think.

Bohr's interpretation essentially required that a sharp distinction be drawn between the experimental apparatus and the observer, but such a program doesn't really work. As Bell said, "The problem of measurement and the observer is the problem of where measurement begins and ends, and where the observer begins and ends. Consider my spectacles, for example: if I take them off now, how far away must I put them before they are part of the object rather than part of the observer? There are problems like this all the way from the retina through the optic nerve to the brain and so on." (Quoted in The Ghost in the Atom, P.C.W. Davies, J.R. Brown, ed, Cambridge 1986.)

It should be noted that Bohr's thinking evolved over time, but when he took into account the observer, it was as a way of saying that the lack of distinction between observer and observed limited what we can know about the physical world. "There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature," Bohr is reported (12) to have said.

As Bell notes in Ghost in the Atom (13), Aspect's finding of strong correlation for entangled pairs makes the measurement problem harder "because Einstein's view that behind the quantum world lies a familiar classical world was a possible (and now discarded) way of solving the measurement problem -- a way of reducing the observer to an incidental role in the physical world."

Similarly, Pascual Jordan said that "observations not only disturb what has been measured, they produce it." (14)

Questioned about whether some inanimate device could replace a mind in a quantum measurement, Rudolf Peierls responded in the negative (13). A quantum experiment "goes on until you can throw away one possibility and keep only the other," which is when "you finally become conscious of the fact that the experiment has given one result."

Peierls insisted that "there is a quality of human beings, call it mind, that distinguishes us from the other objects in our environment and which is absolutely crucial for making sense of fundamental physics."

In fact, quite a number of physicists sympathized with Alfred Lande's battle, made prominent in books published prior to 1970, to get rid of the duality problem of quantum physics (14a). However, Aspect's results greatly undermined such attempts.

As Nick Herbert said of the Von Neumann interpretation, "In Von Neumann's consciousness-created world, things (or at least their dynamic attributes), do not exist until some mind actually perceives them, a rather drastic conclusion but one to which this great mathematician was forced by sheer logic once he had decided to take the quantum measurement problem seriously." (15)

Bohm, who was sharply critical of Von Neumann's interpretation, nevertheless was unwilling to rule out the brain's influence and chides neuroscientists for an overly classical approach. He notes that "we know now that the retinal cells respond to a few quanta at a time and that this response leads to the multiplication of effects to the classical level of intensity," adding: "But the retina is just an extension of the brain. There could evidently be other parts of the brain in which such a sensitivity may exist, e.g. in certain kinds of synapses. If this were the case, then the brain could, like a measuring apparatus, manifest and reveal aspects of the quantum world in the other processes. Such quantum sensitivity would imply that there are more subtle possibilities of behavior of the brain, and a classical analysis would break down." (16)

Eugene Wigner's view (17) was that conscious reality is absolute and that physical reality is dependent on conscious reality. Consider the case of Wigner's friend. Suppose Wigner sets up a quantum measurement and is prepared to see whether a particle is, say, spin up or spin down. Under Von Neumann's postulate, the superposition remains until Wigner actually looks at the detector. But suppose Wigner is in the next room and calls out to his friend to look at the detector and shout out the result of the experiment. Is the friend in a superposed state until Wigner hears the answer?

In his book Physics and Philosophy (18), James Jeans concludes that the debate over whether material phenomena are mental forms or whether mental forms are a consequence of material phenomena seems to be leaning toward the former. He once said that "the universe begins to look more like a great thought than like a great machine." (19)

Fred Hoyle, the British astrophysicist, was another backer of Von Neumann observer-centrality, arguing that the attempt to separate the macro and micro worlds via statistics wouldn't always work. His variant of the Schroedinger cat scenario was a bomb rigged to a quantum device. If one doesn't look at the device, presumably the bomb both explodes and doesn't explode. This, he insisted, means that consciousness is crucial to reality. (20)

The many worlds interpretation might be construed as an attempt to bring some external background reality back into science and dispose of the distasteful subjectivity implicit in observer-formed reality. But this interpretation has its own problems. (Bohm says that Everett's scenario is really a many minds interpretation (16), as opposed to Bryce Dewitt's formulation of many worlds (21).)

David Deutsch has defended his "weakly interfering" many worlds view by saying that, with the entire universe described as a wave equation, there is no longer a need for strong subjectivity in quantum theory. Hawking, by positing a no-boundary universe (roughly analogous to a Mobius band, perhaps), suggested that the wave equation description could be adopted with "initial conditions" being similar to the point at infinity on the complex sphere's projection onto the complex plane. Hawking also seems to favor demoting the observer to minor status.

What bothers Seth Lloyd about the many worlds interpretation is his experiential reality of the ego as the center of the universe. A backer of Gell-Mann's many histories idea, Lloyd finds it troublesome that there would be many variants of himself in split-off universes, but Deutsch responds that these split-off universes do not strongly interfere and so there is no need for concern. (See Lloyd's home page to read the Lloyd-Deutsch debate.)

Topologist Jim Conant has pointed out that the many worlds scenario means that there is one world in which an individual has missed every life-threatening accident of nature as opposed to all his fellows, who have not survived. This person would live alone indefinitely in one of these universes. Though we will not use the "obviously silly" argument to dispute this interpretation, surely such a scenario raises significant philosophical issues.

Commenting on the Schroedinger cat problem, Leonard Susskind asserts that the only way to avoid the difficulty of wave function collapse "is to include the entire observable universe as well as the branches of the wave function in the quantum description." Susskind favors a multi-bubble-universe model. (22).

But Rolf Landauer opposes such a view. "We caution those who invoke the wave function of the universe. How can that wave function be recorded, unless you have a second and separate universe available for that?" (23). So there would need to be an infinity of Susskind bubbles and the ultimate recorder would never be reached.

Bohm eventually settled on the analogy of the hologram to describe his notion of "implicate order," whereby a hidden process might be behind seemingly strange results.(24) His "quantum potential" permitted instantaneous signaling, which brings the notions of space and time into question.

Bohr emphasized the concept of "complementarity," as in the wave complementing the particle. From what I can gather, he perhaps meant "two sides of the same coin." Or perhaps he was suggesting something like interdependence of definitions. For example, Euclid gives us a line in terms of points or points in terms of a line. They strictly imply each other.

Similarly, we might view complementarity to mean that clashing concepts result because A <--> B. Notice that when A <--> B we cannot say A causes B or the converse. When A <--> ~A, of course, we have an inconsistent system. However, this doesn't hold for waves and particles because we say that detection of A --> no detection of ~A (detection of a wave implies a lone particle hasn't been detected), though we don't have a visualizable picture of why the types of detection differ.

Abraham Pais reports (22a) that Bohr thought of complementarity as akin to the two sheets of a Riemann surface, a concept he encountered in a course on complex analysis. We can see that Bohr was thinking topologically, in the sense that a Riemann surface reconciles related but mutually exclusive mathematical objects.

Interestingly, Einstein used Riemannian geometry for his general theory of relativity, but apparently did not grasp how Riemannian concepts could undermine the principle of causality. On the other hand, Bohr's intuitive topology is essentially a heuristic device.

To paraphrase Bohr, we might say that complementarity is akin to two branches separated by a branch cut. Sometimes the cut is placed at a singularity. So z^(-1/2) has two branches with a singularity at z = 0. We might suggest the singularity brings to mind the unobservable component whereby wave and particle are somehow fused in an undefined way.

Though this is an interesting analogy, we should caution that plenty of situations have multiple branches. For example, ln(z) has an infinity of branches (because ln(z) = ln|z| + i(arg z + 2k*pi), where k is any integer). Similar analogies can be made for Riemann surfaces.
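
For the curious, a small Python sketch of the branch structure just described, evaluated at an arbitrary sample point:

import cmath

z = 1 + 1j

# The two branches of z**(-1/2): the principal value and its negative.
principal = z ** (-0.5)
print(principal, -principal)

# Branches of ln(z): ln|z| + i*(arg z + 2*pi*k) for integer k.
for k in range(-2, 3):
    print(cmath.log(z) + 1j * 2 * cmath.pi * k)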

Of course, the usual meaning of complement is expressed symbolically: B' = A\B, where B is a subset of A. This equation expresses dualism nicely but is trivial in terms of causality or acausality.

To be blunt, complementarity strikes this writer as so much hand-waving.

However, David Wick suggests (22b) that Bohr's "complementarity" idea may well stem from his reading of William James, who wrote of an experiment in which a subject was given post-hypnotic suggestion to be blind to card numbers that were multiples of 3. Upon awakening, the subject denied seeing any cards labeled 3, 6 or 9, though her hand, as she was speaking, picked up exactly those cards.

James refers to these cases as representing "relations of mutual exclusion" found when the mind has been compartmentalized into distinct consciousnesses.

James, in his Principles of Psychology, wrote that, at the least, in certain persons, "the total possible consciousness may be split into parts which coexist but mutually ignore each other, and share the objects of knowledge between them. More remarkable still, they are complementary. Give an object to one of the consciousnesses, and by that fact you remove it from the other or others." [Full quote in footnote (22c).]

This result is quite similar to results of experiments with brain-damaged patients, and also brings to mind my Necker cube example.

Abstraction and causality
When we say that A causes B, what do we mean? Ordinarily we mean that phenomenon A, or, better, signal A is linearly associated with B in time (A reliably occurs before B). If A is an input into the brain processor, there is an inference that B will be a following input.

We might regard this as a black box scenario. Input of signal A into the box is expected to be followed by the output of signal B. However, if we decide to look inside the black box, we will find another black box. The notion of scientific advance might be seen as a set of nested black boxes. But the set is not infinite. We reach the "last black box" when we reach quantum limits. If one thinks of causation as equivalent to branching trees of energy exchanges, then at the quantum level there are no more energy exchanges that can form a link between two phenomena.

Yet the phenomena A and B (the cause and the effect) are names given to patterns, or that is to say, signals and signal templates (memories).

But what is a phenomenon? The best answer is to say that it is a signal with key components that remain constant. That is, we assume that phenomenon X expresses a thing or event that is replicated or recurs. But in actuality X is normally an approximation and abstraction of many experiences. So empirically one's brain determines that when A occurs, the probability is high that B follows. When the learned probability is very high, we say that "A causes B," forgetting that A and B are as much abstractions as a Euclidean line. This abstraction is real as a limiting or axiomatic form but is not itself replicable, except perhaps as a memory template. (The ideal of a horse is the set of attributes that define what we mean by this phenomenon.)
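
A toy sketch of the sort of tally the brain is here imagined to keep, using an invented event stream; a high estimated probability of B following A is what we loosely call "A causes B."

# Hypothetical event stream; estimate P(next event is B | current event is A).
stream = list("ABABACABABABDABAB")

a_count = 0
ab_count = 0
for current, nxt in zip(stream, stream[1:]):
    if current == "A":
        a_count += 1
        if nxt == "B":
            ab_count += 1

# The empirical conditional probability that B follows A.
print(ab_count / a_count)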

However, mathematics, mathematical logic and the hard sciences don't seem altogether empirical. Yet what we have in science is a system of relations, as in "if xRy and yRz, then xRz." So pure math and logic provide a system or systems of abstraction that are used to undergird the relations found by physicists and other scientists. Still, these science relations are based on assumptions about abstractions such as massive object, force and energy, which "work" in the sense that they can be used for assigning higher or lower probabilities to "A implies B."

But even if one proves that xSy implies xRy, one can rarely be sure that the abstraction x represents a unique information string rather than some unordered set of such strings. If the latter, sometimes some x_0 will occur without being paired to any y, forcing us to account for such outliers by incorporating a margin of error.

But my basic point is that cause and effect are in many respects a product of the mind's means of perception and estimation. Whether there are causes and effects in some background frame of reality is not evident.

What is meant by the word abstraction? This is a freighted question, and we will avoid most of the deep philosophical issues, but certainly it is related to the concept of codification, whereby a short data string is used to represent a longer one. Note that if a coding is going on, then there is information in the function that transforms string A into string B, and so abstractions tend to have relatively high information values.

One might also accept that modern naive set theory, or an axiomatic formalism such as Zermelo-Fraenkel, is a good basis for conceptualization. Further -- realizing that much computer math can be represented with matrices or, in general, via group theory -- we can think of a data set embedded in a matrix. For example, suppose we have some well-chosen augmented square matrix AB, where B is a column vector whose elements are all 1's. Then A, through a sequence of row-column transformations, can be uniquely represented as IA', or simply A'. Now A' condenses the data of A (though the number of bits per column element tends to rise) so that it can be said to uniquely represent A.

The information content increases with each transformation. We can of course stop anywhere between A and A' and the intermediary matrix CD might be construed to have a lower level of abstraction than A'.
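
One way to read this passage is as ordinary Gauss-Jordan reduction of the augmented matrix. The sketch below, with an invented matrix A and a column of 1's, is meant only to illustrate that reading.

from sympy import Matrix

# Hypothetical square matrix A and the column vector b of 1's.
A = Matrix([[2, 1, 0],
            [1, 3, 1],
            [0, 1, 4]])
b = Matrix([1, 1, 1])

# Augment and reduce: [A | b] becomes [I | x] via row operations (Gauss-Jordan).
augmented = A.row_join(b)
reduced, pivots = augmented.rref()
print(reduced)          # left block is the identity; the last column is the condensed representation
print(reduced[:, -1])   # exact rationals: the bits per element grow even as the representation shrinks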

Of course an abstraction, expressing some recurrent pattern, is worthless -- in fact, can't exist -- without memory. The pattern must be stored for potential use later, though patterns left unused may extinguish. We suggest that any perceptual pattern X -- the memory template -- has an emotion value associated with it, which helps prioritize the brain's tasks. That is, X has a pain (avoidance) value between 0 and some maximum, and likewise for pleasure (attraction). If X has a (0,0) value, then it probably would not be retained in storage.(26)

Clearly, a pattern's E-value can and does evolve with time, as the bio-system records a value for each new occurrence of X and then uses a weighted averaging method to assign a new value. These two scores are used to prioritize the system's handling of the recognized signal. The overall process is made obvious from the phenomenon of extinction, whereby a desire-based or fear-based behavior gradually extinguishes if no significant association with reward or punishment occurs during recurrences of X. (Of course, there is always the possibility that non-learned -- "hard-wired" -- reactions to specific signals can have a relatively constant E-value over time.)
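
A minimal sketch of such a weighted-averaging update; the exponential weighting, the parameter names and the numbers are assumptions made only for illustration.

# Hypothetical exponentially weighted update of a pattern's E-value (pain, pleasure).
# Each new occurrence of pattern X nudges the stored value toward the new outcome;
# repeated near-zero outcomes drive the value toward (0, 0), i.e. extinction.

def update_e_value(stored, observed, alpha=0.2):
    pain = (1 - alpha) * stored[0] + alpha * observed[0]
    pleasure = (1 - alpha) * stored[1] + alpha * observed[1]
    return (pain, pleasure)

e_value = (0.0, 0.8)                # a mildly pleasant template
for outcome in [(0.0, 0.0)] * 10:   # ten unrewarded recurrences
    e_value = update_e_value(e_value, outcome)
print(e_value)                      # the pleasure value has decayed: extinction in progress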

So pattern recognition -- which we shall discuss further later on -- and causality go hand in hand. There are various logico-mathematical systems for encapsulating the idea of causality. It is handy that the matrix multiplication rule for AB = C can be construed as a cause of C. This analogy -- it is perhaps more than an analogy -- reflects the principle of entropy, or, that is, the apparent asymmetry of time. The fact that BA need not equal C echoes this asymmetry.

Heisenberg was deeply concerned with the causality issue, though he believed that the mathematics sufficed to give quantum relations and that there was little point in trying to draw a three-dimensional machine that would fit inside a "black box." He certainly had a precedent. Newton's theory of gravitation gave the relations without trying to peek into the cause of "action at a distance." Field theories seem to get rid of that problem, only to disclose on further inspection equally difficult problems with linear causality.

Various problems in quantum theory, it seems, result from the fact that black boxes are always necessary. For example, the procedure of renormalization is needed to lop off infinities so that calculations are possible. But many are dissatisfied. However, the method gets results, even if it doesn't "make sense" in accord with standard causality, just as Heisenberg's matrices got results, even though there was no system of levers and pulleys "underneath."

Multiplexing possibilities
We regard the normal human brain as a signal processor that employs a feedback control mechanism.

Before elaborating on the concepts of "signal" and "feedback," let us consider "multiplexing," which is the encoding, transmission and decoding of two or more approximately simultaneous signals.

If we consider a signal as represented by some wave form, then, using, say, Fourier analysis, that wave form can be decomposed into lower amplitude components (perhaps harmonics). Now it is unsurprising that two wave forms -- signals -- can superpose. So a scanner tuned to a particular type of wave form of amplitude A will regard that form as the signal and the complement form of amplitude B as noise. Obviously, the scanner's decoding program is critical to determining which signal is received and which is ignored.
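
A small numerical sketch of this idea: two superposed sine waves at arbitrary frequencies, and a "scanner" that keeps only the low-frequency band, recovering one signal and discarding the other as noise.

import numpy as np

# Two superposed signals at different frequencies (arbitrary choices).
t = np.linspace(0, 1, 1000, endpoint=False)
signal_a = np.sin(2 * np.pi * 5 * t)    # 5 Hz component, the "wanted" signal
signal_b = np.sin(2 * np.pi * 50 * t)   # 50 Hz component, "noise" to this scanner
composite = signal_a + signal_b

# A scanner "tuned" to low frequencies: keep Fourier components below 10 Hz.
spectrum = np.fft.rfft(composite)
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
spectrum[freqs > 10] = 0
recovered = np.fft.irfft(spectrum, n=t.size)

print(np.max(np.abs(recovered - signal_a)))  # tiny: the tuned scanner recovers signal_a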

In the communications industry, analog multiplexing may use amplitude or frequency modulation, whereas digital multiplexing is usually done by weaving together messages based on time intervals, knowledge of those intervals being essential for decoding.

Another possibility: Encoder A uses binary numbers from set X intended for Receiver A' and Encoder B uses binary numbers from set Y intended for Receiver B'. X and Y may or may not be disjoint. If they are not, either the intersection is all noise for both parties or Receiver A' may receive a readable, if noisy, message intended for Receiver B' (and possibly the converse as well). Similarly, a multi-node network might yield occasional unintended messages.

We can posit a similar idea for wave analysis. We number the harmonics and assign one set of numbers to harmonics subset X and the other set to harmonics subset Y. If X and Y are not disjoint, then every now and then an unintended but meaningful message may reach a decoder.
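
A toy sketch of the overlap idea, with invented three-bit codebooks that share two code words; a receiver decoding with its own codebook reads a noisy fragment of a message meant for the other party.

# Hypothetical codebooks: Encoder A and Encoder B map letters to 3-bit codes.
# The code sets overlap on '101' and '110', so cross-talk is possible.
code_a = {"C": "000", "A": "101", "T": "110"}
code_b = {"D": "011", "O": "101", "G": "110"}
decode_a = {v: k for k, v in code_a.items()}

# Encoder B transmits "DOG"; Receiver A decodes with its own codebook.
transmission = "".join(code_b[ch] for ch in "DOG")
chunks = [transmission[i:i + 3] for i in range(0, len(transmission), 3)]
received = [decode_a.get(chunk, "?") for chunk in chunks]
print(received)  # ['?', 'A', 'T'] -- a noisy but partly "readable" unintended message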

An additional consideration: a signal or wave form may be of any finite length (though if long it must be processed piecemeal). We can think of the novel Crime and Punishment as a single signal that could be superposed over Brothers Karamazov, another single signal. If there is a non-empty intersection of the codes of these multiplexed signals, a processor processing these narratives piecemeal could -- in principle -- come up with a fairly readable, if noisy, narrative composed of elements of both stories, though the composite story would likely lack a satisfactory Gestalt.

Though in this example such an outcome represents an extreme long shot, one can devise scenarios whereby such "readable but noisy" composite narratives are much more likely. Also, we have not yet addressed the feedback control issue in the brain's processing of signals. (27)

Goals and behavior
A negative feedback system has one constant state, or goal, that it is designed to maintain. That system may employ a number of secondary feedback systems, the goals of which are elements of the primary goal. The primary goal is hard-wired and some of the secondary goals may also be hard-wired (archetypes, in Jungian language), whereas other goals may be formed by the software program. This would be equivalent to finding the best route through a maze to obtain the bit of cheese that the animal has learned is very likely to be had.

We may represent a goal G as a matrix of numbers standing for various signal pulse strengths. In order to satisfy a matrix -- reach a goal -- the system may test a number of algorithms, each of which has some probability of success. After a number of trials, the system selects an algorithm with the best (or at least "good enough") success rate. In fact, we may design neural networks such that the system runs various sub-routines and, through repetition, learns probabilities.

The brain's means of learning seems well-represented by conditional probability methods. Consider some primary goal and draw a tree diagram of routes to the result. As the brain runs the system through trials of these paths, it assigns probabilities to each leg, which it memorizes (rarely consciously). Again, the optimal path learned is not necessarily objectively the best path. We may have parallel trees for different goals, but these goals are still secondary to the primary goal of pleasure/not pain, which would usually have the highest emotion value of 2(1/2)x(1/2), with x very high.(26) In other words, fear of death corresponds to fear of the unknown, which carries a high pain template value; desire to live may correspond to numerous remembered pleasant life experiences. Clearly, this emotion value is not uniform, as the suicide rate demonstrates. (32)
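
A rough sketch of this kind of trial-and-error path learning, with invented success probabilities standing in for the hidden structure of the maze:

import random

# Hypothetical branches toward a single goal, each with a hidden success probability.
true_success = {"path_1": 0.3, "path_2": 0.7, "path_3": 0.5}
trials = {p: 0 for p in true_success}
wins = {p: 0 for p in true_success}

# Run trials of each path and memorize the empirical success rates.
for _ in range(300):
    path = random.choice(list(true_success))
    trials[path] += 1
    if random.random() < true_success[path]:
        wins[path] += 1

estimates = {p: wins[p] / trials[p] for p in true_success if trials[p] > 0}
best = max(estimates, key=estimates.get)
print(estimates)
print(best)  # usually "path_2": the learned (not necessarily objectively best) optimum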

The feedback occurs in the process of testing to ensure that an optimal path is memorized. In fact, the memory function is an essential part of the feedback loop. Importantly, the feedback control system may continually test paths to see whether the currently accepted optimum still holds. But there are limits on this process as behaviors, choices and social interactions become ever more complex. If internal feedback control worked well at high levels of complexity, the jails would be empty.

Plainly, pathologies ("bugs") occur in software that deploys feedback control systems, as, for example, when self-defeating closed loops form as addictive or repetitive compulsive behaviors. Some pathologies occur when the brain learns that to obtain self-pleasure, it must inflict self-pain -- though the phrases "delayed gratification" and "no pain, no gain" reflect a need to balance fundamental goals.

The tree-diagram representation works for discrete time intervals, whereby each tree diagram's probabilities either remain the same or change by time interval. But the diagrams -- each corresponding to some template memory pattern -- themselves are often continually evolving.

While fear of the unknown has an obvious Darwinian point, delight in the new can also be cast in Darwinian terms. The hunter must discover, must solve problems. Also newness relates -- along with its soulmate, creativity -- to what Freud called libido, whereby creativity is the sublimation of sexual needs. The species prospers through heterosexual variation -- though not too much, as that increases risk of disease. What we are getting at is that there is a powerful mechanism to find or produce new patterns -- but not too new. These patterns need be composed of existing stored patterns, or memory templates, just as two or more wave forms can superpose as a composite signal.

Again, matrices prove useful in representing goal-directed (feedback) systems, where AB very well may not equal BA. Suppose the system has determined that goal A cannot be achieved without sub-goals B and C. It may also have determined that achieving B and then C obtains goal A but that the reverse order does not work. (Markov chain matrices may prove of value when modeling such a system.)

Cognition that a goal has been achieved is much the same as recognition of a pattern. In the same vein as Google asking whether you meant "New York City" rather than "New York Sity," a matrix is considered satisfied when there is a good match of template elements with incoming data. This form of error-forgiveness makes sense because the probability of accuracy increases exponentially with each accurate element. Nevertheless, this "good fit" approach is hardly infallible, as when one calls out to a "familiar" person, only to learn that the set of initial clues was associated with a stranger. This method of estimation strongly influences the "construction of reality," as discussed below.
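
A minimal sketch of such error-forgiving matching, using a simple similarity ratio and an arbitrary threshold:

from difflib import SequenceMatcher

# Declare a match when enough elements of the incoming data agree with a stored template.
def matches(template, incoming, threshold=0.85):
    score = SequenceMatcher(None, template, incoming).ratio()
    return score, score >= threshold

print(matches("New York City", "New York Sity"))  # high score: accepted despite the error
print(matches("New York City", "Newark"))         # low score: rejected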

Focus is a particularly important concept. When one thinks of focus, one may think of some sort of conscious process, but we may roughly define focus as the pursuit of some goal, though the goal may be some complex mental construct. Hence focus becomes the mechanism for pursuing and staying with a specific goal. In turn, this is a consequence of the prioritization of goals that are either hard-wired or have been programmed into the software. So focus then becomes a function of the E-value for a specific goal. An emergent incoming signal (or pattern) may trigger a higher E-value associated with the stored template. The system then shifts focus.

In a human, the so-called "left-brain" pattern recognition, abstraction and manipulation systems are sufficiently robust to yield third- and fourth-generation software programs for obtaining goals, meaning that the composite signal -- made up of the input signals, the feedback loop signals and the output signals -- is, if not chaotic, at least highly unpredictable in many respects.

It seems self-evident that consciousness is differentiated with degrees of alertness. So we'd say that a stream of consciousness can be represented as a composite signal of variable amplitude, though an average amplitude can be used for a basic state, corresponding to such activities as reading an instruction manual, or dozing on the train. The amplitude for alertness is closely related to focus.

Thoughts and thought constituents (or "sub-thoughts") would be modeled as wave forms guided by a carrier wave, which varies, in form and amplitude, with the higher alert states and the trance-like or sleeping states.

So we would say that the human brain's feedback control system runs both parallel and hierarchical programs for the monitoring of incoming data and output behavior. That output behavior includes the stream of consciousness (though the machine paradigm may not be satisfactory from a philosophical point of view, of course). The output, influenced by a number of variables, has a chaotic tendency, rather like the weather. On the other hand, as with weather forecasting, some underlying behaviors are quite probable, as psychological warfare experts and blackmailers have learned. Some of these sub-systems may be linear, but the overall effect seems to be that these goal-pursuit activities express a non-linear dynamical system.

However, in that survival depends on good estimation of outcomes, the human mind tends to favor linear "cause-effect" interactions. In order to obtain such linearity, it uses probabilities to determine the likelihood of B, given A, even though the patterns A and B are in fact generated within a non-linear system.

Construction of reality
We will give a working definition of reality as "meaningful interaction with a perceived environment." By meaningful we mean any interaction associated with some emotion value. The emotionless cold logic of Star Trek's Mr. Spock is, we suggest, insufficient for perception and consciousness. The amygdala's valuation of experiences is essential for the prioritization process necessary for human consciousness.(33) Even when one applies cold logic to the solution of a particular problem, one does so based on some emotion-based need.

Does this mean that the "objective external environment" is emotionally colored? In a word, yes. Focus is predicated on goals, which have E-values. Of course, the engineer and the scientist wish to filter out the subjective aspects of "the environment" or problems under study. Essentially, this means that there is an assumption that the intersection of a number of skilled minds leaves a set of relations that are stripped of subjectivity.

No longer is one's mind the creator. The agreed principles of reality are said to precede this.

I have not attempted to answer either the solipsists or the mechanistic representationalists. What is intended is to show that the brain does indeed construct perceived reality. (Whether the manufacture occurs before or after the fact is the bone of contention; my suggestion is that both are true.)

The term percept is generally reserved for sentient lifeforms. It is at the level of percept that the line of distinction between conscious awareness and mechanistic awareness blurs, and I must leave that blur in place. I offer no conjecture as to what constitutes the kernel of consciousness. (34)

To approach the percept concept, let us use a heuristic device. Consider a video. Each frame is an element of a longer signal. A percept is analogous to a video frame. There exists some quantum (using the term advisedly) of perception whereby the brain processes a specified set of data (which we might represent with matrices) in some basic time interval.

The percept occurs once a match has been made between an incoming data set and a template pattern. This match generally occurs before it reaches the conscious level. There is a reason for this delay. The percept not only matches a data set against a memorized pattern, it also must "fit" into the stream of awareness, as has been demonstrated with the phi effect.

Hence cognition -- what occurs as a consequence of a percept or set of percepts -- is a composite event. We should regard, at some basic stratum, a number of percepts as being superposed (running in parallel but having a Gestalt effect) to form a focus percept set. Again, focus is dictated by parallel and hierarchical goals, some of which are hard-wired and many of which are evolved from the "software."

Different brain mechanisms are scanning for various high priority patterns. These patterns usually superpose. These superpositions may be cast in terms of the composition of a matrix with sub-matrices, or as the composition of a wave form with sub-wave forms, perhaps at the level of harmonics.

So we are talking about a set of percepts represented, say, as a high information (many superpositions) wave form. I do not say that a percept necessarily corresponds to some harmonic. I don't know.

At any rate, we would say that a stream of consciousness may be treated as a signal represented by a composite wave form over some basic interval associated with a percept. As noted earlier, different percepts have typically different durations, and so we would expect a composite percept's duration to either be as long as the longest-time sub-percept or to vary with the longest-time sub-percept. (35) So then, we would treat the typical stream of consciousness as a wave form that, within constraints, rarely shows exact replication (has high complexity). The aperiodicity of course does not extend to the essential components, such as biorhythmic "carrier waves" and other elemental wave forms.

In this light, we must take into account wave packet dispersion, or delay dispersion. Though not important at the quantum level, delay dispersion is routine in the so-called macro-world. That is, a phenomenon signal, composed of numerous subsignals, is likely to be subjected to delay dispersion because some of the subsignals taken from memory are showing entropy's wear and tear. This effect also helps to ensure that the overall reality signal is highly variable and generally aperiodic.

Oliver Sacks has told of a noted musician whose short-term memory was so severely limited by a brain disorder that he lived in a state where from moment to moment he felt as if he had just become conscious, though he could still function to some extent because process-system memories of how to do things -- such as play music or converse -- were still operational, though in a sharply limited sense. (36) The condition of the musician, now deceased, demonstrates that the impression of continuity that an unimpaired person has derives from many stored patterns being brought into play. And, we see that some of the feedback control systems can still run without much input of information via the consciousness. Despite his abilities, however, the musician was subjected to a highly unpleasant discontinuity -- somewhat akin to watching a video with far too many frames missing.

Research into visual perception demonstrates that the brain constructs the three-dimensional view of the world. No logical or philosophical imperative requires that the world "out there" have three spatial dimensions. When one thinks of the word dimension, one may say that dimensions are part of "concrete reality," but in fact they are mathematical abstractions that we moderns use in coping with our ideas about reality. We might easily argue that human perception approximates within some range what happens in some n-dimensional (n greater than 3, or 4, if you include time) manifold. In fact, the melding of the spatial dimensions and the time dimension into a Minkowski-Einstein spacetime continuum underscores this point.

(Interestingly, Einstein was delighted with the potential of the Kaluza-Klein five-dimensional spacetime frame; he made a serious effort to keep their mathematics while avoiding having spacetime be "really" five-dimensional. Einstein's reformulation found little favor.)

These days, string theorists suggest that we can't "see" posited extra dimensions because they are too tiny, being "curled up" at a quantum level. This disability would be analogous to not being able to see the edge of some extremely thin object. Yet the idea that three space dimensions correspond to some objective "concrete" set of phenomena is, at root, an unproved assumption. This cosmic stuff with which the brain interacts needn't correspond to any visualizable images. In fact, quantum discoveries tend to underscore that point.

The concept of dimension stems from the abstraction of the notion of edges in Euclidean solid geometry and also is, intuitively, related to the up-down sense from gravitational effects. We might add that the brain's method of decoding signals as "moving objects" despite these objects continually changing in apparent size and shape helps to impress on us the intuitive notion of dimension. If we regard a moving object in terms of signaling, we get some idea of how this process works. The brain detects some core pattern and compares it with the relatively static background visual field. The change in the background field is perceived as motion and the incremental changes in the "object" set give rise to the perception of morphological change.

At any rate, most cosmic stuff goes undetected by the unaided brain, and what is detected is registered on a signal processing system used to keep the body and brain in equilibrium (despite disequilibriums, such as illness or wild behaviors that occur as a consequence of systemic complexity).

The brain's determination of depth (37) is also part and parcel of our intuitive idea of dimension. The brain uses visual clues to transform two-dimensional surfaces into three-dimensional ones. Donald Hoffman (38) has given a number of interesting examples of these constructions. And, of course, the use of polarized glasses to obtain a 3-D effect from specially phased imagery at a motion picture theater drives home the point that visual reality is a constructive process.

Clearly depth perception is heavily dependent on wave interference. The two-eyed vision system is a device for measuring such interference (though the vision system is more than this, being highly integrated into the brain's various means of processing data). Standard photographs and paintings do not contain those interferences and so the brain interprets them as flat, even though the brain uses visual clues to give a measure of mock depth. In fact, it is perhaps significant that a hologram is fractal-like in that small portions of the photographic plate replicate the entire image in 3-D, though precision declines with scale. This bolsters our contention that phenomenal reality is constructed from closely packed information.

Bohm's concept of an implicate order that makes bilocalism more determinate fits with the idea that the brain is interacting with some sort of projection. In fact, it is clear that the brain projects some reality. But is it possible that the brain is able to operate analogously to a holographic projector, reading information from some unvisualizable "place" and projecting that information via some sort of wave interference? At any rate, the brain cannot detect a pattern without having a closely matching template pattern for comparison. The incoming data stream must be analyzed -- i.e., broken down into convenient small signals that can be matched against stored signals. But, as we tried to emphasize in the section on multiplexing, there is often more than one way to break down the incoming data and reconstitute it into something readable. As an analogy, one may get a string of letters and be able to decode a sensible English sentence plus some noise and a sensible Swedish sentence plus some noise. The processor must know what it's looking for.

So we see that, in principle, there is no way to distinguish actual reality from virtual reality, a point well known to science fiction writers but not well accepted by most scientists. (39) As optical illusions and ambiguous patterns demonstrate, the brain constructs a phenomenon based on stored data. In the case of an ambiguous pattern, the brain can only discern one of the two superposed signals. While focused on the selected signal it is blind to the alternate state (AKA the alternate construction, the alternate phenomenon, the alternate reality). If prompted, it may see the alternate pattern, but as it does so, it becomes blind to the previous pattern. If it views the superposition pattern, then it sees neither sub-pattern while entertaining the collective pattern. Of course, the brain can hop back and forth rapidly for simple constructions but much more complex signals may not be so easily "jumped out of."

Consider the Necker cube. If the figure is presented as completely symmetrical, one usually sees a two-dimensional hexagon divided into six similar triangles. It is possible for some people to see the figure as a cube, but this usually requires a conscious attempt (perhaps organized by the left hemisphere) to do so. A standard Necker cube is however ambiguous (perhaps the right hemisphere's processing is affected), whereby one usually sees a mock three-dimensional cube in one of two possible orientations. When the brain first scans the figure, it detects, prior to consciousness, the superposed states of A and B, which is the flat state. The brain of a modern person (habituated to mass-media imagery) reads the clues and ordinarily chooses either orientation/state A or orientation/state B, thus collapsing the wave function. The brain cannot see the alternate state until the previous state has been suspended, implying that the brain for a short period temporarily revives the superposition.

The Necker cube demonstrates that the brain constructs reality, though when the data are sparse enough, the reality can be discerned as mock or simply representational. In fact, one way the brain can discern that the phenomenon isn't "real" or "serious" is by switching back and forth between states.
A study by Jay Sanguinetti of perception of ambiguous black and white silhouettes shows that the brain may recognize a pattern, but reject it before it is consciously perceived.
World Science summary
http://www.world-science.net/othernews/131114_brain.htm
Sanguinetti showed study participants images of what appeared to be an abstract black object. Sometimes, however, there were real-world objects hidden at the borders of the black silhouette.
When neurons fire in a specific coordinated manner, detectors record a signal strongly correlated with pattern recognition. The "recognition" signal occurs about 400 milliseconds after the image is shown -- less than half a second.
"The participants in our experiments don't see those shapes on the outside; nonetheless, the brain signature tells us that they have processed the meaning of those shapes," said Mary Peterson, a senior scientist at the University of Arizona who oversaw the study. "But the brain rejects them as interpretations, and if it rejects the shapes from conscious perception, then you won't have any awareness of them."

Closely related to this construction process is the notion of Gestalt. When two subsignals superpose and are recognized as meaningful, we call that cognition a Gestalt effect. The Gestalt reaction may be a consequence of hardware or software functions.

In the case of the Necker cube, the Gestalt reaction ordinarily doesn't stem from superposition of states A and B but from the superposition of sub-signals that constitute perception A or perception B.

As we know from internet programs designed to thwart automated access, pattern recognition is not a simple matter. Suppose we have a (say equilateral) triangle and draw two intersecting lines such that one side is linked to two lines and each of the other sides is linked to one line. A spontaneous identification of the numeral 4 is unlikely, but if told that the figure contains a 4, the observer sees it by deconstructing the figure and cognizing the appropriate subsignal while defocusing the remaining signal. Again, we see that meaningful (not mere noise) reality is constructed by the brain either by decomposition of superposed signals or by the combining of signals into a composite signal.

Interestingly, we may say that the figure we've described is composed of two subsignals, but that only holds if there are templates for those sub-signals. The 4 exists because it is a common visual signal used in an important process known as counting. So it is with superposition of more complex phenomenon signals.

Another example: take a collection of five stones each colored differently and arrange them in a circle. How many ways can we arrange this circular pattern? Let us name the colors as B,R,G,P,W and let B be fixed relative to the others. With respect to B, there are 4! (24) patterns. But, in a ring, there is no difference between a permutation and its mirror image, and so the total number of patterns is 4!/2, or 12.

The fact that we get two counts for the same set of stones -- (n-1)! if mirror images are treated as distinct, and (n-1)!/2 if they are not -- is another illustration of the brain's imposition of order on its input signals.
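A quick brute-force enumeration -- a Python sketch offered purely as an illustration, not part of the original argument -- confirms both counts for five stones:

# Sketch: brute-force count of distinct ring arrangements of five stones.
from itertools import permutations

colors = ('B', 'R', 'G', 'P', 'W')

def canonical(ring):
    # Smallest representative among all rotations and reflections of the ring.
    n = len(ring)
    variants = []
    for r in range(n):
        rot = ring[r:] + ring[:r]
        variants.append(rot)
        variants.append(tuple(reversed(rot)))
    return min(variants)

# Treating rotations and reflections as the same pattern: (5-1)!/2 = 12.
print(len({canonical(p) for p in permutations(colors)}))   # 12
# Holding B fixed and treating mirror images as distinct: (5-1)! = 24.
print(len(set(permutations(colors[1:]))))                  # 24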

An important issue cited by Davies, Hoffman and others is sometimes known as the phi effect, whereby the conscious brain discerns motion that can only have been constructed by a pre-conscious processing method. The phi effect is most obvious in film and video, where the brain interprets slightly different still images, presented in rapid sequence, as moving objects.

Hoffman relates experiments with four lights set at corners of a square. At one time interval between flashes, if the lights are flashed sequentially, the observer sees four dots traveling in straight lines. At a shorter interval, the observer sees one illuminated dot traveling in a circle.

Interestingly, the lights can be flashed in such a way that straight line motions might head toward one source, but this interpretation is rejected by the brain, which, says Hoffman, prefers "global coherence of motion."

If stick figures of a symmetrical t and a symmetrical x are alternately flashed at a sufficient rate, the observer sees a whirling windmill.

Numerous experiments of this sort -- pioneered by Max Wertheimer in 1912 -- point to the brain's construction of a meaningful signal from input clues.

We also have the remarkable and telling effect of masking, which engineers the deconstruction and forgetting of a pattern in order to deal with a replacement pattern.

"It is possible to present stimuli to the brain subliminally (unconsciously)," writes neuroscientist Joseph Le Doux (40). "This can be done in a number of ways, but one commonly used is backwards masking. In this procedure, an emotion-arousing visual stimulus is flashed on a screen very briefly (for a few milliseconds) and is then followed immediately by some neutral stimulus that stays on the screen for several seconds. The second stimulus blanks out the first, preventing it from entering conscious awareness (by preventing it from entering working memory), but it does not prevent the first from eliciting an emotional reaction (the stimulus changes the beating of the heart or makes palms sweat). Since the stimulus never reaches awareness (because it is blocked from working memory), the responses must be based on unconscious processing of the meaning of the stimulus rather than on the conscious experience of it. By short-circuiting the stages necessary for the stimulus to reach consciousness, the masking procedure reveals processes that go on outside of consciousness in the human brain."

We see that an unconscious process chooses the phenomenon signal it has a need to focus on. The brain requires some time interval in which it determines whether a signal perceived by the "reptilian brain" is to be incorporated into the conscious narrative.

In About time, Davies writes that the "essence of phi shows up in experiments in a darkened room where two small spots are briefly lit in quick succession, at slightly different locations." The subjects saw not two successively lit dots but a single dot moving back and forth. Usually each dot is illuminated for 150 milliseconds followed by a dark period of 50 milliseconds. "Evidently the brain somehow 'fills in' the 50-millisecond gap. Presumably, this hallucination or embellishment occurs after the event, because until the light flashes, the subject cannot know the light is 'supposed' to move." (8)

In another experiment, the first spot is colored red and the other green, Davies wrote. Subjects reported seeing the dot change color in the middle of the imagined trajectory. So how does the subject experience the correct color before the green spot lights up?

Plainly, a lag time permits the brain to cobble together signals into a consciously perceived signal that "makes sense," in accordance with some Darwinistic imperative.

Davies writes that Daniel Dennett (41) argued that the receiver system initially only records the red spot and then the green spot but that an "Orwellian in-built censor" rejects the initial reality and constructs another, causing the observer to forget the initial discrete observations.

We might add that for such basic percept or pre-percept signals, the rules for organizing them into a meaningful signal are highly time-dependent.

We may wonder about the mechanism for distinction between the perception of the self as "behind the eyes" and the remainder of the world as "out there." I don't have a good mechanistic answer as to perception of qualia, or of self. (42)

But I would say that the feedback control system is not able to monitor all of its interior elements, analogous to the results of Goedel and Turing, which show that a computer cannot compute a proof of every true statement -- that its circuitry must, in this sense, be incomplete.

Construction of reality occurs not only at basic levels but at all levels (except perhaps at some metaphysical ultimate level). As Eric Kandel (43) has noted, much post-behaviorist research verifies Freud's basic point that many conscious perceptions and goals are driven by unconscious goals.

On the borderline of perception, we see how the brain manufactures phenomena which seem plausible. For example, when the ambient noise level is high relative to a stray noise that doesn't quite fit, a person may think he's heard his cellphone tone and reach to look at it, only to find no incoming call or signal. This phenomenon can occur even when there are no other cellphones nearby.

Clearly, the brain has fitted what it takes to be a signal to the nearest (in some mathematical sense) stored pattern and influenced the conscious mind to perceive it.

However, the executive function may alert the conscious mind to be wary, because the purported signal is near the edge of perceptible phenomena.

The malleability of individual reality is strongly underscored by Norman Doidge, who has drawn attention to remarkable advances in the study of neuroplasticity. (44)

G. William Farthing says such "automated" or "unwilled" activity is explained or rationalized by the interpreter system, which formulates a politically correct narrative expressed via conscious cognition. Farthing notes that the interpreter explanations cannot be entirely accurate because it cannot scan all the computational activities of these "nonconscious modules" [whether they be hardware or software programs]. (45)

Further, as work with brain-damaged patients shows, the interpreter system is not limited to coming up with convincing, politically correct stories to mask and repress from consciousness the infantile or animalistic hidden agendas; it can also concoct reality streams that explain anomalies linked to the disorder.

Farthing is far from alone when he insists that mental unity is an illusion. "Our actions are not controlled by the executive system. Most of them, including many complex cognitive acts, are products of nonconscious modules." He adds that "our culturally instilled, folk-psychological belief in conscious control of our actions is so strong that when the left hemisphere interprets the behaviors elicited by nonconscious modules, it typically interprets them as if they had been consciously controlled." (46)

Steven Strogatz (47) tells of a 1999 study in which electrode-linked subjects were shown "Mooney faces," ambiguous black-and-white images that, viewed in one orientation, look like faces, but viewed "upside down," are meaningless blobs. About a quarter second after a subject scrutinized the image, the monitor displayed a number of "gamma oscillations" caused by millions of neurons firing rhythmically at around 40 Hertz in various cortex regions associated with visual processing. These gamma oscillations occurred for subjects reporting a face or a blob at the point when unconscious recognition occurred.

However, notes Strogatz, though the firings occurred in each case, the degree of synchrony was radically different. Only when the "rightside up" face was viewed did the electrical discharges align themselves in farflung parts of the brain. Yet, before the subject pushed a button to signify recognition of a familiar type of image, the synchrony dissolved. Again, the brain is constructing a phenomenon signal from the input data, which is why there needs to be a time lag prior to consciousness.

Roger Shepard's "turning the tables illusion" catches the brain in the act of constructing reality. An image of two identical parallelogram tabletops accompanied by different visual cues fools the brain into seeing very different shapes.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485780/

Gerd Gigerenzer remarks that "the perceptual system does not fall prey to illusory uncertainty -- our conscious experience does. The perceptual system analyzes incomplete and ambiguous information and 'sells' its best guess to conscious experience as a definite product" (47a).

The hallucinations of patients with Charles Bonnet syndrome are akin to lucid dreaming. They happen to people with damage in their visual processing system, as Oliver Sacks points out in his book Hallucinations. The hallucinations are so convincing that some patients are initially unaware that others don't see the apparitions. Often, however, the hallucinations are so bizarre that the patient is aware that his or her brain is acting peculiarly. For example, Sacks tells of a woman who observed a man in a striped shirt paying at a cash register. As she watched, the man "split into six or seven identical copies of himself, all wearing striped shirts, all making the same gestures -- then concertinaed back into a single person" (47b).

What is going on here? Clearly, the brain is constructing an alternate reality, evidently "filling in" as compensation for full or partial loss of vision or visual processing ability (the woman in this case had reduced blood flow in a part of the brain that processes visual imagery).

The murkiness of the concept of free will is driven home by V.S. Ramachandran (48), who cites a study in which EEG-wired subjects were instructed to wiggle a finger at any time of their own choosing within a ten-minute period. Researchers found that 0.75 second before a finger wiggled, the EEG recorded a "readiness potential," even though each subject's conscious awareness of a decision to wiggle coincided almost exactly with the actual wiggling.

From the foregoing, it would appear that there is no remedy for the situation whereby one's thoughts contribute strongly to "wave function collapse."

Dreams and altered states of consciousness
Sigmund Freud's Interpretation of Dreams gives the mechanisms of repression, censorship, disguise and conflation. For Freud, conflicting primal goals and emotion-laden life experiences are thrashed out in symbolic form in order to help the brain attain some sort of equilibrium, though in pathological cases the equilibrium is not well attained. (51)

But why dream? Dreaming requires a type of consciousness that largely leaves out left-brain activity and permits associations of symbols based on primitive similarities. The stream of consciousness is not nearly so smooth, and it is quite possible a laptop might vanish from beneath one's fingers. A typical dream seems to reflect an infantile or animalistic consciousness -- the way a baby actually interacts with the world.

In fact a dream, like an internal fantasy or novel or Hollywood movie, often seems to fill a need for vicarious reality. But why does a person need a vicarious experience? It may stem from early man's need to show others where the game was spotted. This vicarious reality is essential to human speech, which mimics, to a limited degree, the brain's feedback-controlled reality construction system.

Also, I suggest that the right brain, when representing its goals and conflicts in a dream, is actually attempting to formulate a reality scenario that meets its standards but lacks the left-brain's ability to make narratives smooth and continuous, and so the dreams simply stop abruptly. Such a dream state is in many ways similar to an alcohol-induced blackout, whereby the left-brain (including, if you like, the superego) is suppressed and the ego is permitted to live riotously without supervision or restraint. Inability to remember what occurred in a blackout is akin to the common inability to remember a dream. Some alcoholics drink just a little before blacking out, and then consume prodigious quantities of intoxicants. This permits the Mr. Hyde of the ego to go unsupervised by the Dr. Jekyll of the superego -- even after the fact.

So though a typical dream, from our perspective, doesn't seem to cause the observer to interact with the external world, we see that a person may be in a dream-like state and interact with the external world, as we also know from other altered states of consciousness.

Except for very high E-magnitude dreams, forgetfulness of a dream and its details is a function of alertness. Interestingly, I have found that if I wake up quickly and forget a dream, I can sometimes recover the dream if I allow myself to soon return to a trance-like, semi-conscious (right-brain dominated) state. The reason for the amnesia seems to be that the brain doesn't want one reality stream to conflict with the other; the organism wants no confusion occasioned by blending of reality streams.

Of course, we have various levels of dream consciousness. My own experiences suggest a range: from a near-blackout level -- partly because the symbols fly so fast and are so disjoint and partial -- that seems to result from illness or physical problems, to mid-level awareness dreams that often occur sequentially and seem to act out the same conflict with different symbols in each episode, to high-awareness dreams close to the conscious state. Usually, the self is simply a passive observer or passive participant, but if I am close to waking up I may assert control and start directing the dream to some extent, as with a fantasy. I am also able to fall asleep (lose left-brain consciousness) by fantasizing, so that my "consciously controlled" fantasy merges into a dream without the left-brain self in charge.

I well recall drifting into a state of semi-consciousness on a long-haul bus and listening to a symphony orchestra play a very interesting, creative piece. My brain was transforming the input of the engine noise into something pleasant. I also found that I could regulate this symphony to a limited degree with a left-brain "conductor" -- as long as the "conductor" wasn't too alert, in which case I would be cognizant of the engine noise. (51A)

In fact, hypnotic and hypnagogic states are routine occurrences, as the psychiatrist Milton H. Erickson argued. Consider his controversial method of inducing hypnotic states. He found that if he used deliberate confusion -- or contradiction -- the patient became, as it were, frozen, which amounted to a low-intensity trance. For example, in his handshake ploy, Erickson would make as if to shake the patient's hand but then grab his wrist. We might say that the executive agency that controls conscious focus becomes unable to entertain two contradictory states and, in this case, the psychiatrist uses the state-conflict to divert the executive and make it more plastic, a situation which can be viewed as the beginnings of a so-called trance.

http://en.wikipedia.org/wiki/Milton_H._Erickson

In the 19th century, the hypnosis researcher Jules Dupotet de Sennevoy discovered that subjects lost all sense of physical sensation, including somatic pain. When subjected to what amounted to torture, they showed no response, but upon awakening, the residual pain prompted great anger. Yet during the hypnotic state, he found, mental capacity was often more acute than during the ordinary waking state. Others have since confirmed this phenomenon, which shows that the reality formation mechanism we have posited operates so as to screen out somatic awareness.

Of course there is an extensive literature on hypnosis that strongly affirms the role of mental processes in reality formation.

Pavlov argued that trances corresponded to gradations between the fully alert awake state and sleep. We would add that in some trances the left-brain reality constructor is limited while the right-brain reality constructor rules. Full conscious awareness is then limited, because it is the left-brain cohesion routines that tend to extend and sustain conscious awareness. Such effects are routine when drifting off to sleep or listening to music. In some subjects, such a state can be induced by a hypnotist. Notice that a skilled hypnotist can manipulate a subject's mind so that it creates an alternate reality and storyline. The hypnotist's voice perhaps is perceived by the subject's mind, now in an infantile state, as that of a parent.

The sleep-walker is mainly in a dream state but interacts with the "external world" sufficiently to demonstrate the mind's ability to construct its reality.

Similarly, psychotic episodes show that the brain constructs reality, rather than merely observing it. The brain begins to misinterpret data sets in such a way as to bring about dysfunction. However, one must acknowledge that there are borderline cases in which one man's paranoia is another man's reality. Conspiracies do exist. (52)

What of the world view accepted by a typical American? It is actually a wild distortion of "reality." Propagandists and psychological warfare experts operating through a limited media ensure this situation.

The old saw about genius being akin to madness carries a kernel of truth. The creatively inclined individual draws heavily from the right-brain "lateral" connectivity programs (simple associations as opposed to hierarchical associations) in order to obtain new ideas (superpositions) for projects, while using the left-brain functions to meta-organize the project. This unusual dependence on the right brain corresponds with the right brain dominance of some cognitively disordered mentally ill persons.

Intoxicants are a well-known source of reality distortion. A drug such as LSD, through causing excessive firing of neurons, may yield visual and auditory hallucinations or may stimulate a revision of the belief system incorporated by the individual, with potentially dangerous consequences. Again, we see that the brain concocts reality from raw input data.

In the case of "cold turkey" withdrawal from alcohol or opiates, the subject experiences vivid hallucinations, often in the form of monsters or voices urging suicide. The goal of satisfaction of the addictive need is so great as to strongly affect the functioning of the reality construction mechanism. The organism's desire for suicide stems from its need to end the crisis associated with its lack.

The sleep-deprived often see phantoms, such as a road sign looking like a person frantically waving his arms in warning. During sleep the brain organizes the day's principal perception sets. In fact, dreaming may be part of that organization process. At any rate, energy is required to maintain a highly organized, left-brain controlled reality narrative. When energy levels are low, that mechanism works poorly.

Very significantly, those deprived of sensory stimulation for days on end begin to construct reality narratives in which minor external stimuli play a major role. The small stimulus is amplified with a set of memory templates. In other words, lucid dreaming results. The brain must "live life." Even low levels of boredom are, from a Darwinist standpoint, counterproductive. The organism needs a rich variety of experience, within smooth constraints; such variety tends to increase chances for individual survival and group survival via hunting, searching and procreativity.

Also, nearly every human needs interaction with other humans. The need for positive approval is a stalwart of the "herd instinct." (We have not discussed the qualitative notion of love. Though love is very important, we would use an E-value, or set of E-values, associated with a specific object or person, as sufficient for our limited purposes.)

[We will have more to say on Freudian-style group dynamics versus Jung's collective unconscious idea in the section on Jung and synchronicity below.]

A prisoner may be held incommunicado in order to pressure him to cooperate with an interrogator. Relief from boredom and the need for human contact may boost the value of the interrogation session, the captors hope. But such a captive is very likely to enter a dream world and, his grasp of reality weakened, is at a disadvantage in defending himself from telling interrogators what they want to hear, even if it is objectively false. In fact, communist "brainwashing" of American war prisoners -- today called "enhanced interrogation" -- was intended to weaken the brain's reality formulator, thus making prisoners compliant and easily guarded by a few soldiers and more amenable to confessions of doubtful veracity.

Occasionally, the brain's reality constructor may show "minor" disjunctions.

Consider the "case of the missing sock."

When one places loose pairs of socks in a washer and dryer, the brain anticipates that there will be chaotic, or effectively random, mixing and tumbling. At any point, there is uncertainty as to the existence of not only one sock, but the pair. There is a waveform for the pair which is a superposition of the individuals. When this waveform vanishes, there is "no guarantee" that a waveform for an individual will emerge, especially in that socks have low priority and there is no strong belief (the focus amplitude is low) that a particular individual will "be there" later. That is, the reality formulator may tend to lose a sock in the mixing process.

Yes, statistical arguments can be used to counter this suggested scenario, but our model questions some of the assumptions of modern probability theory, as discussed later. [Also see my paper The many worlds of probability, reality and cognition linked in sidebar.]

The vanishing laptop example might seem extreme. But what of the time you casually put down a cup of coffee in a cluttered room with no one else present, turned to do something and then turned to get the coffee, only to find the cup was nowhere in sight? Have you never had such an experience? Perhaps you searched diligently and then found it in some unexpected place, wondering, "How in the world did I manage that?" but then shrugged off the little mystery. Possibly you searched the entire room diligently but didn't find it. Either you accept that something spooky has happened or you believe you have forgotten leaving the room with the cup in hand.

We suggest that sometimes routine psychological errors run a bit deeper, into the reality formulation area. The change of focus may mask a particular anticipated signal; perhaps when you changed focus you caused a glitch in the signal decomposition procedure. When you turn back, the cup signal isn't decoded; its "wave function hasn't collapsed."

On occasion, such an inexplicable event (or non-event, really) might be related to how your mind is interacting with someone else's, even if you don't "see" them in your "materialist" virtual reality state.

Group reality construction
We can see group minds at work on the internet, perhaps using network theory to give a partial description. The evolution of civilization and its group mind, with its many sub group minds, is like the evolution of an organism continually changing -- sometimes regressively, as in the Dark Ages -- and replenishing itself.

We see from political and economic life that this mind is indeed unconscious (though elites try to manipulate and control it) -- which is to say, irrational. This description, perhaps based in Freudian group dynamics, accepts a conventional mechanistic worldview, with all inter-mind activity mediated by the senses.

However, from a Jungian perspective, the stated existence of paranormal phenomena implies a connectivity that bypasses the ordinary sensory system.

The model we have been urging suggests that the sensory system is not part of a simple decoder, but is also part of the individual reality formulator. How does an individual's reality formulator interact with those of his or her fellows? That is, if an individual dreams his own dream during waking life, how does this comport with the dream-construction of his fellows?

Consider the multiplexing examples above. Different-language messages can intersect and potentially yield meaningful, if noisy, combination messages. Also, a translator program can link two worldviews that are disjoint in many respects but share enough in common so that a "good-enough" function exists between the two.

Consider an English speaker carrying on a radio conversation with a German-speaker through an interpreter; if the interpreter is relatively fast, neither the English speaker nor the German speaker need know that the entire conversation isn't being conducted in one language; neither need know that he is speaking to a foreigner. Similarly, a person living in "virtual world A" might interact with someone in "virtual world B" with each unaware of the other's world.

To use a silly example, perhaps Lancelot in King Arthur's court is wooing Guinevere, without realizing Guinevere is Britney Spears who is having boyfriend issues. Britney has no clue that Lance is in another realm altogether.

Yes, we commonly say of someone that "he's in another world." The point here is the extent of that possibility.

But because of the feedback system of each mind/node, our model then implies communal virtual reality. This communal reality is continually changing, much like weather and climate systems. Such a communal reality can be viewed as the intersection of individual reality streams. But this group reality needn't be and probably isn't the background reality of the cosmos. This group reality would correspond more to Jung's notion of a collective unconscious, but in no way disputes the Freudian notion of the group mind. The Freudian group mind tends to be elemental and, in totality, irrational and subject to manipulation by experts in crowd psychology, but it is said to operate via the usual senses. The Jungian collective unconscious is a linkage of minds on some non-material plane that contains ancient knowledge.

(I would never consider myself a Jungian, but I think it important to place my model within a historical perspective.)

I have had numerous personal experiences that suggest some sort of covert linkage among minds, but I won't dwell on them, except to point out that very occasionally I will have a dream in which I wake up thinking that one of the dream people actually represents someone else's mind. I believe this because the qualitative feel of the emotions of the dream figure "aren't mine," but are I suspect those of someone known to me. That mind is invariably in Freudian disguise. Yes, I am aware that one often doesn't apprehend one's own unconscious feelings; yet I believe that I am fairly familiar with their routine qualia. When those qualia differ substantially from those with which I am familiar, then I suspect that the signal hasn't been generated entirely internally.

To me, this accords with the idea that each brain influences the signal processing of the other in ways that transcend and indeed bypass the sensory systems.

The researcher Dupotet found that hypnotized subjects seemed to be able to see through opaque objects and "appear endowed with a knowledge beyond that which they ordinarily possess, are able to diagnose illnesses and prescribe effective treatments and even foretell future events in exacting detail." Of course this observation was met with scientific derision. In popular culture, hypnotism and spiritism became interlinked and a fad that still continues.

But, is there something to Dupotet's finding? That is, is it not plausible that the subject's reality constructor is working at a "different level" and intersecting with the hypnotist's reality constructor to produce such effects?

Such "back-channel" communication is often ascribed to the spirit world, whereby a dreamer may interact with a spirit other than his own. But then of course we are left with the problem of defining spirit. The assumption is that such entities are non-material and can neither be confirmed nor denied and are hence of no relevance to science.

I am not about to define spirit any more than I am ready to define consciousness. However, I would say that the assumption that non-material links are impossible is directly contradicted by relativity theory and quantum mechanics.

Deja vu is sometimes ascribed to this supposed spirit realm. Recent research shows that the deja vu perception can be induced. A pattern is flashed so briefly that a subject grasps it only unconsciously. When the pattern is later shown for a longer interval, the subject often has a feeling of having seen it before but can't pin down the memory.

We would say that on occasion a reality stream signal contains a superposition of two relatively complex subsignals. The superposition has its own Gestalt. But so does one of the subsignals; however, the larger pattern blocks from conscious perception the subsidiary pattern.

But non-routine deja vu might occur if a person has somehow changed virtual worlds. The former virtual world is blocked from working memory, but a part of the brain recognizes it.

We would say that closely related to the deja vu experience is meaningful coincidence, whether serial or simultaneous. What we have is several subsignals that interfere at time A that go to constitute part of the overall reality signal at time B. This always happens. But sometimes the subsignals cohere sufficiently so that at the "collapse of the wave function" at time B, they form "meaningful" pairs or sets. The observer sees one or more "echoes" of time A events at one or more future times, though my experience is that these future times tend to be within 24 hours.

Following Von Neumann, we say that the wave function collapses only upon observation by a conscious mind, though the degree and type of consciousness may vary sufficiently so that wave functions don't decompose very predictably.

To use a relevant analogy, consider a simple taut string fixed between two walls. Pluck the string at end A and a moment later at end B and two traveling wave forms are set off. These wave forms are discrete until they pass through each other. Once they are fully superposed the brain cannot discern directly either wave form. It must wait until the waves have continued on before they can be viewed as individuals. So it is with reality subsignals. However, because the reality subsignals are often very rich, they can be decomposed in numerous ways.

Randomness, probability and coincidence
Randomness is a description of information. When do we consider an event or observation to be random?

In a classical sense, we can model an event with a set of force vectors. Some events have fairly large sets of force vectors, many of which have low magnitude. The vector sum -- the cumulative vector -- predicts the next state of the observation.

However, quite a few of these low-magnitude vectors can't be observed. We can say that empirically we have learned that the very small vectors tend to cancel out. In some cases, a small vector represents a "tipping point" that changes the vector sum in a drastic way. For example, during the toss of a fair coin, some small vector determines whether the coin lands head or tail. Over many trials, these small vectors tend to cancel, yielding the "law of large numbers."
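As an illustrative sketch (my construction; the essay does not specify any such model), one can let each toss be decided by the sign of a sum of many tiny random "force" contributions and watch the heads frequency settle toward one-half over many trials:

# Sketch: each coin toss decided by the sign of a sum of many tiny, unobserved
# "force" contributions. Over many trials the small forces effectively cancel,
# and the heads frequency settles toward 1/2 -- the law of large numbers.
import random

def toss(n_small_forces=500):
    net = sum(random.uniform(-1.0, 1.0) for _ in range(n_small_forces))
    return 'H' if net > 0 else 'T'

for trials in (10, 100, 10000):
    heads = sum(1 for _ in range(trials) if toss() == 'H')
    print(trials, heads / trials)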

So here the concept of random is related to our knowledge of small forces. We can't sum those forces exactly for one event, but we can assess that the small forces vary so little that there is no way to be certain what the next outcome will be.

Essentially then, an outlier event is either the result of a measurement error or the result of a set of small forces becoming coherent, thus amplifying their effect rather than canceling or reducing their effect.

At the quantum level, the probability amplitude is what we can know about the "collapse of the wave function." We cannot predict exactly where the collapse will occur and a particle's impact will be recorded. There seem to be no hidden physical forces that are influencing the outcome. Within constraints, the outcome is non-deterministically or acausally random. I strongly suspect that the collapse is a result of how the brain processes data at a very basic level.

It seems plausible that if the focusing unit is neutral about outcomes, the results are deterministically random, the determinants being within the unit's reality builder software. But this is an insufficient explanation, and in fact, I agree with Penrose that progress in physics, physiology and psychology requires extensive research into this interface.

The word "coincidence" is taken from Euclidean geometry, whereby two line segments intersect over some finite interval. More generally, we deem two events as coincidental if they have something in common, if they intersect. If the number of elements of the intersection is "high," we may say the coincidence is non-random, but is causally related, perhaps reasoning that the information content of the intersection is so high as to represent error-correction activity. However, because many small forces can occasionally cohere into a large vector, rather than canceling out, we are unsure whether this coincidence has a causal relation or whether it represents an outlier. Hence the need to run a number of trials.

So the coincident events of a boy meeting a girl he knows at the supermarket may be a result of various unmeasured forces and be held as essentially random, even though the girl suspects not. But, if these types of meetings occur repeatedly over a short duration, the girl will almost certainly be correct to assume he is interested in her.

Also, often "meaningful coincidences" are to be expected without there being any macro-connecting cause or force. Such coincidences are counterintuitive to many people, and so they believe that some large mystery force is at work. The birthday problem is a good example. In a room of just 23 people whose birthdays are unknown to you, the chance is better than 50 percent that some two of them share a birthday, and that probability climbs rapidly as the group grows. Another example is the "hidden hand" of a free market not controlled by cartels. There is no central organizing force, but we get the impression one exists because micro-actions can and often do cohere into a substantial "net force vector."
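A small Python sketch (mine, not drawn from the cited sources) makes the distinction concrete: the chance that some pair in the room shares a birthday is quite different from the chance that someone matches your particular birthday.

# Sketch: the birthday coincidence. With 23 people, the chance that some pair
# shares a birthday already tops 50 percent, even though the chance that
# someone matches one particular birthday (yours) is still small.
def p_any_shared(k, days=365):
    p_all_distinct = 1.0
    for i in range(k):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

def p_matches_yours(k, days=365):
    return 1.0 - (1.0 - 1.0 / days) ** k

print(round(p_any_shared(23), 3))      # about 0.507
print(round(p_matches_yours(23), 3))   # about 0.061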

In a light-hearted vein, Edward B. Burger and Michael Starbird (53) dismiss the idea that a "cosmic conspiracy" might account for similarities such as these:
  • Lincoln and Kennedy were both shot to death
  • Lincoln was elected to Congress in 1846
  • Kennedy was elected to Congress in 1946
  • Lincoln was elected president in 1860
  • Kennedy was elected president in 1960
  • Lincoln's secretary was named Kennedy
  • Kennedy's secretary was named Lincoln
  • Andrew Johnson, who succeeded Lincoln, was born in 1808
  • Lyndon Johnson, who succeeded Kennedy, was born in 1908
  • John Wilkes Booth was born in 1839
  • Lee Harvey Oswald was born in 1939
Burger and Starbird argue that "Lincoln and Kennedy were not average citizens. The pile of minutiae through which to forage is truly immense." And so "how likely is it that there are no coincidences of dates and names in any of this blizzard of possibilities?" Their answer: "Essentially zero."

John Allen Paulos has argued similarly about various coincidences concerning 9/11, in particular concerning the numerals themselves (Google his home page for a link to that article).

If one accepts the philosophical underpinning of probability theory, one can see the point of view of these writers. However, some of these pairings might indeed fall into the realm of "meaningful coincidence" and represent coherence of reality construction wave forms as shared among a large group.

P.T. Barnum was preoccupied with the number 13, which he held to be unlucky (54). The superstition associated with the number doubtless influenced his suspicion of it, which was reinforced after two fires. The first destroyed his American Museum on July 13 and the second his Barnum Museum on Nov. 13.

From the perspective of routine probability theory, these coincidental events do not portend a mysterious connection. The probability of two consecutive fires can't be estimated; no doubt many combustibles were kept within and the many visitors would have drastically increased the chance of accidental combustion. The probability of "the same" event occurring on the 13th of the month is about (1/30)^2, or roughly 1 in 900, which is low, but a significance analysis would show nothing surprising.

Of course there are various ways to estimate the likelihood of randomness. But even if the probability were very low, because there are only two events, one cannot rule out a fluke outlier.

One way to address "remarkable chunks of order" is with Ramsey theory, an area of discrete mathematics aimed at finding what conditions are necessary for a minimum amount of order to appear. The simplest example is the pigeonhole principle: if there are m pigeonholes and n > m letters, then at least one pigeonhole contains at least two letters. If n exceeds m by only a little, slots holding two or more letters are infrequent, and so we might be struck by the "remarkable coincidence" of the association of two particular letters. Elaborations on this concept occur in network theory, where it has been proved that, given minimum numbers of links and nodes, some subregions must have higher information content than the average for network subregions.

The pigeonhole example shows how important the framing of the question is in standard probability analysis. Suppose you walk into a large room with a bank of 1,000 pigeonholes, having been told that the rule is that each slot must hold at least one letter. Now you walk along and behold! there is a slot with two letters, and you see no other slot that contains a pair. You may ask: what are the chances of such a pair? But as long as there are at least 1,001 letters, there is a 100 percent probability of such a pair existing, though that pair's slot would hold (assuming equivalent average content) double the information of a typical slot. If, however, you were standing outside the room, your chance of guessing the number of the slot with the high information content would be 10^(-3).
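A brief simulation sketch of this scenario (illustrative only; the slot counts are the ones used above):

# Sketch: 1,000 slots, 1,001 letters, every slot holding at least one letter.
# A doubled-up slot is guaranteed to exist, yet guessing which slot it is,
# from outside the room, succeeds only about once in a thousand tries.
import random

SLOTS = 1000
trials = 100000
hits = 0
for _ in range(trials):
    doubled = random.randrange(SLOTS)   # slot that receives the extra letter
    guess = random.randrange(SLOTS)     # blind guess from outside the room
    if guess == doubled:
        hits += 1
print(hits / trials)                    # close to 0.001, i.e. 10^(-3)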

A discussion of randomness shouldn't ignore the issue of deterministic constraints on predictability. We may have an algorithm whose final outcome cannot be computed, as far as is known, by some shortcut method. The work of predicting the nth value rises with n. The work for computationally difficult problems rises exponentially, versus polynomially for many other problems.

We can say that as n rises, the noise rises with it, so that after some m steps, the noise equals or exceeds the signal. In that case, the nth value is effectively random, within whatever constraints there are.

Completely chaotic systems likewise are effectively random within specified ranges. Periodicity vanishes, and yet periodicity is the key to nearly all forms of prediction. However, in the march toward chaos, sometimes there are transient intervals of near-periodicity. These islands of near-periodicity might, depending on the circumstances, look like strange coincidences but in fact are diverging from periodicity until chaos is attained.
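To illustrate, here is a sketch using the logistic map -- my choice of example, not one named in the text -- a fully deterministic rule for which no predictive shortcut is known and whose trajectories diverge from nearly identical starting points:

# Sketch: the logistic map x -> r*x*(1-x) at r = 4, a deterministic rule with
# no known predictive shortcut. Two trajectories starting a hair apart diverge
# until the gap is as large as the signal itself; beyond that point the value
# is effectively random within its range.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(step, round(abs(x - y), 8))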

Systems "far from equilibrium" may yield counterintuitive effects, as when transient periods of order disrupt the general trend of increasing entropy, as discussed by such researchers as Prigogine, Kauffman and Strogatz (55). Sometimes such "spontaneous order" is cast in terms of wave entrainment, whereby a high-amplitude wave form tends to bring low-amplitude wave forms into phase, or at least into near phase.

As Stephen Wolfram (55a) and others have noticed, algorithms with a few simple rules may, occasionally, yield high information results, though most rule sets yield low-information results.
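Rule 30 is one well-known instance of Wolfram's point; the sketch below (illustrative, not tied to any source cited here) prints its triangle of seemingly random cells from a one-byte rule:

# Sketch: Wolfram's Rule 30, a one-dimensional cellular automaton whose rule
# fits in a single byte yet whose output looks statistically rich -- a simple
# rule set yielding a high-information result.
RULE = 30
WIDTH, STEPS = 79, 32

def next_row(cells):
    out = []
    for i in range(len(cells)):
        left, mid, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        neighborhood = (left << 2) | (mid << 1) | right
        out.append((RULE >> neighborhood) & 1)
    return out

row = [0] * WIDTH
row[WIDTH // 2] = 1
for _ in range(STEPS):
    print(''.join('#' if c else '.' for c in row))
    row = next_row(row)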

Arthur Koestler's Roots of Coincidence (56) uses probability arguments in order to help the general reader make sense of J.B. Rhine's work. However, Koestler fails to note that Paul Kammerer's alleged "law of seriality" (occurrence of strikingly similar events) must take into account that randomness implies clustering.

For example, consider a sequence of 16 tosses of a fair coin. If the perfectly alternating pattern HTHT... occurs, a runs test of randomness gives a normal curve z score corresponding to a probability of less than 1 percent. That is, we would not be confident of randomness. So then, in a run of n >= 16 tosses, the probability of some clustering (several heads or tails consecutively) is greater than 99%.

Even so, it is possible to detect apparent non-random clustering. Consider a fair coin tossed 20 times. The probability of 12 heads and 8 tails is C(20,8) x 0.5^20, which is about 12%. We would not feel terribly confident in ruling out randomness. However, the probability of obtaining 12 heads and 8 tails in the order 2 tails followed by 12 heads followed by 6 tails, when assessed by a runs test, gives a normal curve z score of 3.49, which is equivalent to a confidence level of non-random influence of 99.98%.
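For readers who want to see the arithmetic, here is a sketch of the standard (Wald-Wolfowitz) runs test; the exact z value depends on conventions such as continuity correction, so the printed figures are illustrative rather than definitive:

# Sketch: a Wald-Wolfowitz runs test. Too few runs (clustering) or too many
# runs (rigid alternation) both push |z| well past conventional cutoffs.
from math import sqrt

def runs_test_z(seq):
    n1, n2 = seq.count('H'), seq.count('T')
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - mu) / sqrt(var)

clustered = 'TT' + 'H' * 12 + 'T' * 6     # the 20-toss sequence described above
alternating = 'HT' * 8                    # the 16-toss alternating sequence
print(round(runs_test_z(clustered), 2))   # strongly negative: too few runs
print(round(runs_test_z(alternating), 2)) # strongly positive: too many runs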

A one-to-one match of elements from two sets (or an intersection) can be seen as a coincidence. For example, consider the sets

{A,B,C,D,E,F,G,H,I,J,K} and

{A,B,C,D,E,F,G,H,I,J,K}.

What is the probability that, on scrambling, say, the first set, two letters of the same type will be found in the same position (lined up)? A probability theorem tells us that for any n greater than 10, the answer is about 63%. That is, the probability closely approximates 1 - e^(-1), or 0.63. You may think this unremarkable, but if the set elements carried high E-values, such a match might strike you as a bizarre coincidence.
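A Monte Carlo sketch (illustrative only) of that matching probability:

# Sketch: Monte Carlo check of the matching (fixed-point) probability. Shuffle
# one copy of an 11-element set and ask how often at least one element lands in
# its original slot; the frequency hovers near 1 - 1/e, about 0.63.
import random
from math import e

def has_match(n):
    perm = list(range(n))
    random.shuffle(perm)
    return any(i == p for i, p in enumerate(perm))

n, trials = 11, 100000
freq = sum(has_match(n) for _ in range(trials)) / trials
print(round(freq, 3), round(1.0 - 1.0 / e, 3))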

So, when struck by the seeming unusualness of coincidence of occurrences, one must take into account that there are various "non-paranormal" explanations that can be used.

And it must be admitted that much writing on supposedly significant coincidences is intended for the Fate magazine set (57), lacks intellectual value and unfortunately tends to undermine the work of serious inquirers into paranormal phenomena, such as nobelist Brian Josephson.

The probabilistic framework adopted by many scientists rests on some important assumptions:
1. There is no non-material medium for transmission of information other than the "spacetime continuum," whatever that is. However, that position is challenged by the fact of quantum entanglement and bilocality.

2. There is a "large" percentage of observed reality that is not a mere interpretation or projection of the brain. In other words, the individual is one of many event makers and does not influence or control most observed events. Yet, the example of the Schroedinger cat paradox challenges this assumption.

3. The moon "is there" when one isn't looking. The fact that there seems to be agreement among minds as to the moon's probable "thereness" does not preclude the possibility that it is some sort of projection. The claim that the moon is there when one isn't looking is, at root, a theological matter.

So a "strong virtual reality" view of perception would mean that calculation of probabilities is an acceptable way to estimate outcomes, but that beyond this is the occurrence of "upper hierarchy" subsignals that recur within some interval. Barnum's predisposition to worry about the number 13 could have influenced future events in his life. Of course, there is always the possibility of Freudian-style unconscious sabotage whereby one "accidentally" knocks over a lamp twice running. But we are talking about something more, the mind having a strong influence on the "fabric of reality."

Jung, Koestler and synchronicity
A number of physicists have toyed with the notion that mystical, or even, paranormal phenomena are related to quantum weirdness. Non-physicists Carl Jung and Arthur Koestler are two writers who have pondered this idea in their discussions of synchronicity, which Jung defined as the conjunction of "meaningful, acausal coincidences."

Of course these writers, lacking a mathematical orientation, are dismissed by the bulk of scientists. "Flapdoodle," said Nobelist Murray Gell-Mann of Koestlerian ideas (though later, in his book The Quark and the Jaguar, Gell-Mann mentions the notion of "goblin worlds"). (58) Gell-Mann was the motive force behind the Santa Fe Institute, whose mission was to provide scientific alternatives to arguments for "intelligent design."

John G. Taylor (59), a mathematician with a background in quantum mechanics, had this to say: "Arthur Koestler argued in his book The Roots of Coincidence (56) that, because quantum mechanics seems to have these bizarre features associated with the Einstein-Rosen-Podolsky experiment and the Schroedinger cat paradox, that therefore other bizarre phenomena can also occur in the world. This is, I think, a very dangerous, specious argument."

John Archibald Wheeler recounted that in 1979 he strongly objected to sharing an American Association for the Advancement of Science podium with several parapsychologists, characterizing that field as pseudoscience (4).

While true that neither Jung nor Koestler had a specific idea of how such an interface might occur, they did have the statements of people like Wolfgang Pauli and Pascual Jordan they could cite in their favor.

We must accept, however, that Koestler -- though he wrote like the skilled journalist and thoughtful person he was -- could not be said to have presented a serious scientific case for synchronicity or so-called paranormal phenomena, other than by recapitulating some of the results of J.B. Rhine. Essentially, his point was indeed that if quantum weirdness operates in the world, then why shouldn't paranormal weirdness also occur? His ideas about "holons" (biological parts roughly equivalent to semi-autonomous states in a federal system) seem to reflect some of Lynn Margulis' thinking on symbiosis versus competition; such activity, Koestler thought, might account for extra- or quasi-sensory perception.

Physicist F. David Peat (60) has argued in favor of a quantum explanation of synchronicity. In fact, a number of the ideas in this paper were stimulated by Peat.

Jung developed his synchronicity idea beyond "meaningful coincidences" to include paranormal phenomena in general. He argued that such peculiarities cannot be dismissed as simple subjective mental aberrations, but require a non-Euclidean spacetime continuum, pointing out that Jordan, a founder of quantum mechanics, had advocated the "idea of relativistic space to explain telepathic phenomena." (61)

In 1930, Jung wrote that perhaps time, far from being an abstraction, was a "concrete continuum" that helps account for "acausal parallelism, such as we find, for instance, in the simultaneous occurrence of identical thoughts, symbols and psychic states." (62) A Jung biographer, Ronald Hayman, writes that Pauli, who for a time was Jung's patient, had originally come up with the idea, though he did not use the word "synchronicity" (62). In 1951, Jung and Pauli published jointly two separate papers on synchronicity.(63),(64) [See first item in Appendix A for an interesting account of the "Pauli effect."]

Koestler wrote that a number of eminent physicists had shown interest in psychic phenomena. An early member of the British Society for Psychical Research was Joseph John Thomson, discoverer of the electron, said Koestler (65). A middle-aged Werner Heisenberg subscribed to a multi-layered view of reality as propounded by Goethe, writes Heisenberg's biographer, David Cassidy (66). The nine layers of reality -- accidental, mechanical, physical, chemical, organic, psychic, ethical, religious and "genial" -- are rungs on a ladder that shift from the objective to the subjective. While placing quantum physics just below the organic level, in the realm of the chemical, Heisenberg believed that in order to comprehend the "grand connections" one must climb the ladder of realities.

Heisenberg, wrote Cassidy, believed that the war reflected "movements in the foundations of human thought" -- a shifting of the layers of reality over the heads of individuals in such a way that "dark demons" loosed on the world took on a greater role than in the past. He saw Nazism and Bolshevism as a "strange sort" of "worldly religion."

A typical scientist might well respond that this sort of speculation demonstrates that philosophy is best left to philosophers, and yet we are left with the impression that still another important physicist senses a connection between quantum issues and so-called occult phenomena.

When discussing "meaningful coincidences," we should take care to distinguish between Freudian-style free-association coincidences and the synchronicity postulate of Jung. For example, in The Psychopathology of Everyday Life, Freud (67) observes that a young man's forgetfulness of a phrase reflected an unconscious conflict concerning a girlfriend who had missed her period. This conflict surfaced in a string of thoughts expressed during a free-association test. Freud argued that the apparent correlation between this set of ideas and worry over the former companion's potential pregnancy was not haphazard coincidence.

However, though this distinction is important, we must leave room for the possibility of a merger between such unconscious conflicts and synchronicity events. In fact, our model of perception actually predicts that such mergers will be rather routine.

Both Jung and Koestler were influenced by Paul Kammerer, who published a book of carefully chronicled "meaningful coincidences." (68) Kammerer, reports Koestler, regarded "random coincidence" as a false picture: in fact, he argued, there is a universal, though mostly unnoticed except in unusual cases, "recurrence of identical or similar data in contiguous areas of space or time." Kammerer regarded this "simple empirical fact" as furthering his arguments in favor of Lamarckism, the rival to Darwinism which holds that heredity is influenced by the specific behaviors of parents. (69)

Kammerer posited a "law of seriality" that ensures that "acausally related" events stream together. Koestler notes that the chief difference between the two men is that Kammerer was more concerned with serial events and Jung with simultaneous events. In fact, Jung seems to have expanded the idea of synchronicity to cover paranormal phenomena in general. Synchronicity is for him a linchpin of the collective unconscious, that vast storehouse of archetypes (symbols common to humankind throughout history).

Kammerer's credibility is, however, at issue. He committed suicide in 1926 after a specimen that supposedly bolstered Lamarckism was exposed as having been doctored, though Koestler argued that Nazis or Nazi-types sabotaged the sample as a means of discrediting Kammerer's socialist views.

Both Jung and Koestler cited the work of J.B. Rhine and Rhine's successor, physicist Helmut Schmidt, in support of their ideas. More recently, Josephson has cited telepathy studies and argued that unfair methods have been used to discredit their value.

Freud considered most reports of telepathy to lack credibility; a few, on the other hand, could not be easily dismissed. He suggested that telepathy is an archaic form of communication later supplanted by the more useful verbal form, though he did not propose any biological mechanism for the phenomenon.

For more on this, see my post Freud and Telepathy at

http://randompaulr.blogspot.com/2013/10/freud-on-telepathy.html

As Leonard Mlodinow (70) points out, if a telepathy test is given to a number of people, it would be expected that at least one person would do well enough to vary substantially from the mean. However, I would add that if that same individual repeated such a "fluke" with more than one trial set, then one would strongly suspect a non-random force was at work.
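To make Mlodinow's point concrete, here is a minimal simulation of my own (not anything from his book, and assuming an ordinary Python 3 environment with NumPy): a large pool of subjects guess Zener-style cards purely at random, yet the best scorer looks impressive.

import numpy as np

rng = np.random.default_rng(0)

subjects = 1000        # people taking the test
trials = 25            # guesses per person, as in a standard Zener-card run
p_chance = 0.2         # five symbols, so 20 percent by pure guessing

# Every hit below is pure chance; there is no telepathy in this model.
hits = rng.binomial(trials, p_chance, size=subjects)

print("chance expectation per subject:", trials * p_chance)   # 5.0
print("best subject's score:", hits.max())                    # typically 11 to 13
print("subjects scoring 10 or more:", int(np.sum(hits >= 10)))

The best performer in such a pool routinely more than doubles the chance expectation without any paranormal help, which is why a single strong run proves little; as noted above, the interesting test is whether the same individual repeats the feat on fresh trial sets.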

(Again, in our model, randomness is in part a consequence of the aperiodicity of the rich composite reality signal or subsignals. The computational difficulty of predicting a specific subsignal is, like weather prediction, of a high order. What this has to do with the "true randomness" of particle detection is a subject in need of considerably more work.)

Jung commented that in one Rhine study, subjects did well on the initial trial, when they were interested in what was going on, but afterward their returns were no better than random. I don't have the Rhine study at hand and so I cannot say whether the initial variance was statistically reasonable. However, based on our model, we would say that a subject's reality construction is heavily dependent on focus. In our scenario, a high level of belief might affect outcome.

I well recollect the day that I reached a friend via cellphone, having been momentarily convinced that I had rung up the correct number. As he was answering his only home phone, I suddenly realized I had dialed the wrong number -- and the connection became weak in phase with my doubt. After breaking contact, I checked carefully and found that I had indeed dialed a wrong number. I hasten to add that usually when I make a mistake, convinced or not, nothing atypical happens.

[See Appendix A for an anecdotal collection of strange events.]

Of course some of Jung's synchronicities are quite easily dismissed as typical of randomness, such as his observation of "surnames that fit" a person's vocation or disposition. That said, how does one account -- assuming Jung's truthfulness -- for his "fish story"?

In his Synchronicity paper, Jung noted that on April 1, 1949, a Friday, "we had fish for lunch" and someone mentioned the custom of "making an 'April fish' of someone." That morning, he had made a note of an inscription reading "Est homo totus medius piscis ab imo." That afternoon, a former patient, whom he hadn't seen in months, showed him "extremely impressive" pictures of fish that she had painted since he'd last seen her. In the evening, "I was shown a piece of embroidery with fish-like sea monsters on it" and the following morning another patient, whom he hadn't seen for years, told of a dream in which she stood on the shore of a lake and saw a large fish that swam straight toward her and landed at her feet. (63)

True, one might argue that, because he is seeking associations, he is noticing associations that occur routinely but tend to go unnoticed by most people who have learned, correctly, to relegate such pairings to "background noise." But it is relevant from our viewpoint that at the time "I was engaged in the study of the fish symbol in history." The focus on fish patterns influenced his reality construction, is how we would put it.

To drive home that point, we cite this footnote by Jung: "As a pendant to what I have said above, I should like to mention that I wrote these lines sitting by the lake. Just as I finished this sentence, I walked over to the sea wall and there lay a dead fish, about a foot long, apparently uninjured..."

Jung credits the astronomer Camille Flammarion with this anecdote (71): "A certain M. Deschamps, when a boy in Orleans, was once given a piece of plum pudding by a M. de Fortgibu. Ten years later he discovered another plum pudding in a Paris restaurant and asked if he could have a piece. It turned out, however, that the plum pudding was already ordered -- by M. de Fortgibu. Many years afterwards M. Deschamps was invited to partake of a plum pudding as a special rarity. While he was eating it he remarked that the only thing lacking was M. de Fortgibu. At that moment the door opened and an old, old man in the late stages of disorientation walked in: M. de Fortgibu, who had got hold of the wrong address and burst in on the party by mistake."

Though this story can't be verified, it is illustrative of "common knowledge." Many people have experienced "synchronicities" of this sort, sometimes equally dramatic. However, such happenings are usually relegated to private life and "laughed off."

Jung's journey in psychology was from the beginning interwoven with a keen interest in paranormal phenomena, which he tried to explain in terms of deep forces to which the individual's unconscious was connected. "Ghost stories and spirit-like phenomena practically never prove what they seem to," he wrote in a 1950 note (72). "They provide information about things the layman knows nothing of, such as the exteriorization of unconscious processes." (My emphasis.)

Jung's observations led him to believe that bizarre occurrences appear to be projections of unconscious forces within an affected individual. What we have been saying is that bizarre occurrences may indeed point to mental activity, but that the perception-phenomenon feedback mechanism applies to all detected events.

There are a number of types of events that go under the heading paranormal phenomena. I make no effort to list them all, though I would say that we are talking about some type of non-material medium that links minds. Individual reality construction is heavily influenced by the reality weaving of other minds, so that no individual has control of his or her own mind or reality flow.

Various religious systems recognize this "below-the-radar" connectivity, but that does not imply that a specific system is particularly helpful to the individual. Superstition has an impact on reality formation. If a person is taught to believe -- even partly -- in a particular superstition, that person's reality formulator may warp in some "bad luck event" in the near future. In this regard, consider the asteroid Apophis, which is expected to make a very close flyby on April 13, 2029, a Friday, passing within about five Earth radii of us (below the altitude of geosynchronous satellites). The exact path the asteroid follows on its 2029 flyby will determine whether it smashes into the Earth seven years later, on April 13, 2036.

Modern American popular culture, of course, has taken up the notion that Friday the 13th is a bad-luck day, and awareness of the possibility of an asteroid strike has been greatly heightened by a number of films. So shall we assume that the Friday the 13th flyby date stems from routine statistical coincidence, or is it possible that popular fears and expectations have somehow been projected to weave mass fantasies and fears into a dreamscape that "solidifies" into "concrete reality"?
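On the statistical side of that question, the calendar itself can be checked directly. The short script below (my own aside, assuming standard Python 3) confirms that April 13, 2029 is indeed a Friday and tallies how often the 13th of any month falls on each weekday over the full 400-year Gregorian cycle; Friday comes out slightly ahead of the other days.

from collections import Counter
from datetime import date

# Weekday of the Apophis flyby date.
print(date(2029, 4, 13).strftime("%A"))          # Friday

# Tally the weekday of every 13th across one full 400-year Gregorian cycle.
counts = Counter(
    date(year, month, 13).strftime("%A")
    for year in range(2001, 2401)
    for month in range(1, 13)
)
for weekday, n in counts.most_common():
    print(weekday, n)                            # Friday leads with 688 of 4,800

So a 13th landing on a Friday is, if anything, marginally more likely than any other pairing; the calendar alone makes such a "coincidence" unremarkable, which sharpens rather than settles the question just posed.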

(My personal theological views are found at

http://paulpages.blogspot.com/2011/11/where-is-zion-many-wonder-about.html .)

Consider the word "enchantment." The root is the word "chant." It seems plausible that chanting could influence an individual or communal dreamweaver by the wave entrainment effect: smaller signals tend to fall into line, and the resulting reality signal has a high degree of coherence. The word is associated with magic spells (as in "incantation") and with modern rhythmic music. It could well be that a musical number not only puts one into a trance-like reverie but actually alters the output reality signal.
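Entrainment, at least, is a well-documented effect in coupled oscillators, whatever one makes of the dreamweaver speculation. Below is a minimal sketch using the standard Kuramoto model (my own choice of illustration; nothing in this essay specifies a model): oscillators with slightly different natural frequencies, once coupled, pull one another into a coherent collective rhythm.

import numpy as np

rng = np.random.default_rng(1)

n, K, dt, steps = 50, 4.0, 0.01, 5000
omega = rng.normal(0.0, 0.5, n)           # slightly different natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, n)  # random initial phases

def coherence(phases):
    # Kuramoto order parameter: 0 = incoherent, 1 = fully entrained.
    return abs(np.exp(1j * phases).mean())

print("initial coherence:", round(coherence(theta), 2))   # low, phases are random

for _ in range(steps):
    # Each oscillator is nudged toward the phases of all the others.
    coupling = (K / n) * np.sin(theta[:, None] - theta[None, :]).sum(axis=0)
    theta = theta + (omega + coupling) * dt

print("final coherence:", round(coherence(theta), 2))     # close to 1

The point of the sketch is only that "smaller signals falling into line" has a precise counterpart in dynamics; whether anything like it applies to a "reality signal" is, of course, the speculative part.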

Recall the traditional Irish bards, who were not only poets but considered to be powerful magicians (although that element was played down after Catholicism took hold).

The placebo effect has been thoroughly documented and some scientists might hold that some miraculous cures are a close cousin of that effect. I would agree that the power of belief can strongly affect an individual's recovery. But I would add that if another mind is sharing with the patient that belief, the joint reality formulation may be stronger (or weaker). Yet who has the kind of belief that makes the lame leap and the blind see? Who has the power of belief to raise the dead?

Such biblical feats are often seen as nothing more than comic book stuff. But, if consciousness interacts with a dreamlike signal, then such story changes aren't impossible. However, they would require a level of belief that many would consider superhuman.

Sometimes knowledge of reality reformulation is not beneficial. Individuals may have learned various manipulations to adversely affect the reality streams of others, though this would tend to imply that their own reality stream would be likewise adversely affected.

And aside from such manipulators, there are also the charlatans and confidence tricksters who are adept at deceiving people via ordinary, sense-moderated illusions and delusions.

At any rate, the ideas stated above are meant to indicate a much more malleable form of reality than previously held in science and are not meant to lure the unwary into treacherous territory.


Footnotes:
1. October the First Is Too Late by Fred Hoyle (Baen 1966).
2. The Discoveries by Alan Lightman (Pantheon/Random House 2005).
3. Quoted in N.G. van Kampen's paper Information, physics, quantum: the search for links in Feynman and Computation, edited by Anthony J.G. Hey (Westview Press 1999).
4. Geons, Black Holes & Quantum Foam: A Life in Physics by John Archibald Wheeler with Kenneth Ford (W.W. Norton 1998).
5. The Undivided Universe: An Ontological Interpretation of Quantum Theory by D. Bohm and B.J. Hiley (Routledge 1995).
6. The Undivided Universe: An Ontological Interpretation of Quantum Theory by D. Bohm and B.J. Hiley (Routledge 1995).
7. For a comprehensive approach to the subtleties of the issue of time, see The Labyrinth of Time by Michael Lockwood (Oxford, 2009).
8. About Time: Einstein's Unfinished Revolution by Paul Davies (Touchstone/Simon & Schuster 1996).
9. For a thorough examination of the Copenhagen interpretation, see The Non-Local Universe: the New Physics of Matters of the Mind by Robert Nadeau and Menas Kafatos (Oxford, 1999).
10. The Bohr, Born and Pauli essays are, with Einstein's reply, found in Albert Einstein: Philosopher-Scientist, Paul Arthur Schilpp, editor (Library of Living Philosophers 1949).
11. Where Does the Weirdness Go? by David Lindley (Basic Books 1996).
12. Visual Intelligence: How We Create What We See by Donald Hoffman (W.W. Norton 1998); Hoffman and two colleagues wrote a paper on observer-based quantum mechanics, which is cited on Hoffman's home page.
13. Quoted in The Ghost in the Atom, P.C.W. Davies and J.R. Brown (Cambridge 1986, revised 1999). The book is based on a series of BBC interviews with quantum physicists concerning the measurement problem and Aspect's results.
14. The New Quantum Universe by Tony Hey and Patrick Walters (Cambridge 2003).
14a. New Foundations of Quantum Mechanics by Alfred Lande (Cambridge 1965).
15. Quantum Reality: Beyond the New Physics by Nick Herbert (Anchor/Random House 1985).
16. The Undivided Universe: An Ontological Interpretation of Quantum Theory by D. Bohm and B.J. Hiley (Routledge 1995).
17. Two Kinds of Reality by Eugene Wigner, an article first published in the philosophical journal The Monist in 1964.
18. Physics and Philosophy by James Jeans (Dover reprint of 1949 title).
19. Quoted by Arthur Koestler in The Roots of Coincidence (Hutchinson Publishing Group 1972, Random House 1973). Koestler was citing a passage from Jeans' Rede Lectures.
20. See The Intelligent Universe: a new view of creation and evolution (Holt, Rinehart and Winston, 1983) by Fred Hoyle. In this book, written near the end of his life, Hoyle proposed a "panspermia" theory whereby microbial life and viruses travel through space and rain on the earth. At first blush implausible, but he is quite persuasive on these points. He is attempting to account for the astounding improbability of life spontaneously generating on any one planet.
21. The Many-worlds Interpretation of Quantum Mechanics, edited by Bryce S DeWitt and Neill Graham (Princeton University Press 1973).
22. The Cosmic Landscape by Leonard Susskind (Back Bay/Little Brown 2006).
22a. Niels Bohr's Times, In Physics, Philosophy, and Polity by Abraham Pais (Oxford 1991).
22b. The Infamous Boundary: Seven Decades of Controversy in Quantum Physics by David Wick (Reed Business Information, Inc. 1996).
22c. "It must be admitted, therefore, that in certain persons, at least, the total possible consciousness may be split into parts which coexist but mutually ignore each other, and share the objects of knowledge between them. More remarkable still, they are complementary. Give an object to one of the consciousnesses, and by that fact you remove it from the other or others. Barring a certain common fund of information, like the command of language, etc., what the upper self knows the under self is ignorant of, and vice versa. M. Janet has proved this beautifully in his subject Lucie. The following experiment will serve as the type of the rest: In her trance he covered her lap with cards, each bearing a number. He then told her that on [p. 207] waking she should not see any card whose number was a multiple of three. This is the ordinary so-called 'post-hypnotic suggestion,' now well known, and for which Lucie was a well-adapted subject. Accordingly, when she was awakened and asked about the papers on her lap, she counted and said she saw those only whose number was not a multiple of 3. To the 12, 18, 9, etc., she was blind. But the hand, when the sub-conscious self was interrogated by the usual method of engrossing the upper self in another conversation, wrote that the only cards in Lucie's lap were those numbered 12, 18, 9, etc., and on being asked to pick up all the cards which were there, picked up these and let the others lie. Similarly when the sight of certain things was suggested to the sub-conscious Lucie, the normal Lucie suddenly became partially or totally blind. "What is the matter? I can't see!" the normal personage suddenly cried out in the midst of her conversation, when M. Janet whispered to the secondary personage to make use of her eyes. The anaesthesias, paralyses, contractions and other irregularities from which hysterics suffer seem then to be due to the fact that their secondary personage has enriched itself by robbing the primary one of a function which the latter ought to have retained. The curative indication is evident: get at the secondary personage, by hypnotization or in whatever other way, and make her give up the eye, the skin, the arm, or whatever the affected part may be. The normal self thereupon regains possession, sees, feels, or is able to move again. In this way M. Jules Janet easily cured the well-known subject of the Salpétrière, Wit., of all sorts of afflictions which, until he discovered the secret of her deeper trance, it had been difficult to subdue. "Cessez cette mauvaise plaisanterie," he said to the secondary self - and the latter obeyed. The way in which the various personages share the stock of possible sensations between them seems to be amusingly illustrated in this young woman. When awake, her skin is insensible everywhere except on a zone about the arm where she habitually wears a gold bracelet. This zone has feeling; but in the deepest trance, when all the rest of her body feels, this particular zone becomes absolutely anaesthetic." -- William James in Principles of Psychology (1890).
23. Landauer's paper Information is Inevitably Physical was printed in Feynman and computation: exploring the limits of computers, edited by Anthony J. G. Hey (Perseus Books, 1999).
24. Wholeness and the Implicate Order by David Bohm (Routledge & Kegan Paul 1980).
25. Abraham Pais in Niels Bohr's Times (Clarendon Press, Oxford 1991). Original quote found in Albert Einstein: Philosopher-Scientist (1949), edited by Paul Arthur Schilpp.
26. One idea for recording an emotion value would be to use z = x + iy, with x representing the pain level and y the pleasure; the i is largely for bookkeeping convenience. We would write the emotion ratio as iy/x and the conflict, or tension, magnitude as |z| = (x^2 + y^2)^(1/2). So the magnitude |z| together with the ratio iy/x would give a complete emotion value.
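As a toy illustration of this bookkeeping (the numbers are invented, and Python's built-in complex type simply stands in for the notation):

# Toy version of the footnote's emotion value: x = pain, y = pleasure.
x, y = 3.0, 4.0
z = complex(x, y)              # z = x + iy

tension = abs(z)               # (x^2 + y^2)^(1/2), here 5.0
emotion_ratio = 1j * y / x     # iy/x, the i kept purely for bookkeeping

print(tension, emotion_ratio)  # 5.0 1.3333333333333333j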
27. The brain must employ noise detection routines and error-correction algorithms. We would then expect that it generally filters out 1/f noise, which is essentially ubiquitous for natural processes, while scanning for meaningful signals. It may perhaps also filter white noise.
Also, noise being the complement of the meaningful signal, it is vaguely conceivable that the brain might mimic regression analysis to determine whether a set of wave forms shows meaningful correlation. If so, the brain might use this technique to hunt for a threat or reward without knowing exactly what it's looking for. However, I am not aware of any indication the brain uses such a method. Habituation versus sensitization tend to determine threat or attraction level.
Fear of the unknown, of course, is a useful evolutionary adaptation. Hence a major brain task is to sift noise in search of a null result, checking for noise that differs from background noise. The signal might only be meaningful because it doesn't fit any template including the template for background noise.
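To make plain what "mimicking regression analysis" would amount to, here is a purely illustrative sketch (it claims nothing about how neurons actually do it, and assumes Python with NumPy): it scores a pair of wave forms that share a common component against a pair of unrelated noise traces.

import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 500)

shared = np.sin(2 * np.pi * 7 * t)                 # a common underlying signal
a = shared + 0.3 * rng.standard_normal(t.size)     # two noisy "views" of it
b = shared + 0.3 * rng.standard_normal(t.size)
noise1 = rng.standard_normal(t.size)               # two unrelated noise traces
noise2 = rng.standard_normal(t.size)

print("correlated pair r:", round(np.corrcoef(a, b)[0, 1], 2))           # about 0.85
print("unrelated pair r: ", round(np.corrcoef(noise1, noise2)[0, 1], 2))  # near zero

A correlation well above the noise baseline is the kind of "meaningful" result the footnote has in mind; whether any biological system hunts for it this way is, as stated, unknown.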
28. I use the term prime directive as a shortcut syntactical device. Likewise for such terms as hope and decision.
29. One wonders how natural selection selected natural selection.
30. I have not gone into details of control theory. My use of complex numbers for the emotion value should not be confused with the z-transform that converts a discrete time domain signal -- sequence of real numbers -- into a complex frequency domain representation. This conceptual paper does not delve into engineering specifics.
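For readers unfamiliar with the term, the z-transform referred to here is the standard signal-processing construction, X(z) = sum over n of x[n] * z^(-n); a few lines (my own illustration) make the contrast with the emotion-value bookkeeping of note 26 concrete.

import cmath

def z_transform(x, z):
    # Evaluate X(z) = sum over n of x[n] * z**(-n) for a finite sequence x.
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

x = [1.0, 0.5, 0.25, 0.125]      # a short discrete-time signal
z = cmath.exp(1j * 0.3)          # a point on the unit circle (0.3 radians per sample)
print(z_transform(x, z))         # one complex frequency-domain value

Here the complex variable carries frequency-domain information about an entire sequence, whereas in note 26 a single complex number merely packages two subjective quantities.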
31. The low entropy process of doing science permits recovery of some of this undetected information, and we might see scientific endeavor as a consequence of an organismic drive to be aware of new threats and opportunities. But I have no desire to overstate the case for Darwinism, which, I believe, is valid within its parameters, just as are other scientific theories but which, as with any theory, cannot be used to cover all bases.
32. The slaughter of World War I stirred Freud to alter his psychodynamic theory to posit a psychic force he called the death drive (later personified as Thanatos). My thinking is that the so-called death wish is in fact what Freud might have called the ego's struggle to suppress ("kill") the superego, whereby infantile unconscious desires can be given free rein. The desire to use intoxicants shows this tendency. When individual minds get subsumed by a mob frenzy, something of the sort is going on. Others are permitting and encouraging the venting of infantile needs, so the individual is glad to enroll. It is only one step from mob psychology to the manipulation of unconscious needs by skillful, perhaps jingoistic, propagandists.
33. Of course the Vulcans were inhuman. And, at special periods they went through ferocious primitive emotionalism.
34. We can talk about the mechanics of consciousness without being able to define its kernel. Though I think that Roger Penrose was technically in error with his Goedel self-referencing argument in The Emperor's New Mind, I agree that there is a problem in coming to terms with the actuality of consciousness. It seems more than an epiphenomenon. That a quantum theory of gravity would clear that matter up seems a bit strange to me, but Penrose should be given great credit for plowing a lot of ground that others have avoided.
35. Also, the subjective sense of passage of time varies with level of alertness and the E-value of the experience.
36. Oliver Sacks tells the story of Clive Wearing, an eminent musicologist, at this site: http://www.cuarts.com/sacks/stories.html
37. Again, the mechanics can be described but the actual conscious cognition escapes obvious comprehension.
38. Visual Intelligence: How We Create What We See by Donald D. Hoffman (W.W. Norton 1998). Observer Mechanics by cognitive psychologist Hoffman and two colleagues is advertised on Hoffman's home page.
39. David Deutsch, in The Fabric of Reality: The Science of Parallel Universes and Its Implications (Allen Lane/Penguin 1997), does discuss the concept in a serious way. John Wheeler, who favors a participatory universe interpretation, nevertheless seems to shy away from a virtual reality model.
40. The Synaptic Self by Joseph LeDoux (Viking 2002).
41. Dennett's Consciousness Explained (Little, Brown 1991) offers some valid criticisms of what he calls "the Cartesian theater" model of consciousness. In my reading of his book, I did not notice any jarring contradictions between his points and mine. However, he seems to be on a crusade to banish the soul or "the ghost in the machine" by emphasizing emergence and process. I don't think he achieved that goal. As for me, I simply avoid the issue of whether or not there is such a vitalistic force and instead focus on the process.
42. But I hasten to warn that devices that give the illusion of a voice or other sound originating inside one's own head pose a serious risk of unethical use.
43. In Search of Memory: The Emergence of a New Science of Mind by Eric Kandel (W.W. Norton 2007).
44. The Brain That Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science by Norman Doidge (Penguin 2007).
45. The Psychology of Consciousness by G. William Farthing (Prentice Hall 1992).
46. It would seem that free will would require some superhuman influence. But even Christian tradition says that unregenerate persons are controlled by Satan, the father of lies. In this view, only a person who partakes of God's spirit is set free, with freedom of choice presumably improving over time.
47. Sync: How Order Emerges from Chaos in the Universe, Nature and Daily Life by Steven Strogatz (Hyperion 2003).
47a. Calculated Risks: How to know when numbers deceive you by Gerd Gigerenzer (Simon and Schuster 2002).
47b. Hallucinations by Oliver Sacks (Random House 2012).
48. A Brief Tour of Human Consciousness by V.S. Ramachandran (Pi Press 2004).
49. Though I doubt the brain's processor uses anything as sophisticated as the least squares method.
49a. Bruce Hood, director of the Cognitive Development Centre, University of Bristol, and author of The Self Illusion: how the social brain creates identity, was quoted in Edge http://edge.org/response-detail/11275 thus: "As a scientist dealing with complex behavioral and cognitive processes, my deep and elegant explanation comes not from psychology (which is rarely elegant) but from the mathematics of physics.
"For my money, Fourier's theorem has all the simplicity and yet more power than other formal explanations in science stated similarly; any complex pattern, whether in time or space can be described as a series of overlapping sine waves of multiple frequencies and various amplitudes."
"Any complex pattern in an environment can be translated into neural patterns that, in turn, can be decomposed into the multitude of sine-wave activity arising from the output of populations of neurons."
We see that the brain must have some way to filter out the signal from the noise. Further, even the noise is a product of filtration from what we characterize as input from an entity not perceivable except via inference, analogous to a software program being expressed by the "hidden" hard-wiring.
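A concrete version of the theorem Hood cites, as I read it (my own illustration, assuming Python with NumPy): build a "complex pattern" out of two sine waves plus noise, and the discrete Fourier transform recovers the frequencies it was built from.

import numpy as np

rng = np.random.default_rng(3)
fs = 1000                                   # samples per second
t = np.arange(0, 1, 1 / fs)

# A "complex pattern": two sine waves of different frequency and amplitude, plus noise.
signal = 1.0 * np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 110 * t)
signal = signal + 0.2 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# The two largest spectral peaks sit at the frequencies the signal was built from.
print(np.sort(freqs[np.argsort(spectrum)[-2:]]))   # [ 40. 110.]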
50. Noise by Bart Kosko (Penguin 2006).
51. The Interpretation of Dreams by Sigmund Freud (originally published with a 1900 publication date).
51A. I recently ran across some YouTube videos in which various songs are played backward and are said to give perverse Satanic messages. What I notice is that rhythmic music remains rhythmic, even when played backward. Similarly, many English syllables are mirror images of other English syllables, as in "but" and "tub." The hearer is hence cued by the rhythm and by the approximately human voices to "hear" recognizable lyrics. If someone suggests to the hearer the interpretation of a particular set of "lyrics," the hearer's pattern-hunt and recognition process kicks into gear and thus constructs a convincing reality.
However, by playing the music backward, there is inevitable distortion in the "pronunciation" of recognizable syllables, giving the hearer the impression of strangeness, or other-worldliness. Further, some people make a game out of seeking instances of lyrics that seem to say things such as "Praise Satan." They then convince others of the "reality" of such verbalisms. However, they don't draw attention to the many intervals in which English speech is unrecognizable by most or, if it is recognizable, the verbalisms are uncontroversial.
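One small technical observation supports the point that reversed music still sounds musical (a sketch of mine, not anything from the videos in question): time-reversing a recording leaves its magnitude spectrum, and hence its overall tonal content, untouched; only the phases change, which is why attacks and syllables sound smeared and strange while the underlying periodicity of the beat survives.

import numpy as np

rng = np.random.default_rng(4)
audio = rng.standard_normal(1024)        # stand-in for a short mono recording
reversed_audio = audio[::-1]             # "playing it backward"

forward = np.abs(np.fft.rfft(audio))
backward = np.abs(np.fft.rfft(reversed_audio))

# Reversal changes only the phases; the magnitude spectrum is identical.
print(np.allclose(forward, backward))    # True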
52. See for example Witness by Whittaker Chambers (Regnery 1952).
53. Coincidence, Chaos, and All that Math Jazz by Edward B. Burger and Michael Starbird (W.W. Norton 2005)
54. 13: The Story of the World's Most Popular Superstition by Nathaniel Lachenmeyer (Avalon 2004).
55. See Strogatz op cit; Order Out of Chaos: Man's Dialogue with Nature by Ilya Prigogine and Isabelle Stengers (Bantam 1984); and Origins of Order: Self-Organization and Selection in Evolution by Stuart A. Kauffman (Oxford 1993).
55a. A New Kind of Science by Stephen Wolfram (Wolfram Media 2002).
56. The Roots of Coincidence by Arthur Koestler (Hutchinson Publishing Group 1972, Random House 1973).
57. An example of such opportunism is Incredible Coincidence: The Baffling World of Synchronicity by "psychic researcher" Alan Vaughan (J.B. Lippincott 1979).
58. The Quark and the Jaguar: Adventures in the Simple and Complex by Murray Gell-Mann (Macmillan 1995). I am unsure of what is meant by a "goblin world."
59. Quoted in The Ghost in the Atom, P.C.W. Davies and J.R. Brown (Cambridge 1986, revised 1999). The book is based on a series of BBC interviews with quantum physicists concerning the measurement problem and Aspect's results.
60. Synchronicity: the Bridge Between Matter and Mind by F. David Peat (Bantam 1987).
61. Jordan's paper, Positivistische Bemerkungen uber die parapsychischen Erscheinungen (Zentralblatt fur Psychotherapie [Leipzig] IX [1936] 14ff), was cited in The Basic Writings of C.G. Jung, Violet Staub de Laszlo, ed. (Modern Library 1993 edition, page 129).
62. Cited in A life of Jung by Ronald Hayman (W.W. Norton 1999).
63. Synchronicity: An Acausal Connecting Principle by Jung, and Der Einfluss archetypischer Vorstellungen auf die Bildung naturwissenschaftlicher Theorien bei Kepler (The influence of archetypal ideas on the formation of scientific theories in Kepler) by Wolfgang Pauli, which appeared together in Naturerklarung und Psyche (1952). A corrected version of Jung's article appears in the Collected Works of C.G. Jung, translated by R.F.C. Hull (Bollingen Foundation/Princeton University Press 1960).
64. Having not read Pauli's paper and despite Jung's coy hints, I am uncertain how far Pauli was willing to go publicly in committing himself to a quantum aspect to paranormal phenomena.
65. "Beyond Materialism," an essay printed in Bricks to Babel by Arthur Koestler (Hutchinson Publishing Group 1980).
66. Uncertainty, the Life and Science of Werner Heisenberg by David Cassidy (W.H. Freeman 1992).
67. The Psychopathology of Everyday Life by Sigmund Freud (originally published in 1901).
68. Das Gesetz der Serie by Paul Kammerer, cited by Koestler. Einstein reportedly said of Kammerer's logbook of some 100 selected examples of bizarre coincidences that it was "original and by no means absurd" but then Einstein was known for being courteous to eccentrics.
69. Kammerer's idea is echoed later by Rupert Sheldrake's morphogenetic field.
70. The Drunkard's Walk: How Randomness Rules Our Lives by Leonard Mlodinow (Random House 2008).
71. Jung is quoting directly from Flammarion's book The Unknown which came out in 1900. Flammarion (1842-1925) was an astronomer who sold many books promoting his conjectures about life on other planets. Though his specific conjectures have been nullified by modern science, many in the scientific community agree with the essential notion that life on other planets is quite likely. However, Flammarion also believed that "spiritism" -- today called paranormal phenomena -- should be investigated in a scientific manner.
72. Foreword to the book Spuk: Irrglaube oder Wahrglaube? (Ghosts: false belief or true belief?) by Fanny Moser. The foreword is found in Jung's collected works, translated by R.F.C. Hull (Bollingen).


Revisions:
Material on Goedel and Einstein has been corrected as of Dec. 9, 2009.
Complementarity matter has been added as of Nov. 18, 2009 and updated as of Dec. 9, 2009.
Updates concerning Koestler were added Oct. 30, 2010.
An insert concerning probability of a match was added Aug. 31, 2012.
Paragraphs concerning Milton Erickson and Jules Dupotet de Sennevoy inserted Sept. 16, 2012.
Material on Bohm added, along with minor revisions and a regularization of the footnote system, in March 2013.
Footnote 49a added Sept. 10, 2013.
Note and link on Freud inserted Oct. 10, 2013.
Paragraphs concerning Oliver Sacks's book Hallucinations inserted Oct. 19, 2013.
Paragraph on Brian Josephson inserted Oct. 22, 2013.
Sanguinetti insert added Nov. 25, 2013.
