
The Exact Sciences and Non-Euclidean Logic

David A. Edwards

[email protected]

http://alpha.math.uga.edu/~davide/

Dedicated to Reuben Hersh for his ninetieth birthday

What "really" exists pervades the sciences and human thought in general. The belief that the infinite does not really exist goes back at least to . Parmenides even questioned the reality of plurality and change. (Einstein's vision has much in common with Parmenides). Towards the end of the nineteenth century an acrimonious exchange took place between Kronecker and Cantor regarding the reality of the actual (as opposed to potential) infinite. Kronecker claimed that only the finite integers really exist and all else is merely the work of man. Cantor countered that the essence of mathematics was its freedom and that he had attained a larger vision than Kronecker had who could not see the infinite. Most mathematicians have followed Cantor and found his paradise a more beautiful and alluring universe. Hilbert accepted Kronecker’s viewpoint for his metalanguage, but tried to recapture Cantor's paradise in a formal language. Hilbert was a formal pluralist in feeling that each mathematical discipline was entitled to its own formalization. Russell was a logical monist and felt that all of mathematics should be constructed within a single formal system. He put a great deal of labor into his program and looked askance at Hilbert. He felt that Hilbert's approach had all the advantages of theft over honest toil. What he did not realize was that in intellectual affairs, as in economic affairs, great fortunes are rarely ever accumulated through honest toil. What is needed is the intellectual leap. Russel’s program led to much interesting mathematics, but even if in principle it could be carried out, in practice the result would be computationally intractable. One would be translating simple, clear ideas into the fog of Principia Mathematica. Russell's program has as much relevance to complex analysis as von Neumann's game theory has to chess. The understanding and appreciation of mathematics has very little to do with formal logic. For example, the following footnote occurs at the beginning of Wall's (1970) book Surgery On Compact Manifolds.

Recent results of Kirby, Siebenmann and Lees have now (1966) provided such a technique. All our methods now extend to the topological case, with only trivial alteration. See (K8), (K9), (L10).

All the experts could see the truth of this footnote. But this seeing is not explained by modus ponens. In his beautiful book Proofs and Refutations, Lakatos (1976) has shown that the mathematical process itself is dialectical and not Euclidean. At all times our ideas are formally inconsistent. But inconsistency, while still recognized as a pathology, is no longer seen as a fatal disease. If we come across a contradiction, we localize it, isolate it, and try to cure it. But we have to get over our neurotic phobias concerning this disease and recognize it as inseparable from life itself. Hilbert's program collapsed with the startling work of Gödel. Mathematical logic and the study of formal systems have become branches of mathematics instead of its foundation. Moreover, Cantor's paradise has been raised into the metalanguage in order to prove deep theorems concerning formal systems as well as to provide a semantics for such systems. A. Robinson even defined non-standard formal systems which contain infinite formulas. One thus has a large plurality of different approaches to mathematics. Most mathematicians live in Cantor's paradise in spite of Russell's paradox; they simply learn to avoid making certain moves which have been shown to lead to contradictions.

The situation is similar in physics. As soon as the phenomena under discussion become sufficiently complex, one must depart from Euclidean strategies and adopt a non-Euclidean approach. This is very clearly stated by Blandford and Thorne (1979, pp. 454-460) in the context of black hole astrophysics.

The fundamental theory of black holes, as laid out in chapters 6 and 7, is well posed, elegant, clean and self-contained. It follows inexorably and clearly from the fundamental laws of physics. The theory of black holes in an astrophysical environment is completely the opposite. Because it deals with the physics of matter in bulk - matter orbiting and accreting onto a hole - it is subject to all the dirty, complex uncertainties of the modern theory of the behavior of bulk matter. If thunderstorms and tornados on Earth have eluded accurate theoretical modeling, how can one expect to predict even qualitatively their analogues in the turbulent, magnetized plasmas that accrete onto a black hole in a close binary system? One cannot. The best that can be hoped for is to develop the crudest of models as to the gross behavior of matter in the vicinities of holes. Fortunately, the resulting models have some modest hope of resembling reality. This is because the relative importance of physical processes near a hole can be characterized by dimensionless ratios that usually turn out to be very large, and consequently, the gross behavior of matter near a hole is dominated by a small number of processes. The task of the model builder is to identify the dominant processes in his given situation, and to construct approximate equations describing their macroscopic effects. Historically, to identify the dominant processes has not been easy. This is because a vast number of possible processes must be considered and the model builder often, out of ignorance, overlooks an important one. Thus it is that research on black hole astrophysics involves large bodies of physical theory. Within each body of theory one must have at one's fingertips approximate formulae that characterize a long list of possibly relevant processes. The necessary bodies of theory include general relativity, the physics of equilibrium and non-equilibrium plasmas, the physics of radiative processes, and the physics of stellar dynamical systems. Research in black hole astrophysics also requires a good knowledge of the phenomenology of modern astronomy - the observed properties of stars, the main features of their evolution, the structure of the Galaxy, and the observed physical conditions in interstellar space.

The above example illustrates the following features of research in black hole astrophysics:

(i) It involves an iteration back and forth between the equations of the macroscopic model and the microscopic physics which underlies those equations. One iterates until one obtains self-consistency.

(ii) One must search carefully, at each iterative stage, for overlooked processes that might be so important as to invalidate the model (anchoring to a homogeneous interstellar magnetic field in the above example).

(iii) One frequently encounters a 'branch point' where the model will take on two very different forms depending on what one assumes for the environment around the hole (homogeneous magnetic field versus tangled field in the above example), and where both branches might well occur in the real universe. This leads to a plethora of possible models, each corresponding to a different black hole environment and/or range of black hole masses.

Probably the clearest case study of non-Euclidean logic occurs in the paper Problem Solving About Electrical Circuits by Stallman and Sussman (1979, pp. 33-39). We quote from their introduction.

A major problem confronting builders of automatic problem-solving systems is that of the combinatorial explosion of search-spaces. One way to attack this problem is to build systems that effectively use the results of failures to reduce the search space - that learn from their exploration of blind alleys. Another way is to represent the problems and their solutions in such a way that combinatorial searches are self-limiting. A second major problem is the difficulty of debugging programs containing large amounts of knowledge. The complexity of the interactions between the "chunks" of knowledge makes it difficult to ascertain what is to blame when a bug manifests itself. One approach to this problem is to build systems which remember and explain their reasoning. Such programs are more convincing when right, and easier to debug when wrong.

ARS is an expert problem-solving system in which problem-solving rules are represented as demons with multiple patterns of invocation monitoring an associative data base. ARS performs all deductions in an antecedent manner, threading the deduced facts with justifications which mention the antecedent facts used and the rule of inference applied. These justifications are employed by ARS to determine the currently active data-base context for reasoning in hypothetical situations. Justifications are also used in the analysis of blind alleys to extract information which will limit future search . . .

ARS supplies dependency-directed backtracking, a scheme which limits the search as follows: The system notes a contradiction when it attempts to solve an impossible algebraic relationship, or when it discovers that a transistor's operating point is not within the possible range of its assumed region. The antecedents of the contradictory facts are scanned to find which nonlinear device state guesses (more generally, the backtrackable choicepoints) are relevant. ARS never tries that combination of guesses again. A short list of relevant choicepoints eliminates from consideration a large number of combinations of answers to all the other (irrelevant) choices. This is how the justifications (or dependency records) are used to extract and retain more information from each contradiction than a chronological backtracking system would. A chronological backtracking system would often have to try many more combinations, each time wasting much labor rediscovering the original contradiction.

Note that this is a case where a Euclidean approach does exist but is computationally intractable. We thus have a clear example of a non-Euclidean approach being used for good reasons and not just because of sloppiness. We thereby gain a better appreciation of the gulfs that separate the mathematician, the physicist, and the engineer. Their programs are different, their aesthetics are different, and even their "logics" are different. At this point, the question of whether every non-Euclidean logic can be embedded in a Euclidean logic can only be answered dogmatically. Those in artificial intelligence research assume a positive answer, while hermeneuticists assume a negative answer. However, even if you believe in an affirmative answer, a very simple non-Euclidean logic may only be embeddable in an immensely complex Euclidean logic, thus making its embeddability irrelevant from a practical point of view (recall von Neumann's analysis of chess).
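The flavor of dependency-directed backtracking can be conveyed in a few lines. The following Python sketch is ours, not Stallman and Sussman's (ARS itself was a Lisp program); the check function, which reports the set of choice points implicated in a contradiction, is a hypothetical stand-in for ARS's justification records, and for clarity the sketch enumerates assignments flatly instead of backtracking incrementally.

    from itertools import product

    def solve(choices, check):
        """choices maps each choice point to its list of alternatives.
        check(assignment) returns None if the assignment is consistent,
        or the set of choice points implicated in the contradiction."""
        names = list(choices)
        nogoods = []  # combinations of guesses known to be contradictory
        for values in product(*(choices[n] for n in names)):
            assignment = dict(zip(names, values))
            # Prune any assignment that revives a recorded nogood: each
            # contradiction rules out a whole region of the search space,
            # not just one chronological branch.
            if any(all(assignment[n] == v for n, v in ng) for ng in nogoods):
                continue
            culprits = check(assignment)
            if culprits is None:
                return assignment
            # Record only the relevant choice points; every combination
            # extending them is now excluded without rediscovering the
            # contradiction.
            nogoods.append(tuple((n, assignment[n]) for n in sorted(culprits)))
        return None

A chronological backtracker would re-derive the same contradiction once for every setting of the irrelevant choice points; recording the culprit set does that work exactly once.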

Modern physics pervades nearly all aspects of modern biology. The striking difference between the explanations given by Aristotle and by modern biologists is partially explained by the fact that cells, molecules, atoms, electrons and photons were not part of Aristotle's universe. Modern biology teaches us that we breathe in order that oxygen can be supplied to the cells of the body, so that they can metabolize food to obtain the energy needed for the other activities of life. This helps us understand the structure and purpose of the lungs, red blood cells, etc. Much of this understanding is at the level of molecular structure (for instance, the structure of hemoglobin). We explain the workings of green plants by saying that they are green because of the presence of molecules of chlorophyll which are used to capture photons from the sun, thus providing the plant with its basic energy supply. The resemblance of children to their parents is explained by the DNA-sequences in their chromosomes. Sight is explained by appeal to photons captured by the rods and cones in the retina. One could go on and on. Considering modern physics' pervasive role in modern biology, it is quite interesting to note how little most biologists know of the foundations of modern physics. Their ideas of photons, atoms, and molecules resemble the incoherent pictures formed by Einstein and Bohr during the period 1900-1925. Even more interesting is the fact (?) that their ignorance of the viewpoints of modern quantum mechanics does not seem to hinder their work at all. While they think of photons and electrons "classically," the success of their work is due to the fact that they only actually use atomic language in the way Ostwald recommended at the end of the nineteenth century. This can be easily seen in Watson's book The Molecular Biology of the Gene.

Let us reconsider our previous biological examples using correct quantum mechanical language. First of all, photons are not objects which could come from the sun. Instead, one assumes the existence of certain detectors called photon detectors which give positive readings under certain conditions, such as being exposed to sunlight. Next, one discovers that plants grow well not only in sunlight but also under many other conditions. A basic invariant of all these conditions is that under them our photon detector would yield positive results. Again, molecules aren't objects either. So plants aren't "made up" of molecules such as chlorophyll. Instead, another basic invariant of those conditions under which plants grow successfully and appear green is that a chlorophyll detector would yield a positive result. Children don't resemble their parents because of DNA sequences; instead, parents' and their children's DNA-sequences are highly correlated and each is somewhat correlated to phenotypes. This is not the type of language usually used by biologists. But biologists like Watson aren't wholly to blame for their misuse of physical language. Watson gets his physics from Linus Pauling. In his book, The Nature of the Chemical Bond, Pauling constantly mixes classical ontology with modern quantum mechanics. Consider for example the following quote (Pauling, 1960, p. 19).

The electron distribution function for the hydrogen molecule-ion is shown in Figure 1-5. It is seen that the electron remains for most of the time in the small region just between the nuclei, only rarely getting on the far side of one of them; and we may feel that the presence of the electron between the two nuclei, where it can draw them together, provides some explanation of the stability of the bond.

This picture resembles the hidden-variables views of Bohm and Nelson more than it does the viewpoints of Bohr and Heisenberg. But the master knew better, as is shown by the following quotes (Pauling, 1960, pp. 217-220).

It is true that chemists, after long experience in the use of classical structure theory, have come to talk about, and probably to think about, the carbon-carbon double bond and other structural units of the theory as though they were real. Reflection leads us to recognize, however, that they are not real, but are theoretical constructs in the same way as the individual Kekulé structures for benzene. It is not possible to isolate a carbon-carbon double bond and to subject it to experimental investigation. There is, indeed, no rigorous definition of the carbon-carbon double bond. We cannot accept, as a rigorous definition, the statement that the carbon-carbon double bond is a bond between two carbon atoms that involves four electrons, because there is no experimental method of determining precisely the number of electrons that are involved in the interaction of two carbon atoms in a molecule, and, of course, this interaction has rigorously to be considered as being dependent on the nature of the entire molecule.

I feel that the greatest advantage of the theory of resonance, as compared with other ways (such as the molecular-orbital method) of discussing the structure, is that it makes use of structural elements with which the chemist is familiar. The theory should not be assessed as inadequate because of its occasional unskillful application. It becomes more and more powerful, just as does classical structure theory, as the chemist develops a better and better chemical intuition about it. The theory of resonance should not be identified with the valence-bond method of making approximate quantum-mechanical calculations of molecular wave functions and properties. The theory of resonance is essentially a chemical theory (an empirical theory, obtained largely by induction from the results of chemical experiments). Classical structure theory was developed purely from chemical facts, without any help from physics. The theory of resonance was also well on its way toward formulation before quantum mechanics was discovered.

The theory of resonance in chemistry is an essentially qualitative theory, which, like the classical structure theory, depends for its successful application largely upon a chemical feeling that is developed through practice. We may believe the theoretical physicist who tells us that all the properties of substances should be calculable by known methods - the solution of the Schrödinger equation. In fact, however, we have seen that during the 35 years since the Schrödinger equation was discovered only a few accurate nonempirical quantum-mechanical calculations of the properties of substances in which the chemist is interested have been made. The chemist must still rely upon experiment for most of his information about the properties of substances. Experience has shown that he can be immensely helped by the use of the simple chemical structure theory. The theory of resonance is a part of the chemical structure theory, which has an essentially empirical (inductive) basis; it is not just a branch of quantum mechanics.

Thus, no matter what the future may bring, chemistry is at present an independent science, only slightly dependent upon physics and much dependent upon chemical intuition. The situation is similar for biology. Watson goes out of his way to insist that biology requires no "natural laws" not already found by chemistry and physics. He is opposing vitalists such as Bergson and the expectations of physicists such as Bohr and Schrödinger. Bohr hypothesized that biology would yield a biological complementarity principle whereby the description of a living organism would be complementary to a description of its molecular structure. Such a position may yet come to pass when biology becomes sufficiently precise to become conscious of its "complementarities."

Conclusions

One of our most powerful myths is the story of the Tower of Babel, whereby an original unity was shattered into diversity. Much of our intellectual history consists of attempts to find common grounds upon which to erect unified sciences. Many bewail the unsatisfactory situation in the social sciences, humanities, and theology, where many schools compete for dominance. This situation contrasts with the seeming harmony of the natural sciences. The claim we have made here is that the relative lack of polemics in the hard sciences is not due to a consensus concerning fundamental theoretical structures, but is instead a much more complicated sociological fact. For instance, a close study of the various electron models reveals tremendous differences. The differences between the Schrödinger, Dirac, and Feynman theories of the electron appear to us as huge as the differences between the Freudian, Skinnerian and Piagetian theories of human behavior. But physicists don't mind the diversity. They take an eclectic approach, using whichever model seems most appropriate under the circumstances. By contrast, many psychologists feel that the alternative approaches to human behavior are competing theories and that only one of them will eventually prevail.

In studying complex phenomena there are always a variety of possible approaches. Too much of an insistence on consensus results in very impoverished starting points, such as those taken by Russell and Carnap. From such starting points it is very difficult to get anywhere. But the other extreme often results in a tower of Babel where each scientist has not only his own theories but his own scientific methodology and logic. To paraphrase Goethe: one can convince oneself, while viewing a great collection of scientific works, that nearly every master had a different way of approaching nature. In our opinion, what is required in science is the same thing that is required in art, namely "taste" and "judgment." To say this, however, is not to opt for an "anything goes" relativism or radical idealism. On the contrary, we have argued that toleration is necessary in order to save realism, and thus avoid such radical solutions. To the extent that agreement can be reached about goals, there are clear and demonstrable relative advantages between the various positions. The determination of these advantages, though, may not be something that is clearly formalizable. On the other hand, in those cases where there is no agreement about goals, we should not expect consensus. In fact, our point has been that in these cases the attempt to enforce consensus carries grave risks.

Bibliography

Blandford, R. and Thorne, K. Black hole astrophysics. In S. Hawking and W. Israel (Eds.), General Relativity. N.Y.: Cambridge University Press, 1979.

Goethe, J. Goethe's Color Theory. R. Matthaei (Ed.). N.Y.: Van Nostrand Reinhold, 1971.

Lakatos, I. Proofs and Refutations. N.Y.: Cambridge University Press, 1976.

Pauling, L. The Nature of the Chemical Bond and the Structure of Molecules and Crystals. Ithaca, N.Y.: Cornell University Press, 1960.

Stallman, R. and Sussman, G. Problem solving about electrical circuits. In P. Winston and R. Brown (Eds.), Artificial Intelligence: An M.I.T. Perspective, Vol. I. Cambridge, Mass.: M.I.T. Press, 1979.

Wall, C. Surgery on Compact Manifolds. N.Y.: Academic Press, 1970.