
On the Origin of Forces

Theory of Universal Force and its Application to Physical Problems

A personal view by J. Rautio into the field of physics from the outside.

Abstract. This article introduces a 100% electromagnetic view of the world. It is a fractal world, in which similar phenomena and structures appear at all scales and, at least in principle, there is no boundary between microscopic and macroscopic worlds. This theory is also a theory of bosons, fermions, matter, mass, inertia, and quantization. Interactions of particles and their structure are largely based on the Riemann zeta function. Also, the so-called 1/f-noise is an integral part of the theory.

This is a completely new view, but its novelty mostly consists of new physical interpretation of existing mathematical models. It is not in conflict with experimental observations, but it does demand a re-interpretation of the “cornerstone” experiments supporting relativity. The theory produces new insights into all branches of physics. For this reason, expressing dissident proposals for several branches of science is unavoidable.

In this work the mathematical modelling is of a qualitative nature. The emphasis is on simple physical understanding rather than quantitative studies. Exact numerical models are not important for the story to be told here. More important is to give a general view that makes understandable a number of physical phenomena that are presently not considered to be related.

Science prides itself on being self-corrective, but from an outsider's perspective this simply is not true. Physics and cosmology have become so weird that only one conclusion can be drawn: they have lost their way. No correction is in sight. The following article is written in the spirit of the advice given by Richard Feynman: “Learn from science that you must doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.”

For an outsider, it is easy to see that many disciplines of science are in need of radical rethinking. In what follows we pinpoint the events that, in our opinion, led physics and cosmology astray.

Content. The first part of the article contains a critique of modern physics, our suggestions for a new view, and the existing support for them, mostly from eminent scientists advocating dissent from mainstream beliefs.

The second part, applications (p. 87), gives explanations, based on the new view, of many poorly understood experimental results or phenomena from various branches of science. There is another quotation from Feynman: “The main problem in the study of the fundamental particles today is to discover what are the correct representations for the description of nature. At the present time, we guess that for the electron it is enough to specify its momentum and spin. … Will the electron’s momentum still be the right thing with which to describe nature?” This article tries to show the reasons why the answer to this question must be “No”.

Critique

Occult Qualities in Physics

Newton started a new way of thinking in science — the relationship between causes and effects. He set up the mathematical machinery to describe this relationship in a continuous manner. The Aristotelian / Scholastic natural philosophy was rejected. Mathematical equations and functions became the scientific explanation and the explanation was verified by making precise predictions based on those equations.

The Principia was Newton’s study of “the motion that results from any force whatever and of the forces that are required for any motion whatever.” In the case of Keplerian motion it appeared to be an attractive force.

Newton postulated an attractive force whose mode of operation could not be reduced to mechanical pressure or impact — a force, that is, apparently acting at a distance. His response to the problem was simply to shelve the problem of the intrinsic qualitative character of gravitational force — “I do not feign hypotheses” — and to insist that it was sufficient for scientific explanation to have a mathematical model which enables us to predict celestial motions.

Not everybody was satisfied with this. Huygens and Leibniz found that the adoption of attraction by natural philosophers would bring about a reversion to the “occult qualities” of Scholasticism.

“Gravity, interpreted as an ‘innate attraction’ between every pair of particles of matter, was an occult quality in the same sense as the scholastics’ ‘tendency to fall’ had been. Therefore, while the standards of corpuscularism remained in effect, the search for a mechanical explanation of gravity was one of the most challenging problems for those who accepted the Principia as paradigm.

Newton devoted much attention to it and so did many of his eighteenth-century successors. The only apparent alternative was to reject Newton’s theory for its failure to explain gravity, and that alternative, too, was widely adopted. Yet neither of these views ultimately triumphed. Unable either to practice science without the Principia or to make that work conform to the corpuscular standards of the seventeenth century, scientists gradually accepted the view that gravity was indeed innate.” [Thomas Kuhn]

In modern physics the concept of charge is of occult nature. Another oddity is the “isolated system”.

In physics a system is a portion of the physical universe chosen for analysis. Everything outside the system is known as the environment, which in the analysis is ignored except for its small perturbations on the system. The cut between the system and the world is a free choice, generally made to simplify the analysis as much as possible. An isolated system is one which has negligible interaction with its environment.

The two concepts, attractive force and isolated system, are connected. Atomic physics is a good example. This branch of physics studies the atom as an isolated system of electrons and an atomic nucleus. In the atom the electron is held in a circular orbit by electrostatic attraction; the centripetal force is equal to the attractive force. If a dynamical system of particles is isolated, there must be attractive forces to hold it together. The particles must have an innate property which causes them to experience an attractive force when near other particles. This leads directly to the idea that subatomic particles are sources of force fields, and this is no different from gravity as an “innate attraction.”
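To make this textbook picture concrete (as an illustration of the closed-system view, not an endorsement of it), here is a minimal Python sketch, ours and purely illustrative, that balances the Coulomb attraction against the centripetal force in Bohr's planetary atom, with angular momentum quantized as mvr = nħ:

```python
# Bohr's planetary atom: Coulomb attraction supplies the centripetal force,
#   e^2 / (4*pi*eps0*r^2) = m*v^2 / r,
# with angular momentum quantized as m*v*r = n*hbar.
# Solving gives r_n = 4*pi*eps0*hbar^2*n^2 / (m*e^2).

import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
e    = 1.602176634e-19    # elementary charge, C
m_e  = 9.1093837015e-31   # electron mass, kg
hbar = 1.054571817e-34    # reduced Planck constant, J*s

for n in (1, 2, 3):
    r = 4 * math.pi * eps0 * hbar**2 * n**2 / (m_e * e**2)
    v = n * hbar / (m_e * r)                      # from m*v*r = n*hbar
    f_coulomb     = e**2 / (4 * math.pi * eps0 * r**2)
    f_centripetal = m_e * v**2 / r
    print(f"n={n}: r = {r:.4e} m, v = {v:.4e} m/s, "
          f"F_coulomb / F_centripetal = {f_coulomb / f_centripetal:.6f}")
```

By construction the two forces balance exactly; n = 1 reproduces the Bohr radius of about 5.29×10⁻¹¹ m.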

Modern Physics is Useless in Biology

Biologists once wanted to show living cells under the microscope to R. Feynman. “They had some plant cells in there, and you could see some little green spots called chloroplasts (they make sugar when light shines on them) circulating around. I looked at them and then looked up: ‘How do they circulate? What pushes them around?’ I asked. Nobody knew. It turned out that it was not understood at that time.” That is a relevant question still. Today we know that living cells are extremely complicated in their structure and inner processes. We know that many of the constituent molecules of the cell are in orderly and continuous motion. Why do they move? Thermal fluctuations and the like are not nearly an adequate explanation.

Ilya Prigogine has recognized living organisms as open systems and “dissipative structures”. According to his theory, dissipative structures not only maintain themselves in a stable state far from equilibrium, but may even evolve. When the flow of energy through them changes, they may go through points of instability and transform themselves into new structures. Living organisms maintain themselves in a state far from equilibrium. Chemical and thermal equilibrium exists when the processes of a living cell come to a halt. In other words, an organism in equilibrium is a dead organism. This is consistent with one of the basic considerations of this paper, namely that there are no isolated systems in nature. In particular, all particles are open systems; they exchange energy with the rest of the universe. The complexity of life emerges from the complex interaction between particles. We will explain how this complexity comes about.

Supremacy of Hamiltonian Systems

Niels Bohr's work created the quantum theory of the atom, which is the foundation of modern quantum physics. Its theories and equations are many, but there is one unifying theme connecting all of these theories. The evolution of a physical system over time (as well as the steady states of that system) is controlled by a single object, the Hamiltonian of that system, which can often be interpreted as the total energy of any given state in that system. The Hamiltonian / Lagrangian formalism has a ubiquitous presence in modern physics. It is now generally accepted that all real physical processes that are thought to be of negligible dissipation can be expressed in Hamiltonian form.

Equations of motion and field equations in fundamental theories derive from variational principles (namely, the Hamilton principle). Theoreticians tend to assume that the theory is all set once the Lagrangian is written. This means that physics is stuck with occult qualities, namely closed or isolated systems and, consequently, attractive forces, even though the vast majority of physicists don't see it that way, because they are practicing what Thomas Kuhn called “normal science”.

Closed systems, “point particles”, and the stubborn clinging to Hamiltonian concepts have plagued physics for ages. J. Clerk Maxwell wrote in his book Matter And Motion: “In all scientific procedure we begin by marking out a certain region or subject as the field of our investigation. To this we must confine our attention, leaving the rest of the universe out of account till we have completed the investigation in which we are engaged. In physical science, therefore, the first step is to define clearly the material system which we make the subject of our statements.” From the book's preface (1877): “PHYSICAL SCIENCE, which up to the end of the eighteenth century had been fully occupied in forming a conception of natural phenomena as the result of forces acting between one body and another, has now fairly entered on the next stage of progress — that in which the energy of a material system is conceived as determined by the configuration and motion of that system...” Modern physics has faithfully followed the directive put forth by Maxwell, to the extreme of the “Geonium Atom”, which is composed of one electron and the planet Earth, but is still considered an atomic (= isolated) system.

The Missing Picture of “Truly Elementary Particles”

Newtonian mechanics is about point particles, but everybody understands that a point mass is an idealization: in reality gravity is caused by the mass surrounding the center of mass at r = 0. What about a point charge? Do electrons have internal structure? In 1913 Niels Bohr simply ignored the question in his planetary atom model and treated electrons as charged point masses. In 1925 Yakov Frenkel was probably the first to explicitly state that “the electrons are not only indivisible physically, but also geometrically. They have no extension in space at all.” He concluded that the mass [of the electron] could not possibly be interpreted electromagnetically and that the entire problem of the electron's inner structure was “scholastic”. This point of view became a doctrine in quantum theory. The electron was considered to be point-like because, as Dirac remarked in 1928, why should nature have chosen otherwise? Dirac also argued that “The electron is too simple a thing for the question of the laws governing its structure to arise.” (Dirac: A Scientific Biography by Helge Kragh.)

There is also a second, more psychological reason for the quest for elementary particles: if we discovered that a particle is made up of more fundamental particles, we would then immediately ask what these more fundamental particles are made of. And if we managed to answer that, we would ask what the sub-particles were made of — ad infinitum. Somehow there has to be an end to this process. One way to stop it is to define a truly elementary particle as one that has no dimensions, and therefore no substructure. This is exactly what has happened. According to the Standard Model of particle physics, the particles that make up an atom are point particles: they do not take up space. This impossible idea has reached the status of a dogma in modern physics.

Waves, Particles or Both

It is stated that “Wave–particle duality is deeply embedded into the foundations of quantum mechanics, so well that modern practitioners rarely discuss it as such.” What exactly are those ideas now embedded into physics?

Louis de Broglie. The following excerpts are from his note in Comptes rendus in 1923, his doctoral thesis in 1924, and On the Theory of Quanta in 1925. He suggested that if waves can behave like particles, as Einstein had shown, then one might expect that particles can behave like waves.

“For the reader who is engaged in the reading of this work, it is convenient to first point out the points that I have assumed without proof. In the first place, I assume and suppose known the entire theory of relativity, whether in its original form, which is called ‘special’ today, or in its general form. In particular, I have made constant use of relativistic dynamics, …” [thesis]

“One may imagine that, by cause of a meta law of Nature, to each portion of energy with a proper mass m₀ one may associate a periodic phenomenon of frequency ν₀ such that one finds: hν₀ = m₀c².

The frequency 0 is to be measured, of course, in the rest frame of the energy packet. This hypothesis is the basis of our theory.” ….. Must we suppose that this periodic phenomenon occurs in the interior of energy packets? This is not at all necessary; ...it is spread out over an extended space. Moreover, what must we understand by the interior of a parcel of energy? An electron is for us the archetype of isolated parcel of energy, which we believe, perhaps incorrectly, to know well; but, by received wisdom, the energy of an electron is spread over all space with a strong concentration in a very small region, but otherwise whose properties are very poorly known. That which makes an electron an atom of energy is not its small volume that it occupies in space, I repeat: it occupies all space, but the fact that it is undividable, that it constitutes a unit. [On the Theory of Quanta] “Now let us suppose that at the time the moving object coincides in space with a wave of frequency defined above and propagating in the same direction as it does with the speed c/β . This wave, which has a speed greater than c, cannot correspond to transport of energy; we will only consider it as a fictitious wave associated with the motion of the object.” [ Comptes rendus note ]

[Note that, while de Broglie applied his theory to light quanta, he made it clear that this wave was distinct from the electromagnetic field. De Broglie also applied his ideas to the atom:]

“Let us consider now the case of an electron describing a closed trajectory with uniform speed slightly less than c. ... The associated fictitious wave, launched from the point O [on the trajectory] and describing the entire trajectory with the speed c/β, catches up with the electron... It is almost necessary to suppose that the trajectory of the electron will be stable only if the fictitious wave passing catches up with the electron in phase with it: the wave of frequency ν and speed c/β has to be in resonance over the length of the trajectory.” [Comptes rendus note]

But de Broglie left hanging the question: “A wave of what?” There are other questions: how did the fictitious wave become “launched”, and why would it take a circular path? For us, these ideas are a bit too artificial. De Broglie insists that there are waves devoid of energy but propagating at superluminal velocity. It is impossible to believe that any “meta law of Nature” would force a contradiction like that. Louis de Broglie can be credited with first proposing that “All particles have an intrinsic real internal vibration in their rest frame, … f = m₀c²/h.” We will discuss more of de Broglie's hypothesis later in the section On Formation of Quantum Physics.
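For scale, a short calculation (ours, not de Broglie's) of this internal frequency for the electron:

```python
# de Broglie's internal-vibration hypothesis: f = m0*c^2 / h.
# For the electron this gives roughly 1.24e20 Hz.

m0 = 9.1093837015e-31   # electron rest mass, kg
c  = 299792458.0        # speed of light, m/s
h  = 6.62607015e-34     # Planck constant, J*s

f = m0 * c**2 / h
print(f"internal frequency f = {f:.4e} Hz")   # ~1.2356e20 Hz
```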

Erwin Schrödinger

Erwin Schrödinger set out to find the wave equation for de Broglie’s waves. De Broglie’s “matter waves” are propagating waves, and Schrödinger wanted to find out how these waves could be refracted so that they would follow the Keplerian orbits inside the atom. There was also another goal for the model. In 1913 Niels Bohr published a theory about the structure of the atom based on an earlier theory of Ernest Rutherford. Rutherford had shown that the atom consisted of a positively charged nucleus, with negatively charged electrons in orbit around it. Bohr found no difficulty in applying his theory to the hydrogen atom, which he simply treated as a microscopic Kepler system. He expanded upon this theory by proposing that electrons travel only in a discrete set of orbits. He suggested that the outer orbits could hold more electrons than the inner ones, and that these outer orbits determine the atom's chemical properties. Bohr also suggested that when an electron jumps from an outer orbit to an inner one, it emits light. Niels Bohr’s planetary atom model gives results of such accuracy that Schrödinger said the following about it: “It is difficult to believe that this result is merely an accidental mathematical consequence of the quantum conditions, and has no deeper physical meaning”.

In Bohr's atom theory one finds the integers n = 1, 2, 3, ... which we today know as principal quantum numbers. Schrödinger thought that Bohr's model was ad hoc in nature and wanted to find out the deeper physical meaning of these numbers. He came up with his own atom model, which is a three-dimensional standing wave. What was vibrating, he did not explain.

Schrödinger claimed as an advantage of his theory that it was anschaulich, meaning that it was expressed in terms of continuously evolving causal processes in space and time. (He considered this condition to be an essential requirement on any acceptable physical theory.)

He also felt successful because “... the usual rules for quantization can be replaced by another requirement, in which mention of ‘whole numbers’ no longer occurs. Instead, the integers occur in the same natural way as the integers specifying the number of nodes in a vibrating string. The new conception can be generalized, and I believe it touches the deepest meaning of the quantum rules.” [Quantization as an Eigenvalue Problem]

We agree with Schrödinger about what the integers specify (and also about the requirements on physical theories). We also notice that the present form of quantum physics is almost the exact opposite of what Schrödinger had in mind. Schrödinger believed that the final aim of science is not merely to describe phenomena, but also to explain, to gain significant understanding of them. He also regarded visualizability as a necessary condition for understanding by human beings.

1926: “Niels Bohr’s standpoint, that a spacetime description is impossible, is something I reject a limine. … The purpose of atomic research is to adjust our empirical knowledge concerning it with our other thinking. All of this other thinking, in so far as it is about the outer world, is active in space and time. If it cannot be fitted into space and time, then it fails in its whole aim, and one does not know what purpose it really serves.”

1928: “We cannot really alter our way of thinking in space and time, and what we cannot comprehend within it we can’t understand at all.”

In the 1950s: “Let me say at the outset, that in this discourse, I am opposing not a few special statements of quantum mechanics held today, I am opposing as it were the whole of it, I am opposing its basic views that have been shaped 25 years ago, when Max Born put forward his probability interpretation, which was accepted by almost everybody. I don’t like it, and I’m sorry I ever had anything to do with it.”

What is Wrong with Quantum Mechanics?

The cornerstones of quantum mechanics are the relations of Planck and Einstein, the latter identity extended by de Broglie to cover also the particles of matter: E = ħω, p = ħk.

It was Louis de Broglie who first suggested in his doctoral dissertation in 1924 that massive matter could behave like waves, and that the angular frequency ω of the wave that is associated with a mass m of kinetic energy E and linear momentum p would be equal to E / ħ, while the wave number k would be equal to p / ħ. (This he understood to be consistent with the previously established relationships that Einstein and Planck had obtained for photons.)

The crucial idea of Erwin Schrödinger was the following: “I have the feeling...that we have not yet sufficiently understood the identity between energy and frequency in microscopic processes... What we call the energy of an individual electron is its frequency. Basically it does not move with a certain speed because it has received a certain 'shove', but because a dispersion law holds for the waves of which it consists, as a consequence of which a wave packet of this frequency has exactly this speed of propagation.”

Schrödinger constructed the dispersion relation for a free particle from the cornerstone equations above:

ω = ω(k) = ħk² / (2m).

This was the mistake! The relation connects the velocity and the energy of a particle incorrectly. The correct view would be that the wave number k measured in the laboratory frame is only a manifestation of the particle's internal vibration. The quantities k and ω belong to different frames. The frequency ω is attached to an oscillation which must be described in the rest frame of the particle, as de Broglie suggested.
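For reference, the step Schrödinger took can be checked symbolically: with the dispersion relation above, the group velocity dω/dk of a wave packet comes out as ħk/m = p/m, the classical particle velocity. A minimal sketch of that textbook calculation:

```python
# The textbook free-particle dispersion relation and its group velocity.
import sympy as sp

k, m, hbar = sp.symbols("k m hbar", positive=True)

omega = hbar * k**2 / (2 * m)    # Schrodinger's dispersion relation
v_group = sp.diff(omega, k)      # group velocity of a wave packet
print(v_group)                   # hbar*k/m, i.e. p/m: the classical velocity
```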

Although at first sight the difference between the two views appears small, the overall impact will be huge. Ideas such as “collapse of the wave function”, “uncertainty principle”, and “probability interpretation” can be dropped immediately. In fact the whole present quantum theory can be dropped altogether and replaced with a new one.

Proper Use of Planck-Einstein Relations

E = ħω, p = ħk

These relations must be used separately, for instance, as components of a conservation law.

A most important example is the generalized Compton-Debye theory concerning scattering of photons off moving electrons:

ħω₁ + m(v₁)c² = ħω₂ + m(v₂)c²

(ħω₁/c) u₁ + m(v₁) v₁ = (ħω₂/c) u₂ + m(v₂) v₂

Compton and Debye started from the two conservation equations above and derived the following expression:

ω₂/ω₁ = (1 − (v₁/c) cos θ) / (1 − (v₁/c) cos φ + (2ħω₁/(m(v₁)c²)) sin²(θ/2))

If the electron's velocity v₁ before the impact is null or negligible, the well-known Compton formula follows:

ω₂/ω₁ = 1 / (1 + (2ħω₁/(m₀c²)) sin²(θ/2))

If the electron's velocity v₁ > 0 does not change in the collision (this is the case if the electron is bound to an atom and the mass m₀ can be idealized as being very large), the following form emerges:

ω₂/ω₁ = (1 − (v₁/c) cos θ) / (1 − (v₁/c) cos φ)

This is the Doppler effect for a moving source and a moving observer. It can be applied to trains of photons, i.e. electromagnetic waves, but it can equally well be applied to individual photons, as we shall later show. (The function m = m(v) will also be introduced later.)
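Both limiting cases can be checked numerically. The sketch below implements the generalized formula as reconstructed above; the relativistic mass function m(v) = m₀/√(1 − v²/c²) is assumed here purely for illustration, anticipating the function m = m(v) promised later:

```python
# Generalized Compton-Debye scattering off a moving electron:
#   w2/w1 = (1 - (v1/c)*cos(theta)) /
#           (1 - (v1/c)*cos(phi) + (2*hbar*w1/(m(v1)*c^2)) * sin(theta/2)**2)
import math

c    = 299792458.0
hbar = 1.054571817e-34
m0   = 9.1093837015e-31   # electron rest mass, kg

def ratio(w1, v1, theta, phi, m_rest=m0):
    """omega2/omega1 for a photon scattering off an electron of speed v1."""
    m_v = m_rest / math.sqrt(1 - (v1 / c)**2)    # assumed mass function m(v)
    num = 1 - (v1 / c) * math.cos(theta)
    den = (1 - (v1 / c) * math.cos(phi)
           + (2 * hbar * w1 / (m_v * c**2)) * math.sin(theta / 2)**2)
    return num / den

w1, theta, phi = 1.0e19, math.radians(60), math.radians(30)

# Limit 1: v1 = 0 must reproduce the Compton formula.
compton = 1 / (1 + (2 * hbar * w1 / (m0 * c**2)) * math.sin(theta / 2)**2)
print(ratio(w1, 0.0, theta, phi), compton)

# Limit 2: a very large rest mass (bound electron, velocity unchanged)
# must reproduce the Doppler formula for moving source and observer.
v1 = 1.0e5
doppler = (1 - (v1 / c) * math.cos(theta)) / (1 - (v1 / c) * math.cos(phi))
print(ratio(w1, v1, theta, phi, m_rest=1.0), doppler)
```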

What is Wrong with Special Relativity?

Perhaps we should rather ask why such a theory exists at all. We hear from Einstein: “To talk of the motion and therefore also acceleration of a body A in itself has no meaning. One can only speak of the motion or acceleration of a body relative to other bodies B, C etc.” The empirical base of knowledge for the scientific work of a natural philosopher is different today than it was in Einstein's time. We firmly believe that if Einstein had known of the cosmic microwave background radiation as the absolute frame of reference, he would have withdrawn the argument above.

But why does the theory seem to have some experimental justification? The reason is that in the analysis of the Michelson-Morley experiment the relations of Planck and Einstein were totally absent. Reflection of light from a mirror is just what the Compton-Debye theory is all about: scattering of photons from electrons. For the world view expressed in this article the Compton-Debye theory is fundamental, always valid in the cosmic rest frame, which is defined by the cosmic microwave background radiation. In the following we use the more informative unit vector notation and, for obvious reasons, we name the equations “source” and “mirror”. They are the familiar expressions for the Doppler effect, but we emphasize that the mirror equation is derived from a conservation law that must be satisfied. (See, e.g., On the Theory of Quanta by L. de Broglie, p. 55.)

The wave numbers of the light beams radiated from a source or reflected from a mirror depend on their absolute velocity v. If the unit wave vectors normal to the source output wave, the wave incident on the mirror and the reflected wave are denoted as n_S, n_I and n_R respectively, the expressions for light source and mirror behavior can be written as follows:

k_O = k_S / (1 − (1/c) n_S · v_S)   (Source equation)

k_R = k_I (1 − (1/c) n_I · v_M) / (1 − (1/c) n_R · v_M)   (Mirror equation)

k_R = k_S / (1 − (1/c) n_R · v_M), with N = k_R L   (MM equation)

If the source and the mirror form a rigid construction, n_S ≡ n_I, v_S ≡ v_M and k_I ≡ k_O. So when the source equation is substituted into the mirror equation, two terms cancel, resulting in the MM equation. If the output beam is reflected into an optical path of length L, we have N cycles in that path (N = k_R L).
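The cancellation is easy to verify symbolically. Writing a = (1/c) n_S · v_S = (1/c) n_I · v_M and d = (1/c) n_R · v_M, a short sketch:

```python
# Composing the source and mirror equations for a rigid source-mirror pair.
import sympy as sp

k_S, a, d = sp.symbols("k_S a d")

k_O = k_S / (1 - a)              # source equation
k_R = k_O * (1 - a) / (1 - d)    # mirror equation with k_I = k_O, n_I = n_S, v_M = v_S
print(sp.simplify(k_R))          # k_S/(1 - d): the MM equation
```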

Michelson-Morley Experiment

The MM experiment was an attempt to determine the absolute velocity of the earth through the aether, but it failed. There was no fringe shift at the detector as the apparatus was rotated with respect to the assumed absolute velocity vector v. The source S (in the picture below) can be replaced with a mirror reflecting sunlight or starlight, without affecting the result of the experiment.

Einstein finally interpreted this puzzling result as meaning that there is no aether and, thus, no absolute space at all.

In addition to a scheme of the Michelson interferometer, the true diurnal behavior of the wave number in a leg is also featured in the picture. Let us study the interferometer in a particular orientation with respect to the absolute frame. We let the absolute velocity v of the apparatus coincide with the direction of Leg 1. N is the number of cycles.

First, we apply the MM equation to the horizontal light paths; beam splitter → M1, and M1 → beam splitter:

N₁ = k_S L / (1 − v/c) + k_S L / (1 + v/c) = 2 k_S L / (1 − v²/c²).

The result follows from the fact that in these paths n_R · v_M = ±v in the MM equation.

The light path BS → M2 → BS is different: n_R · v_M ≠ v. The light beams need some “lead” to hit their moving targets; the beam from the splitter must have propagated to the predicted position of M2. In the picture on the left the predicted positions of the mirrors are shown in red. The dot product term in the MM equation becomes as follows:

(1/c) v_M · [n_R]_PRE = v²/c².

(The angle in the scalar product is set by the aberration, and its cosine is v/c.) The sum of the number of cycles in the two directions in leg 2 is:

N₂ = k_S L / (1 − v²/c²) + k_S L / (1 − v²/c²) = 2 k_S L / (1 − v²/c²).

We have N₁ = N₂, meaning that there is no fringe shift at the detector D. The expected second-order effect exists, but it cannot be measured with the Michelson interferometer, due to the combined effect of aberration and Doppler shift. The Michelson-Morley experiment gives a null result because the reflection of light is subject to a fundamental conservation law, the Compton-Debye theory. From the equations above it is also clear that the result does not depend on the length of the legs or the angle between them.

Some scientists are aware of the interchangeability of the concepts of length contraction / time dilation and the effects of aberration and Doppler shift (notably Oleg D. Jefimenko), but they do not see it as an explanation of the MM experiment but as an alternative way to set up the theory of special relativity. Jefimenko writes: “What is most remarkable, however, is that relativistic solutions completely ignore retardation, while retarded fields solutions completely ignore Lorentz contraction. There is no doubt that retardation is a real and a very important physical phenomenon. How then can it be ignored in relativistic derivations? And if Lorentz length contraction is an important physical effect, why can it be ignored in classical calculations...?” He then states that retardation is implicit in special relativity and derives the Lorentz-Einstein transformations from classical electromagnetics. His article is titled Retardation and relativity: Derivation of Lorentz-Einstein transformations from retarded integrals for electric and magnetic fields. Am. J. Phys. 63 (3), March 1995. The result above is also compatible with Henry Bateman's remark that the Lorentz transformations are derivable from a study of the reflection of light from a moving mirror. H. Bateman, Bull. Nat. Research Council, Dec. 1922, p. 110.
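A minimal numeric check of the two leg counts, using the equations above (the 11 m path length is only a stand-in of the order Michelson and Morley actually used):

```python
# Cycle counts in the two interferometer legs, per the equations above:
#   N1 = kS*L/(1 - v/c) + kS*L/(1 + v/c)
#   N2 = 2*kS*L/(1 - v^2/c^2)
import math

c  = 299792458.0
kS = 2 * math.pi / 630e-9    # wave number of 630 nm light, 1/m
L  = 11.0                    # optical leg length, m (illustrative figure)

for v in (30e3, 368e3, 0.01 * c):     # orbital, CMB-dipole, and extreme speeds
    N1 = kS * L / (1 - v / c) + kS * L / (1 + v / c)   # leg 1, out and back
    N2 = 2 * kS * L / (1 - (v / c)**2)                 # leg 2, out and back
    print(f"v = {v:10.0f} m/s   N1 - N2 = {N1 - N2:.2e}")
# The differences are zero up to floating-point rounding, many orders of
# magnitude below one fringe: the claimed null result for every v.
```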

The possible reason why the above explanation was not offered immediately after the experiment can be found in the article Aberration and Doppler shift: An uncommon way to relativity by H. Blatter and T. Greber: “Confusion was caused for a long time by the different interpretations commonly used for the words ‘observer’ and ‘observe’. The first operational concept of the ‘observer’ was introduced by Einstein in his 1905 paper... This type of observer collects data on events in space and time with a whole set of recording clocks evenly distributed in space. The clocks for this Einstein type observer are associated with an inertial frame of reference. This concept is useful for physicists in their technical work, but is a difficult introduction to the theory. A second concept of the relativistic ‘observer’ is close to the everyday use of the word. This observer is located at one place in space and collects all information by the light arriving at his position. This second type of observer is confronted with phenomena like aberration and Doppler shift whereas the Einsteinian observer registers time dilation and Lorentz contraction.” So, in Einstein's own words: “It is the theory which decides what can be observed.”

Henri Poincaré: “The Principles of Mathematical Physics”, 1905: “I believe that in reasoning thus [that improving the resolution of measurements of stellar aberration would reveal the absolute velocity of Earth] one admits a too simple theory of aberration. Michelson has shown us, I have told you, that the physical procedures are powerless to put in evidence absolute motion; I am persuaded that the same will be true of the astronomic procedures, however far one pushes precision. ... While waiting, I believe, the theorists, recalling the experience of Michelson, may anticipate a negative result, and that they would accomplish useful work in constructing a theory of aberration which would explain this in advance.”

The Effect of Rotation on Michelson and Sagnac Interferometers

The interferometer's motion in space can be resolved into translational and rotational components, the latter with angular velocity ω around an axis through the beam splitter. If one studies the mirror equation,

k_R = k_I (1 − (1/c) n_I · v_M) / (1 − (1/c) n_R · v_M),   (Mirror equation)

there is no Doppler shift if the mirror moves at right angles to both the incoming and the reflected beam, because only then n_I · v_M = n_R · v_M = 0. This can happen only if the two beams are directed along the normal of the reflecting surface. In pure rotation, this is the case in Michelson’s interferometer. The Michelson interferometer is insensitive to rotation. Since it cannot detect linear motion either, the instrument and all its variants (like the ones used in the experiments of e.g. Kennedy-Thorndike, Cialdea et al., Jaseja et al. and A. Brillet & J. L. Hall) are totally useless in determining our velocity in absolute space. Instead, the Sagnac interferometer can reveal the existence of absolute rotation. Its construction is such that at each of its mirrors n_I · v_M ≠ 0 or n_R · v_M ≠ 0, if v_M (caused by rotation) > 0.

“[If the observer's] apparatus rotates with respect to the stars he will observe a Sagnac effect, if it does not, then no matter how great a relative rotation it exhibits with respect to its material surroundings, there will be no effect.” Herbert Ives.

“Truth can be spoken in few words; lies require volumes.” The reader may check the truth of this aphorism by taking a look at how relativity explains the Sagnac effect. “I decided to remove the tortuous section about the erroneous perception of the Sagnac effect as disproof of relativity. Especially, there is no need to mention Herbert Ives.” Wikipedia Talk: Sagnac effect.
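The geometric condition can be checked with a few dot products. In the sketch below (a toy geometry of ours, not any published apparatus) the instrument rotates about the beam splitter at the origin: the Michelson mirror is struck at normal incidence, while the mirrors of a square Sagnac ring are struck at 45 degrees:

```python
# Mirror-velocity dot products under pure rotation (rate Omega) about the
# beam splitter at the origin, in the plane of the instrument.
import numpy as np

Omega = 1e-3          # rotation rate, rad/s
L = 1.0               # arm / side length, m

def v_mirror(r):      # tangential velocity of point r under rotation about z
    return Omega * np.array([-r[1], r[0]])

def unit(d):
    return np.asarray(d, float) / np.linalg.norm(d)

# Michelson: the beam runs out and back along the arm; incidence is normal.
r = np.array([L, 0.0])
n_i, n_r = unit([1, 0]), unit([-1, 0])
print("Michelson:", n_i @ v_mirror(r), n_r @ v_mirror(r))   # both 0.0

# Sagnac square ring: mirrors at three corners; incidence is oblique (45 deg).
corners = [([L, 0], [1, 0], [0, 1]),
           ([L, L], [0, 1], [-1, 0]),
           ([0, L], [-1, 0], [0, -1])]
for r, di, dr in corners:
    vm = v_mirror(np.array(r, float))
    print("Sagnac mirror", r, ":", unit(di) @ vm, unit(dr) @ vm)
# At every Sagnac mirror at least one dot product is nonzero, so each
# reflection is Doppler-shifted; in the Michelson geometry both vanish.
```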

Silvertooth & Whitney Experiment (“A new Michelson-Morley experiment”, Physics Essays, vol. 5, 1992)

The title of the reference article is misleading. The experiment was based on “linear Sagnac effect”.

This experiment in 1992 was a variation of an experiment Dr. E.W. Silvertooth performed in 1987. It fully confirmed the 1987 results.

The idea of the new experiment was to compare the “interference fringes” formed by the counterpropagating laser light beams. In essence, the interferometer can be formed from a Michelson interferometer by removing one of the legs and replacing the beam splitter with a special detector D that senses its position in the standing wave pattern.

The standing wave has two different manifestations depending on the orientation of the interferometer. If it is transverse with regard to its velocity vector v in the absolute frame, there will be a standing wave whose wave number k₀ is determined by the light source. If the tube turns to point into the direction of the absolute velocity, the standing wave will re-form into a sum wave, the beat wave, shown in the figure above.

It is an envelope of the sum of two rapidly oscillating waves, their wavelengths λ₁ and λ₂ differing slightly from one another.

The length  between nodes of the beat wave depends on the velocity of the apparatus in the ether frame. This is what was measured in the Silvertooth & Whitney experiment : D2-D1 =  = 0.25 mm. We can write down two equations. The first one measures the round trip length of the interferometer in wavelengths of light in two different orientations: along the velocity vector (  , ), and transverse to it: . The second equation expresses the fact that on the length  there are n+1 cycles of shorter wavelength and n cycles of longer length.

1: L/ + L/= 2L/

2:  / =  /+1.

Using the notation  =  / = (cv)/(c+v), equation 1. becomes:  = 2 / 1.

From 1. and 2.  can be solved:  = 2 / ( 2+.

From the above we can deduce  = ( 22+. Because  = (cv)/(c+v), it follows that: v = c (1  ) / (1+ ).

Now we can first calculate α by using the measured value of Δ and the wavelength λ₀ = 630 nm of a HeNe laser, and then calculate the velocity. (These are not the calculations of Dr. Silvertooth, but the result is the same.) The measurements with this new interferometer revealed an ether wind of 378 km/s ± 5% in the direction of the constellation Leo. It is worth noting that the experiment of Dr. Silvertooth predicted the outcome of the COBE measurements! This velocity is measured in the preferred inertial frame, a.k.a. the aether frame.
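A short sketch of that calculation (our rendering of the reconstructed equations, not Dr. Silvertooth's own):

```python
# Velocity from the Silvertooth-Whitney beat-node spacing:
#   alpha = (2*Delta - lam0) / (2*Delta + lam0),
#   v = c * (1 - alpha) / (1 + alpha).

c     = 299792458.0     # m/s
lam0  = 630e-9          # HeNe laser wavelength, m
Delta = 0.25e-3         # measured node spacing D2 - D1, m

alpha = (2 * Delta - lam0) / (2 * Delta + lam0)
v = c * (1 - alpha) / (1 + alpha)
print(f"alpha = {alpha:.6f}, v = {v / 1e3:.0f} km/s")   # ~378 km/s
```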

Anomalous Lengthening of the Perimeter of the World’s Largest Ring Laser

In the following, all the data concerning the ULTRA−G ring (Sagnac) laser is taken from an article by R. W. Dunn: Design and initial operation of a 367 m² rectangular ring laser. ULTRA−G resides 30 m underground in the Cashmere cavern, New Zealand. The length of its rectangular perimeter is 76.899 m. However, if the free spectral range frequency (f_fsr = speed of light / perimeter) is measured and the perimeter’s length is calculated using c_N (the speed of light in the laser gas) and the measured f_fsr, the result is 76.965 m! This is a huge discrepancy. There is a 66 mm difference between the known length and the calculated result.

The anomaly is enormous, if we consider the accuracy with which the speed of light, lengths and frequencies can be measured.

To explain the discrepancy we assume that the absolute velocity of the Earth must be taken into account. ULTRA−G has its plasma tube mounted midway on one of the four sides. In the following, we consider the tube pointing (and propagating) into the direction of the constellation Leo, i.e. the rotation angle δ is zero.

In a ring laser, no standing waves are generated but there are gyrating waves. Both directions (cw and ccw) are possible, although either one of the waves may be missing. The equivalent condition for a standing wave to be set up in a ring cavity configuration is that an integer number of wavelengths fits in the length of the perimeter.

The lasing frequency is determined by two decisive factors: the wave number of the electromagnetic wave propagating in the lasing medium, and the “standing wave” condition. The equation below (also used in the analysis of the MM experiment with special directions δ) is suitable for calculating the wave number of the electromagnetic wave in the plasma tube as a function of δ:

k_tube = k_S / (1 − (v/c) cos(δ − arcsin((v/c) sin δ))). If δ = 0, k_tube = k_S / (1 − v/c).

Because frequencies are directly proportional to wave numbers, we can write down the following expression for the frequencies (f_fsr) and solve it for the absolute velocity. ULTRA−G measures 21 m x 17.5 m, and the tube is mounted midway between the mirrors on one of the longer sides, which are the ones we take into consideration in the following calculation:

f_measured = f_theoretical (1 − (42 m / P)(v / c_N)), f_theoretical = c_N / P ⇒ v = ((f_theoretical − f_measured) / f_theoretical) · (P / 42 m) · c_N.

The frequency f_measured is given with error bars: f_fsr = 3.894 ± 0.002 MHz. Using these values, the speed of light 299792458 m/s, and the refractive index of the nitrogen gas, 1.000298, we calculate the following absolute velocities:

f_fsr:  3.894 − 0.002 MHz | 3.894 − 0.001 MHz | 3.894 MHz | 3.894 + 0.001 MHz | 3.894 + 0.002 MHz
v:      192 km/s          | 333 km/s          | 472 km/s  | 613 km/s          | 754 km/s
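The sketch below reproduces the headline numbers under the reconstruction above. The 42 m weighting is our reading of the garbled original formula and should be treated as an assumption; with it, the 66 mm anomaly, the central velocity, and the error-bar span all come out close to the figures quoted:

```python
c     = 299792458.0
n     = 1.000298          # refractive index of the nitrogen fill gas
cN    = c / n             # speed of light in the gas
P     = 76.899            # surveyed perimeter, m
L_par = 42.0              # the two 21 m sides parallel to the motion (assumed weighting)

f_th   = cN / P           # expected free spectral range, ~3.8974 MHz
f_meas = 3.894e6          # measured value, Hz

print(f"perimeter from f_meas: {cN / f_meas:.3f} m")     # ~76.965 m, the 66 mm anomaly

v = (f_th - f_meas) / f_th * (P / L_par) * cN
print(f"central absolute velocity: {v / 1e3:.0f} km/s")  # ~473 km/s (article: 472)

span = [abs(f_th - f) / f_th * (P / L_par) * cN for f in (3.892e6, 3.896e6)]
print(f"error-bar span: {min(span) / 1e3:.0f} - {max(span) / 1e3:.0f} km/s")  # ~192-755
```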

Taking into account the Earth’s speed around the Sun, the measured absolute velocity may have values from 340 to 400 km/s. Absolute velocities of this magnitude have been measured in a variety of other experiments.

There is nothing wrong with the equipment in the Cashmere cavern. “Quantum noise-like” components several orders of magnitude above the expected values, frequency components drifting by 1 MHz per ten seconds, drifting of the beam spots over the mirrors by centimeters per day… These are not non-idealities of the ring laser / detector / analyzer apparatus; they are manifestations of a true physical phenomenon: ULTRA−G is sensitive to its absolute velocity. If one dares to interpret the data from the Cashmere cavern correctly, it will correlate with the time-varying absolute velocity vector. The cause of the “non-idealities” will become clear in the following section.

The Lock-in Phenomenon in Sagnac Devices

In the presence of a constant rotation rate, a beat wave is generated in the fibre loop. This beat frequency corresponds to that predicted by the Sagnac effect. The beam splitter (above the detector) is a mirror in motion in absolute space. The light from the source is split in two beams. Both beams are reflected from a moving mirror and, therefore, become more or less red- or blue-shifted, depending on the (absolute) orientation of the device, as already explained earlier in the section Michelson-Morley Experiment. Picture: Fibre optic gyroscope, from Wikipedia.

The pictures below depict the real situation of a fibre gyroscope. The propagation vector of the loop (towards the constellation Leo) is parallel to the plane of the loop. The loop is put into rotational motion and the number represents the rate of rotation in milliradians per second. The velocity (of the absolute, linear motion) used was 368 km/s.

The same mathematical principles as in the earlier cases (Michelson & Morley, Silvertooth & Whitney and ULTRA−G) above were applied to small segments of fibre. It is easy to see why the beat counting methods of ring laser gyroscopes are in trouble if the rotational velocity is very small.

Alleged Reason: “Injection Locking”

“...counter-propagating beams can allow for injection locking so that the standing wave ‘gets stuck’ in a preferred phase, thus locking the frequency of each beam to each other rather than responding to gradual rotation. By rotationally dithering the laser cavity back and forth through a small angle at a rapid rate ... lock-in will only occur during the brief instances where the rotational velocity is close to zero; ...” Wikipedia. From this it is obvious that the lock-in phenomenon is not understood; its effects are merely eliminated by dithering.

So this picture on the Wikipedia page “Ring laser gyroscope” is wrong, and so are the explanations trying to save relativity. What is very wrong are the attitudes of some writers under the title To remove: Sagnac effect in translational motion (Talk: Sagnac effect — Wikipedia): “… this [the linear Sagnac effect] is definitely NOT recognized as part of the Sagnac effect, … crackpots who think that it somehow proves something or other about relativity.”

The three fundamental experiments to validate special relativity are the Michelson–Morley, Kennedy–Thorndike and Ives–Stilwell experiments. We have now explained why the first two produce null results. We will return to the third experiment (and many others) after discussing a few relevant concepts.

Modern Cosmology: Definitions as Substitute for Theory

The fundamental observation behind the big bang theory was the red-shift of distant galaxies. Their spectra are shifted towards longer wavelengths. The further out they are, the larger is the shift. This is seen to imply that they are receding away from us. “A photon is defined as a quantum of energy.” “A quantum is defined as an indivisible unit of energy.” The modern ‘Big Bang’ cosmology rests very much on this alleged knowledge. In debates concerning the cosmological red-shift, BB supporters insist that the photon is something indivisible propagating in empty space, conserving its energy ħω and momentum ħk until it hits something. Changing the direction of the wave vector k would inevitably blur the distant objects, and this is not observed. But what cosmologists say about the nature of photons is only based on a definition; it is not supported by any theory of photons.

By interpreting the cosmological red-shift as a Doppler effect cosmologists insist that the universe is expanding, but this is not the case. The observed red-shifts of astronomical objects are caused by the attenuation of the oscillation of the photons themselves in free space. Today we know from studies of “parametric conversions” that photons are composite. In upconversion two photons with energies ħω₁ and ħω₂ create one photon of energy ħω₀ so that ħω₀ = ħω₁ + ħω₂. In downconversion one photon of energy ħω₀ yields two photons of energies ħω₁ and ħω₂, so that ħω₁ + ħω₂ = ħω₀. In most cases ħω₁ ≠ ħω₂.

Of course, in order to preserve coherence with the two definitions above, modern physics insists that in all interaction processes the original photon is first absorbed completely and then new indivisible photons are emitted. But to us the conversion is proof that photons are of composite structure. If photons were indivisible “atoms” of light, downconversion would not be possible. Photons are similar to all other waves of nature; they can leak energy into their surroundings without changing direction. This is the “tired light effect”. The cosmological red-shift z is defined as:

1 + z = ω_emitted / ω_observed.

Modern cosmology also knows that “If the red-shift were due to a tired light effect, the width of a supernova light curve would be independent of the red-shift.” But this is not correct either. The conclusion is apparently based on the particle model of light, where the observed intensity of a beam of given cross section can be expressed by the number of photons per unit of length. The light we receive from astronomical objects is partially coherent, meaning that it is a mixture of coherent radiation (waves) and incoherent radiation. It is the wave part that is utilized in telescopes and grating spectrometers. These electromagnetic waves are solutions of the wave equation, and they are subject to the dispersion relation ω/k = c. If we now put z = 1 into the red-shift formula, we can calculate that the internal frequency of the initially emitted photons has halved on the way from the galaxy to us. It follows from the dispersion relation that the wave number k must also be halved. This means stretching the length of the received electromagnetic pulse by a factor of 2. A supernova that takes 20 days to decay will appear to take 40 days to decay when observed at red-shift z = 1.
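The arithmetic of this prediction in a few lines:

```python
# Tired-light stretching: 1 + z = omega_emitted / omega_observed together
# with omega/k = c stretches the received pulse, and hence the light-curve
# timescale, by the factor 1 + z.
z = 1.0
decay_days_at_source = 20.0
print(decay_days_at_source * (1 + z))   # 40.0 days observed at z = 1
```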

Discovery That Quasars don't Show Time Dilation Mystifies Astronomers

Astronomer Mike Hawkins from the Royal Observatory in Edinburgh came to this conclusion after looking at nearly 900 quasars over periods of up to 28 years. When comparing the light patterns of quasars located about 6 billion light years from us and those located 10 billion light years away, he was surprised to find that the light signatures of the two samples were exactly the same. If these quasars were like the previously observed supernovae, an observer would expect to see longer, “stretched” timescales for the distant, “stretched” high-redshift quasars. But even though the distant quasars were more strongly red-shifted than the closer quasars, there was no difference in the time it took the light to reach Earth. ( PhysOrg.com, 9 April 2010.)

Understanding the conundrum above only requires understanding the photon. The photon as an electromagnetic soliton will be discussed later in this paper. The photon is composite; it is made up of a dynamic electromagnetic field. It loses energy in exactly the same way as all the waves in nature, by slowly leaking it into the environment. It doesn’t have to collide with anything to lose energy. Cosmologists of today feel sure that as space expands, the interval between light pulses also lengthens. Since expansion occurs throughout the universe, it seems that time dilation should be a property of the universe that holds true everywhere. However, astronomer Mike Hawkins’ new study has found that this is not the case. Quasars give off light pulses at the same rate no matter their distance from the Earth, without a hint of time dilation. From the abstract of the article “Time Dilation And Quasar Variability” by M. R. S. Hawkins: “We find that the timescale of quasar variation does not increase with red-shift as required by time dilation. Possible explanations of this result all conflict with widely held consensus in the scientific community.” The pulsation of quasars is of course the same whether it is measured from red-shifted or non-red-shifted light. The findings of astronomer M. Hawkins completely refute the idea of time dilation. In this article it will be shown that time dilation can be refuted by other means as well.

The Strongest Evidence for Big Bang Cosmology?

According to the Big Bang theory, the universe began about twelve to fifteen billion years ago in a violent explosion, and we can see the “afterglow” of it in the cosmic microwave background. The CMB is an almost uniform background of radio waves that fills the universe. The current wisdom is that “when the CMB was initially emitted it was not in the form of microwaves at all, but mostly visible and ultraviolet light. Over the past few billion years, the expansion of the universe has red-shifted this radiation toward longer and longer wavelengths, until today it appears in the microwave band.”

History of the 2.7 K Temperature Prior to Penzias and Wilson

In most textbooks nowadays we see the statement that Gamow and collaborators predicted the 2.7 K temperature prior to Penzias and Wilson, while the steady-state theory of Hoyle, Narlikar and Gold did not predict this temperature. Therefore, the correct prediction of the 2.7 K is hailed as one of the strongest arguments in favor of the Big Bang. However, these two models have one very important aspect in common: both accept the interpretation of the cosmological red-shift as being due to a Doppler effect, which means that both models accept the expansion of the Universe. But there is a third model of the Universe which has been developed in this century by several scientists including Nernst, Finlay-Freundlich, Max Born and Louis de Broglie (1966). It is based on a Universe in dynamical equilibrium without expansion and without continuous creation of matter... Although it is not considered by almost any textbook dealing with cosmology nowadays, this third model proves to be the most important one of all of them. ... Gamow and collaborators obtained from T ≈ 5 K to T = 50 K ... These are quite poor predictions compared with Guillaume, Eddington, Regener and Nernst, McKellar and Herzberg, Finlay-Freundlich and Max Born, who arrived at, respectively: 5 K < T < 6 K, T = 3.1 K, T = 2.8 K, T = 2.3 K, 1.9 K < T < 6.0 K! All of these authors obtained these values from measurements and/or theoretical calculations, but none of them utilized the Big Bang. This means that the discovery of Penzias and Wilson cannot be considered decisive evidence in favor of the Big Bang. Quite the contrary, as the models of a Universe in dynamical equilibrium predicted its value before Gamow and with better accuracy. Our conclusion is that the discovery of the CBR by Penzias and Wilson is a decisive factor in favor of a Universe in dynamical equilibrium, and against models of an expanding Universe, such as the Big Bang and the steady-state. [An excerpt from an article by A. K. T. Assis and M. C. D. Neves.]

Prior to t = 0 there existed no Universe, no observers, no physical laws. Everything suddenly appeared at t = 0. At that moment there was a sudden and fantastic violation of the law of conservation of matter and energy. The Big Bang, the first 10⁻⁴⁰ second, inflation, the expanding universe, dark matter, dark energy, black holes, wormholes, ... Modern cosmology is an astounding tale that cosmologists have put together on a non-existent foundation, namely the non-existent knowledge of the nature of the photon. The Big Bang cosmology depends on the redshift-distance relationship. The present interpretation of the red-shift suggests that photons are immutable. Perhaps it is worth checking what else is known about photons?

What is a Photon, Where is it, and How Did it Come Into Existence?

Today the answers, in essence, are the following: “A photon is what a photodetector detects” and “A photon is where the photodetector detects it.” The third question is answered in the following way: according to modern quantum field theory, photons are excitations of the electromagnetic field. Excitation, in turn, is defined as an addition of a discrete amount of energy to a system that changes it, usually from the state of lowest energy (ground state) to one of higher energy (excited state).

In the relaxation process (the inverse of excitation) a photon is somehow created. One might think that now the photon just leaves the system and propagates in a definite direction, but the present quantum theory says otherwise: A spherical wave of probability expands in all directions with speed c. If, after a time t = r/c, the expanding probability wave encounters a photodetector, the photon's “wave-function collapses” and the photodetector detects the photon containing all the energy of the relaxation process at t = 0.

Where Was the Photon During Flight Time?

James C. Maxwell in his time pondered that question: “When light is emitted, a certain amount of energy is expended by the luminous body, … During the interval of time after the light left the first body and before it reached the second, it must have existed as energy in the intervening space.” A layman might be attracted by Maxwell’s reasoning, but modern physics has made a “correction” to it. From the book Great Ideas in Physics by Alan P. Lightman: “Scientists can make theories to predict the result of a measurement. But the question of where and in what form a photon — or any other type of matter or energy — exists between measurements of it lies outside science. Nevertheless, physicists, and indeed great physicists, have not refrained from speculation. There are two major schools of speculation. The first is called the ‘Copenhagen interpretation of quantum physics’, ... [it] holds that prior to the measurement of an object, the object has no definite physical existence. The other major school of thought is called the ‘many-worlds interpretation of quantum physics’... In this interpretation, an object does have a physical existence prior to being measured. In fact, the object exists in all of its possible conditions and locations. Each of these different existences occurs in a separate world. Every time an observer [detects a photon], the reality, or world, of that observer branches off from the other worlds and follows a track in which that particular photon and that particular observer have specific locations and properties.”

Our study of the present knowledge of photons led us to the foundation of modern physics, which is the Schrödinger equation and its interpretations. The two schools of thought above are achievements of research in quantum phenomena. What can one say of these premises, except that they are absurd? Richard Feynman had his own understanding of what is absurd: “The theory of quantum electrodynamics describes Nature as absurd from the point of view of common sense. And it agrees fully with experiment. So I hope you accept Nature as She is — absurd.”

We accept Nature as She is, but we do not accept the fantastic theories put forth by physicists of modern times! Instead, we accept the idea of Albert Einstein:

“What appears certain to me, however, is that, in the foundations of any consistent field theory, there shall not be, in addition to the concept of field, any concept concerning particles. The whole theory must be based solely on partial differential equations and their singularity-free solutions.”

How People Come to Believe Modern Physics

Confessions of a Physics Professor

“Every time I teach an introductory modern physics course and look at the students’ final exams, a sense of puzzlement comes over me. Not because some students have taken the elegant theories of relativity and quantum mechanics and made a total hash of them …, but because so many of them seem to actually believe the theories. …

I used to ask myself why they believed what I taught them .... the ideas of relativity and quantum mechanics are so thoroughly contrary to everyday experience that I would expect students, on first hearing these notions, to reject them out of hand. … I finally concluded that most students believe me because they trust me, they feel that I have their best interests at heart and that I would not deliberately deceive them by teaching things that I myself did not believe. They also trust the institution that awarded me a physics PhD, and the university and the physics department that hired me and allow me to teach them. And I use that trust to effectively brainwash them.

We who teach introductory physics have to acknowledge, if we are honest with ourselves, that our teaching methods are primarily those of propaganda. We appeal—without demonstration—to evidence that supports our position. We only introduce arguments or evidence that support the currently accepted theories, and omit or gloss over any evidence to the contrary. We give short shrift to alternative theories, introducing them only in order to promptly demolish them—again by appealing to undemonstrated counter-evidence. We drop the names of famous scientists and Nobel prizewinners to show that we are solidly on the side of the scientific establishment. All of this is designed to demonstrate the inevitability of the ideas we currently hold, so that if students reject what we say, they are declaring themselves to be unreasoning and illogical, unworthy of being considered as modern, thinking people.

Of course, we do all this with the best of intentions and complete sincerity. I have good reasons for employing propaganda techniques to achieve belief. I want my students to be accepted as modern people and to know what that entails. The courses are too rushed to allow a thorough airing of all views, of all evidence. In addition, it is impossible for students to personally carry out the necessary experiments, even if they were able to construct the long chains of inferential reasoning required to interpret the experimental results. So I, like all my colleagues, teach the way I do because I have little choice. But it is brainwashing nonetheless. When the dust settles, what I am asking my students to do is to accept what I say because I, as an accredited representative of my discipline, profession, and academia, say it. All the reason, logic, and evidence that I use simply disguise the fact that the students are not yet in a position to sift and weigh the evidence and arrive at their own conclusions.”

Mano Singham, Physics Today – On the Web – June 2000. http://www.physicstoday.org/june00/opin600.hhtm

Paradigm Paralysis

$$H\Psi = E\Psi.$$

The Schrödinger equation is the basis for the modern physical understanding of the atom. It is all there: closed system, Hamiltonian, point particles. It cannot be solved for any system containing more than one electron, and the meaning of the letter Ψ is a matter of philosophical interpretation. In essence, it tells us that total energy = kinetic energy + potential energy. For a closed system this is a law of conservation that guarantees perpetual motion under the influence of an attractive force, which for Newton was an “absurdity so great that no man, who has in philosophical matters a faculty of thinking, can ever fall into it.” Furthermore, as Schrödinger himself points out: “The wave function itself cannot be given a direct interpretation in three-dimensional space, as in the one-electron problem, because it is a function in configuration space, not in real space.” It is somewhat hard to reconcile the facts above with the following: in the opening paragraph of his 1929 paper “Quantum Mechanics of Many-Electron Systems”, P. A. M. Dirac announced that:

“The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation.”

But later Dirac had second thoughts. After witnessing the advent of QED (of which R. Feynman himself said that “... quantum field theory seems to be just a special consequence of the Schrödinger equation, and not an extra theory at all.”), Dirac wrote: “We have a theory in which infinite factors appear when we try to solve the equations. These infinite factors are swept into a renormalization procedure. The result is a theory which is not based on strict mathematics, but is rather a set of working rules. Many people are happy with this situation because it has a limited amount of success. But this is not good enough. Physics must be based on strict mathematics. One can conclude that the fundamental ideas of the existing theory are wrong. A new mathematical basis is needed.” The Future of Atomic Physics (1984). From Dirac's article The Development of Quantum Mechanics: “I think that it is quite likely that at some future time we may get an improved quantum mechanics in which there will be a return to determinism and which will, therefore, justify the Einstein point of view.”

Still, the time-independent Schrödinger equation is the foundation of computational chemistry. How can it be? If one delves into this question, one learns that the computer programs used by chemists search for minimum energy configurations composed of electrons and nuclei. This is a reasonable strategy because atoms and molecules are realizations of the principle of local energy minimum.
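To make that strategy concrete, here is a minimal sketch of a minimum-energy configuration search, in the spirit of what molecular-modelling programs do. The Lennard-Jones pair potential and the four-atom cluster are hypothetical stand-ins; real codes minimize far more elaborate energy functionals.

```python
# A minimal sketch of the "search for a minimum-energy configuration"
# strategy. The Lennard-Jones pair potential is a hypothetical stand-in
# for a real quantum-chemical energy functional.
import numpy as np
from scipy.optimize import minimize

def total_energy(flat_coords):
    """Sum of Lennard-Jones pair energies (epsilon = sigma = 1)."""
    xyz = flat_coords.reshape(-1, 3)
    e = 0.0
    for i in range(len(xyz)):
        for j in range(i + 1, len(xyz)):
            r = np.linalg.norm(xyz[i] - xyz[j])
            e += 4.0 * (r**-12 - r**-6)
    return e

rng = np.random.default_rng(0)
start = np.array([[0, 0, 0], [1.1, 0, 0], [0, 1.1, 0], [0, 0, 1.1]], float)
start += 0.05 * rng.normal(size=start.shape)     # perturbed starting guess

result = minimize(total_energy, start.ravel(), method="BFGS")
print("minimum energy found:", result.fun)       # global minimum (tetrahedron): -6
```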

The True Wave Function?

Researchers in the field of computational chemistry have taken literally Dirac's early announcement, but are not interested at all in his mature concerns. Today we have the seemingly self-evident concept of the “true wave function”. We also have a whole industry producing software for finding approximate solutions. These programs are based on procedures such as the Hartree-Fock method or spin-density functional theory, which have practically nothing to do with the original Schrödinger equation. While the original Schrödinger equation defines a linear theory, the HF theory that “approximates” it is intrinsically a nonlinear, one-particle theory. In our opinion, in computational chemistry the “exact solutions of the Schrödinger equation” are not approximated but replaced, and the difference should be admitted to exist. The task of finding the minimum energy configuration for a molecule can be formulated and carried out without any reference to Schrödinger wave functions. Instead, there is something that is indispensable to molecular modelling: the Pauli Exclusion Principle. Its extreme importance was stressed by J. H. Van Vleck and A. Sherman in 1935: “The Pauli exclusion principle is the cornerstone of the entire science of chemistry. The exclusion principle is one of the most important discoveries in physics. It is of a completely general and fundamental nature. The Pauli principle cannot be derived from, nor is it predicted by, the present quantum mechanics. The particles that make up the atoms of all elements are subject to this principle, and the different properties (chemical, electrical, optical, etc.) associated with different elements are all a result of this principle.”

The problem is that if any progress is made in computational chemistry, all credit goes to Schrödinger's equation, thereby strengthening the aura of excellence around the ideas it stands for.

In our view, the equation in question is the icon of acceptable scientific method. The developer of an algorithm must at least pretend that his program produces approximate solutions to the Schrödinger Equation. Physics is in a state of paralysis on the concepts around Schrödinger's equation!

Schrödinger's Picture of the Atom and Pauli's Dissatisfaction

The modern picture of the atom is the heavy, positively charged atomic nucleus at the center of the atom and the lighter mobile electrons revolving around the nucleus at great speed (but not along any well-defined path). In our opinion, this picture is wrong. Wolfgang Pauli published a seminal paper in the Zeitschrift für Physik in January 1925. It was titled ‘On the Connexion between the Completion of Electron Groups in an Atom with the Complex Structure of Spectra’. In it, Pauli posited that no two electrons in an atom could have an identical set of quantum numbers. (Neither Pauli nor anyone else has ever been able to answer why, however.)

“Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions. I had always the feeling, and I still have it today, that this is a deficiency. ... The impression that the shadow of some incompleteness [falls] here on the bright light of success of the new quantum mechanics seems to me unavoidable.” “It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. (...) This probably means that we do not have a complete understanding of the fundamental principle involved. For the moment, you will just have to take it as one of the rules of the world.” (Richard Feynman)

We will return to this subject later in this article, in the section titled Atoms and the Nature of Chemical Bonds. There we show where the present atom model is wrong and give the logical reason for the exclusion principle. Our ideas will be in line with the arguments made by the authors listed in the excerpt below, which is a brief historical account of how the debate on molecular structure has developed. From Jan C. A. Boeyens's article A Molecular-Structure Hypothesis:

“Since the late 1970s theoretical chemists, who worked hard on the development of quantum chemical models for chemical purposes, also began to question the naive reductionist, albeit common, view among western philosophers of science, according to which chemical concepts and laws could simply be derived from quantum mechanical principles. Guy Woolley argued that the concept of chemical structure cannot be deduced from quantum mechanics. Hans Primas devoted a whole book to the issue of reductionism, arguing that quantum mechanical holism does not allow the derivation of statements about chemical objects without further assumptions. Giuseppe Del Re and Christoph Liegener considered chemical phenomena to lie on a higher level of complexity that emerges from but does not reduce to the quantum mechanical level.” J. Schummer, The Philosophy of Chemistry. “It must be concluded that a periodic system based on the energy spectrum of the hydrogen electron, according to Schrödinger’s solution, cannot be reconciled with the observed periodic table of the elements. It fails because it ignores all interaction with the environment and no amount of patchwork is ever going to rescue this situation. It is a myth that chemistry derives from quantum theory.” Jan C. A. Boeyens.

Physics has Not Explained Anything in One Hundred Years

Not all of Einstein's ideas were valuable. The two most recent short periods during which theoretical physics went profoundly wrong are the following. The first period started in 1905, when special relativity was introduced and adopted. It was Einstein himself who started eliminating the concepts most fundamental to scientific understanding. He devised a theory in which inertial frames of reference were related by the Lorentz transformation. In particular, Einstein showed that the laws of electrodynamics remained unchanged under the Lorentz transformation. That is, they were Lorentz covariant.

Later, in developing the general theory of relativity, he adopted Minkowski's four-dimensional spacetime continuum, which brings space and time together in a unified spacetime structure. Mathematically, this means that only spacetime vectors may appear in physical equations. Physical events occur without a separation of space and time. Einstein wrote about this in the introduction to his 1921 Princeton lectures The Meaning of Relativity: “The only justification for our concepts and system of concepts is that they serve to represent the complex of our experiences; beyond this they have no legitimacy. I am convinced that the philosophers have had a harmful effect upon the progress of scientific thinking in removing certain fundamental concepts from the domain of empiricism, where they are under our control, to the intangible heights of the a priori. …. This is particularly true of our concepts of time and space, which physicists have been obliged by the facts to bring down from the Olympus of the a priori in order to adjust them and put them in a serviceable condition.”

In our view, geometrization of time was by far the greatest blunder committed in the history of theoretical physics! Einstein has also written the following: “It has often been said, and certainly not without justification, that the man of science is a poor philosopher.”

Einstein took the result of the Michelson-Morley experiment as a given fact, not to be explained in terms of absolute space. He denied the existence of absolute motion, whereas Poincaré only denied the possibility of detecting it. For Einstein, there can be no difference at all between the forms of the laws of nature in different inertial frames, whereas Poincaré can accept that the laws of nature take one form relative to a privileged frame and a more complicated form relative to all other frames. In this article, when speaking of motions in space, we follow the scheme proposed by H. Poincaré.

The second misstep was taken during the years 1926 and 1927, culminating in the 1927 Solvay Conference. The Knabenphysiker and their mentors Niels Bohr and Max Born did the rest. The principle of cause and effect was thrown overboard. In Heisenberg’s own words: “The law of causality is no longer applied in quantum theory.” After this ruination of physics it became impossible to produce scientific explanations.

Modern physics is in disagreement with the Kantian position that space and time, as well as cause and effect, must be taken as a priori categories for the comprehension of all knowledge. Of course, it is not immediately clear what an explanation is and what understanding is. These questions have been the subject of age-old philosophical debate. If one delves into them, one quickly learns that causality is a key concept. But there is more.

Scientific Explanation

Paul Dirac once wrote: “The difficulties in quantum theory are of two kinds. ... Class One difficulties and Class Two difficulties. … How can one form a consistent picture behind the rules for the present quantum theory? These Class One difficulties do not really worry the physicist. If the physicist knows how to calculate results and compare them with experiment, he is quite happy if the results agree with his experiments, and that is all he needs. It is only the philosopher, wanting to have a satisfying description of nature, who is bothered by Class One difficulties.”

Physics now states that particles are excitations of quantum fields. Particles and forces are two manifestations of more complicated objects; quantum fields described by operators. The operator Aμ(x) creates or annihilates a quantum of the electromagnetic field at a point x, the operator ψ(x) annihilates an electron or creates a positron, and the conjugate operator annihilates a positron and creates an electron... How can this be explained and understood? What is the usefulness of those words for a natural philosopher who wishes to understand natural phenomena?

Most people and some scientists intuitively believe that one of the goals of science is to explain the phenomena in nature. Some people even believe that explanation is the main goal of science. Nature is so complex that quantitative models are not always possible while an explanation still is; in this case the explanation states the reasons why a numerical model is unachievable. It is fascinating to ponder what an explanation is and what understanding is, but here we adopt the following short definitions. Wesley Salmon has provided us a philosophically sound basis for scientific explanation. Salmon claimed that “a scientific explanation is the state of affairs of something fitting into or being a part of a pattern in the world, where the pattern is constituted by at least one causal process. A process is the real physical connection between cause and effect.” “The photon and the electron are causal processes. When the photon and the electron collide, both are modified. The frequency, energy, and momentum of the photon are changed, as are the energy and momentum of the electron. (In this interaction energy and momentum are conserved.) Moreover, these changes persist beyond the collision. This persistence is essential for a causal interaction; Compton scattering is an excellent example.” Causality and Explanation by Wesley C. Salmon.

“To understand the phenomena in the world requires that they be fitted into the general world-picture. … To have scientific understanding, we must adopt the worldview that is best supported by all of our scientific knowledge. The fundamental theories that make up this worldview must have stood up to scientific test; they must be supported by objective evidence.” Ibid.

Causation and Unification

“Explaining in physics involves two quite different kinds of activities. First, when we explain a phenomenon, we state its causes. We try to provide detailed accounts of exactly how the phenomenon is produced. Second, we fit the phenomenon into a broad theoretical framework which brings together, under one set of fundamental equations, a wide array of different kinds of phenomena.” Nancy Cartwright. (This statement serves as the motto of our article.)

There is only one geometry for an understandable theory. One of Kant's a priori requirements for scientific understanding was Euclidean geometry, the space of human experience. In our opinion, Kant was correct in arguing that we have an innate (and therefore a priori) intuition of Euclidean space.

The brain is a pattern recognition device that interprets electrical signal patterns from all sensory organs. Pattern recognition is an innate ability of animals. Especially easy for the visual part of the brain is to recognize similarity in shapes and positions at all scales. Similarity, the preservation of shape across variations of size, is possible in Euclidean space only. The innate ability of pattern recognition and the innate sense of Euclidean geometry come together; they are the same thing. Euclidean space is the one in which we are at home, so to speak. Of course space could be very slightly non-Euclidean, but we agree with the following.

Henri Poincaré claimed that every allegedly observed deviation from Euclidean geometry must be only apparent. If it were observed that rays of light are not propagated in the void along straight lines, it would always be simpler to assume that space itself was not completely empty and that the curvature of rays was due to the presence of some subtle refracting medium, not due to the curvature of space.

In classical physics the theoretical entities, particles and fields, may be regarded as actually existing in the world – a view called “realism”. Hence, classical physical explanations may be given in terms of the continuous propagation through space of causes and effects. Such an explanation scheme is called “local causality”. In modern quantum physics the situation is different, as it requires neither realism nor locality.

This paper is confined to explanations in terms of classical electromagnetism, in the spirit of Lord Kelvin:

“I never satisfy myself until I can make a mechanical model of a thing. If I can make a mechanical model I can understand it.”,

and Maxwell: “...I wish to be understood literally: ... The energy in electro-magnetic phenomena is mechanical energy.”

The One Thing Connecting Gravitation and Quantum Physics

Theoretical physics has come up with a diverse collection of models. These are the Standard Model of Particle Physics (SMPP), quantum mechanics, general relativity, and the Standard Model of Big Bang Cosmology (SMBBC). These theories can be interpreted to agree with experimental results but they do not provide a plausible and coherent view of natural phenomena.

Today the central problem of physics is that there is no common theory which could include gravity and a quantum theory. General relativity describes macroscopic phenomena, quantum theory deals with microscopic phenomena. In their present form general relativity and quantum mechanics are completely incompatible. But they have one thing in common:

Defining local energy density is impossible in general relativity. For instance, the energy density of the gravitational field of a planet at a particular point could be determined by a co-moving observer measuring the kinetic energy of a freely falling object. Since both the object and the observer fall at equal rates, the observer would not assign any kinetic energy to the object. Other observers, like an observer who is at rest with respect to the planet, would measure different values. It is obvious that energy depends on the choice of an observer. (This also violates the philosophy of general relativity, whose tensorial equations are independent of the reference system used.) No physical (single-valued, unambiguous) energy-momentum tensor for the gravitational field exists.

In QED the electrodynamic vacuum possesses “zero-point” energy. Because it exists in a vacuum, ZPE is seen as homogeneous (uniform) and isotropic (identical in all directions) as well as ubiquitous (existing everywhere). The one thing general relativity and QED have in common is that neither includes the concept of an energy density gradient in free space. This is one of the crucial reasons why these theories fail as descriptions of reality.

As quoted earlier, Einstein said that “We may therefore regard matter as being constituted by the regions in space in which the field is extremely intense.” We agree perfectly with Einstein. But we also notice that neither of the present theories (GTR, QED) can incorporate Einstein's constituent of matter, since if it is a concentration of energy of a field in space, then surely there must be a gradient of energy density somewhere in space.

Furthermore, a force is the gradient of energy density. This gives us an opportunity to understand what is behind the concept of “charge”. It will be discussed in a later section titled An Enhanced Concept of Electric Charge.

All in all, it is a useless effort to try to combine the theories of gravity and quanta in a way that the present peculiarities of these theories would then be included in the resulting theory. Neither of them can produce scientific explanations, and the combination would be even worse. There is one choice left, namely, to abandon them both.

A New View

Our Basic Philosophy

The philosophy behind our theory is process philosophy, according to which everything in manifest reality is dynamic. Examples are the elementary particles, bosons and fermions. Some of them have properties such as charge and mass, some do not. In our view, elementary particles and their properties are all processes. Ongoing processes are the essence of reality and matter, and all these processes extend to the whole universe. There are no closed systems.

In quantum physics, the constituents of matter and light can behave both as discrete particles and as waves. This wave-particle dualism seems a strange concept, but in our approach it comes out naturally.

Classically, particles are discrete points with well-defined positions, momenta and energy. Particles are discrete from the medium or space they reside in. In a continuum we deal with densities of energy and momentum. In this article we try to demonstrate that it is possible to construct a theory of particles and space which is something between discrete and continuous, namely fractal. In this theory discreteness is measured by the magnitude of the gradient of energy density. We create a new vision based on the relations of Planck and Einstein: $E = \hbar\omega$, $\mathbf{p} = \hbar\mathbf{k}$. We are not going to build anything on any other principal achievement of physics and cosmology of the 20th century.
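For orientation, the two relations are easy to evaluate numerically. The sketch below computes the energy and momentum of a single photon of green light; the wavelength of 532 nm is an arbitrary choice for illustration.

```python
# Numerical illustration of E = hbar*omega and p = hbar*k for one photon
# of green light (lambda = 532 nm, an arbitrary example wavelength).
import math

hbar = 1.054571817e-34           # J*s
c = 2.99792458e8                 # m/s
lam = 532e-9                     # m

omega = 2.0 * math.pi * c / lam  # angular frequency
k = 2.0 * math.pi / lam          # wave number

E = hbar * omega                 # ~3.73e-19 J, i.e. ~2.33 eV
p = hbar * k                     # ~1.25e-27 kg*m/s
print(f"E = {E:.3e} J ({E / 1.602176634e-19:.2f} eV), p = {p:.3e} kg m/s")
```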

Guidelines From Dirac and Einstein

The principal achievements of physics of the 20th century include a theory known as quantum electrodynamics (QED). It was built to describe the interaction between point-like electrons and radiation. But (besides its inability to explain anything) this theory has severe drawbacks. Our theory can also be called “Quantum Electrodynamics”, but it is free of the problems Paul Dirac discusses below: “Most physicists are very satisfied with this situation. They argue that if one has rules for doing calculations and the results agree with observation, that is all that one requires. But it is not all that one requires. One requires a single comprehensive theory applying to all physical phenomena. ... Furthermore, the theory has to be based on sound mathematics, in which one neglects only quantities which are small. One is not allowed to neglect infinitely large quantities. The renormalization idea would be sensible only if it is applied with finite renormalization factors, not infinite ones. For these reasons I find the present quantum electrodynamics quite unsatisfactory.” [Mathematical Foundations of Quantum Theory. (Academic Press, Inc., 1978), p. 1]

“The successes which we get with quantum field theories are rather limited. One is continually running into difficulties and one would like to broaden one's basis and have some possibility of bringing more general fields into account. For example, one would like to take into account the possibility that Maxwell's equations are not accurately valid. When one goes to distances very close to the charges that are producing the fields, one may have to modify Maxwell's field theory so as to make it into a nonlinear electrodynamics.” Lectures on Quantum Mechanics. Paul A. M. Dirac.

Dirac also wrote that “It seems clear that the present quantum mechanics is not in its final form. Some further changes will be needed, just about as drastic as the changes made in passing from Bohr's orbit theory to quantum mechanics. Some day a new quantum mechanics, a relativistic one, will be discovered, in which we will not have these infinities occurring at all. It might very well be that the new quantum mechanics will have determinism in the way that Einstein wanted.”

Einstein, in speaking of a Unified Field Theory once declared: “We may therefore regard matter as being constituted by the regions in space in which the field is extremely intense.”

These are observations and proposals from eminent scientists, and we see no reason not to follow them.

Key Considerations

Below is a list of some fundamental ideas that the individual reader of this paper might consider possible. It will be shown that the world view built on them is a coherent theory that gives explanations to many natural phenomena, extending from elementary particles to quanta, unification of forces, cell biology and cosmology. It also explains why we can reject entirely the theory of relativity and the present form of quantum theory.

- Space or vacuum is infinite in all possible ways. It has always existed and extends to infinity. It is not expanding or contracting.
- There is a ubiquitous flow of background radiation from all directions, consisting of bosons at smaller and smaller scales, ad infinitum.
- Space is also absolute in the sense that there is one reference frame at “absolute rest”, preferred over all others: the CMB rest frame.
- The speed of light depends on local energy density. In higher densities the speed decreases. The variable energy density includes or replaces the “zero-point energy” of apparently empty space.
- Particles (bosons and fermions) are open systems. They exchange energy with the rest of the universe, and the very existence of particles presupposes a flow of energy from all directions. Particles are electromagnetic vortices in that flow.
- Force fields have sources only in the sense that fermions scatter the background radiation and thereby become apparent sources of force fields. Bosons are carriers of linear and angular momentum. The electromechanical properties of the field are expressed by the frequency-dependent electromagnetic energy-momentum tensor.
- The present idea of point particles as sources of force fields by the “exchange of virtual bosons” is simply wrong; there are no such things as virtual bosons or point particles.
- The process of bosons scattering off fermions is subject to a conservation law: energy and momentum are conserved in absolute space, as expressed by the Compton-Debye law.
- Formation of atoms, molecules and their aggregates is governed by a fundamental principle: Gauss's principle of least constraint.
- It goes without saying that a force originating from scattering cannot be attractive. In nature there exist no attractive forces. There are no means for any object to reach out across a distance to impose a “pull” on any other object.
- Time is “a free creation of the human mind”. The world is full of cyclical events, and we have memory (and we can count). “I have seen this happen before... it was ten sunrises ago!” If there were no cyclical events, or if we had no memory, there would not be the concept of time.

And likewise time cannot itself exist, But from the flight of things we get a sense of time… No man, we must confess, feels time itself, But only knows of time from flight or rest of things.

Lucretius

Today the standard of time is set by atomic clocks. According to Einstein's idealism, the atoms of these clocks are isolated systems and therefore ideal and absolute oscillators for measuring time. But as was stated above, electrons feel their surroundings and adjust their vibrations accordingly. If we see two identical atomic clocks behave differently when they are moved around, we can conclude that the electrons of the cesium atoms inside the two clocks feel different energy densities at their positions.

Applying the list above requires a revision of some well-established achievements, truths and theories in the field of modern physics.

Electromagnetic View of the World

In 1900 Wilhelm Wien published a paper On the Possibility of an Electromagnetic Foundation of Mechanics, in which he postulated that all mass was of electromagnetic origin. That paper started the electromagnetic program. Its aim was to replace the laws of Newtonian mechanics by the laws of Maxwell’s electrodynamics, which were to be recognized as fundamental laws of physics.

The three most notable men of the program in its last stages were Max Abraham, Gustav Mie and Albert Einstein.

The electromagnetic program also included models of the electron. But classical solutions of Maxwell’s equations cannot be interpreted as particles, because they are solutions of linear differential equations. They have the property of superposition and, therefore, are unable to model any kind of interaction.

Both Einstein and Mie understood this in 1908-1910, and went beyond the framework of the electromagnetic program in its original form. Mie tried to generalize Maxwell's field equations, which, as he believed, became invalid at very high electromagnetic field strengths.

The Fundamental Concepts

We need two kinds of concepts: the theory of radiative transfer for non-coherent radiation, and Maxwell's equations for coherent phenomena. All forces are caused by radiation pressure, the cosmic pressure of the background radiation, and this is best described with the aid of the specific intensity $I_\nu$. This, in turn, is the solution of the following equation:

Equation of radiative energy transfer:

$$\frac{dI_\nu(\mathbf{s})}{ds} = -\alpha_\nu\, I_\nu(\mathbf{s}) + \int_{4\pi} \beta_\nu(\mathbf{s},\mathbf{s}')\, I_\nu(\mathbf{s}')\, d\Omega'.$$

This equation is of a fundamental nature; it cannot be derived from any other physical principle. The left-hand side represents the rate of change of the specific intensity in the direction s. The functions $\alpha_\nu$ and $\beta_\nu$ are known as the extinction coefficient and the differential scattering coefficient, respectively.

The $\alpha_\nu$-term represents the rate of decrease in energy due to absorption along the s-direction. The integral term represents the rate of increase in energy along the s-direction due to scattering from all s'-directions.
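As a toy illustration of how the equation behaves along a single ray, the sketch below replaces the scattering integral by a constant in-scattering source S and integrates numerically; the values α = 2 and S = 1 are arbitrary, and a real transfer problem couples all directions.

```python
# Toy integration of the transfer equation along one ray, with the
# scattering integral replaced by a constant in-scattering source S.
# alpha and S are arbitrary illustration values.
import numpy as np

alpha, S, I0 = 2.0, 1.0, 0.0       # extinction, in-scattering, initial intensity
ds, n = 0.001, 5000                # step size and number of steps

I = I0
for _ in range(n):
    I += ds * (-alpha * I + S)     # dI/ds = -alpha*I + S (forward Euler)

s = n * ds
exact = S / alpha + (I0 - S / alpha) * np.exp(-alpha * s)
print(f"numerical I = {I:.6f}, exact I = {exact:.6f}")   # both -> S/alpha = 0.5
```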

$$\nabla\times\mathbf{E} = -i\omega\mu\mathbf{H}, \qquad \nabla\cdot\mathbf{E} = \rho,$$

$$\nabla\times\mathbf{H} = i\omega\varepsilon\mathbf{E} + \mathbf{J}, \qquad \nabla\cdot\mathbf{H} = 0.$$

These are the four Maxwell equations. In our opinion, the entities here belong to two different categories. The electric charge density ρ and the current density vector J belong to the “engineering category”, while the curl equations belong to the “fundamental category”. In other words, ρ and J are only practical means of expressing the result of integrating a very large number of fundamental processes, and they must not be used in modelling those processes. Maxwell's curl equations:

$$\nabla\times\mathbf{E} = -i\omega\mu\mathbf{H}, \qquad \nabla\times\mathbf{H} = i\omega\varepsilon\mathbf{E}.$$

These are the fundamental field equations. Like the equation of radiative energy transfer, they cannot be derived from anything. They are valid irrespective of scale. From the specific intensity the following can be derived:

$$'H_\nu(\mathbf{r}) = \frac{1}{c}\int_{4\pi} I_\nu(\mathbf{r},\mathbf{s})\, d\Omega, \qquad '\mathbf{F}_\nu(\mathbf{r}) = \int_{4\pi} I_\nu(\mathbf{r},\mathbf{s})\,\mathbf{s}\, d\Omega.$$

These definitions are given by S. Chandrasekhar, Radiative Transfer (1960), and M. Planck, The Theory of Heat Radiation (1959).

Here $'H_\nu(\mathbf{r})$ = space density of radiation, $I_\nu$ = specific intensity, $'\mathbf{F}_\nu(\mathbf{r})$ = energy flux density.

The space density of radiation $'H_\nu(\mathbf{r})$ may be identified with energy density, and $'\mathbf{F}_\nu(\mathbf{r})$ is the energy flux density vector at the point r, at frequency ν. (The equations are taken from Optical Coherence and Quantum Optics by Leonard Mandel and Emil Wolf.)

In the course of this article we will explain how these can be directly connected to a particle's energy and motion, by integrating over all relevant frequencies and inserting the factor 1/c into the latter integral to turn it into an expression of integrated radiation pressure.

$$\text{energy} = \frac{1}{c}\int_\nu\!\int_{4\pi} I_\nu(\mathbf{r},\mathbf{s})\, d\Omega\, d\nu, \qquad \text{force} = \frac{1}{c}\int_\nu\!\int_{4\pi} I_\nu(\mathbf{r},\mathbf{s})\,\mathbf{s}\, d\Omega\, d\nu.$$

They are formally exactly the same, the only difference being that the energy integral is scalar-valued. The feature in question is not at all peculiar to the theory of radiative energy transfer. J. C. Maxwell wrote in 1873 in A Treatise on Electricity and Magnetism: “In speaking of the energy of the field ... I wish to be understood literally: ... The energy in electro-magnetic phenomena is mechanical energy.” On radiation pressure he wrote: “Hence in a medium in which waves are propagated there is a pressure in the direction normal to the waves, and numerically equal to the energy in unit of volume.”

The energy integral expresses a property of space, and if there is a fermion surrounding the point r, its energy is determined by the integrated specific intensity. From the force integral one can see that it is the distribution of radiation which determines whether the particle is accelerated or not. Their dimensions are N·m/m³ and N/m², respectively.
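A quick Monte Carlo sketch makes the difference between the two integrals tangible: for an isotropic intensity the vector integral cancels, while a slight directional bias leaves a net force. The dipole-shaped anisotropy 1 + 0.3 cos θ is a made-up example.

```python
# Monte Carlo sketch of the scalar (energy) and vector (force) integrals
# over the sphere of directions. The anisotropy 1 + 0.3*cos(theta) is a
# made-up example; for an isotropic intensity the net force cancels.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
v = rng.normal(size=(N, 3))
s = v / np.linalg.norm(v, axis=1, keepdims=True)         # uniform directions

for label, I in [("isotropic", np.ones(N)),
                 ("anisotropic", 1.0 + 0.3 * s[:, 2])]:
    energy = 4.0 * np.pi * I.mean()                      # scalar integral
    force = 4.0 * np.pi * (I[:, None] * s).mean(axis=0)  # vector integral
    print(f"{label:11s} energy = {energy:7.3f}  force = {force.round(3)}")
# isotropic: force ~ (0, 0, 0); anisotropic: force_z -> 4*pi*0.3/3 ~ 1.257
```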

A particle receives radiation from every element dΩ of solid angle, at all frequencies. These beams of radiation cause pressure, shear and torsion on the surface of the particle. The effects can be expressed by a tensor of some sort, which we do not attempt to specify. We only state that the conditions at r are expressed by the “tensorial energy density”. This statement includes Mach's principle: local physical phenomena are determined by the large-scale structure of the universe. Pressure is caused by scattering, and the scattered field has energy density of its own. For this scattered field and its energy density we have the concepts of scalar and vector potentials, and from these the electromagnetic fields can be derived:

$$\mathbf{E} = -\nabla V, \qquad \mathbf{B} = \nabla\times\mathbf{A}.$$

The physical reality of the vector potential conflicts with the conception that one can always add an arbitrary gradient of a scalar function to the vector potential and still obtain the same magnetic field from it. In other words: there is no such thing as “gauge freedom” in this theory, where V and A are local measures of energy density. We need the non-coherent flow of energy-momentum for gravity, and coherent radiation for the electromagnetic fields. These are the foundation for the electromagnetic view of the world.

Electromagnetic Elementary Particles

During the last fifty years of his life, Albert Einstein sought relentlessly for a “unified field theory”: a theory capable of describing nature's forces within a single, coherent framework. He tried to solve this problem by completely rejecting the particle conception of matter, and instead replacing it with a pure field theory. Einstein assumed that the field equations of the theory had to contain nonsingular spherically symmetric static solutions that could be identified with particles, for example the electron. The absence of such solutions was always a main reason for abandoning a particular theory. In the article Concerning the Aether (1924) he wrote: “But there were two difficulties that could not be overcome. Firstly the Maxwell-Lorentz equations could not explain how the electric charge constituting an electrical elementary particle can exist in equilibrium in spite of the forces of electrostatic repulsion. ... In order to construct the electron, it is necessary to use non-electrodynamic forces...” Underlying these statements is the view that the electron is made of some kind of “charged dust”, which must be held together by an external, non-electromagnetic pressure. The charged particle, the electron, is constituted out of charge (!). But Einstein put yet another constraint on the proposed constitution:

$$\frac{1}{c^2}\frac{\partial^2\varphi}{\partial t^2} - \left(\frac{\partial^2\varphi}{\partial x^2} + \frac{\partial^2\varphi}{\partial y^2} + \frac{\partial^2\varphi}{\partial z^2}\right) = 0.$$

“The fundamental equation of optics must be replaced by an equation that also contains as a coefficient the universal constant e [the electron charge].” Combining those two requirements was a crucial part of Einstein's heroic but ultimately unsuccessful struggle to find the unified field theory.

Process philosophy leads us to another possibility. We can try to find a dynamical solution to the problem and state our working hypothesis as follows: The elementary particles are solutions of nonlinear Maxwell curl equations, and therefore they are nonsingular. Nonlinearity comes into play in the way that the “material parameters” μ and ε are in fact functions or representations of energy density.

What are “Truly Elementary Particles”?

“The main problem in the study of the fundamental particles today is to discover what are the correct representations for the description of nature. At the present time, we guess that for the electron it is enough to specify its momentum and spin. We also guess that there is an idealized proton which has its π-mesons, and K-mesons, and so on, that all have to be specified. Several dozen particles—that’s crazy! The question of what is a fundamental particle and what is not a fundamental particle—a subject you hear so much about these days—is the question of what is the final representation going to look like in the ultimate quantum mechanical description of the world. Will the electron’s momentum still be the right thing with which to describe nature? Or even, should the whole question be put this way at all! This question must always come up in any scientific investigation. At any rate, we see a problem—how to find a representation. We don’t know the answer. We don’t even know whether we have the “right” problem, but if we do, we must first attempt to find out whether any particular particle is “fundamental” or not.” [Feynman Lectures III, 8-3]

Maxwell's curl equations are scale-invariant. They produce self-similar solutions. This feature can be traced back to the mathematical definition of the curl operator, which is based on Stokes' circulation theorem. Because of its crucial importance, we visualize the content of the theorem. (Figure from Phil. Trans. R. Soc. A):

One can see here both microscopic and macroscopic circulation. In our case the loops represent electric and magnetic field lines. If we encounter an electromagnetic vortex in nature, it is always composed of smaller scale vortices. Bosons are the three-dimensional manifestation of the above. They form a ubiquitous entity which might be called the fractal boson space. Instead of “truly elementary particles” there is an elementary structure that is fractal. Fractal geometry is the geometry that we find in macroscopic nature, and nature has not chosen otherwise for microscopic phenomena.
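The circulation theorem itself is easy to verify numerically. The sketch below uses the hypothetical field F = (−y, x, 0), whose curl is (0, 0, 2), so the circulation around any circle in the xy-plane must equal twice its enclosed area.

```python
# Numerical check of Stokes' circulation theorem for F = (-y, x, 0):
# curl F = (0, 0, 2), so the line integral around a circle of radius R
# in the xy-plane must equal 2 * pi * R^2 (twice the enclosed area).
import numpy as np

R = 1.5
t = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
dt = t[1] - t[0]
x, y = R * np.cos(t), R * np.sin(t)
dxdt, dydt = -R * np.sin(t), R * np.cos(t)          # tangent of the loop

circulation = np.sum((-y) * dxdt + x * dydt) * dt   # integral of F . dl
print(circulation, 2.0 * np.pi * R**2)              # both ~ 14.137
```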

The 3D image on the right shows how three-dimensional time-varying electric and magnetic vortices are nested in infinite series.

“I do not believe in micro- and macro- laws, but only in (structure) laws of general rigorous validity. And I believe that these laws are logically simple, and that reliance on this logical simplicity is our best guide. Thus, it would not be necessary to start with more than relatively small number of empirical facts. ...”. A. Einstein in 1954.

Field Topology of Bosons and Fermions

The torus allows a vortex of electromagnetic energy. It is clear from the definition of the curl that the poloidal field (red arrow) is the curl of the toroidal field (blue arrow), and that the converse is also true. The field is a standing wave structure that can be seen as formed of two counterpropagating waves. From the requirement of single-valuedness of the wave function it follows that the toroidal wave must be a periodic function with period 2π.

This requires that the speed of the wave depends on the distance r to the origin of the torus coordinate system. In other words, the optical path length S in the toroidal direction must be constant at every point of the poloidal cross section of the field: S = 2π n(r) r = constant. The physical realization is as follows: The torus-shaped boson field resides in a compact region of space, inside of which the energy density increases towards the middle, following the function sech²(r). The function has a region of almost constant slope, and this is the region where the cross sections are situated. (The cross sections most probably are not circular; the shape of the boson field is determined by minimum principles concerning energy density, volume, surface area, etc.) If we write an equation for the propagating field in the toroidal direction, it must express the r-dependence as follows:

$$\nabla^2\psi(\mathbf{r}) + n^2(\mathbf{r})\,k_0^2\,\psi(\mathbf{r}) = 0 \;\Rightarrow\; \nabla^2\mathbf{E} + k_0^2\, n^2(\mathbf{E})\,\mathbf{E} = 0.$$

Thus, the problem turns into solving the nonlinear Helmholtz equation. This justifies the assumption that bosons are fundamental solitons. (The sech² shape is typical of ultrashort fundamental soliton waves in optics.)

$$A(z,t,\omega) = A\,\operatorname{sech}(z - ct)\, e^{i(\pm\omega t - \alpha)}.$$

In its direction of propagation z (the symmetry axis) the boson has no well-defined wave number at all. It is just an impulse of electromagnetic energy and momentum. Thus, the describing function must be non-periodic, like hyperbolic functions with real arguments. The local velocity c of a boson depends on the energy density u at its position. In the toroidal direction there propagates a wave for which the angular phase velocity can be defined. It has an “average” wave number k', but exactly one angular frequency ω connected with the energy of the boson. This, and the polarization angle α, manifest themselves in diffraction and interference phenomena. Bosons are the simplest possible soliton solutions to Maxwell's nonlinear curl equations and they are of toroidal form. Assumptions concerning boundary conditions will be discussed later.
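The soliton character of a sech-shaped pulse can be checked symbolically. The sketch below uses the focusing nonlinear Schrödinger equation, the standard envelope model of optical solitons, as a stand-in for the nonlinear field equations proposed here; it verifies that u = sech(x)·e^{it/2} is an exact solution.

```python
# Symbolic check that a sech pulse solves the focusing nonlinear
# Schroedinger equation i*u_t + u_xx/2 + |u|^2 * u = 0, used here as a
# stand-in envelope model, not as the full nonlinear Maxwell problem.
import sympy as sp

x, t = sp.symbols("x t", real=True)
u = sp.sech(x) * sp.exp(sp.I * t / 2)                  # soliton ansatz
residual = sp.I * sp.diff(u, t) + sp.diff(u, x, 2) / 2 + u**2 * sp.conjugate(u)
print(sp.simplify(residual.rewrite(sp.exp)))           # -> 0
```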

The toroidal structure of the boson field and its direction of propagation also explain why electromagnetic waves are transverse. [ Usually it is understood that transverse waves can only exist in solids. ]

Presently the most important physical properties of the boson are encapsulated in the following two equations: $E = \hbar\omega$, $\mathbf{p} = \hbar\mathbf{k}$; but in contrast with present usage, in our theory these expressions belong to different frames. The radian frequency ω refers to oscillation in the boson's rest frame, while k is the wave vector in the observer's frame. These are first principles; they cannot be derived from anything more primitive. In addition to these, bosons also carry angular momentum. In principle, it can be determined by the volume integral

$$\mathbf{J} = \int_V \mathbf{r}\times(\mathbf{E}\times\mathbf{B})\, dV.$$

[It is an experimental fact that Laguerre-Gaussian beams carry angular momentum and that this angular momentum has a mechanical effect when such beams are incident on particles. In particular, when the particles are spherical and absorbing, they rotate steadily at a rate that is directly proportional to the theoretical angular momentum flux of the incident beam. The picture on page 25 illustrates how the (cross-section of the) beam is formed of photons. If photons carry angular momentum, so does a beam composed of them.]

One may also note that there is nothing in principle to prevent the boson field from splitting or combining with another boson. These are known as parametric processes. Energy and momentum are conserved in these processes and the conservation of energy means conservation of vorticity. The boson is a compact packet of energy but not indivisible.

Birth of a Fermion

“According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of existence.” [“Vacuum state”, Wikipedia]. In our opinion, the word “vacuum” should be dropped from physics terminology and be replaced with “boson space”. Instead of fleeting electromagnetic waves there is a continuous flow of low-energy bosons from all directions.

Is there any kind of mathematical model to provide serious support to the claim that a particle may “pop” into existence? We have already stated above our basic hypothesis that the precondition for the existence of a particle in some region of space is a flow of electromagnetic radiation into that region. In steady state conditions there is no piling up of energy, so the flux of bosons must diverge from that region through which it just converged.

In space, there are cosmic sources of radiation in all directions at cosmic distances. So we can infer that there are “plane waves” propagating in all possible directions. What would be the sum of those plane waves at one arbitrary point in space? From Courant and Hilbert:

$$\frac{1}{4\pi}\int_{4\pi} e^{\,i k_0\,\mathbf{s}\cdot\mathbf{r}}\, d\Omega = \frac{\sin k_0 r}{k_0 r}.$$
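Before interpreting this identity, it can be sanity-checked numerically: averaging the plane-wave phase factor over uniformly random directions should reproduce sin(k₀r)/(k₀r). A Monte Carlo sketch, with an arbitrarily chosen field point r:

```python
# Monte Carlo check: averaging exp(i*k0*s.r) over uniformly random
# directions s reproduces sin(k0*r)/(k0*r). The field point is arbitrary.
import numpy as np

rng = np.random.default_rng(2)
N, k0 = 400_000, 1.0
v = rng.normal(size=(N, 3))
s = v / np.linalg.norm(v, axis=1, keepdims=True)    # random unit vectors

r = np.array([0.0, 0.0, 3.0])                       # arbitrary field point
lhs = np.exp(1j * k0 * (s @ r)).mean()              # (1/4pi) * integral
rhs = np.sin(k0 * np.linalg.norm(r)) / (k0 * np.linalg.norm(r))
print(f"Monte Carlo: {lhs.real:.4f}, analytic: {rhs:.4f}")   # both ~ 0.047
```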

The equation above is the superposition of plane waves from all directions, all of the same amplitude 1/4π. The right-hand side is the spherical Bessel function, which is known to be the sum of two spherical Hankel functions, $h^{(1)}$ and $h^{(2)}$, which describe spherical waves travelling inwards and outwards, respectively. We argue that fermions are standing wave structures of the form $(\text{space part})\cdot e^{i\omega t}$. So they are solutions of the three-dimensional wave equation. The space part is a solution of the 3D Helmholtz equation:

$$\nabla^2\psi + k_0^2\,\psi = 0.$$

For the solution of the Helmholtz equation we choose spherical coordinates, at the origin of which is the center of the particle. The following solution includes two waves (the Hankel functions), one converging and one diverging:

$$\psi(\mathbf{r}) = \left(A\, h_l^{(1)}(k_0 r) + B\, h_l^{(2)}(k_0 r)\right) P_l^m(\cos\theta)\cos(m\phi).$$

$P_l^m$ is the associated Legendre function. At the moment of equilibrium (steady state), when A = B, the sum can be replaced with the spherical Bessel function, describing a spherical standing wave:

$$\psi(\mathbf{r}) = C\, j_l(k_0 r)\, P_l^m(\cos\theta)\cos(m\phi).$$
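That equal-amplitude converging and diverging waves combine into the standing wave is a one-line identity, $j_l = (h_l^{(1)} + h_l^{(2)})/2$, checked below with SciPy's spherical Bessel functions.

```python
# Check that equal-amplitude spherical Hankel waves sum to a standing
# wave: (h_l^(1) + h_l^(2)) / 2 = j_l, with h_l^(1,2) = j_l +/- i*y_l.
import numpy as np
from scipy.special import spherical_jn, spherical_yn

l = 2
r = np.linspace(0.5, 10.0, 50)
h1 = spherical_jn(l, r) + 1j * spherical_yn(l, r)
h2 = spherical_jn(l, r) - 1j * spherical_yn(l, r)
print(np.allclose((h1 + h2) / 2, spherical_jn(l, r)))   # True
```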

In order to promote the electromagnetic view of the world, we must now link the standing-wave expression above to electromagnetic theory. If ψ is assumed to represent the Debye potential, the corresponding electromagnetic field can be derived from it. Both Einstein and Gustav Mie were familiar with that field. It appears, for instance, in Mie scattering. But as was noticed above, this field cannot be interpreted as a particle because it is a solution of the linear Maxwell equations; it is not a soliton. We must find an extension into the nonlinear domain.

The Constituent Electromagnetic Field of a Fermion

If the permeability μ₀ and the permittivity ε₀ of space were functions of position, με(r), inside the fermion, the following expression, derived from Maxwell's equations, should be taken as a starting point:

$$\nabla^2\mathbf{E} + \omega^2\mu\varepsilon(\mathbf{r})\,\mathbf{E} = \nabla^2\mathbf{E} + k^2(\mathbf{r})\,\mathbf{E} = \nabla\!\left(\frac{\mathbf{E}\cdot\nabla(\mu\varepsilon)}{\mu\varepsilon}\right).$$

We decide to be interested only in spherical transverse electric (TE) fields, in which the product με (and, consequently, the wave number k) is a function of radius only. This means that $\mathbf{E}\cdot\nabla(\mu\varepsilon) = 0$, and the right side of the equation vanishes. So we have

$$\nabla^2\mathbf{E} + k^2(r)\,\mathbf{E} = 0.$$

It was stated earlier that the speed of light depends on local energy density. We write the equation as follows:

$$\nabla^2\mathbf{E} + \left(k_0^2 + g\,|\mathbf{E}|^2\right)\mathbf{E} = 0.$$

This is the nonlinear vector Helmholtz equation which governs the propagation of light in nonlinear media. The interpretation in our case is that the (radial) wave number k in the outer layer of the particle is k₀; inside the particle the additional term $g|\mathbf{E}|^2$ makes it larger. We have already made use of the sech function in dealing with boson fields, inside of which the energy density follows the function sech²(r). Now we apply it to fermions too, but in a less heuristic way:

Hyperbolic secant. The hyperbolic secant function expresses the increase of the wave number inside the particle. It has a maximum at the center of the spherical field, because the energy density has its maximum there. No spherical solutions of the three-dimensional nonlinear Helmholtz equation exist, so we must compromise to proceed.

Let us consider the Taylor series expansion of the square of the hyperbolic secant:

$$\operatorname{sech}^2(x) = \left(1 - \frac{x^2}{2} + \frac{5x^4}{24} - \cdots\right)^{\!2} = 1 - x^2 + \cdots$$
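The expansion is quickly confirmed symbolically:

```python
# Symbolic verification of the series of sech^2 about x = 0.
import sympy as sp

x = sp.symbols("x")
print(sp.series(sp.sech(x)**2, x, 0, 6))   # 1 - x**2 + 2*x**4/3 + O(x**6)
```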

By retaining only the first two terms of the series above, we can write down the following Helmholtz-like equation for each of the components of E:

$$\nabla^2\mathbf{E} + \left(k_{\max}^2 - b^2 r^2\right)\mathbf{E} = 0.$$

For this equation we know the spherical solution:

$$E = N\, r^{l}\, e^{-\nu r^{2}/2}\, L_{\frac{1}{2}(n-l-1)}^{\,l+1/2}\!\left(\nu r^{2}\right) P_{l}^{m}(\cos\theta)\,\cos m\phi.$$

It is the threedimensional quantum harmonic oscillator; an electromagnetic standing wave. It represents the electric field of a fermion. Every solution of the scalar wave equation corresponds to three independent vector solutions: L, M and N. The electromagnetic model of particles will be completed, if we manage to obtain the solutions of the vector wave equation, which in addition satisfy Maxwell’s equations.

These vector solutions can be deduced from the characteristic functions of the corresponding scalar wave equation. A purely solenoidal electromagnetic field can be represented in terms of the fields M and N alone, for each is proportional to the curl of the other. The procedure is given, for instance, in Stratton's Electromagnetic Theory, chapter 7.11.

Pictures of Fermions

[Figure: fermion fields.] The colors of the fermion pictures represent the strength of the fields: the θ-component of the electric field and the r-component of the magnetic field. The field of a boson represents the simplest possible solution of the vector wave equation. The field of a fermion is much more complicated; it is a higher-mode solution and has a stratified structure. The allowed energy levels of a three-dimensional quantum harmonic oscillator are of the form:

$$E = \hbar\omega\left(n + \tfrac{3}{2}\right).$$

The layers can also come loose in greater numbers and still be radiated as a single quantum. Alternatively, the radiated energy can take the form of a fermion, as when a neutron emits an electron in beta decay. The topological form of a field layer is a torus, which is preserved when the field layer becomes a propagating boson. Evidently, the fermion's radiation consists of toroidal quanta. The natural assumption is that the bosons are blown off like smoke rings in the direction of the symmetry axis of the fermion, and they propagate in the direction of their own symmetry axis, to which we hereafter refer as the spin vector.

From Max Planck's Nobel lecture, 1920: “There is in particular one problem whose exhaustive solution could provide considerable elucidation. What becomes of the energy of a photon after complete emission? Does it spread out in all directions with further propagation in the sense of Huygens' wave theory, so constantly taking up more space, in boundless progressive attenuation? Or does it fly out like a projectile in one direction...?” A. Einstein: On the Quantum Theory of Radiation, 1917: “If the molecule undergoes a loss in energy of magnitude hv without external excitation, by emitting this energy in the form of radiation (outgoing radiation), then this process...is directional. Outgoing radiation in the form of spherical waves does not exist.” (This is Einstein's “needle radiation”.)

Summary

We have now deduced the sources of quantized radiation starting from cosmic background radiation, expressed as a converging wave. A debate over the possible physical reality of such waves started between W. Ritz and A. Einstein in 1908. As it ended in 1909, Einstein stated the following: “A spherical wave propagating inward is mathematically possible; but for its approximate realization an immense amount of emitting elementary structures are needed”. This requirement is fulfilled in our theory in the following manner:

In considering the birth of a particle we made the implicit assumption that the cosmic waves were coherent and of the same phase by putting α=0. (See the following equation.) In reality, the background radiation is mostly incoherent and α fluctuates randomly, but once a particle has been formed, the non-coherent flow of energy and momentum from all directions acts as a driver to maintain the oscillating vortex.

$$\frac{1}{4\pi}\int_{4\pi} e^{\,i(k_0\,\mathbf{s}\cdot\mathbf{r} - \alpha)}\, d\Omega = \text{driver for the vortex}.$$

The flow of momentum manifests itself in scattering processes as radiation pressure, to which we later refer as “cosmic pressure”. We recognize the effects of scattering as the electromagnetic forces. Fermions do pop into existence, but they don't emerge from nothingness. Rather, small quantum vortices, bosons, merge to form a larger quantum vortex, a fermion.

These particles, bosons and fermions, are all that exists. Matter and space consist of them. The energy of particles depends on their volume. The larger the particle, the higher the energy.

Absorption and Emission

First, a few words about the general nature of the fermion field and its ability to change in space and time:

$$\nabla\times\mathbf{E} = -i\omega\mu\mathbf{H}, \qquad \nabla\times\mathbf{H} = i\omega\varepsilon\mathbf{E}.$$

If these vectors are to form a wave, then the following dispersion relation must be satisfied:

$$\omega^2\mu\varepsilon - \mathbf{k}\cdot\mathbf{k} = 0.$$

The dispersion relation encapsulates all the governing physics of the system into a single equation. It is an expression describing the relationship between the wave number k, the associated frequency, ω, and the medium where the wave propagates (or resides, if it's a standing wave).

Both k and ω may be complex. If they are real, the wave is stationary. If an imaginary part emerges, the wave begins to change. It can grow or dampen both spatially and temporally. This shows up in the time part of the particle's wave description as follows. F(t, r) stands for the frequency/volume-dependent energy of a particle:

$$F(t,\mathbf{r}) = F_1(\mathbf{r})\, e^{i\omega_1 t}; \quad \text{absorption starts} \Rightarrow F(t,\mathbf{r}) = F_1(\mathbf{r})\, e^{i[(\omega_1 + i\omega_1'')\,t]}; \quad \text{absorption ends} \Rightarrow F(t,\mathbf{r}) = F_2(\mathbf{r})\, e^{i\omega_2 t}.$$

It is only during the absorption process that a positive imaginary part exists and the energy of a boson accumulates onto the surface of a fermion.

Generally, the dispersion relation allows any combination of frequency and wave number. In our particle model we have only one frequency per energy state but a varying wave number. This is possible because in the model the square of the wave number, k², depends on the energy density, i.e. on με. We must write the dispersion relation differently:

$$\omega^2 - \frac{\mathbf{k}\cdot\mathbf{k}}{\mu\varepsilon} = 0.$$

Now it is manifestly consistent with the particle model: a single frequency, but a dispersion relation that is always satisfied, provided that the wave number and the energy density change at the same rate, as we have assumed in this theory.

We believe that Erwin Schrödinger would have approved our particle model and also our model for emission and absorption. At least he wrote the following: “It is hardly necessary to emphasize how much more congenial it would be to imagine that at a quantum transition the energy changes over from one form of vibration to another, than to think of a jumping electron. The changing of the vibration form can take place continuously in space and time, and it can readily last as long as the emission process lasts...” Erwin Schrödinger in Annalen der Physik (4), vol. 79, 1926.

Particle's Interaction with its Environment

In its real physical environment every particle must be seen as a solution of the inhomogeneous wave equation. In this way the environment can be treated as a partial source of the particle's constituent field:

$$\nabla^2\psi - \frac{1}{v^2}\frac{\partial^2\psi}{\partial t^2} = -g(\mathbf{r},t).$$

We may write down the solution as Kirchhoff's integral representation:

$$\psi(\mathbf{r}',t) = \frac{1}{4\pi}\int_V \frac{[g]}{r}\, dv + \frac{1}{4\pi}\oint_S \left[\frac{1}{r}\frac{\partial[\psi]}{\partial n} - [\psi]\,\frac{\partial}{\partial n}\!\left(\frac{1}{r}\right) + \frac{1}{vr}\frac{\partial r}{\partial n}\!\left[\frac{\partial\psi}{\partial t}\right]\right] da,$$

where the square brackets denote retarded values.

The volume integral is a particular solution of the inhomogeneous wave equation, representing physically the contribution to ψ(r',t) of all sources contained within V. In our case “all sources within V” means the internal field of the standing wave particle itself, so the volume integral expresses the fact that a particle must be a retarded solution of itself. To this particular solution is added a general solution of the homogeneous equation, expressed as a surface integral extended over S and accounting for all sources located outside S, which is an arbitrary, regular surface enclosing a volume V. In case the values of ψ and its derivatives are known over S, the field is completely determined at all interior points. The field of a particle is a solution to a free boundary problem, and we imagine that the surface S follows the boundary quite closely. The volume V contains the energy of the particle in the form of a harmonically oscillating standing wave.

The interaction process can be described by saying that what happens at the surface, affects the whole three- dimensional field of the particle. The scattering process on the surface S includes modulation (which we will discuss later), and for this reason information concerning the energy and orientation of the particle radiates into the surrounding space.

The Reason why Fermions are Sources of Electromagnetic Fields

The problem of expressing the electromagnetic field at an interior point inside a volume V in terms of the values of E and H over the enclosing surface S can be solved by introducing a pair of fictitious auxiliary sources: the electric and magnetic surface currents $\mathbf{J}_E$ and $\mathbf{J}_M$.

$$\nabla\times\mathbf{E} + i\omega\mu\mathbf{H} = -\mathbf{J}_M, \qquad \nabla\times\mathbf{H} - i\omega\varepsilon\mathbf{E} = \mathbf{J}_E.$$

Integrating these field equations by applying the vector analogue of Green's theorem, Stratton and Chu derived the equations for E and H, but they are not essential here. The surface currents produce a field inside the surface S, but outside there is no field. What is essential is that these surface current vectors can be interpreted as a source of a scattered field, just by reversing the vectors. In this case the field inside the surface is zero. Combining the two fields cancels out the fictitious sources. What is left is a pair of fields, a standing wave and a scattered wave, each essential to the existence of the other. The scattered field is composed of bosons of the cosmic background radiation, and this scattered field is what we now recognize as the electromagnetic field. Mathematically:

(\nabla^2 + k^2)\Phi = 0, \quad \text{such that} \quad \Phi^{\mathrm{scat}}(\mathbf{r}_B) = -\Phi(\mathbf{r}_B),

in which scat refers to the scattered field and B to the boundary. It is obvious that the scattered field carries information about the particle's interior field. If a fermion loses a quantum of energy, it gets smaller and the scattered force field around it weakens abruptly. So there are simultaneous, stepwise adjustments in both the energies of fermions and the force effects carried by the scattered bosons. Albert Einstein: “I feel that it is a delusion to think of the electrons and the fields as two physically different, independent entities. Since neither can exist without the other, there is only one reality to be described, which happens to have two different aspects; and the theory ought to recognize this from the start instead of doing things twice.”

Resonance in the Boson-Fermion Interaction

The following is taken from the article Physics and the wave equation by J. C. Slater, 1945:

There is another method for solving the inhomogeneous wave equation, namely the method of expansion in orthogonal functions. In this case the general solution of the homogeneous wave equation is expressible as a sum of particular solutions. By this method we can solve the inhomogeneous wave equation, for we can expand also the inhomogeneous function ƒ(x, y, z) in series of, say, wn, and equate terms on both sides of the equation.

u = w(x, y, z)\, e^{i\omega t}, \qquad \nabla^2 w + k^2 w = 0, \qquad u = \sum_n A_n w_n(x, y, z)\, e^{i k_n v t}.

An interesting case is found where the inhomogeneous part of the equation varies sinusoidally with time. If, for instance, we have the equation

\nabla^2 u - \frac{1}{v^2}\frac{\partial^2 u}{\partial t^2} = f(x, y, z)\, e^{i\omega t}

and if we let

u = \sum_n U_n w_n e^{i\omega t}, \qquad f = \sum_n F_n w_n,

then, remembering that

\nabla^2 w_n + (\omega_n^2 / v^2)\, w_n = 0, \qquad \omega_n^2 / v^2 = k_n^2,

we have

\sum_n U_n \left(\frac{\omega^2 - \omega_n^2}{v^2}\right) w_n = \sum_n F_n w_n, \quad \text{or, equating terms,} \quad U_n = \frac{F_n v^2}{\omega^2 - \omega_n^2},

showing that the coefficients U_n show a dependence on ω similar to that of the amplitude of a linear oscillator of natural frequency ω_n, forced at an external frequency ω. In the following we will see the physical significance of this result.
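To make Slater's point concrete, here is a minimal numeric sketch (our addition, with hypothetical mode frequencies and source coefficients): each coefficient U_n grows resonantly as the driving frequency ω approaches the natural frequency ω_n.

import numpy as np

v = 1.0                               # wave speed (arbitrary units, an assumption)
omega_n = np.array([1.0, 2.0, 3.0])   # hypothetical natural mode frequencies
F_n = np.array([1.0, 0.5, 0.2])       # hypothetical source expansion coefficients

def mode_amplitudes(omega):
    # U_n = F_n v^2 / (omega^2 - omega_n^2): resonant growth as omega -> omega_n
    return F_n * v**2 / (omega**2 - omega_n**2)

for omega in (0.5, 0.99, 1.01, 2.5):
    print(omega, mode_amplitudes(omega))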

[Figure: two fermions interacting through their scattered fields.]

Polarization in the Boson-Fermion Interaction

More information becomes available by subjecting the interaction to Maxwell's laws. J. A. Stratton (in his book Electromagnetic Theory) applies the above method to a diffraction problem in a way which, in our opinion, is applicable also in the case of the boson-fermion interaction. In the analysis the incident wave, expanded in vector spherical wave functions, diffracts from a spherical standing wave. That wave has a discrete set of natural frequencies, or modes of oscillation. The analysis shows that the radiation scattered at small angles is circularly polarized, while the radiation scattered at right angles is linearly polarized. Another feature of the interaction is the following: “The coefficients … are the amplitudes of the oscillations… Whenever the impressed frequency ω approaches a characteristic frequency ω_n of the free oscillations, resonance phenomena will occur.” The physical consequences of this result will also be discussed later in this article.

The Effect of Motion on a Particle's Energy

In an extraordinary book, The Mathematical Analysis of Electrical and Optical Wave-motion, published in 1914, Harry Bateman examined a number of different classes of solutions to the electromagnetic wave equation. Bateman started by writing our fundamental equations (the two curl equations) in the most compact form possible:

\nabla \times \mathbf{M} = \frac{i}{c}\frac{\partial \mathbf{M}}{\partial t}, \quad \text{where} \quad \mathbf{M} = \mathbf{H} + i\mathbf{E}.

Each solution M satisfies the equation

\mathbf{M}\cdot\mathbf{M} = (H^2 - E^2) \pm 2i\,(\mathbf{E}\cdot\mathbf{H}) = I_1 \pm 2i I_2.

The vectors I1 and I2 Bateman called invariants, and then defined: “When the invariants are zero over a given domain the field may be called self-conjugate for this region.”

The self-conjugate condition requires that the magnetic energy density be the same as the electric energy density, and that the electric field be orthogonal to the magnetic field. It means that the longitudinal field disappears entirely. Being an electromagnetic TEM standing wave, our fermion model fulfills the Bateman conditions at the surface. In order to study the flow of energy Bateman utilized the equation of continuity, the volume density of electromagnetic energy, and the Poynting vector S, one of whose components is shown below equated with ρu:

\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} + \frac{\partial (\rho v)}{\partial y} + \frac{\partial (\rho w)}{\partial z} = 0, \qquad \rho = \tfrac{1}{2}(E^2 + H^2), \qquad \rho u = c\,(E_y H_z - E_z H_y).

In our case the equation of continuity simply means that the electromagnetic energy of a particle stays inside a “closed” surface regardless of its motion. (We will discuss the physical nature of this surface in a later section.) We must first take a look at the expression containing one component of the Poynting vector. It appears strange, because the motion of the surface in the x-direction seems to be the source of an electromagnetic field in the y–z plane. But since we know that the field of a particle is inside the surface, we can understand that the field is composed of bosons that are scattered from the surface. Two intensities, one belonging to the surface of the particle itself and the other belonging to the scattered field, are equated. From the above Bateman derived the following equation:

\rho^2\,(c^2 - u^2 - v^2 - w^2) = \tfrac{1}{4}\,c^2\,(E^2 - H^2)^2 + c^2\,(\mathbf{E}\cdot\mathbf{H})^2.

For a TEM field the difference E² − H² is zero. It indicates that in an electromagnetic wave the average energy densities of the electric and magnetic fields are equal. E·H and ρ are also energy densities. For the special case of no motion (u = v = w = 0) we write it as ρ₀. Now, if we divide by c², take the square root of both sides and rearrange, we have

\rho = \frac{\rho_0}{\sqrt{1 - \dfrac{u^2 + v^2 + w^2}{c^2}}}.

The elevated energy density occurs in front of the propagating surface. Electromagnetic particles are sensitive to energy density at their immediate surroundings and gain or lose energy following that density. We can now write for the mass of a particle:

m = m(v) = \frac{m_0}{\sqrt{1 - v^2/c^2}}.

This is the explicit form of the function m = m(v) in the Compton-Debye theory ( see page 6 ).
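As a small numeric illustration (our addition), the formula just derived can be evaluated directly; on this reading the growth of m with v is a statement about the energy density the particle feels:

import math

c = 2.998e8       # speed of light, m/s
m0 = 9.109e-31    # electron rest mass, kg

def mass(v):
    # m(v) = m0 / sqrt(1 - v^2/c^2), read here as an energy-density effect
    return m0 / math.sqrt(1.0 - (v / c) ** 2)

for frac in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {frac:4.2f} c  ->  m/m0 = {mass(frac * c) / m0:.3f}")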

It has been said that all the “weirdness” associated with quantum physics can be reduced to the following statement: there is a wave associated with the motion of any matter, and the greater the momentum of the object, the shorter the wavelength of this wave. In our theory the mass/energy and the internal (de Broglie) frequency of a particle depend on the local energy density, and setting a particle into motion is a special way to increase the local energy density felt by the particle, which is not so weird.

Particles in Geodesic Motion

“The Gravitational Red-Shift”, R. F. Evans and J. Dunning-Davies. Abstract. Attention is drawn to the fact that the well-known expression for the red-shift of spectral lines due to a gravitational field may be derived with no recourse to the theory of general relativity. This raises grave doubts over the inclusion of the measurement of this gravitational red-shift in the list of crucial tests of the theory of general relativity.

Conservation of energy yields the fact that the sum of the kinetic and potential energies is a constant. If a particle of mass m is moving under the influence of a gravitational field generated by a massive central body of mass M, Newton’s law of gravitation shows that the potential energy is given by -GMm/r, where G is Newton’s universal constant of gravitation and r is the distance of the particle from the central massive body. However, what of the kinetic energy which, for a normal material particle, is taken to be one half the product of the particle’s mass with the square of its velocity? Obviously, such a formula would not apply in the case of a ‘particle’ of light, which has zero mass. However, the kinetic energy of a photon is given by hν, where h is Planck’s constant and ν is the frequency of the photon. If the mass-energy relation E = mc², which relates the kinetic energy to the product of mass and the square of the speed of light, is introduced, then an ‘effective mass’ for the photon may be deduced and is given by

m = h\nu / c^2.

The equation expressing conservation of energy then becomes

h\nu - \frac{GMm}{r} = h\nu - \frac{GM}{rc^2}\,h\nu = \text{constant}.

This equation immediately allows the well-established expression for the gravitational red-shift to be deduced. For example, if as r → ∞, ν → ν∞ , the equation of conservation of energy becomes

h\nu - \frac{GM}{rc^2}\,h\nu = h\nu_\infty, \quad \text{or} \quad \frac{\nu - \nu_\infty}{\nu} = \frac{GM}{rc^2},

which is the desired result.

Discussion. In the above derivation of the expression for the gravitational red-shift, no appeal has been made to any aspect of the theory of general relativity, not even the principle of equivalence. Hence, the question must be raised as to how and why the measurement of the gravitational red-shift could ever be considered a real test of general relativity, let alone a crucial test as is so often claimed? ...
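As a quick numeric check (our addition, using standard solar values), the derived expression gives the familiar magnitude of the red-shift from the surface of the Sun:

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m
c = 2.998e8        # speed of light, m/s

# (nu - nu_inf)/nu = GM/(r c^2), evaluated at r = R_sun
z = G * M_sun / (R_sun * c**2)
print(f"fractional red-shift from the solar surface: {z:.2e}")   # ~2.1e-6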

Robert Kirkwood's Work

In 1953 Robert L. Kirkwood published the article “The Physical Basis of Gravity” in Physical Review. In this article he demonstrated how “…the verifiable results of Einstein’s gravity theory can be deduced directly from elementary physical principles without the use of the covariant formalism or of Einstein’s field equations.” He introduced the concept of a “velocity field v” (without giving it any concrete properties), to which he gave the name “ether velocity”. It has the following properties:

1. Far from all masses: v = 0.

2. Near a spherical mass the vector v points parallel to the radius, towards the center of the mass, and its magnitude v_r = (2GM/r)^{1/2} is such that it describes the spherically symmetric Newtonian gravity field [the equation for the free-fall velocity].

If a test particle which moves at the speed v₀ is brought into the vicinity of a mass, the equation of the orbit will be the same as that of the general theory of relativity if the mass of the particle depends on its speed v₀ relative to the field v.

m = m(v) = \frac{m_0}{\sqrt{1 - (\mathbf{v}_0 - \mathbf{v})^2 / c^2}}.

It is a surprising fact that using this single equation and Newton's laws he derived a result which predicts the correct advance of the perihelion of Mercury. But he had to take the core of his equation from special relativity. We have shown that this was not necessary. Earlier we derived from Bateman's equation the following result and gave its interpretation:

\rho = \frac{\rho_0}{\sqrt{1 - \dfrac{u^2 + v^2 + w^2}{c^2}}}.

We may now briefly examine how these equations and the redshift equation work together. We have found earlier that the elevated energy density occurs in front of the propagating fermion. Electromagnetic particles are sensitive to energy density at their immediate surroundings and gain or lose energy following that density. It is justified to write for the mass of a fermion:

m = m_0 \left(1 - \frac{v^2}{c^2}\right)^{-1/2}.

We also concluded that velocity itself is not the cause of increased mass. The cause is elevated energy density at the particle's position, and that can be reduced by shadowing the background radiation.

The equations above suggest to us the following: Redshift indicates reduced intensity of background radiation. In free fall the elevated energy density caused by velocity compensates exactly the shadowing effect caused by the mass M.

To see how this idea fits into the equations above, we first write from the redshift equation:

\frac{\nu}{\nu_\infty} = 1 + \frac{GM}{Rc^2}, \quad \Rightarrow \quad \frac{m_0}{m} = 1 + \frac{GM}{Rc^2}, \quad \Rightarrow \quad m = m_0 \left(1 + \frac{GM}{Rc^2}\right)^{-1}.

The expression m₀ refers to the mass of a fermion in free space, far from other masses and with zero velocity. The particle is “moving along the geodesic” if the two contributing factors remain equal:

\left(1 - \frac{v_0^2}{c^2}\right)^{-1/2} = 1 + \frac{GM}{Rc^2}.

Solving this for v0 we have:

\frac{1}{1 - v_0^2/c^2} = 1 + \frac{2GM}{Rc^2} + \underbrace{\frac{G^2 M^2}{R^2 c^4}}_{\to\, 0}, \quad \Rightarrow \quad v_0^2 = \frac{2GM}{R}.

This is Kirkwood's velocity field for the case v₀ = v_r. Now we write his mass formula in a more general form:

m = m(v) = \frac{m_0}{\sqrt{1 - (\mathbf{v}_{pm} - \mathbf{v}_{ff})^2 / c^2}}.

The indices of the vectors refer to the particle's motion and to free fall. The latter vector can be interpreted as a tangent vector defining the geodesic path in an arbitrary mass distribution. As long as the two vectors cancel, the mass of the particle remains constant, because the overall energy density at the particle's position remains constant. [This is Einstein’s principle of equivalence. The state of space felt by the particle is a homogeneous gravitational field; uniform acceleration and homogeneous gravitation are equivalent.]

If the free fall is abruptly halted, v_pm = 0, the energy of the particle suddenly appears too high for its shadowed position (the fit value being the one calculated from the redshift equation) and the particle radiates the excess energy ( = kinetic energy) as quanta. (This is what happens in collider experiments.) Thus, instead of the idea that the presence of mass/energy determines the geometry of space and the flow of time, we have a model in which the local energy density (in Euclidean space) determines the energy of particles, and this model gives the verifiable results of Einstein’s gravity theory.
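A numeric illustration (our addition, using standard values for the Earth): for the free-fall speed v₀² = 2GM/R, the velocity factor and the gravity factor above indeed agree to first order, as the geodesic condition requires.

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # Earth mass, kg
R = 6.371e6      # Earth radius, m
c = 2.998e8      # speed of light, m/s

v0_sq = 2 * G * M / R                        # free-fall speed squared (~11.2 km/s squared)
velocity_factor = (1 - v0_sq / c**2) ** -0.5
gravity_factor = 1 + G * M / (R * c**2)
print(velocity_factor - 1, gravity_factor - 1)   # both ~7.0e-10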

Bosons Scattering off a Fermion have a Spectrum

If, in a weakly nonlinear system, the electric field is described by a dispersion relation of the form k = k(ω, |E|²), as is the case with our boson/fermion model, then solutions of the nonlinear wave equation of the form

E = A(X, T)\, \cos[kx - \omega_c t + \phi(X, T)]

can be found. X and T are the variables of the function φ, which represents the background radiation received by the fermion. It varies slowly compared with the function cos(kx − ω_c t). Only the temporal behavior of the system is of interest here, so we drop the space components and have

E = A(T)\, \cos[\omega_c t + \phi(T)].

In modulation theory this expression represents nonlinear (exponential) modulation, so the well-known spectrum analysis is available.

Modulation in the Boson-Fermion Interaction

There is still one more aspect to the boson-fermion interaction. In the following, the electric vector E is composed of bosons scattered by a fermion:

\mathbf{E} = \mathbf{e}(\mathbf{r}, t)\, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}.

This expression is a general form of a wave, in which the vector e(r,t) is complex. This vector contains the information about the amplitude and polarization of the wave. It is now important to understand that the vector e(r,t) is independent of the vector k. In practice this means that the low-frequency bosons of background radiation can act as carriers of high-frequency interaction.

A defining property of a soliton is that it can interact with other solitons and emerge from the collision unchanged, except for a phase shift. Both fermions and bosons are solitons. We believe that the present mathematical models of solitons are very primitive, and that the real situation is that the scattered bosons are able to become modulated in a most complicated way in amplitude, phase, frequency, and polarization.

There are Two Extreme Types of Spectra

If the fermion is alone and interacts only with the boson space, its spectrum is at its simplest. It is composed of frequency spikes at and around the de Broglie frequency of the fermion (ω_c). The terms (ω_m) represent the low-frequency boson background:

x_c(t) = A_c \sum_{n=-\infty}^{\infty} J_n(\beta)\, \cos(\omega_c t + n\omega_m t).

The main result is this: the spectrum of scattered boson beams is shifted to a new frequency band, the center of which is determined by the de Broglie frequency of the fermion.

If other fermions are brought into the neighborhood of the lone fermion, the situation changes drastically. Now all the fermions receive radiation from the other fermions. The following is the spectrum of a fermion in the presence of two other fermions (the effect of the background is not taken into account, and the frequencies ω₁ and ω₂ are not harmonically related, ω₁ << ω₂):

x_c(t) = A_c \sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} J_n(\beta_1)\, J_m(\beta_2)\, \cos(\omega_c t + n\omega_1 t + m\omega_2 t).

[Figure: line spectrum with components at f_c, f_c ± f_2, f_c ± 2f_2, f_c ± 3f_2.]

Such shifts of spectral lines, caused by the interaction with surrounding particles in a plasma, are commonly observed in (visible) laboratory and astronomical spectra. The effect is called pressure broadening of spectral lines. It is a manifestation of the principle of local energy minimum, explained in the following.

There results a line spectrum which contains, in principle, an infinite number of frequencies. The shape of the main band is repeated to infinity with equal spacing but damped. The spectrum can be described with the word self-similar. For the spectrum analysis of, say, an atom or a bunch of atoms, the procedure above must be extended to the number of fermions involved in the interaction.
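A small sketch (our addition, with hypothetical modulation indices and frequencies) of the two-tone spectrum above: the sideband amplitudes are the Bessel products J_n(β₁)J_m(β₂), placed at ω_c + nω₁ + mω₂.

import numpy as np
from scipy.special import jv    # Bessel function of the first kind

wc, w1, w2 = 100.0, 1.0, 7.0    # carrier and modulating frequencies (assumed units)
b1, b2 = 0.8, 0.5               # hypothetical modulation indices

lines = []
for n in range(-3, 4):
    for m in range(-3, 4):
        amp = jv(n, b1) * jv(m, b2)   # sideband amplitude J_n(b1) * J_m(b2)
        if abs(amp) > 1e-3:
            lines.append((wc + n * w1 + m * w2, amp))

for freq, amp in sorted(lines):
    print(f"f = {freq:7.2f}   amplitude = {amp:+.4f}")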

There are sideband lines at ω_c ± nω₁ ± mω₂, which appear as beat-frequency modulation at the sum and difference frequencies of the modulating signals. Due to this, there are regions (the nodal surfaces) of reduced force effects acting on fermions of a certain energy. If the source of the beat-frequency radiation is the atomic nucleus, we are led to the shell model of the atom in a natural way, and the nodal surfaces are recognized as “bound orbits”. To the two extreme cases of spectra we can assign the concepts of explosion and implosion, or vaporization and condensation, or bond breaking and bond making. In the first case all the particles or molecules are alike; they have similar spectra and resonant repulsion keeps them far apart from each other. In the second case the particles are all different; their spectra are completely interleaved. This is the condition underlying phenomena such as superconductivity and superfluidity.

In the intermediate region, the forming of atoms, molecules, clusters, clathrates and solids results from Nature's elemental effort to follow the principle of local energy minimum. This is realized by interleaving the spectra of particles. In this theory the pressure broadening of spectral lines is seen as a fundamental variational principle. It is a component of Gauss's principle of least constraint. Returning from the interleaved condition to non-interleaved spectra requires energy. An example of this is the transition from water to water vapor.

Generations of Fermions

In high-energy physics it has been noticed that there are three generations of fermions. Second generation fermions are heavier but otherwise identical to their first generation versions. Similarly, the third generation fermions are even heavier but otherwise identical to their first and second generation versions. Nobody knows why the second and third generations exist. It can be understood as follows. The physical processes of particle detectors work at certain frequency bands. As the energy of the accelerated particles increases, it is their lower sidebands that occupy the frequency band of the detector. For this reason the results in these kinds of experiments repeat the same pattern as the energy of the particles increases.

Inertia and the MassEnergy Equivalence

Consider the case of a test particle in free space and two opposite elements of solid angle, dΩ₁ and dΩ₂. Beams of background radiation arrive along the elements at the test particle. Both of the beams become scattered and radiation pressure occurs, but the forces are of course balanced. If we now try to move the particle into the direction of the other beam, the Doppler effect arises:

\text{Force due to } d\Omega_1: \ F_1 \propto \left(1 + \frac{v}{c}\right), \qquad \text{Force due to } d\Omega_2: \ F_2 \propto \left(1 - \frac{v}{c}\right).

The net force on the particle depends on the difference between the momenta of the two beams, whose combined energy we can take as E = 2ħω, and then write:

p = \frac{\hbar\omega}{c}\left[\left(1 + \frac{v}{c}\right) - \left(1 - \frac{v}{c}\right)\right] = \frac{2\hbar\omega}{c^2}\, v = \frac{E}{c^2}\, v = mv.

The reader may also note that a certain formula comes out from these radiative considerations: the above relationship implies E = mc², but the energy E is the energy of the boson beams. Now we have arrived at the crucial point of understanding the mass-energy equivalence. We have two different physical meanings for the formula E = mc². Firstly, as explained above, E is the energy of the boson beam, which also carries linear momentum. Secondly, the formula may refer to the energy of the fermion itself. Why are the expressions for these two physical phenomena formally and numerically identical? The answer comes from our fundamental equations:

\text{energy} = \frac{1}{c}\int_\nu \!\int_{4\pi} I_\nu(\mathbf{r}, \mathbf{s})\, d\Omega\, d\nu, \qquad \text{force} = \frac{1}{c}\int_\nu \!\int_{4\pi} I_\nu(\mathbf{r}, \mathbf{s})\, \mathbf{s}\, d\Omega\, d\nu.

Their respective dimensions are the following:

\left[\frac{\mathrm{N\,m}}{\mathrm{m}^3}\right] = \left[\frac{\mathrm{N}}{\mathrm{m}^2}\right].

If a particle at a point r is exposed to repulsive radiation (say, in a linear accelerator), integration of the pressure over all directions generates a force F. From the formal identity of the integrals we can write:

F = \frac{1}{c}\frac{dE}{dt} \ \text{in electrodynamics, while in mechanics} \ F = ma = \frac{dp}{dt}.

E above refers to radiation energy impinging upon some target at the rate dE/dt. Equating the forces F we see that dE/dp = c, from which it follows that p = E/c. The momentum carried by fermions is p = mv, and the momentum carried by bosons is p = mc, from which we have dm = dp/c. Substituting into it p = E/c from above, we get dm = dE/c² and E = mc².

Conclusion

From the two derivations above one can infer that the inertial mass of a particle is the Doppler effect of scattered background radiation. We now know what mass is. In measuring the inertial mass of a fermion we in fact measure changes of the background radiation scattered by the particle. Through the identity of the expressions for energy and force, the energy of the fermion also becomes determined.

Considering bosons we can write the mass-energy equivalence in the following form:

E = mc^2 = \hbar\omega.

The last term refers to internal dynamics of both bosons and fermions. It reveals the oscillating character of the particle's electromagnetic field. The energies of a fermion and a boson are equal if they have equal volumes. The three-term mass-energy equivalence is clearly demonstrated in pair production. For pair production to occur, the electromagnetic energy of a photon must be at least equivalent to the mass of two electrons. The mass m of a single electron is “equivalent” to 0.51 MeV of energy. To produce two electrons the photon energy must be at least 1.02 MeV.
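The closing arithmetic can be restated numerically (our addition, using standard constants):

m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron volt

E_electron = m_e * c**2 / eV / 1e6          # one electron mass in MeV
print(f"one electron:   {E_electron:.3f} MeV")      # ~0.511
print(f"pair threshold: {2 * E_electron:.3f} MeV")  # ~1.022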

Enhanced Concept of Electric Charge

“Electric charge is the physical property of matter that causes it to experience a force when close to other electrically charged matter. There are two types of electric charges – positive and negative.” This is a concept of occult quality. It is a relic that has survived unscathed in all subsequent physical theories. When individual electrons were discovered, the older concept of charge was applied to each electron by stating as a fundamental fact that every electron “carries an elementary charge”.

From our point of view the concept is most unsatisfactory. Charge is seen as a property that is “glued” on some particles of matter, but not on others. This does not fit into our philosophy which is based on processes. It also possesses the idea of an attractive force, which we consider fundamentally impossible.

The concept has survived because the densities of charge and current work so well as sources of electromagnetic fields in classical electromagnetic theory. But they are of limited applicability. As was noted earlier, the electric charge density ρ and the current density J belong to the engineering category; as such they are not adequate at atomic scales. An evolution of the concept of charge is in order.

We have already stated earlier that a particle is composed of a pair of fields, a standing wave and a scattered wave, each essential to the existence of the other. The scattered field is composed of bosons of cosmic background radiation. As a result of the nonlinear scattering process, around every fermion there is a cloud of modulated bosons. This cloud is what we call an electromagnetic field. The electric component is caused by bosons scattered at small angles by the fermion. The magnetic component of the field is caused by bosons scattered nearly at right angles. The scattered field is modulated with the de Broglie frequency of the fermion. There is a continuum from electric field to magnetic field. The boson field is not static or spherically symmetric and therefore carries information about the energy (frequency) and the orientation of the fermion. All particles and their aggregations are sources of multipole radiation. The structure of the radiated field is best seen from the definitions of the electric and magnetic multipole fields:

\mathbf{E}_L^{(M)} = g_l(kr)\, \mathbf{Y}_{lm}(\theta, \varphi)\, e^{i\omega t}, \quad \text{or} \quad \mathbf{H}_L^{(E)} = f_l(kr)\, \mathbf{Y}_{lm}(\theta, \varphi)\, e^{i\omega t}.

There are radial functions, the basic angular functions (spherical harmonics) and the time factor in separated variables. [ J. D. Jackson's Classical Electrodynamics is an authoritative overview of the theory of multipole radiation. ] Due to multipole radiation, around every particle there is a space region in which a test particle can encounter a varying energy density and all kinds of forces, including torsion carried by the torus-shaped bosons. The space around a particle must be described as a tensor field at a certain frequency band. This tensor replaces the electric permittivity and magnetic permeability tensors.

Enhanced Understanding of the Source of Magnetic Fields

This picture is taken from the article Spin Hall Effect by M. I. Dyakonov. “The Spin Hall Effect consists in spin accumulation at the lateral boundaries of a current-carrying conductor, the directions of the spins being opposite at the opposing boundaries. For a cylindrical wire the spins wind around the surface. The boundary spin polarization is proportional to the current and changes sign when the direction of the current is reversed.” This is the experimental proof for the following assertion: When the ends of a wire are connected to a battery, free electrons are forced into a collective motion. Randomness of the spin orientations disappears. The electrons turn into such a position that the spin vectors are perpendicular to the motion and parallel to circles around the wire. Background radiation scatters from these electrons and the final result is a cloud of linearly polarized bosons surrounding the wire, i.e. the magnetic field.

A Fundamental Feature of Electromagnetic Fields

In 1955 Emil Wolf, in the article “A macroscopic theory of interference and diffraction of light from finite sources”, introduced the two-point space-time correlation function, now known as the mutual coherence function, and showed that in free space this function obeys two wave equations. (Shown in his own handwriting in the book Tribute to Emil Wolf: Science and Engineering Legacy of Physical Optics, by Tomasz P. Jannson and Emil Wolf.) The derivation of these equations “demonstrated the fundamental phenomenon that not only the field but also the spatial coherence propagates in the form of waves.” In our theory a force field is a manifestation of the spatial coherence of bosons, and this can be accomplished by arranging the sources of fields (fermions) in a coherent manner.

On Some Constants

The fine structure constant is much more constant than its constituents:

\alpha = \frac{e^2}{4\pi\varepsilon_0 d} \times \frac{2\pi d}{hc}.

We have already stated that these constants have “free space values”; in principle they can be measured one by one in space, far from massive bodies and at rest in absolute space. But in closer interactions, due to high local energy densities, none of the constants remains constant; they have “experimental values”.
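A quick check (our addition) that the separation d cancels from the expression above and that it reduces to the familiar α = e²/(2ε₀hc) ≈ 1/137:

import math

e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s

d = 1e-10          # arbitrary separation; it cancels out of the product
alpha = (e**2 / (4 * math.pi * eps0 * d)) * (2 * math.pi * d / (h * c))
print(f"alpha = {alpha:.6f},  1/alpha = {1/alpha:.1f}")   # ~1/137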

The fine structure constant can be defined as the ratio of two energies: (i) the energy needed to overcome the electrostatic repulsion between two electrons a distance d apart, and (ii) the energy of a single photon of wavelength λ = 2πd. We interpret the fine structure expression above as follows. If two similar electrons (or fermions in general) are slowly approaching each other in free space, they both feel increasing energy density and absorb energy. If the distance between the two is then slowly increased, they both emit quanta whose energy is determined by the local energy density, which in this case depends solely on the distance between the two electrons (fermions). This mechanism is fundamental for our theory. It gives new content to the equation of electric power:

P = I \cdot V.

The current I gives the number of electrons and the voltage V gives the energy of the electrons by expressing the pressure of the electron gas in a conductor. The energy content of the electrons can be locally utilized by directing the flow of electrons into a resistor, in which a voltage drop V takes place. The present explanation is that the potential energy of the charges is converted to kinetic energy in the resistor, but our formulation refers only to the decreasing energy density felt by the conduction electrons, leading to quantized radiation of electromagnetic energy. Our new view also lessens the craziness of physics:

On this matter Feynman wrote: “So our crazy theory says that the electrons are getting their energy to generate heat because of the energy flowing into the wire from the field outside.” But applying Poynting's theorem to this case is not correct. The correct view is that energy is conveyed along the wire by electrons, and the emission (heating) takes place due to the decreasing energy density felt by the electrons. (See: Poynting vector – Wikipedia.)

Common Origin of Forces

To start, we remind the reader of the two-fold structure we introduced earlier under the heading Further details to the interaction model. A fermion is a composition of a pair of fields, a standing wave and a scattered wave, each essential to the existence of the other. The scattered (force) field originates from the cosmic background. It has long been known that bosons are carriers of linear and angular momentum. As explained above, we wish to suggest an addition: bosons scattered by fermions are modulated and polarized. Furthermore, in their interaction with fermions there appears a resonance phenomenon. This also means that force fields have sources only in the sense that bosons of the background radiation scatter from fermions and then act as carriers of linear and angular momentum.

Propagating energy density, i.e. electromagnetic radiation, is intimately connected with force effects. In our view the boson space means a flow of electromagnetic energy and is the substitute for the “zero-point energy”. If we place a scattering center in the way of the flow, there will be (spectrum-dependent) force effects. The close connection is best seen in the dimensions of energy density and pressure:

\left[\frac{\mathrm{N\,m}}{\mathrm{m}^3}\right] = \left[\frac{\mathrm{N}}{\mathrm{m}^2}\right].

From Feynman lectures: A considerable knowledge of the force between proton and proton has been accumulated, but we find that the force is as complicated as it can possibly be, meaning that it depends on as many things as it possibly can. First, the force is not a simple function of the distance between the two protons. At large distances there is an attraction, but at closer distances there is a repulsion. Second, the force depends on the orientation of the proton's spin. ...the force is different when the spins are parallel from what it is when they are antiparallel. The difference is quite large; it is not a small effect. ….. In general, forces between all nucleons are equally complicated. “To this day we do not know the machinery behind these forces–that is to say, any simple way of understanding them.” [ Lectures on Physics, Vol 2, 8-4. ] In our particle model fermions are all of the same structure, so we must state that forces between all fermions, including electrons, are equally complicated. Fermionic forces are still more complicated than Feynman thought, because they are frequency dependent; their strength is a matter of modulation and resonance.

The Basic Force Model

This type of radial force between particles is of universal importance in understanding the existence of atoms, molecules and solid bodies. Its typical form can be seen in the Lennard-Jones potential which describes the interaction between pairs of atoms:

V_{LJ} = \varepsilon \left[\left(\frac{r_m}{r}\right)^{12} - 2\left(\frac{r_m}{r}\right)^{6}\right].

The Lennard-Jones potential is composed of short- range and long-range terms, neither of which is attractive.

The strong repulsion (presently known as “Pauli repulsion”, but without any clear physical justification) originates from resonance, and the “attraction” is the cosmic pressure. The potential has a shallow potential well, which is the region surrounding the minimum potential energy.

The minus sign is a coordinate-dependent feature; it does not refer to attraction. The repulsive force and the force that pushes particles toward each other are in opposite directions.
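A short sketch (our addition, with hypothetical parameters) of the quoted Lennard-Jones form: the well depth is −ε and the minimum sits exactly at r = r_m, the crossover between the two regimes described above.

import numpy as np

eps, rm = 1.0, 1.0    # hypothetical well depth and well position

def v_lj(r):
    # V(r) = eps * [ (rm/r)^12 - 2*(rm/r)^6 ]
    return eps * ((rm / r) ** 12 - 2 * (rm / r) ** 6)

r = np.linspace(0.9, 3.0, 2101)
i = np.argmin(v_lj(r))
print(f"minimum V = {v_lj(r)[i]:.4f} at r = {r[i]:.3f}")   # -> -1.0000 at r = 1.000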

“I suspect that they [the chemical properties of matter] may all depend upon certain forces by which the particles of the bodies, by some causes hitherto unknown, are either mutually impelled towards one another, and cohere in regular figures, or are repelled and recede from one another. These forces being unknown, philosophers have hitherto attempted the search of Nature in vain.” Isaac Newton: Preface to the Principia.

Impelled towards one another...

Modern physics pretends to give an explanation of an attractive force as an exchange force, which is composed of things like Heisenberg's uncertainty principle, virtual photons, negative kinetic energy... It is one of the main themes of this article to show that by admitting only repulsive forces it is possible to create a credible view of the world.

The shape of the force curve can be explained as follows: If the particles in space are of the same energy, they stay far from each other due to repulsion resonance. The situation persists until one of the particles changes its frequency, which happens when the particles are exposed to a gradient of energy density. After that, the two fermions of different energies approach each other, because they both see the least amount of resonant radiation coming from the direction of the other particle, and hence they are pushed towards each other. The near-field interaction of particles depends on the strength of the resonance. This is the effect of condensation nuclei (or cloud seeds) in a phase transition from vapor to liquid.

The fermion-boson-fermion interaction is the most complex thing in the world. Those extremely delicate processes and structures of nature do not come from “emergent properties” of systems of simple particles and simple forces. Instead, they emerge from particles and forces that are highly complex due to their fundamentally nonlinear nature. It is obvious that any practical force model, based on the interaction outlined above, requires the use of some suitable overlap integral, applied to the spectra of the interacting particles.

The model will be complex, but it will have its own advantages as a unified force model that includes gravity. Furthermore, all forces have a common origin, namely the Universe itself. There is no attractive force, which for Newton was an “absurdity so great that no man, who has in philosophical matters a faculty of thinking, can ever fall into it.” We will name the force described above “universal force”.

Simple Heuristic Model for the Combined Gravitational, Electromagnetic and Nuclear Force

We modify the classical force laws of Newton and Coulomb by adding the factor 1/((ω₀² − ω²) + iγω), familiar from the theory of driven damped oscillators, but here we adopt the definition developed by Max Planck. He introduced the idea of ‘‘radiation damping’’ or ‘‘conservative damping,’’ in which a resonator loses energy not to dissipative processes, but to the electromagnetic field with which it is in equilibrium.

F = G\, \frac{m_1 m_2}{r^2} \Big/ \left[(\omega_1^2 - \omega_2^2) + i\gamma\omega_2\right].

We can take a look at Coulomb’s force law and interpret the electric charges as scattering centers:

F = \frac{q\,q}{r^2}, \qquad F = \frac{Q\,Q}{r^2}.

Q refers to a proton and q to an electron. If the forces are mediated by scattered bosons, then the forces should be proportional to the geometrical cross sections of the particles. We can calculate the relative radii of these particles under the assumption that their volumes are proportional to their masses: R_e = R_p / 1836^{1/3}. This leads to the ratio of cross sections:

A_e / A_p = 1/150. These are the relative strengths of the electric and nuclear forces according to the simplest possible calculation. In physics textbooks the ratio in question is given as 1/137. There is a 9% error.
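Numeric check of the ratio (our addition): with volumes proportional to masses, R ∝ V^(1/3) and area ∝ R², hence A_e/A_p = (1/1836)^(2/3).

ratio = (1.0 / 1836.0) ** (2.0 / 3.0)        # A_e/A_p from volume ~ mass
print(f"A_e/A_p = 1/{1/ratio:.0f}")                           # ~1/150
print(f"deviation from 1/137: {abs(1/ratio - 137)/137:.1%}")  # ~9-10%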

How strong are these forces as compared to the gravitational force?

F = G\, \frac{m_e m_e}{r^2}.

F would be the gravitational force between two electrons. Since we believe that there is resonance involved in the electromagnetic interaction, we modify the force law by adding a factor from the theory of driven damped harmonic oscillations. (See e.g. Feynman Lectures 23–2: The forced oscillator with damping.)

F = G\, \frac{m_e m_e}{r^2} \Big/ \left[(\omega_1^2 - \omega_2^2)^2 + \gamma^2\omega_1^2\right].

The subscripts 1 and 2 refer to the de Broglie frequencies of two different fermions and, consequently, to the modulated boson beams between them. In the case of two fermions of the same energy the frequencies are equal.

A graph in Feynman’s book introduces a certain (complex) magnification factor ρ, which gives the “amount” of oscillation. Without trying to fit the resonance factor into anything we choose γ = 1. The de Broglie frequencies of electrons are about 7.76×10²⁰ rad/s. Using this we calculate the resonance factor: 1/(7.76×10²⁰)² = 1.66×10⁻⁴². R. Feynman has checked this out. In his Lectures on Physics (vol. 1: 7-10) he gives the relative strengths of the electrical and gravitational interactions between two electrons: 0.24×10⁻⁴². The results support these two new ideas, namely, that the energy of a particle is proportional to its volume, and that resonance is involved in the force-mediating interaction.
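Both numbers quoted above can be reproduced from standard constants (our addition):

import math

G = 6.674e-11       # gravitational constant
m_e = 9.109e-31     # electron mass, kg
e = 1.602e-19       # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity
hbar = 1.055e-34    # reduced Planck constant
c = 2.998e8         # speed of light

omega = m_e * c**2 / hbar                  # electron de Broglie frequency
print(f"omega = {omega:.2e} rad/s")                       # ~7.76e20
print(f"resonance factor 1/omega^2 = {1/omega**2:.2e}")   # ~1.66e-42

gravity = G * m_e**2                       # gravitational strength, two electrons
electric = e**2 / (4 * math.pi * eps0)     # electric strength, two electrons
print(f"gravity/electric = {gravity/electric:.2e}")       # Feynman's ~0.24e-42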

Feynman also wrote that: “Perhaps gravitation and electricity are much more closely related than we think. ...the most interesting thing is the relative strengths of the forces. Any theory that contains them both must also deduce how strong the gravity is.” For gravity, we drop the resonance factor from the equation, because in large aggregates of fermions there is no direct line of sight between particles of the same frequency. For this reason resonance phenomena are greatly diminished. The limited coherence length of the force-mediating radiation provides another reason. So gravitation and electricity are closely related. Gravity is caused by the non-coherent flow of energy-momentum, and in electromagnetic phenomena this flow is turned (in the process of scattering/modulation) into coherent flows of bosons. “Of course it would be a great advance if we could succeed in comprehending the gravitational field and the electromagnetic field together as one unified conformation. Then for the first time the epoch of theoretical physics founded by Faraday and Maxwell would reach a satisfactory conclusion.” Albert Einstein.

Vacuum Catastrophe

In Einstein’s field equations (from 1917) there appears the cosmological constant Λ, which is the value of the energy density of the vacuum of space. Quantum field theory predicts a very large energy density for the vacuum, and this density should have large gravitational effects. However, these effects are not observed, and the discrepancy between theory and observation is an incredible 120 orders of magnitude.

The number follows from calculating the energy density of the vacuum by simply summing up the zero-point energies of all the vibrational modes of the quantum fields. We explain later why the present scheme of field quantization is wrong. The number 10⁻⁴² above is the ratio of two pressures, one caused by a non-coherent flow of electromagnetic energy and momentum, and the other by a coherent (and resonating) flow of the same. So the number 10⁻⁴² is also the ratio of the energy densities.

The so-called “zero-point field” is, in our theory, the non-coherent, ubiquitous flow of background radiation from all directions, consisting of bosons at smaller and smaller scales, ad infinitum. The “vacuum catastrophe” is an artificial problem, caused by a flawed scheme of field quantization. We return to this subject in the section Canonical (Dirac) Quantization of the Electromagnetic Field.

Energy of Atoms and Molecules

Internal Energy: the Present view. “Internal energy is the energy of the atoms and molecules -- translational energy, rotational energy, and vibrational energy.” “Heat is internal energy that is exchanged between two objects due to their difference in temperature. Heat and internal energy are obviously connected or associated; but they are not the same thing. Heat is a transfer of internal energy due to a difference in temperature. This is analogous to the connection between work and energy. When you do work on a system, you change its energy. Work is a transfer of energy.”

These ideas, combined with the use of Schrödinger's equation, have given us a picture of the atomic world that is inconceivable. One of the silliest postulates is that elementary particles have no volume. Still, they are claimed to possess energy in their rest frame.

A directive given by J. C. Maxwell in 1877 is as follows:

“In all scientific procedure we begin by marking out a certain region or subject as the field of our investigation. To this we must confine our attention, leaving the rest of the universe out of account till we have completed the investigation in which we are engaged. In physical science, therefore, the first step is to define clearly the material system which we make the subject of our statements.”

What is a system? According to Ludwig von Bertalanffy: A system is an entity which maintains its existence through the mutual interactions of its parts.

Bertalanffy maintained in the early 1930s that “the conventional formulations of physics are, in principle, inapplicable to the living organism qua open system and steady state, and we may well suspect that many characteristics of living systems which are paradoxical in view of the laws of physics are a consequence of this fact.” In modern physics we have the concept of a closed system that does not need to interact with its environment to maintain its existence. Atoms and molecules are given as examples of closed systems.

In our opinion, the scientific procedure, following the guidelines such as those given by Maxwell, has failed completely in its attempt to explain microscopic nature. We wish to add to Bertalanffy's argument that the conventional formulations of physics are also inapplicable to particles, atoms, and molecules. We have a dissenting view: particles exist only because they are open systems; vortices in a three-dimensional flow of energy-momentum. It follows from this, of course, that atoms and molecules are also open systems. It also follows that the formal definitions of temperature and heat become very simple.

Internal Energy Reviewed

If the attribute of volume is admitted to particles, things clear up. kT is the average energy of bosons emitted by electrons at an absolute temperature T. We may now expand on this.

If we take a vector field F to describe the ubiquitous flow of energy-momentum, we can also state that there is a certain energy density u at every point in the universe. From these we can construct a conservation law:

\frac{\partial u}{\partial t} = -\nabla \cdot \mathbf{F}.

Temperature

Thinking of temperature, it is obvious that we must consider a certain region of space containing a distribution of fermions, but we don't have to say anything about the motion of particles or molecules in that region. We just write the integral form of the conservation law:

\frac{d}{dt}\int_V u\, dV = -\oint_S \mathbf{n} \cdot \mathbf{F}\, dS.

The volume integral expresses the internal energy of the fermions themselves. It is the energy enclosed in the volumes of the particles. (S is a mere surface of observation. It is not a constraint in any way.) Zero divergence means that the flow F going into the region equals the amount coming out. This is thermal equilibrium. A positive divergence means “hot” and a negative divergence “cold”. Non-zero divergence reveals the fact that the fermions are unstable; they are not in balance with the energy density at their positions. The concepts of conduction and convection of heat belong to the engineering category. Basically, the process of heat transfer is always radiation, and the temperature T is a numerical measure of the divergence of F. A net change of mass/energy inside the surface is associated with the process of heat transfer through S.
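A tiny numeric sketch (our addition) of this reading of temperature: for a spherically symmetric flow F = f(r)·r̂, the net outward flux through a sphere of radius R is 4πR²f(R); equal flux through nested spheres means zero divergence (equilibrium), while growing flux marks a “hot” region.

import math

def net_flux(f, R):
    # outward flux of a radial field f(r) r_hat through a sphere of radius R
    return 4 * math.pi * R**2 * f(R)

def f_equilibrium(r):
    return 1.0 / r**2    # 1/r^2 fall-off: divergence-free for r > 0

def f_hot(r):
    return 1.0 / r       # slower fall-off: positive divergence, a "hot" region

print(net_flux(f_equilibrium, 1.0), net_flux(f_equilibrium, 2.0))   # equal fluxes
print(net_flux(f_hot, 1.0), net_flux(f_hot, 2.0))                   # flux grows with R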

A Fermion in Thermal Equilibrium

A fermion in thermal equilibrium emits and absorbs alternatingly. The absorption and emission processes do not necessarily follow each other in a see-saw-like manner; the times a fermion spends in an excited state or in the next-lower state may be long or short, but one might define thermal equilibrium by stating that those times are, on average, equal. In this way thermal equilibrium can be defined for an individual particle.

Vortex Tube

[Figure: vortex tube. Picture credit: Wikipedia.] Hot and cold molecules of an individual gas can be separated by centrifugation because they have different masses. If Occam's razor is used to pick between this and any other proposed explanation of the vortex tube effect, the conclusion is obvious.

Revision of the Kinetic Energy of Gases

The frequency dependent multipole interaction described above is the one behind all equations of state, P(V,T), which relate the pressure to the volume and temperature of a fluid or gas.

The most fundamental equation of state is the Ideal Gas Law, PV = NkT. It is said to apply to ideal gases composed of randomly moving, non-interacting point particles.

In 1662, Robert Boyle published the results of his experiments on the compression and expansion of air under pressure. The results are expressed as Boyle's law: pV = constant. Newton showed how to derive Boyle's law by assuming that particles of air mutually repel with a force inversely proportional to their separation.

Newton originated the Static Theory of Gases. According to this, the forces which hold atoms together in a solid are attractive forces which give the solid its cohesion, but in a gas these change into repulsion.

In the 18th century most scientists accepted the Newtonian repulsion theory, which was compatible with the idea that heat is a fluid, “caloric.” It was thought to be composed of particles that repel each other but are attracted to the atoms of matter. Thus gas pressure increases with temperature because the gas acquires more of the self-repelling caloric fluid. Temperature itself could be defined as the density of caloric (amount of caloric fluid divided by volume).

The caloric theory was not clearly formulated mathematically. For this reason Laplace set out to refine Newton's gas theory. His aim was to give a theoretical basis and a quantitative explanation of the empirical laws of gases within the caloric theory of heat. This would have been the crucial test of the caloric theory.

But finally, Laplace failed in his attempt to derive the laws of gases from the caloric theory. He admitted that the only way to carry out the derivation was to retain only the repulsive force. [ According to Laplace's theory, the action between two molecules of gas is actually the product of four forces, three of them attractive. ]

Then came Count Rumford, his cannon-boring experiments and his questions:

“What is heat? Is there any such thing as an igneous fluid? Is there anything that can with propriety be called caloric? ...It is hardly necessary to add that anything which any insulated body, or system of bodies, can continue to furnish without limitation cannot possibly be a material substance: and it appears to me to be extremely difficult, if not quite impossible, to form any distinct idea of anything, capable of being excited and communicated, in the manner the heat was excited and communicated in these experiments, except it be MOTION.” [ Benjamin Thompson (Count Rumford), Philosophical Transactions (vol. 88), 1798. ]

The caloric theory could not explain the heat resulting from friction, or melting or evaporation. Therefore, in the middle of the 19th century, after about 75 years of existence, the caloric theory was abandoned and replaced with the kinetic-molecular theory. The kinetic theory of gases and the dynamical theory of heat were developed at the same time and largely by the same people. The new science of thermodynamics was born, and it is founded on Herapath's hypothesis.

Excerpt from Annals of Philosophy 1, 1821, pp. 278, 280-1: “...it struck me that if gases, instead of having their particles endued with repulsive forces, subject to so curious a limitation as Newton proposed, were made up of particles, or atoms, mutually impinging on one another, and the sides of the vessel containing them, such a constitution of aeriform bodies would not only be more simple than repulsive powers, but, as far as I could perceive, would be consistent with phenomena in other respects, and would admit of an easy application of the theory of heat by intestine motion.” [ John Herapath. ]

Applying Newton's laws to Herapath's idea leads easily to the following expression:

PV = \tfrac{1}{3} N m v^2.

But we also have the Ideal Gas Law, first written down in 1834 by Emil Clapeyron:

PV= NkT.

It is almost inevitable that one draws the conclusion that the temperature T is proportional to the kinetic energy of the gas particles. This is exactly what has happened. In the kinetic gas theory, pressure is explained by gas particles colliding with the walls of a container. Gas pressure is the result of the collisions, through the change in momentum of the gas particles per unit surface area per second. But ultimately, the forces that change the momentum of the gas particles are electromagnetic, repulsive forces. Therefore, it is advantageous to write mv² in the equation above as mv·v, in which mv is the momentum of a photon = p, and the velocity v is the velocity of the photon = c. All this is carried out without changing the numerical value of mv². Now we have

PV = \tfrac{1}{3} N p c = \tfrac{1}{3} N E_{\mathrm{photon}} = \tfrac{1}{3} N \hbar\omega.

PV depends only on the number of photons and their energy. There is no reference to the motion of gas particles. The only thing that can be said of the velocity of gas particles is that, if they had some velocity, it would soon dissipate by the mechanism of Doppler cooling.

The physical significance of the Boltzmann constant k (in the Clapeyron equation) is that it provides a measure of the amount of energy (i.e., heat) corresponding to the random thermal motions of the particles making up a substance. In our theory we consider this wrong.

According to the above,

kT = \tfrac{1}{3}\hbar\omega, \qquad PV = \tfrac{1}{3} N \hbar\omega.

These equations are derived by means of the present concepts of physics. The Boltzmann constant can be used without any connection to the velocity of molecules. kT is proportional to the average energy of photons emitted by electrons at an absolute temperature T, but the temperature is the spectral temperature, which is defined according to the wavelength at which the electromagnetic (EM) energy that an object emits is greatest.

Thermodynamic temperature, the measure of kinetic energy, must be abandoned.
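An illustrative number (our addition) from the relation kT = (1/3)ħω: at room temperature this places the average boson in the thermal infrared, consistent with the spectral reading of temperature given above.

import math

k = 1.381e-23      # Boltzmann constant, J/K
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

T = 300.0
omega = 3 * k * T / hbar                 # from kT = (1/3) hbar omega
wavelength = 2 * math.pi * c / omega
print(f"omega = {omega:.2e} rad/s, wavelength = {wavelength * 1e6:.0f} micrometres")  # ~16 um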

Real gases are not ideal. They have complicated relationships between pressure, density and temperature. Furthermore, they show a phase transition between gas and liquid states. Johannes Diderik van der Waals wrote in 1873: “That the same substance at the same temperature and pressure can exist in two very different states, as a liquid and as a gas, is a fact of the highest scientific importance, for it is only by the careful study of the difference between these two states, the conditions of the substance passing from one to the other, and the phenomena which occur at the surface which separates a liquid from its vapor, that we can expect to obtain a dynamical theory of liquids.”

The van der Waals equation is the second most simple equation of state; only the ideal gas law is simpler. It was the first equation capable of representing vapor-liquid coexistence. It works quite well for interacting gases if the form of interaction is described by the Lennard-Jones potential.

In 1901 Heike K. Onnes introduced a generalization of the ideal gas law. It is called the virial expansion and it expresses the pressure as a power series in the density:

\frac{P}{kT} = \rho + B_2(T)\,\rho^2 + B_3(T)\,\rho^3 + \cdots.

We can now apply our equation kT = constant·ħω and see what follows as a result:

P = \text{constant} \cdot \hbar\omega \left(\rho + B_2 \rho^2 + B_3 \rho^3 + \cdots\right).

Since ρ is the number density, we can see that the pressure P depends on the photon density. In other words, the pressure P depends on the spectral energy density of electromagnetic radiation. But all the complexities following from differing distributions of photon energies and the resonant nature of the forces should be included.

Isaac Newton wrote in his Principia (1687): “If a gas is composed of particles that exert repulsive forces on their neighbors, the magnitude of force being inversely as the distance, then the pressure will be inversely as the volume.”

Why was Newton's repulsion theory abandoned? It is said that the kinetic theory of gases is founded on “Herapath's hypothesis.” He wrote that: “...it struck me that if gases, instead of having their particles endued with repulsive forces, subject to so curious a limitation as Newton proposed...” So Newton's proposal was seen as an unnecessary complication, but it fits perfectly into the electromagnetic view! (A repulsive force of the form 1/r, of course, approximates the real force only at the beginning of the repulsive regime of the pair potential.)

The problem comes down to this: There should exist a force that is cohesive over short distances but repulsive over long distances for the same molecules. As van der Waals put it: “That the same substance at the same temperature and pressure can exist in two very different states, as a liquid and as a gas, is a fact of the highest scientific importance, ...”

The physical background of these phenomena is explained in our theory. The molecules of solid matter and liquids have interleaved spectra, but the molecules of vapor have mutually similar, non-interleaved spectra. This is the mechanism behind all nucleation processes and phase transitions.

The most fundamental reason behind all this is nature's tendency to form structures of minimum energy. A mathematical expression for this tendency is Gauss's principle of least constraint. The particles of a system move until there is a three-dimensional structure in which the particle positions are determined by local minima of force/energy.

In this process the spectra of the particles become interleaved. At the macroscopic level, we say that “entropy is maximized”. Returning from the interleaved condition to non-interleaved spectra requires energy, as was discussed earlier.

In the name of consistency, our theory revives Newton’s proposal and includes it in an equation of state in which pressure is the photon pressure.

As noted earlier, in kinetic gas theory the pressure is caused by gas molecules elastically impacting the walls of a container. But one may well ask, what is the force that causes changes of momentum of molecules hitting the wall? It is the electromagnetic repulsive force. It is in effect everywhere in the gas and not only at the wall.

The kinetic theory of gases was developed with certain assumptions about the nature and state of motion of the molecules of gases. These assumptions presuppose perpetual motion, which, in our view, is impossible.

On Laminar and Turbulent Flows

To understand these, one must have a little background knowledge of the structure of liquids:

Dr. Rustum Roy writes about the structure of liquid water [http://site.fixherpes.com/roy_structure_water.pdf]:

“... many scientists, due either to ignorance or powerlessness, hold the naïve view that all liquids, like most crystalline matter, are more or less completely homogeneous in structure down to the unit cell, atomic or molecular level...

The structure (architecture) and these assemblages or units themselves are dependent on temperature (hence its many anomalous property-temperature relationships), on pressure, and on composition. The structure is thus more responsive to composition including very low levels of solutes, to magnetic and electric fields, and to “subtle energies.”

On the basis of these well-established principles one can conclude that the structure of liquid water at say 25° C and 1 atm is a highly mobile assemblage of interactive clusters (dominantly perhaps of half a dozen different oligomers), with minor amounts of dozens of others, and possibly a few larger “polymers” in the 200 H2O range. What is very significant about this model is that this arrangement of a “zoo” of mixed sizes of molecules is also highly likely to be highly anisodesmic. First there will be a cluster of bond strength values around the typical hydrogen bond within the cluster, or in small molecules. But these intra-cluster bonds are likely to be much stronger than the inter-cluster van der Waals type bonds. Most interrogatory experimental tools may be inappropriate for making this distinction especially among its weakest bonds. Hence water is ideal for responses to small and large changes in all the intensive thermodynamic variables. Water is therefore probably the most easily changed phase of condensed matter known. It is this unique anisodesmicity, or structural and bonding heterogeneity, that helps explain its amazingly labile nature and hence the various extraordinary data, e.g. the clustering of water and solute in very dilute solutions reported by Samal and Geckeler, much of the ultra-dilution work, and the reported influences of very weak magnetic fields.”

Fluid flows can be divided into two distinct categories: smooth layered flows, known as laminar flows, and chaotic flows, also called turbulent flows. Application of the ideas above leads to the following picture: in a quiescent liquid, molecules have formed clusters, which in turn have formed structures of minimum energy. In laminar flow these structures are maintained in a way that prevents molecules of the same energy from coming into close interaction. [These structures can actually be seen in the infrared spectrum. The transition region between two Bénard cells has a continuum of temperatures. This means that the electrons in that region have the interleaved spectra required for laminar flow.] The breakdown of laminar flow occurs at a certain pressure gradient. At this point the orderly internal minimum-energy configuration is lost, molecules of the same (valence electron) energy come into sight of each other, and the forces between molecules suddenly increase. In our theory nucleation processes, phase transitions and the onset of turbulence in flows are all explained by the same mechanism.

Finally, we review some radiative phenomena in the light of our theory.

Radiation Reaction

J. D. Jackson in Classical Electrodynamics, in the introductory remarks of his chapter on “Radiation Damping, Classical Models of Charged Particles”, says that the problem of radiation reaction on the motion of charged particles is not yet solved. He opens his discussion of this subject as follows:

“It is well-known that accelerated charges emit electromagnetic radiation. … But in one class of radiation phenomena the source is a moving point charge or a small number of such charges. In such problems it is useful to develop the formalism in a way that relates the radiation intensity and polarization directly to the charge's trajectory and motion. For non-relativistic motion the radiation is described by the well-known Larmor result.”

In some older textbooks things are not so well-known. H. A. Wilson writes in his 1937 Modern Physics:

“Electromagnetic radiation is obtained in practice from electrical oscillations produced by the discharge of a condenser through a wire. In such cases, in which enormous numbers of electrons are involved, the radiation obtained agrees with that calculated by electromagnetic theory. Radiation from single electrons has not been observed, and according to the Quantum Theory the electrons in atoms do not radiate when they are moving around orbits and so have an acceleration. The success of quantum theory makes it possible that the expression just obtained for the radiation of an electron is erroneous, and in fact that the equations of the electron theory are probably only true when the density of electricity is taken to be the average density over a large volume containing a large number of electrons and atomic nuclei.”

Jackson derives the relativistic generalization of the Larmor formula and concentrates mainly on that. But both the special and general theories of relativity are erroneous. Taking these theories into account does nothing but obscure the issue. For instance, in general relativity the fundamental principle is the equivalence between gravity and acceleration. In textbooks, we are told that the radiation of an electron depends on its acceleration only. But an electron can be both stationary and accelerating. It can be at rest on the Earth's surface and yet be subject to a (gravitational) acceleration of about 9.8 m/s². Does the electron radiate or not?

“When we accelerate a charge, it radiates electromagnetic waves, so it loses energy. Therefore, to accelerate a charge, we must require more force than is required to accelerate a neutral object of the same mass; otherwise energy wouldn’t be conserved. The rate at which we do work on an accelerating charge must be equal to the rate of loss of energy by radiation. We have talked about this effect before—it is called the radiation resistance. We still have to answer the question: Where does the extra force, against which we must do this work, come from? When a big antenna is radiating, the forces come from the influence of one part of the antenna current on another. For a single accelerating electron radiating into otherwise empty space, there would seem to be only one place the force could come from—the action of one part of the electron on another part.”

This is how Feynman introduces the problem of radiation reaction which is still unsolved in the framework of quantum theory. (Chap. 28 of Vol. II.)

How Do We Know That “When We Accelerate a Charge, It Radiates Electromagnetic Waves”?

Michael Singer writes [http://www.imedpub.com/articles/electromagnetic-radiation-from-the-acceleration-of-charged-particles.pdf]:

ABSTRACT. Charged particles are held to radiate as a function of their acceleration in some situations. One well-known exception to this is in atomic structures, and some believe that classical electromagnetism cannot be used to describe atomic behavior because it requires that an electron radiates energy under acceleration. This is naive, for if accelerating charges radiate under some conditions but not others, every complete model of the universe must be able to predict precisely which behavior occurs in new situations, or accept that that model of the universe cannot be complete. ...

The concept that charged particles radiate electromagnetic energy seems to have arisen in an attempt to describe how X-rays arise, in Röntgen’s discovery of 1895. With the Thompson plum-pudding model of the atom the electrons were believed to smack into the plum pudding and stop dead. With this model the only possible explanation for the X-radiation was that the extreme deceleration during the impact had generated the radiation. This became so embedded in the scientific consciousness that ever since there has been little critical analysis of whether it does in fact occur. With the Rutherford model of the atom it became possible to describe the radiation as an interaction between the electron and the powerful fields inside atoms. The presence of K-shell radiation in the X-ray spectra confirmed this view; here, an incoming electron knocked a K-shell electron out of an atom and produced characteristic radiation as another electron fell into that shell to replace it. Although the velocity of the moving charges in the antenna is relevant to the induced magnetic field, Maxwell’s equations describe nothing that radiates as a function of charge acceleration.

After studying several radiating systems he concluded that: A charged particle undergoing acceleration cannot radiate energy purely as a consequence of that acceleration without violating the Principle of Conservation of Energy. The concept of accelerative radiation from charged particles may have no validity in an energy-conserving universe, and therefore in Classical Electromagnetism. If this is so, the fact that electrons orbiting inside atoms do not radiate presents no problem to Classical Electromagnetism.

In our theory every fermion is composed of a pair of fields, a standing wave and a scattered wave, so all fermions radiate all the time, regardless of whether they are accelerated or not.

The electron does not radiate into empty space; it radiates into the boson space, and from there it also receives radiation, so a balance of radiation can be established. A relevant concept here is the geodesic motion:

m = m(v) = m₀ / √(1 − (v_pm · v_ff)/c²).

As discussed earlier (Robert Kirkwood's Work on General Relativity), the indices of vectors refer to particle's motion and free fall. A fermion does not radiate if its motion follows the free-fall route. In a linear accelerator particles experience free-fall conditions and do not radiate, but if they are forced to deviate from their free-fall path by a wiggler magnet, they radiate. This gives the answer to the question of why accelerating charges radiate under some conditions but not others. Feynman’s problem with radiation reaction is this: What happens to the mass of something as it's radiating? The answer is: it depends on the nature of radiation.

Two Kinds of Radiation

There are two kinds of radiation from fermions. Thermal radiation (see the section Temperature) diminishes the volume of fermions by emitting quanta, and the emission process without doubt follows Newton’s third law. The other form of radiation is scattering. It becomes measurable only through collective orientation of fermion spins. Almost always (in electromagnetic devices) these two types of radiation occur simultaneously. In antennas, electrons are forced to oscillate back and forth by a generator; they are under periodic acceleration and deceleration. The radiation resistance of an antenna arises from the process of forcing the conduction electrons into an arrangement which is not the minimum energy configuration. Orientation is carried out by an alternating current. In this process the energy density inside the conductor changes periodically. The energy of the conduction electrons follows the changing energy density, and thermal radiation becomes an unavoidable part of the total radiation. The unwanted thermal radiation of the antenna consists of “newly born” bosons, emitted by the vibrating electrons, while the useful signal consists of bosons of background radiation, scattered from and modulated by the same conduction electrons. The number of free electrons and the degree of their spin orientation in a volume element of an antenna is described by the current density vector J.

Now that we have introduced an alternative foundation to quantum theory (and some applications, in the hope of further clarifying the basic ideas), it is of interest to review some early works and experiments that have led quantum physics into its present state. Some important experiments are reviewed.

On Formation of Quantum Physics

De Broglie's Hypothesis: a False Foundation

It started from de Broglie's concept of matter waves. Today one of the unquestioned dogmas of modern physics is that de Broglie's hypothesis, that particles exhibit wave properties, was confirmed by the Davisson–Germer experiment. We must review this experiment in the light of our new understanding, because the experiment was one of those carried out with accelerated electrons. How does it fit into the picture given above?

Let us look at de Broglie's ideas in some detail. Some of the following remarks have appeared in “De Broglie’s thesis: A critical retrospective” by Edward MacKinnon, Am. J. Phys., 1976, v. 44, 1047–1055.

De Broglie started from the relation hν₀ = m₀c², and stated that “This hypothesis is the basis of our theory.” He also wrote down an equation for kinetic energy:

E_kin = m₀c² ( 1/√(1 − v²/c²) − 1 ).

This is a reasonable start because the equations are quite correct. But then: “Having recalled the above... we now seek to find a way to introduce quanta into relativistic dynamics.”

De Broglie’s considerations led him to assign three different frequencies to the same particle:

ν₀ = m₀c²/h,   ν₁ = ν₀ √(1 − v²/c²),   ν = ν₀ / √(1 − v²/c²).

These are, respectively: the internal frequency in the rest system, the internal frequency as measured by an external observer who sees the system moving with velocity v, and the frequency this observer would associate with the particle’s total energy. According to de Broglie, a particle of rest mass m₀ and velocity v was associated with an inherent periodic phenomenon of frequency f, ...always in phase with an accompanying wave:

f = (1/h) m₀c² / √(1 − β²),   always “in phase” with   f₁ = (1/h) m₀c² √(1 − β²),   β = v/c.

Here we have reduced the number of equations to two by substitution. It is easy to see that the frequencies f and f₁ behave in exactly opposite manner as the velocity v increases toward the speed of light. How can they be “in phase”?
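A quick numerical check (ours, in Python; the electron mass and SI constants are the only inputs) makes the opposite behavior explicit:

import numpy as np

h = 6.62607e-34      # Planck constant, J s
m0 = 9.10938e-31     # electron rest mass, kg
c = 2.99792e8        # speed of light, m/s
nu0 = m0 * c**2 / h  # rest frequency m0*c^2/h

for beta in (0.1, 0.5, 0.9, 0.99):
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    f, f1 = nu0 * gamma, nu0 / gamma   # total-energy frequency vs. time-dilated frequency
    print(f"beta={beta:.2f}  f={f:.3e} Hz  f1={f1:.3e} Hz")   # f rises, f1 falls

De Broglie explains: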

“Now let us suppose that at the time the moving object coincides in space with a wave of frequency defined above and propagating in the same direction as it does with the speed c/β . This wave, which has a speed greater than c, cannot correspond to transport of energy; we will only consider it as a fictitious wave associated with the motion of the object.” In the end he writes down the following equations:

hν = m₀c² / √(1 − β²),   from which one deduces:   β = √(1 − (m₀c²/hν)²).

On these he comments: “Unfortunately, it is also subject to a perplexing difficulty: for decreasing frequencies ν, the velocity βc of energy transport also gets lower, such that when hν = m₀c², it vanishes or becomes imaginary (?).”

This and the need of a fictitious wave should have been seen as a warning that something is fundamentally wrong. But no. De Broglie got his Nobel Prize in 1929 and quantum wave mechanics got its false foundation.

How the Fictitious Wave Came to be

De Broglie's failure was embedded in Special Relativity. He stumbled on a paradox by first assuming that matter has an “internal periodic phenomenon”, a wave nature of some sort. Equating the rest energy of the particle, m₀c², to the energy–frequency relation Einstein discovered for the photoelectric effect, E = hν, he calculated a “rest frequency” ν₀ = m₀c²/h. If the massive particle is moving with velocity V, then it will have relativistic energy mc² = γm₀c², where γ ≡ (1 − V²/c²)⁻¹/², which implies a frequency ν = γm₀c²/h. So far everything is correct. But then, according to relativity, moving clocks run slower and the internal periodic phenomenon should have a time-dilated frequency of ν₁ = ν₀/γ. De Broglie believed relativity to be valid, demanded that these two distinct oscillations should remain in phase, and found out that this occurs for a fictitious wave of phase velocity V_ph = c²/V.
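The phase-matching argument can be written out in two lines; the following short reconstruction (ours, in LaTeX, using only the quantities just defined) shows where the superluminal phase velocity comes from:

% Internal oscillation nu_1 = nu_0/gamma must stay in phase, at the particle
% position x = V t, with a wave of frequency nu = gamma*nu_0 moving at V_ph:
\frac{\nu_0}{\gamma}\,t = \gamma\nu_0\!\left(t - \frac{Vt}{V_{\mathrm{ph}}}\right)
\;\Rightarrow\; \frac{1}{\gamma^{2}} = 1-\frac{V}{V_{\mathrm{ph}}}
\;\Rightarrow\; \frac{V}{V_{\mathrm{ph}}} = \frac{V^{2}}{c^{2}}
\;\Rightarrow\; V_{\mathrm{ph}} = \frac{c^{2}}{V}.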

What Was Confirmed by the Davisson-Germer Experiment?

The Davisson–Germer experiment was a physics experiment conducted by American physicists Clinton Davisson and Lester Germer in 1923–1927, which confirmed the de Broglie hypothesis. (Wikipedia)

From the article The Diffraction of Electrons by a Crystal of Nickel by C. J. Davisson: “… And any reluctance we may feel in treating electron scattering as a wave phenomenon is apt to be dispelled when we find that the value calculated for the wave-length of the equivalent radiation is in acceptable agreement with that which L. de Broglie assigned to the waves which he associated with a freely moving particle – that is to say, the value h/mv (Planck's constant divided by the momentum of the particle). The quantities coordinated in each case are the wave-length of the incident beam as calculated from λ = h/mv = (150/V)^½ and the sine of the colatitude angle of the diffraction beam as observed.”

We may notice that Davisson's lines of extrapolation start from wavelength 0. This would mean a wave of infinitely high frequency. But this comes out from one of de Broglie's three equations only by putting v = c. Anyhow, it is obvious that there is some kind of oscillation associated with the beam of electrons. But we have to impose restrictions on its nature: it must start with zero frequency at zero velocity. So what is it?

“Every one must have observed that when a slip of paper falls through the air, its motion, though undecided and wavering at first, sometimes becomes regular.” James Clerk Maxwell.

A falling leaf does not reach the ground with a straight vertical trajectory. Elongated objects fall down oscillating from side to side, drifting sidewise and spinning about an axis. They can also have more complicated gyrational motions, or they can fall with a particular combination of these modes. Electromagnetic particles are of dipole nature. In a sense, they are elongated objects in free fall if accelerated in an electric field. We make the assumption that the results of the Davisson–Germer experiment are due to axial precession of the electrons, the axis of precession being perpendicular to the direction of acceleration, by analogy with macroscopic objects. This is consistent with what is known from X-ray generators: the highest intensity of X-ray photons is 60° to 90° from the beam.

Our conclusion is that the results of the Davisson–Germer experiment do not confirm the existence of de Broglie's fictitious wave. Instead, the experiment suggests that the electron has an electric dipole moment (EDM), which has been sought for more than half a century but still eludes observation. To relate a particle's energy to the oscillation discovered in the Davisson–Germer experiment is simply wrong. The oscillation in question is a manifestation of an extra degree of freedom of motion. It follows naturally from the non-point-likeness and dipole nature of particles. The fictitious wave came to be because de Broglie believed that time dilation exists.
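Davisson's working relation quoted above is easy to check numerically; the following sketch (ours, in Python; SI constants, voltages illustrative) reproduces λ = h/mv ≈ (150/V)^½ ångströms for an electron accelerated through V volts:

import numpy as np

h = 6.62607e-34      # Planck constant, J s
m = 9.10938e-31      # electron mass, kg
e = 1.60218e-19      # elementary charge, C

for V in (54.0, 150.0, 600.0):                 # 54 V was the Davisson-Germer peak
    v = np.sqrt(2 * e * V / m)                 # non-relativistic speed from eV = mv^2/2
    lam = h / (m * v) * 1e10                   # wavelength h/mv in angstroms
    print(f"V={V:6.1f} V  lambda={lam:.3f} A  sqrt(150/V)={np.sqrt(150/V):.3f} A")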

Why Is It Believed That Time Dilation Exists? The Ives–Stilwell Experiment

We are now in a position to discuss this experiment, because we have the necessary results from earlier sections: 1. Electromagnetic particles are sensitive to the energy density in their immediate surroundings and gain or lose energy following that density: m = m(v) = m₀/√(1 − v²/c²).

2. Light is emitted from individual particles in the formation of a minimum energy structure, namely a molecule.

[Illustration omitted. Credit: Wikipedia.]

The present concept of a “vibrational ground state” must be replaced by another, because all vibrations (except the internal oscillation of particles) decay as the ground level is reached. The ground level is determined by the energy of the nucleons, and this can be raised by setting the nuclei in motion. The ground states of the molecules in the Ives–Stilwell experiment are lifted, as expressed by the equation m = m(v) above. As the free electrons (due to ionization) and the nuclei of hydrogen molecules with increased energy form neutral, temporary molecules, their emission spectra show a “Stokes’ shift” in addition to the Doppler effect.

The concluding lines of the paper An Experimental Study of the Rate of a Moving Atomic Clock – II by Herbert E. Ives and G. R. Stilwell (1941) are telling: the authors interpreted the result of their experiment in favor of the electromagnetic theory of Larmor and Lorentz, which was based on the aether.

W. Kündig's Mössbauer Rotor Experiment

Wikipedia introduces this experiment as a more precise test of Einstein’s time dilation formula. The maximum deviation from the theoretical value is claimed to be 10⁻⁵, much smaller than in Ives–Stilwell experiments (10⁻²).

In the early 1960s (soon after the discovery of the Mössbauer effect) a series of experiments in rotating systems was carried out to verify the time dilation effect. The experiments provided a value k for the equation ΔE/E = ±k u²/c², where k = 0.5 according to Special Relativity. From all of these experiments the value k = 0.5, within an accuracy of about 1%, was reported.

A new wave of interest in the Mössbauer experiments in a rotating system emerged after publication of the paper Mössbauer experiment in a rotating system: The change of time rate for resonant nuclei due to the motion and interaction energy by A. L. Kholmetskii, T. Yarman and O. V. Missevitch, where serious methodological errors in the available experiments from the 1960s were revealed.

In that paper Kholmetskii et al. pointed out that almost all the authors ignored the distortions due to the chaotic mechanical vibrations in the rotor system, which are always present. They realized that there was only one experiment, namely that by W. Kündig, which was free of the influence of mechanical vibrations on the measured value of k.

This experiment, as reanalyzed by Kholmetskii et al., shows that k = 0.596 ± 0.006, which drastically deviates from the relativistic prediction. They also carried out an experiment of their own and came up with k = 0.68 ± 0.03. (Novel Mössbauer experiment in a rotating system and the extra energy shift between emission and absorption lines by T. Yarman, A. L. Kholmetskii et al.)

From all this it is clear that there is an “extra time dilation” in rotor Mössbauer experiments which requires explanation. To explain, we recall the important statement from the section On Some Constants:

If two similar electrons (or fermions in general) are slowly approaching each other in free space, they both feel increasing energy density and absorb energy. If the distance between the two is then slowly increased, they both emit quanta whose energy is determined by the local energy density, which in this case depends solely on the distance between the two electrons (fermions). This mechanism is a physical interpretation of the fine structure constant and is fundamental for our theory. It generalizes as follows: Particles inside a solid are sitting in potential holes, at the minimum of the Lennard–Jones potential (or something very similar to it).

If the solid is exposed to stress or strain, particles are forced nearer to, or farther from each other, into positions in which they feel an increased or decreased energy density. If the particles in these changed conditions emit light, it is blueshifted or redshifted, respectively.

This picture is taken from the article The Color of Stress, Discover Magazine, March 1997.

The infrared image of a heavy-duty lifting hook shows that the regions of compression emit blue-shifted light and regions of stretch emit red-shifted light. The article explains that “Stretching cools the metal, just as a gas cools when it expands; compression heats it.” From this one should draw the conclusion that there must be a flow of energy from hotter regions to cooler regions. But this cannot happen in a static situation. What would be the source of such a flow of energy?

The image proves to us the following: all parts of the hook are in thermal equilibrium, but the particles in blue-shifted regions feel increased energy density. Therefore, their internal energy is higher and they emit blue-shifted light. The converse is true in stretched regions, which is the case in the Mössbauer rotor experiments. Stretching is caused by the centripetal force.

So there are two causes for the changes of resonance frequencies in Mössbauer rotor experiments: velocity of fermions and the tension caused by rotation. Ultimately, both are means of changing the energy density felt by particles in the rotor. The experiment by W. Kündig, as reanalyzed by Kholmetskii & al., shows that k = 0.596 ±0.006. This result can now be understood. Relativity is not needed in the explanation.

Lifetimes of Relativistic Particles

Unlike electrons, muons are short-lived and quickly decay into other particles. Laboratory experiments show that their mean lifetime is 2.2 microseconds. Muons are created at high altitudes, where the top of the atmosphere is bombarded by solar and cosmic protons. The generated muons rain down at high speed, some of them decaying partway down and others making it all the way to the ground at sea level.

It has been observed that the lifetime of muons in cosmic rays increases with the Lorentz factor γ = (1−β²)⁻¹/². This is claimed to prove that the “internal clocks” of muons tick much more slowly than the clocks in the laboratory. But to attribute to each muon a time of its own, which moreover would dilate with speed, is just plain wrong. In our theory the factor γ gives the increase of energy density felt by a moving particle. The willingness of a particle to emit or decay depends on the energy density at its location, so a high-speed particle is slower to decay.
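Whatever the interpretation, the observed γ-scaling itself is easy to quantify; a back-of-envelope sketch (ours, in Python; the 15 km production altitude and β = 0.995 are illustrative assumptions):

import numpy as np

c = 2.998e8                            # m/s
tau0 = 2.2e-6                          # mean lifetime at rest, s
beta = 0.995                           # assumed muon speed
gamma = 1.0 / np.sqrt(1.0 - beta**2)   # ~10
t_flight = 15e3 / (beta * c)           # time to descend 15 km

print(np.exp(-t_flight / tau0))            # ~1e-10: essentially none would survive
print(np.exp(-t_flight / (gamma * tau0)))  # ~0.1: a measurable fraction survives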

Furthermore, we opine that an explanation (or theory) that appeals to a “local time” is not a scientific explanation at all.

Newslines from Physicsworld.com, Dec 4, 2013: Mystery of neutron-lifetime discrepancy deepens

A re-evaluation of a 2005 measurement of the lifetime of the neutron has deepened the mystery of why two different experimental techniques yield two different neutron lifetimes. After recalibrating a key part of their “beam” experiment, physicists in the US confirmed that their value for the neutron lifetime is 8 s longer than that determined by others who had done a “bottle” experiment. …

There are currently two experimental strategies for measuring the lifetime. The bottle method involves trapping neutrons in some kind of container and counting the proportion remaining after a fixed time. The beam method involves monitoring a beam of neutrons and measuring the number of the particles that decay to protons as it passes through a particular volume of space. …

But bottle and beam experiments do not agree. The beam method seems to give a lifetime about 8 s longer than the bottle method, and this discrepancy is significant when compared with the uncertainties of the experiments.

According to the above, the mystery disappears if these two methods observe neutron decay at two different values of ambient energy density.

Some More on the Michelson-Morley Experiment

Because the measured ether movement came nowhere near the expected 30 km/sec, the science community invariably considered the Michelson-Morley results as “null.” Still, there were a few voices that did not consider the results trivial. In 1902 W. M. Hicks, a British mathematician and physicist (a student of Maxwell’s), made a thorough criticism of the experiment and concluded that instead of giving a null result, the numerical data published in Michelson-Morley’s paper show distinct evidence of an expected effect (i.e., ether drift).

W. M. Hicks, “On the Michelson-Morley Experiment relating to the Drift of the Aether”, Philos. Mag., 3, 9 (1902):

The theory is not so simple as it may appear at first sight owing to the changes produced by actual reflexion at a moving surface. The correction due to alteration in the angle of reflexion was first introduced by Lorentz, and was taken account of in the joint paper by Michelson & Morley in 1887. But reflexion produces also a change in the wavelength of the reflected light. Further, when the source of light moves with the apparatus, the light incident at any instant on a plate does not come from the position occupied by the source at that instant, but from a point which it occupied at some interval before... (Our emphasis.)

[In other words: Doppler effect and aberration must be taken into account. But Hicks did not reach the same conclusion as we did in the analysis of the Michelson interferometer. This must be due to some approximations he made in his calculations, e.g., dropping terms of the form v²/c². Our analysis is free from any approximations.]

In looking at the sets of readings, one is struck at once with the fact that all the readings continuously increase or decrease. This is evidently the effect of temperature changes. For short intervals, it is extremely likely that the temperature disturbances will be a linear function of time. If this is exactly so, and if the readings were taken at equal intervals of time, it is possible to eliminate the disturbances due to this. For the readings at the beginning and at the end of a complete revolution ought (in absence of temperature effects) to be the same...

The preceding attempt to get rid of the temperature effect is not proposed as one which gives an accurate result. The object is to show that the observations of Michelson and Morley do give an affirmative answer to the question “Is there a drift of aether past the earth?” The argument is sufficient to show that the experiments should be repeated with extreme care to eliminate temperature errors, and if possible in vacuo.

It seems obvious that, after reading this (the emphasized text above), Einstein consistently pushed the “thermal artifact” argument against all experiments similar to the MM experiment, first of all the Miller experiment.

Miller Experiment

The now famous Michelson-Morley experiment is widely cited, in nearly every physics textbook, for its claimed “null” or “negative” results. Less known, however, is the far more significant and detailed work of Dayton Miller. Miller's work, which ran from 1906 through the mid-1930s, most strongly supports the idea of an ether-drift, of the Earth moving through a cosmological medium. Dayton Miller's 1933 paper in Reviews of Modern Physics details the positive results from over 20 years of experimental research into the question of ether-drift, and remains the most definitive body of work on the subject of light-beam interferometry.

“My opinion about Miller's experiments is the following. ... Should the positive result be confirmed, then the special theory of relativity and with it the general theory of relativity, in its current form, would be invalid. Experimentum summus judex. Only the equivalence of inertia and gravitation would remain, however, they would have to lead to a significantly different theory.” — Albert Einstein, in a letter to Edwin E. Slosson, 8 July 1925 (from copy in Hebrew University Archive, Jerusalem.)

“The trouble with Prof. Einstein is that he knows nothing about my results,” Dr. Miller said. “He has been saying for thirty years that the interferometer experiments in Cleveland showed negative results. We never said they gave negative results, and they did not in fact give negative results. He ought to give me credit for knowing that temperature differences would affect the results. He wrote to me in November suggesting this. I am not so simple as to make no allowance for temperature.” (Cleveland Plain Dealer newspaper, 27 Jan. 1926)

(Miller's control experiments showed that, for his insulated apparatus, shifts approximating 0.07 fringe can only be produced by strong thermal heat sources.)

Certainly Einstein knew of the results of the Miller experiment. Already in June 1921 he wrote to Robert Millikan: “I believe that I have really found the relationship between gravitation and electricity, assuming that the Miller experiments are based on a fundamental error. Otherwise, the whole relativity theory collapses like a house of cards.” (What if Einstein had been shown the results from the Cosmic Microwave Background experiments?)

Miller reported in 1925 the data from the first three epoch periods:

“The curves for the three epochs were simply averaged and it was found that when plotted in relation to local civil time, the curves are in such phase relations that they nearly neutralize each other; the average effect for the three epochs thus plotted is very small and unsystematic. The curves of observation were then plotted with respect to sidereal time and a very striking consistency of their principles was shown to exist, not only among the three curves for azimuth and those for magnitude, but, what was more impressive, there was a consistency between the two sets of curves, as though they were related to a common cause. The average of the curves, on sidereal time, showed conclusively that the observed effect is dependent upon sidereal time and is independent of diurnal and seasonal changes of temperature and other terrestrial causes and that it is a cosmical phenomenon.” Dayton Miller.

[Raw data graph: Mt. Wilson, Sept. 23, 1925. Raw data (= without sequential re-adjustments).]

A problem with this experiment was the following. In Miller’s own words: “The results show a definite displacement, periodic in each half revolution of the interferometer, of the kind to be expected, but having an amplitude of one tenth of the presumed amount.” Another curiosity was noticed by W. M. Hicks: “In looking at the sets of readings, one is struck at once with the fact that all the readings continuously increase or decrease. ...”

Why Were Velocities in Morley–Miller Experiments Found to Be Much Smaller than Expected?

The expectation in the ether-drift interferometers used in the Morley–Miller experiments was that the effect would be graphable as a sine wave with two peaks and two troughs per rotation of the device. (This result could be expected because during each full rotation each arm would be parallel to the wind twice, facing into and away from the wind, and perpendicular to the wind twice.) Additionally, due to the Earth's rotation, the wind would be expected to show periodic changes in direction and magnitude during a sidereal day. To understand the velocity anomaly we must first recall our analysis of the Michelson interferometer:

The Effect of Rotation on Michelson and Sagnac Interferometers

The interferometer's motion in space can be resolved into translational and rotational components, the latter with angular velocity around an axis through the beam splitter. The mirror equation reads

k = k₀ (1 − n_i · v_M/c) / (1 − n_r · v_M/c).   (Mirror equation)

If one studies the mirror equation, there is no Doppler shift if the mirror moves at right angles to both the incoming and the reflected beam, because only then

n_i · v_M = n_r · v_M = 0.

This can happen only if the two beams are directed along the normal of the reflecting surface. In pure rotation, this is the case in Michelson’s interferometer. The Michelson interferometer is insensitive to rotation. Since it cannot detect linear motion either, the instrument and all its variants (like the ones used in the experiments of e.g. Kennedy–Thorndike, Cialdea et al., Jaseja et al., and A. Brillet & J. L. Hall) are totally useless in determining our velocity in absolute space. Instead, the Sagnac interferometer can reveal the existence of absolute rotation. Its construction is such that for each of its mirrors n_i · v_M ≠ 0 and n_r · v_M ≠ 0, if v_M (caused by rotation) > 0.
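A minimal sketch (ours, in Python; the wave number, speeds and geometry are illustrative) of the mirror equation above, showing that the reflected wave number is unshifted exactly when the mirror velocity is normal to both beam directions:

import numpy as np

def reflected_k(k0, n_i, n_r, v_m, c=2.998e8):
    # k = k0 * (1 - n_i . v_m / c) / (1 - n_r . v_m / c)
    return k0 * (1 - np.dot(n_i, v_m) / c) / (1 - np.dot(n_r, v_m) / c)

k0 = 1.0e7
n_i = np.array([1.0, 0.0, 0.0])          # incoming beam along +x
n_r = np.array([-1.0, 0.0, 0.0])         # reflected beam back along -x
v_tangent = np.array([0.0, 100.0, 0.0])  # mirror moving at right angles to both beams
v_normal = np.array([100.0, 0.0, 0.0])   # mirror moving along the beams

print(reflected_k(k0, n_i, n_r, v_tangent) / k0)  # 1.0: no Doppler shift
print(reflected_k(k0, n_i, n_r, v_normal) / k0)   # != 1: Doppler shift present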

Now let us take a look at the Kennedy – Thorndike experiment.

The mirror M1 and the beam splitter B are not tangential to the rotation of the apparatus. Therefore, the Doppler effect is present in the experiment. This effect is always present in the Sagnac interferometer. (The Sagnac effect can even be understood as a cumulative Doppler effect.)

The experimenters wrote the following: “It was intended when the experiment was proposed to look chiefly for an effect of a change of velocity due to the orbital rather than the rotational motion of the earth. However, with the first apparatus constructed, in which the mirrors were mounted in invar frames, it was found impossible to eliminate a slow, rather irregular variation in the interference pattern which would have masked the effect sought; hence it was decided to concentrate on the possible rotational effect.”

Messrs. Kennedy and Thorndike were forced to concentrate on rotational effects, because that is the field of Sagnac interferometers, and in effect their apparatus is a Sagnac interferometer.

Thinking about the requirement expressed by the vector dot-product equation above turns out to be crucial to understanding. It can be fulfilled only in an interferometer with mirrors of infinitesimally small area. Even if the mirrors are tangential to the rotation, all mirrors of finite area break the normality rule everywhere except in the middle.

The first reaction of W. M. Hicks to the Michelson–Morley data was: “In looking at the sets of readings, one is struck at once with the fact that all the readings continuously increase or decrease. ...” The reason is this: all the ether drift interferometers (of Michelson–Morley type) are incapable of measuring the translational velocity (as proved by our analysis), but they all possess residual sensitivity to the Sagnac effect! In the Kennedy–Thorndike interferometer this is obvious, but in Michelson–Morley interferometers it must be identified as a “second order Sagnac effect”. Everything is now clear: the signal in the Miller experiment is “periodic in each half revolution of the interferometer of the kind to be expected, but having an amplitude of one tenth of the presumed amount.” Besides this, there is the problem of the continuous, linear changing of the readings.

After our analysis of the interferometers in question, the expected result in the Miller experiment is a “sinusoidal signal superposed on a linearly and continuously increasing or decreasing background signal”, and that is exactly what can be seen in the raw data graph above. But the amplitude of the readings cannot be directly connected to velocities. The absolute velocity of the apparatus can only be determined by analyzing the beat waves at the optical paths inside the device, and that is what was carried out in the Silvertooth experiment. The result was the same as later determined from the CMB measurements.

Our results render completely flawed the conclusions in the article An Explanation of Dayton Miller’s Anomalous “Ether Drift” Result by Thomas J. Roberts, from which the raw data graph is taken. For the interested reader we recommend a history of the doings of Mr. Robert S. Shankland, who built his professional career upon publications misrepresenting the Michelson-Morley experiments, and to whom Einstein wrote:

“I thank you very much for sending me your careful study about the Miller experiments. Those experiments, conducted with so much care, merit, of course, a very careful statistical investigation. This is more so as the existence of a not trivial positive effect would affect very deeply the fundament of theoretical physics as it is presently accepted. You have shown convincingly that the observed effect is outside the range of accidental deviations and must, therefore, have a systematic cause [having] nothing to do with 'ether wind', but with differences of temperature of the air traversed by the two light bundles which produce the bands of interference.”

Dayton Miller's Ether-Drift Experiments: A Fresh Look by James DeMeo, Ph.D.

Absence of the Relativistic Transverse Doppler Shift at Microwave Frequencies

by Hartwig W. Thim, Life Senior Member, IEEE.

Abstract: An experiment is described showing that a 33 GHz microwave signal received by rotating antennas is not exhibiting the frequency shift (“transverse Doppler effect”) predicted by the relativistic Doppler formula. The sensitivity of the apparatus used has been tested to be sufficient for detecting frequency shifts as small as 10⁻³ Hz, which corresponds to the value of (v/c)² = 5×10⁻¹⁴ used in the transverse Doppler shift experiment reported here.

From the observed absence of the transverse Doppler shift it is concluded that either the time dilation predicted by the standard theory of special relativity does not exist in reality or, if it does, is a phenomenon which does not depend on relative velocities but may be a function of absolute velocities in the fundamental frame of the isotropic microwave background radiation.
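The quoted sensitivity figure is easy to verify; a one-line check (ours, in Python) of the second-order shift Δf ≈ f·(v/c)²/2 predicted by the relativistic formula at 33 GHz:

f = 33e9          # carrier frequency, Hz
beta_sq = 5e-14   # (v/c)^2 quoted in the abstract
print(f * beta_sq / 2)   # ~8e-4 Hz, i.e. of order 10^-3 Hz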

We reviewed the experiments above in order to answer the self-posed question “Why is it believed that time dilation exists?” Time dilation is an integral part of de Broglie’s wave–particle duality theory of matter, and thereby also an integral part of the theory of quantum mechanics, because Schrödinger followed de Broglie’s matter-wave hypothesis. But we are working under the heading On Formation of Quantum Physics, and now we return to that subject.

A Brief Genesis of Schrödinger's Equation

Erwin Schrödinger believed that “...material points consist of, or are nothing but, wave systems.” He accepted de Broglie's hypothesis and set out to find a wave description for the particles of matter. Schrödinger's “point” was not a mathematical abstraction: “The original picture was this, that what moves is in reality not a point but a domain of excitation of finite dimensions...” (Albert Einstein shared this view: “We may therefore regard matter as being constituted by the regions of space in which the field is extremely intense. ... There is no place in this kind of physics both for the field and matter, for the field is the only reality.”) In 1925 Schrödinger gave a talk on de Broglie’s 1924 wave thesis that every particle of momentum p had a wavelength λ = h/p associated with it. Peter Debye dismissed the talk as “childish”, remarking that to deal properly with waves, one had to have a wave equation. Later Schrödinger took a two-week vacation to the Swiss Alps with his girlfriend (whose role was to act as a muse), took with him de Broglie's thesis, and began to work on wave descriptions. After the vacation Schrödinger started his next talk by saying: “My colleague Debye suggested that one should have a wave equation; well, I have found one!” But what was waving, he could not explain.

∇²ψ + k²(x)ψ = 0,   k²(x) = 2m[E − V(x)]/ħ².

Schrödinger's invention was effectively to take the classical wave equation and merge it with concepts from Hamilton's mechanics of a mass point. He just took de Broglie's wave number of matter waves and inserted it into the wave equation in which the coordinate x is measured in the atom's frame. The result was not good. It has led to a situation in which nobody understands quantum mechanics.

Heisenberg's Uncertainty Principle

The uncertainty principle is a central consequence of quantum theory and a pillar of modern physics. In 1927 Werner Heisenberg stated that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa. Heisenberg derived his uncertainty principle from a thought experiment. He imagined what would occur if one tried to see one electron with one photon. From this thought experiment he concluded that “At the instant of time when the position is determined, that is, at the instant when the photon is scattered by the electron, the electron undergoes a discontinuous change in momentum. This change is the greater the smaller the wavelength of the light employed, i.e., the more exact the determination of the position”.

Niels Bohr derived Heisenberg's uncertainty principle from an elementary analysis of wave properties. Bohr stated in his Como Lecture that “Rigorously speaking, a limited wave-field can only be obtained by the superposition of a manifold of elementary waves ...”. Modern physics insists, following Bohr, that a photon or a freely moving electron can be thought of as a wave packet constructed as a sum of traveling waves. A narrow wave packet (which is more confined in space) requires a wider distribution of wave numbers. Upon measurement the wave packet “collapses” and picks a single value of energy. But a strictly localized wave packet has a wide distribution of energies to collapse into, and we don't know which amount of energy emerges in the collapsing process. How about zero kinetic energy? The uncertainty product Δp⋅Δx goes over into the form 0⋅∞, which means that the uncertainty principle breaks down. To prevent this, all particles must move at least to some extent at all times. The uncertainty principle forces particles into perpetual motion.
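Bohr's construction is easy to reproduce numerically; a toy sketch (ours, in Python; the Gaussian weighting and all parameter values are our assumptions) sums plane waves and recovers Δx·Δk ≈ 1/2 for the resulting packet:

import numpy as np

x = np.linspace(-50.0, 50.0, 4001)
k0, sk = 2.0, 0.2                                 # centre and amplitude spread of wave numbers
ks = np.linspace(k0 - 5 * sk, k0 + 5 * sk, 801)
amps = np.exp(-(ks - k0) ** 2 / (2 * sk ** 2))    # Gaussian superposition weights
psi = (amps[:, None] * np.exp(1j * ks[:, None] * x)).sum(axis=0)

dx_grid, dk_grid = x[1] - x[0], ks[1] - ks[0]
px = np.abs(psi) ** 2; px /= px.sum() * dx_grid   # normalized |psi(x)|^2
pk = amps ** 2;        pk /= pk.sum() * dk_grid   # normalized |psi(k)|^2
delta_x = np.sqrt((px * x ** 2).sum() * dx_grid)              # packet centred at x = 0
delta_k = np.sqrt((pk * (ks - k0) ** 2).sum() * dk_grid)
print(round(delta_x * delta_k, 3))                # ~0.5 for a Gaussian packet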

Obviously none of the architects of quantum physics was familiar with the Korteweg–de Vries equation, which leads most directly to sech²-type solitons and first appeared in a dissertation of de Vries written in 1894 under the supervision of Korteweg. To these solitary waves the superposition principle cannot be applied at all, but still they are local and non-dispersive!

In Niels Bohr’s mind the concepts of field and particle were mutually exclusive. A wave is spread out in space, while a particle is concentrated nearly at a point. This inconsistency became Bohr’s “complementarity principle”. Bohr regarded the “duality paradox” as a fundamental fact of nature. Heisenberg maintained that it might be postulated that two separate entities, one having the properties of a particle and the other having the properties of wave motion, are combined in some way. But he added that such a theory is unable to bring about the “intimate relation” between the two entities. Heisenberg argued that wave and particle are a single entity, and that the apparent duality, wave and particle, is due to limitations of language. (It is obvious that they didn’t consider the possibility that a particle may be a wave, a solution of a nonlinear wave equation, but still so compact that it is nearly a point.) Heisenberg did not like Bohr’s principle, and it was one of the reasons why he derived the uncertainty principle.

The present quantum physics emphasizes that the uncertainty principle is not a statement about the inaccuracy of measurement instruments, nor a reflection on the quality of experimental methods; it arises from the wave properties inherent in the quantum mechanical description of nature. Even with perfect instruments and technique, the uncertainty is held to be inherent in the nature of things. We can see the uncertainty principle everywhere: it explains the existence of virtual particles, it explains why electrons don't fall into the nucleus of an atom, it can be used to “explain” the impossible, namely the attractive force... It is the snake oil of physics.

The uncertainty principle prevents us from defining initial conditions in quantum mechanics, since we cannot know the position and velocity of a particle simultaneously. Furthermore, the same principle prevents us from talking about trajectories in quantum mechanics, because a trajectory is nothing more than a complete description of the position and velocity of a particle at all times.

Feynman articulates the principle in question as follows: “If we try to “pin down” a particle by forcing it to be at a particular place, it ends up by having a high speed. Or if we try to force it to go very slowly, or at a precise velocity, it “spreads out” so that we do not know very well just where it is. Particles behave in a funny way!”

That would be funny, but luckily it is not true. The origin of this all-round principle can be seen in Erwin Schrödinger's unfortunate, misleading dispersion relation of his equation:

ω = ω(k) = ħk²/(2m).

We have repeatedly stated that this relation connects the velocity (p = mv = ħk) and the energy of a particle incorrectly. The correct view would be that the wave number k measured in the laboratory frame is only a manifestation of the particle's internal vibration. The quantities k and ω belong to different frames. The frequency ω is attached to an oscillation which must be described in the rest frame of the particle. Due to this separation of frames we can pin down a fermion at a position x and control its energy (volume) by tuning the energy density at that fixed position x.

The disturbance caused by measurement can, in principle, be made as small as required. All particles are sources of modulated radiation, they don't have to be illuminated by a single photon as in Heisenberg's thought experiment. Their radiation carries many kinds of information that can be used to localize the sources and determine their energy and momentum; all at one time. This requires only that the detectors must be sensitive enough for “weak measurements”.

Finally, we note that Heisenberg's principle has been proven wrong by a direct measurement: Rozema et al., Violation of Heisenberg’s Measurement-Disturbance Relationship by Weak Measurements, Physical Review Letters, 2012, 109 (10).

Does Magnetic Moment Emerge From Spinning Charge?

“In quantum mechanics and particle physics, spin is an intrinsic form of angular momentum carried by elementary particles, composite particles (hadrons), and atomic nuclei. Spin is a solely quantum-mechanical phenomenon; it does not have a counterpart in classical mechanics.” (Wikipedia)

The spin-magnetic moment of an electron is calculated, as in classical mechanics, from a rotating distribution of charge. Some experimental evidence (spectral fine structure and the Stern-Gerlach experiment) suggests that the spin angular momentum is ½ħ. This is equated to the classical angular (mechanical) momentum of a uniform sphere:

(2/5) m r² ω = ħ/2.

It is common practice to speak of the velocity of charge even in classical electrodynamics, so we can ask: what is the velocity of charge on the surface of a spinning electron? In the following we take the electron radius as one tenth of the proton radius:

v = rω = (5/4)·ħ/(m r) = (5/4) × (1.05457×10⁻³⁴ J s) / ((9.10938×10⁻³¹ kg) × 10⁻¹⁶ m) ≈ 1.45×10¹² m/s ≈ 4800 c.
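The arithmetic is easy to verify; a one-line check (ours, in Python; the 10⁻¹⁶ m radius is the assumption stated above):

hbar = 1.05457e-34      # J s
m_e = 9.10938e-31       # kg
r = 1e-16               # assumed electron radius, m
c = 2.998e8             # m/s

v = 1.25 * hbar / (m_e * r)            # v = r*w = (5/4) hbar / (m r)
print(f"v = {v:.3e} m/s = {v / c:.0f} c")   # ~1.45e12 m/s, ~4800 c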

Albert Einstein had a problem with the electron: “But there were two difficulties that could not be overcome. Firstly the Maxwell-Lorentz equations could not explain how the electric charge constituting an electrical elementary particle can exist in equilibrium in spite of the forces of electrostatic repulsion. ... In order to construct the electron, it is necessary to use non-electrodynamic forces...”

In other words, there must be something to prevent the electron from exploding. Now we have the same charge exceeding the speed of light by a factor of 4800.

One can safely say that electric charge, as it is presently described, is impossible. The whole concept must be changed, just as we did in an earlier section. Magnetic field is not caused by the convective motion of charges. All fermions, including electrons, have magnetic moment, but it is not caused by spinning charge.

Evolution of the Bohr-Heisenberg Wave Packets

The width of a free-particle wave-packet grows as time progresses. The characteristic time for a wave-packet of original width Δx to double in spatial extent is t ≈ m(Δx)²/ħ.

If we estimate the electron diameter as 10⁻¹⁶ m and use it as the original width of the wave-packet, then the doubling time is about 10⁻²⁹ s. The fermion wave-packets (for freely moving particles) disperse almost immediately. However, for the special case where ω is a linear function of k, there is no dispersion of wave-packets and they propagate without changing shape.

Schrödinger's dispersion relation for a free particle, ω = ħk²/(2m), of course is not linear. Hence, the fermion wave-packets disperse as time progresses.
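The doubling-time estimate above is easy to check; a one-line computation (ours, in Python; the 10⁻¹⁶ m initial width is the assumption just stated):

hbar = 1.05457e-34      # J s
m_e = 9.10938e-31       # kg
dx = 1e-16              # initial packet width, m
print(m_e * dx**2 / hbar)   # ~8.6e-29 s, i.e. about 10^-29 s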

The dispersion relation for light waves cannot, as such, be applied to fermions, as de Broglie suggested!

A particle cannot be formed by a linear superposition of waves since it will not remain local and stable but will disperse after its extremely brief existence. For this reason Schrödinger abandoned the idea of representing an electron by a wave packet. He changed his mind and assumed that the electron's behavior could be described by a three-dimensional standing wave. He derived an equation which described the amplitude of this wave.

The Schrödinger equation says that the rate at which the phase of an energy eigenvector rotates is proportional to its energy: iħ (d/dt)ψ = Hψ.

If he simultaneously had changed the origin of the system of coordinates to the position of the electron, we probably would now have quantum physics that everybody can understand, and all the foolishness associated with the interpretation of the Schrödinger equation would not have started at all.

Boson-Boson Interaction in the Double Slit Experiment

“We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery. We cannot explain the mystery in the sense of “explaining” how it works.” (Feynman lectures, Quantum Behavior.) Here Feynman refers to the two-slit experiment.

The boson-boson interaction is in fact a soliton-soliton interaction. A soliton is defined as a localized wave of permanent form, capable of interacting strongly with other solitons and still retaining its identity.

The double slit experiment: in this case the other solitons are low-energy bosons of background radiation, and the interaction can be dubbed modulation. Every boson propagating in space is accompanied by a shock front of modulated lower-energy bosons. In a double slit experiment an individual boson “interferes with itself” as the shock front diffracts at one slit and the boson itself diffracts at the other slit. So there are two sources of the same frequency at the slit exits, and interference results. This explanation is essentially the same as the one given by Albert Einstein. He came to the conclusion that the energy of an electromagnetic wave is concentrated in very small regions of space, the quanta, the energy of which is hν, and that the wave also includes a part reaching outside the quantum, causing the interference and diffraction phenomena. The concentration of energy explains the photoelectric phenomenon.
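Whatever the two sources are physically, two in-phase sources of the same frequency at the slit exits do produce the familiar fringes; a toy computation (ours, in Python; the wavelength and geometry are arbitrary illustrative values):

import numpy as np

lam = 500e-9     # wavelength, m
d = 20e-6        # separation of the two sources at the slit exits, m
L = 1.0          # distance to the screen, m

x = np.linspace(-0.05, 0.05, 11)          # positions on the screen, m
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)     # path from source 1
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)     # path from source 2
intensity = np.cos(np.pi * (r2 - r1) / lam) ** 2   # two-source fringe pattern
print(np.round(intensity, 3))             # maxima spaced by lam*L/d = 25 mm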

The reader may find interesting the following article: “Physicists Modify Double-Slit Experiment to Confirm Einstein's Belief.” Paradox in Wave-Particle Duality by Shahriar S. Afshar et al., Foundations of Physics, 23 January 2007.

[ According to the Copenhagen Interpretation of Quantum Mechanics, in any experiment light shows only one aspect at a time, either it behaves as a wave or as a particle. Afshar’s experiment shows both complementary wave and particle characteristics in the same experiment for the same photons. This result fits perfectly with our particle model. ]

So, light is composed of discrete packets, but it also displays wave phenomena! Since light is an electromagnetic phenomenon as explained by Maxwell, there arose an obvious need to “quantize” the electromagnetic field.

Quantization of the Electromagnetic Field and Zitterbewegung

At present there is no satisfactory mathematical theory of quantization of classical electromagnetic fields. Furthermore, from the Stanford Encyclopedia of Philosophy: “in contrast to many other physical theories there is no canonical definition of what QFT is.”

The following is an excerpt from E. T. Jaynes’ “Scattering of Light by Free Electrons as a Test of Quantum Theory”:

Is Quantum Theory a System of Epicycles?

Today, Quantum Mechanics (QM) and Quantum Electrodynamics (QED) have great pragmatic success – small wonder, since they were created, like epicycles, by empirical trial-and-error guided by just that requirement. … In advancing to QED, no theoretical principle told Dirac that electromagnetic field modes should be quantized like material harmonic oscillators; … we think it still an open question whether the right choice was made. It leads to many right answers but also to some horrendously wrong ones that theorists simply ignore; but it is now known that virtually all the right answers could have been found without, while some of the wrong ones were caused by, field quantization. Because of their empirical origins, QM and QED are not physical theories at all. … To this day we have no constraining principle from which one can deduce the mathematics of QM and QED; in every new situation we must appeal once again to empirical evidence to tell us how we must choose our mathematics in order to get the right answers.

In other words, the mathematical system of present quantum theory is, like that of epicycles, unconstrained by any physical principles. Those who have not perceived this have pointed to its empirical success to justify a claim that all phenomena must be described in terms of Hilbert spaces, energy levels, etc. This claim (and the gratuitous addition that it must be interpreted physically in a particular manner) have captured the minds of physicists for over sixty years. And for those same sixty years, all efforts to get at the nonlinear [conceptions] underlying that linear mathematics have been deprecated and opposed by those practical men who, being concerned only with phenomenology, find in the present formalism all they need.

The point we want to stress is that the success – however great – of an empirically developed set of rules gives us no reason to believe in any particular physical interpretation of them. No physical principles went into them.

Our Job for Today

Theoretical work of the kind presented at this meeting is sometimes held to be “out of the mainstream” of current thinking; but that is quite mistaken. There is no mainstream today; it has long since dried up and our vessel is grounded. We are trying rather to start a new stream able to carry science a little further. Indeed, our efforts are much closer to the traditional mainstream of science than much of what is done in theoretical physics today. Talk of tachyons, superstrings, worm holes, the wave function of the universe, the first 10⁻⁴⁰ second after the big bang, etc., is speculation vastly more wild and far-fetched than anything we are doing. In the present discussion we want to look at the problems of QM from a very elementary, lowbrow physical viewpoint in the hope of seeing things that the highbrow mathematical viewpoint does not see. I want to suggest, in agreement with David Hestenes, that zitterbewegung (ZBW) is a real phenomenon with real physical consequences that underlies all of quantum theory... [Our emphasis.]

In short, the currently taught physical interpretation has elements of nonsense and mysticism which have troubled thoughtful physicists, starting with Einstein and Schrödinger, for over sixty years. The more deeply one thinks about these things, the more troubled he becomes and the more convinced that the present interpretive scheme of quantum theory is long overdue for a drastic modification. We want to do everything we can to help find it.

What Is a Free Electron?

We have long been intrigued with the fact that in applications of quantum theory, in our equations we write only wave functions, either explicitly or implicitly (as matrix elements between such wave functions). But in the interpretive words between those equations we use only the language of point particles. Even the Feynman diagrams are a part of that inter-equation language, depicting particles rather than waves.

Thus the wave-particle duality is partly an artifact of our own making, signifying only our own inability to decide what we are talking about. But the predictions of observable facts come entirely from wave functions $\psi(x,t) = r(x,t)\,\exp[i\varphi(x,t)]$; ... Then if anything in the mathematics of QM could be held to represent some kind of reality, it is surely the complex wave function itself, not that point particle imagined to be hiding somewhere in it, but which plays no part in our calculations. This is just Schrödinger's original viewpoint.

Zitterbewegung in Radiative Processes (David Hestenes)

Evidence that irradiated single free electrons can absorb harmonics of the laser frequency exists already in the pioneering "stimulated bremsstrahlung" experiments of Tony Weingartshofer. These experiments have been regarded as anomalous in the high intensity laser field, because they cannot be explained by standard arguments. However, I submit that they are just further examples of the ZBW mechanism at work. To establish unequivocally that energy can be stored in the ZBW of a single free electron, we need cleaner experiments on single electrons. The prediction is that an electron can absorb an nth order harmonic to put it in a metastable state with mass m given by

$$mc^2 = m_0c^2 + n\hbar\omega_L,$$

where $m_0$ is the rest mass and $\omega_L$ is the laser frequency. Then, under suitable conditions, the electron can be released in this excited state to transport the additional energy until the electron is induced to release it by a collision or some other means. This phenomenon may actually have been observed already in the infamous Schwarz–Hora effect described briefly by Jaynes. I hold with Jaynes that this effect is probably real and the possibility deserves to be investigated thoroughly.

(We are pleased to notice that our theory is perfectly in line with the assertions from E. T. Jaynes and David Hestenes, from whose book The Electron: New Theory and Experiment the excerpts above are taken.)
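To get a feel for the size of the predicted mass shift, here is a minimal numerical sketch of Hestenes' formula $mc^2 = m_0c^2 + n\hbar\omega_L$. The 800 nm laser wavelength and the CODATA constants are our own illustrative assumptions, not values from the excerpt.

```python
# Evaluate the metastable-state mass m c^2 = m0 c^2 + n*hbar*omega_L.
# The wavelength (800 nm) is an assumed, typical laser value for illustration.
import math

HBAR = 1.054571817e-34   # J*s
C = 2.99792458e8         # m/s
M0 = 9.1093837015e-31    # electron rest mass, kg

def excited_mass(n, wavelength):
    """Mass after absorbing the n-th harmonic of the laser frequency."""
    omega_L = 2 * math.pi * C / wavelength
    return (M0 * C**2 + n * HBAR * omega_L) / C**2

for n in (1, 2, 3):
    print(f"n = {n}: m/m0 = {excited_mass(n, 800e-9) / M0:.9f}")
```

For an 800 nm photon (about 1.55 eV against the 511 keV rest energy) the fractional shift is roughly 3 × 10⁻⁶ per absorbed harmonic, which is why such metastable states would be hard, though not impossible, to detect.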

Feynman's Way: the Probabilistic View of the World

Sum Over Histories

In 1948 Richard Feynman introduced his "sum over histories" version of quantum mechanics. "The electron does anything it likes. It just goes in any direction at any speed, forward or backward in time, however it likes, and then you add up the amplitudes and it gives you the wave-function." "Very interesting theory – it makes no sense at all." (Groucho Marx.)

When applied to the double slit experiment, Feynman's quantum mechanics says that there is no physical reason or explanation for a particle (photon or electron) to arrive at a certain position on the screen; it only follows the rules of probability. Feynman's idea behind the path integral approach was to take the implications of the double slit experiment to its extreme consequences. One can imagine adding extra screens and drilling more and more holes through them "until there is nothing left of the screens." This is the procedure illustrated by Feynman in his book Quantum Mechanics and Path Integrals. (Picture credit: Matthew Schwartz.)

According to Feynman, the paths involved are continuous but possess no derivative. On closer inspection the paths have a zig-zag nature; they are of a type familiar from the study of Brownian motion. But there is more to this zigzag motion. The following is taken from Paul Dirac's Nobel lecture:

"It is found that an electron which seems to us to be moving slowly, must actually have a very high frequency oscillatory motion of small amplitude superposed on the regular motion which appears to us. As a result of this oscillatory motion, the velocity of the electron at any time equals the velocity of light. This is a prediction which cannot be directly verified by experiment, since the frequency of the oscillatory motion is so high and its amplitude is so small. But one must believe in this consequence of the theory, since other consequences of the theory which are inseparably bound up with this one, such as the law of scattering of light by an electron, are confirmed by experiment."

To understand what he is speaking about, we must start from the Klein-Gordon equation. Dirac required that each component of his wave function separately fulfills the K-G equation. At the non-relativistic limit its solutions have the form:

$$\psi(x,t) = \varphi(x)\,\exp\!\left(-\frac{i}{\hbar}\,mc^2\,t\right).$$

This equation is presently understood as meaning that, in addition to its rectilinear motion, the electron oscillates around its average position with its de Broglie frequency. Schrödinger called this motion zitterbewegung.
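As a rough numerical aside (our own back-of-the-envelope check, not part of the source), the phase factor above implies an oscillation at the angular frequency $mc^2/\hbar$, which for an electron is:

```python
# Oscillation frequency implied by the phase factor exp(-i m c^2 t / hbar).
import math

HBAR = 1.054571817e-34   # J*s
C = 2.99792458e8         # m/s
M0 = 9.1093837015e-31    # electron rest mass, kg

omega = M0 * C**2 / HBAR
print(f"omega  = {omega:.3e} rad/s")            # ~7.8e20 rad/s
print(f"period = {2 * math.pi / omega:.3e} s")  # ~8.1e-21 s
```

The period of about 10⁻²⁰ s is exactly why Dirac, in the lecture quoted above, calls the oscillation impossible to verify directly.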

Zitterbewegung in QED?

Considering that the whole of non-relativistic quantum mechanics was developed in order to explain the absence of radiation during the oscillatory motion of the bound electron, it is rather surprising that the zitterbewegung problem (that an electron in continuous acceleration should act as a source of continuous radiation, and a freely moving electron should therefore lose all its kinetic energy through electromagnetic radiation) has gone almost unnoticed! Feynman briefly mentions it in his 1986 Dirac memorial lecture:

"But Dirac had the courage to simply guess at the form of the equation and try to interpret it afterwards. ... He also invented some ideas called zitterbewegung and other things which turned out not to be very useful in interpretation of the equation." Here Feynman hints at something more. It is explained in A. Wüthrich's article Feynman's Struggle and Dyson's Surprise: The Development and Early Application of a New Means of Representation:

“Because he was not able to satisfactorily incorporate the interaction of particles into the model of the quivering electron, Feynman abandoned it. To him, it seemed impossible to analyze interacting electrons and positrons in terms of the microscopic Jitterbugging implied by Dirac’s equation.”

We have already discussed a solution similar to the one above. We made an argument that fermions are standing wave structures of the form

$$(\text{space part}) \cdot e^{i\omega t}.$$

Fermions are solutions of the threedimensional wave equation. But we also imposed an extra constraint that fermions are solutions of the vector wave equation, which in addition satisfy Maxwell’s curl equations. Therefore, the wave function of the particle must acquire a phase factor as above.

Now the nature of zitterbewegung is immediately clear. It is the energy of the electron that oscillates between electric and magnetic forms of existence.

The reality of the zbw-oscillation has been confirmed experimentally; see the article Hunting for Snarks in Quantum Mechanics by David Hestenes. In Feynman's probabilistic theory the physical meaning of the phase factor is unknown. In the original Schrödinger formulation, the complex phase was an ad hoc mathematical device invoked to bring wave behavior into the particle framework.

Now, one might ask: why the zigzag path? It goes against all our intuition and experience of the propagation of particles. One would prefer smooth trajectories in the path integral. But no, the paths must be non-differentiable, otherwise particles would have well-defined positions and momenta at each moment of time, and that would be against the uncertainty principle!

From Feynman's path integral, Huygens' principle follows in the limit ħ → 0. Huygens' principle works for any phenomenon that is described by the Helmholtz equation. In our theory all particles and their motion are described by electromagnetic solutions of the nonlinear Helmholtz equation in non-isotropic and inhomogeneous media, which is the boson space. Generalized this way, Huygens' principle is the framework of our theory.
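To make the contrast concrete, here is a minimal sketch of Huygens' principle at work: every point of two slits radiates a spherical wavelet $e^{ikr}/r$, and the screen intensity is the squared modulus of their sum. The geometry (500 nm light, 50 µm slit separation, 1 m screen distance) is an assumption chosen for illustration.

```python
# Huygens-principle sketch of the double slit: superpose spherical wavelets.
import numpy as np

WAVELENGTH = 500e-9                     # assumed
k = 2 * np.pi / WAVELENGTH
L = 1.0                                 # slit-to-screen distance, m
slit_centers = (-25e-6, 25e-6)          # 50 um apart (assumed)
slit_width = 5e-6

# sample each slit with 40 secondary sources
sources = np.concatenate([c + np.linspace(-slit_width / 2, slit_width / 2, 40)
                          for c in slit_centers])
screen = np.linspace(-0.03, 0.03, 3001)  # screen coordinate, m

r = np.sqrt(L**2 + (screen[:, None] - sources[None, :])**2)
field = (np.exp(1j * k * r) / r).sum(axis=1)   # sum of wavelets
intensity = np.abs(field)**2

print(f"central maximum at x = {screen[intensity.argmax()] * 1e3:.2f} mm")
print(f"expected fringe spacing = {WAVELENGTH * L / 50e-6 * 1e3:.1f} mm")
```

No probabilistic postulate enters: the fringes come from the deterministic superposition of wavelets alone, which is the sense in which Huygens' principle can serve as a framework without the uncertainty principle.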

So why is Feynman's path integral nonsense while Huygens' principle is not? It is because Feynman had to incorporate Heisenberg's uncertainty principle into his theory. The existence of all these certainly non-physical paths is a consequence of the uncertainty principle. Theoretical physicists express the same thing by saying that "divergent standard deviations of the velocities are needed to get nonzero commutators from the path integral."

Virtual Particles

The concept of “virtual particles” was developed by Richard Feynman. It says that charged particles are surrounded by clouds of virtual photons, which are the mediators of forces between charged particles.

The underlying idea is that virtual particles somehow originate from the charged particles, in a way that forces one to ask: where does the energy come from to produce these virtual particles? The answer is: from nowhere, which of course means that the law of conservation of energy is not fulfilled. This is not a problem for the theory, because it can be fixed with a deus ex machina: the uncertainty principle allows violation of the conservation of energy.

Above: the first published Feynman diagram. It appeared in Physical Review in 1949. (Two electrons exchanging a virtual photon in a scattering process.)

Charged particles are surrounded by clouds of virtual photons. A particle constantly emits and absorbs virtual photons which cannot fly off. They are re-absorbed by the mother particle as soon as they are emitted.

Some questions arise from this. If the electromagnetic field is made out of virtual photons, it must be that a single electron doesn’t produce any EM field until it meets another charged particle. Particles with the same electric charge repel each other and particles with opposite charge attract each other. How does a charged particle “know” whether another particle has the same charge or opposite charge? Questioners on the internet are directed to the following web page: http://math.ucr.edu/home/baez/physics/Quantum/virtual_particles.html

By reading this, one learns two things: theoretical physicists have no clue how to explain attractive forces, and they speak fluently of things happening backwards in time, with the same ease as of things near or far, left or right.

The problem above (i.e. the need for virtual particles) originates from one basic flaw: standard physics views particles as closed systems. Modern physics conceives nature at its most basic level to be composed of immutable structures. Unlike particles in our theory, which require a continuous exchange of energy with their environment to sustain their forms, conventional physics views particles as self-sufficient entities that require no interaction with their environment in order to continue their existence.

Feynman wanted to create a new kind of quantum theory, one which would contain all the aspects of Dirac's equation:

"And, so I dreamed that if I were clever, I would find a formula for the amplitude of a path that was beautiful and simple for three dimensions of space and one of time, which would be equivalent to the Dirac equation, and for which the four components, matrices, and all those other mathematical funny things would come out as a simple consequence - I have never succeeded in that either. … A simple physical view by which all the contents of this equation can be seen is still lacking." (Feynman's Nobel lecture.)

Feynman failed because he based his attempt on the Dirac equation, which combines the principles of quantum mechanics and special relativity. In our view, both theories are in error. Besides being based on special relativity, Feynman's quantum physics is in error for the same reason as the Schrödinger model. The fault can be traced back to de Broglie's assertion that any moving particle or object has an associated wave.

In analogy to the phase of light quanta travelling along light rays, de Broglie proposed that a particle of matter gains phase as it travels. He had noticed the similarity between Fermat's principle and the principle of least action, and associated the phase of a matter wave with the classical action, the integral of the Lagrangian along the particle's path.

We can see the same (wrong) connection between the energy and motion of a fermion in Schrödinger's dispersion relation and in Feynman's propagator ($\mathcal{D}x$ means: perform this integral over every possible path from x to x'):

$$\omega = \omega(k) = \frac{\hbar k^2}{2m}, \qquad K(x,t;\,x',t') \propto \int e^{iS[x(t)]/\hbar}\,\mathcal{D}x.$$

Louis de Broglie's Nobel-Prize-winning discovery of the wave behavior of matter is the following:

$$d\varphi = \mathbf{k} \cdot d\mathbf{x}.$$

In our theory a fermion can and does gain phase without propagating at all, because it oscillates in its rest frame, independent of its propagation, but dependent on local energy density.

As a summary we state that the motion of particles is always constrained in one way or another. It is for this reason that particles end up in certain positions in space; it is not a matter of probability. In a later section, The Electromagnetic "General Relativity", we show how the constraint on particle motion arises from the electromagnetic theory.

The Quantum Theory of the Electromagnetic Field by R. Feynman

In his book Quantum Mechanics and Path Integrals Feynman writes down the Maxwell equations for the components $a_k$ of the vector potential A in the following form:

$$\ddot{a}_k + k^2c^2\,a_k = 4\pi j_k,$$

and then states that "The hypothesis of quantum electrodynamics is that the oscillators defined are quantum oscillators." [Page 240.] In quantum electrodynamics quantization is a mere assumption. Feynman admits this in the same book, on page 230:

"As we have seen, almost all of the effort in quantum field theory is devoted to solving the classical equations of motion to find the normal modes, an activity completely within the realm of classical physics. The "quantization" consists then of no more than the additional remark that each of the normal modes is a quantum oscillator, with the energy levels $E_n = \hbar\omega\,(n + \tfrac{1}{2})$.

“Presented in this way, quantum field theory seems to be just a special consequence of the Schrödinger equation, and not an extra theory at all.” [Our emphasis.]

So the theory called “Quantum Electrodynamics” settles the problem of field quantization by making an assumption that seems to concern only mathematics. The quantization idea of QED was discovered by P. Dirac.

Canonical (Dirac) Quantization of the Electromagnetic Field

The formulation of quantum electrodynamics is almost entirely due to Paul Dirac. Quantization of the electromagnetic field was first performed by Dirac in 1927. He introduced the scheme in the article The Quantum Theory of the Emission and Absorption of Radiation:

"The underlying ideas of the theory are very simple. Consider an atom interacting with a field of radiation, which we may suppose for definiteness to be confined in an enclosure so as to have only a discrete set of degrees of freedom. Resolving the radiation into its Fourier components, we can consider the energy and phase of each of the components to be dynamical variables describing the radiation field."

Since then, most field quantization schemes have relied on the mathematical fact that any function on a finite interval can be written as a Fourier series. In modern physics quantization of the electromagnetic field is carried out by treating the whole of the electromagnetic field like a harmonic oscillator.

The general solution to the EM field equation is an integral over plane waves (a Fourier composition, or normal-mode expansion). The amplitude of each mode satisfies the harmonic oscillator equation. In the present quantization method each mode is interpreted as a quantum harmonic oscillator, meaning that the mode amplitudes are assumed to change in a stepwise (or quantized) manner. These quantized, oscillatory states are called "excitations" of the field, namely photons.

The present standard procedure of field quantization is the "box quantization", in which the field is enclosed in a finite volume, the size of which is sent to infinity in the course of the calculation. A free (non-interacting) quantum field is represented as a sum of an infinite number of modes. It is thought that there results a universal field that fills all space, and at each point in space the field has the ability to oscillate at any frequency. The field can then be represented by an infinite sum of harmonic oscillators. There is nothing wrong with the Fourier part of the field quantization procedure, because the Fourier transform is a superposition of plane waves.

The procedure above constitutes a definition of the photon: photons are "quantum excitations of the normal modes of the electromagnetic field", and plane waves of definite wave vector k are associated with them.

There are several drawbacks to this approach. The worst three are the following (see the sketch after the list):

1. The above only holds for a free, non-interacting field.
2. The excitations (photons) are not localized in space in any way.
3. It leads to infinite vacuum energy, "the worst prediction physics has ever made".
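Drawback 3 is easy to exhibit numerically. A minimal sketch (the box size and the cutoffs are our own assumptions): summing the zero-point energies $\hbar\omega/2$ over the modes of a cavity grows without bound as the mode cutoff is raised.

```python
# Zero-point energy of box modes: E0 = sum over modes of hbar*omega/2.
# The 1 cm box and the cutoffs are illustrative assumptions.
import numpy as np

HBAR, C, L = 1.054571817e-34, 2.99792458e8, 0.01

def zero_point_energy(n_max):
    n = np.arange(1, n_max + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    omega = (C * np.pi / L) * np.sqrt(nx**2 + ny**2 + nz**2)
    return (HBAR * omega).sum()   # hbar*omega/2, times 2 polarizations

for n_max in (10, 20, 40, 80):
    print(f"n_max = {n_max:3d}:  E0 = {zero_point_energy(n_max):.3e} J")
```

The partial sums scale roughly as the fourth power of the cutoff; nothing in the formalism itself supplies a finite answer.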

One may now ask: how, where, and for what reasons do the stepwise changes of modes, or photons, come into existence or disappear? What physical principles are involved? The answer is a story as follows. Werner Heisenberg had the conviction that only observable quantities should be used in a quantum theory. He argued that the electron orbiting the nucleus was not only "unobservable", but also "unvisualizable," and that perhaps such an orbit did not really exist at all. [A reasonable suspicion!] But then in 1926 Heisenberg declared that the root of the difficulties facing the development of quantum theory is the inappropriate transfer of classical concepts and ideas to the problems of atomic structure... "The program of quantum mechanics has to free itself first of all from these intuitive pictures... The new theory ought above all to give up visualizability [Anschaulichkeit] totally".

Heisenberg introduced this idea to Einstein, who found it to be nonsense. Einstein admitted that it could be useful to focus on measurable quantities, but he insisted it makes no sense to assume that what is measurable can be specified without the theory. He convinced Heisenberg to consider that "it is the theory which decides what we can observe". Despite Einstein's critique, a new theory of matrix mechanics was created. The following summary is taken from the article The Development of Elementary Quantum Theory from 1900 to 1927 by Herbert Capellmann.

Brief Summary of the New Quantum Theory, the Matrix Mechanics

All physical variables have lost their classical significance, being replaced by an associated matrix; particle position or any other variable no longer have precise values, only statistical statements are possible. Time as well has lost its traditional meaning; just as position can no longer be exactly assigned to a particle, the exact time of a quantum transition cannot be given. The general quantum conditions take the form of commutation relations; the "quantum theoretical equations of motion" have become relations between matrices; for example, the time derivative of momentum in classical physics is replaced by a matrix, which, via the new "quantum mechanical equation of motion", is related to the commutator of the Hamiltonian matrix with the matrix representing the momentum.

Physical content and mathematical form of the new Quantum Theory were so dramatically different from all traditional concepts in classical physics that it is no wonder that the new basic laws shocked the established scientific community, and considerable resistance was widespread. Determinism and continuity, mathematically expressed via differential equations, had been the foundation of classical physics, and it is maybe not surprising that a return to seemingly familiar concepts appeared soon afterwards.

The quantum theory based on "more familiar concepts" was that of Schrödinger, who claimed that the only way to obtain understanding of nature is to construct theories that possess Anschaulichkeit, meaning that they are visualizable in space and time. Matrix mechanics was introduced by Werner Heisenberg, Max Born and Pascual Jordan in 1925, and it was also applied to the problem of the quantization of the electromagnetic field. The Fourier coefficients in the expansion of the EM field were replaced with matrices, and by the commutation rule it was shown that they were quantized in amplitude! So the field quantization problem was "solved" by the non-commutative property of matrix multiplication. And the price paid was no more than that all physical variables lost their classical significance. Fine. Dirac built on Heisenberg's idea and introduced the creation and destruction operators with commutation rules for the photon oscillators (1927, The Quantum Theory of the Emission and Absorption of Radiation).

The answer to the question above ("what physical principles are involved in the stepwise changes of the Fourier coefficients in the canonical (Dirac) quantization of the electromagnetic field?") is: none! Heisenberg pulled his "kinematical rule" out of thin air. (Quantization in Schrödinger's theory is a natural consequence of the underlying mathematics of the wave equation.) Thanks to Born and Jordan, the commutator is now an integral part of modern quantum theory. The quantity pq − qp lies at its core. The change in focus from commuting variables to non-commuting variables must be seen as a paradigm shift in quantum theory, and this was carried out without any physical justification. Pauli, in the months before Heisenberg's paper, wrote to a friend: "At the moment physics is again terribly confused..." Less than five months later he said the following: "Heisenberg's type of mechanics has again given me hope and joy in life. To be sure it does not supply the solution to the riddle, but I believe it is again possible to march forward." The Knabenphysiker (and their mentors and coworkers Niels Bohr and Max Born) were quickly ready to sacrifice all the tools of understanding just to get the feeling of "marching forward".

About this, Schrödinger wrote the following: "What I vaguely envision is only the idea: even if a hundred attempts [to show that "certain discontinuities in nature" are due to single systems and not to be interpreted statistically] have failed, one ought not to give up hope of arriving at the goal, I don't say through classical pictures, but by means of logically consistent conceptions of the true nature of the space-time events." For most of their careers, Schrödinger and Einstein were convinced that quantum theory was in need of a major overhaul, but Heisenberg's quantization scheme they did not accept. Einstein wrote to Schrödinger: "I am convinced that by your formulation of the quantum condition you have made a decisive advance, as much as I am convinced that the way of Heisenberg and Born is misleading."

Heisenberg had more ideas that we also consider irrational. In Heisenberg's mind, physics develops in a sequence of 'closed' theories. He considered four theories of physics to be closed: Newton's mechanics, electrodynamics (including special relativity), thermodynamics, and quantum mechanics. Because they are closed, they are "valid forever." "Wherever experiences can be described with the concepts of these theories, be it in the most distant future, the laws of these theories will always prove to be right." (The Notion of a "Closed Theory" in Modern Science, 1948.) He thereby excludes the possibility of refutation! In this case it is advantageous that the Heisenberg–Born–Jordan theory is closed, because it can then be removed from physics as a whole by a surgical operation. But it has metastasized in the electromagnetic field theory (another closed theory) as the canonical commutation relation. This must also be removed, and with it everything that is based on non-commutative mathematics, first of all "Heisenberg's uncertainty principle."

Wavelet Quantization Instead of Canonical Quantization?

Wavelet transforms represent a natural development of Fourier transforms and may be used for similar purposes. Where the Fourier transform lets us decompose a wave function into its component plane waves, a wavelet transform lets us decompose a wave function into its component wavelets. Fourier basis functions (infinite plane waves) are localized in frequency but not in time. Wavelets are local in both frequency/scale (via dilations) and in time (via translations). To this end we introduce the article Wavelet Electrodynamics I by Gerald Kaiser (Physics Letters A, 1992). Abstract:

"A new representation for solutions of Maxwell's equations is derived. Instead of being expanded in plane waves, the solutions are given as linear superpositions of spherical wavelets dynamically adapted to the Maxwell field and well-localized in space at the initial time. The wavelet representation of a solution is analogous to its Fourier representation, but has the advantage of being local. It is closely related to the relativistic coherent-state representations for the Klein–Gordon and Dirac fields developed in earlier work."
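The localization point can be demonstrated in a few lines. The following sketch (our own illustration of the general wavelet idea, not of Kaiser's Maxwell-adapted wavelets) analyzes a short 80 Hz burst: the Fourier transform finds the frequency but not the time of the event, while a translated Morlet-style wavelet finds both.

```python
# Fourier vs. wavelet localization of a short oscillatory burst.
import numpy as np

t = np.linspace(0, 1, 2048)
signal = np.exp(-((t - 0.3) / 0.01)**2) * np.cos(2 * np.pi * 80 * t)

# Fourier: sharp peak near 80 Hz, but no time information.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, t[1] - t[0])
print(f"Fourier peak at {freqs[spectrum.argmax()]:.0f} Hz")

# Morlet-style wavelet at 80 Hz, translated across the record.
def morlet(t0, f, width=0.02):
    return np.exp(-((t - t0) / width)**2) * np.exp(2j * np.pi * f * (t - t0))

t0s = np.linspace(0, 1, 201)
coeffs = [abs(np.trapz(signal * np.conj(morlet(t0, 80)), t)) for t0 in t0s]
print(f"wavelet coefficient peaks at t = {t0s[int(np.argmax(coeffs))]:.2f} s")
```

The wavelet coefficient peaks at t ≈ 0.3 s, recovering the event's location in time as well as its frequency.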

As a field quantization scheme, the wavelet representation solves the problem of lacking locality. But it is still linear and thereby incapable of local interactions.

Physical Experiments That Reveal the Nature of Photons by Local Interactions

The article Direct measurements of the extraordinary optical momentum and transverse spin-dependent force using a nano-cantilever, in Nature Physics 12, 731–735 (2016), introduces an experiment which demonstrates the locality of photons and their complex momentum properties. We quote from the abstract:

"Radiation pressure is associated with the momentum of light, and it plays a crucial role in a variety of physical systems. It is usually assumed that both the optical momentum and the radiation-pressure force are naturally aligned with the propagation direction of light, given by its wave vector. Here we report the direct observation of an extraordinary optical momentum and force directed perpendicular to the wave vector, and proportional to the optical spin (degree of circular polarization). … We measure this unusual transverse momentum using a femtonewton-resolution nano-cantilever immersed in an evanescent optical field above the total internal reflecting glass surface. Furthermore, the measured transverse force exhibits another polarization-dependent contribution determined by the imaginary part of the complex Poynting vector. By revealing new types of optical forces in structured fields, our findings revisit fundamental momentum properties of light and enrich optomechanics."

The idea of unlocalized linear plane-wave photons is completely useless in trying to explain the following:

Exchange of Spin and Orbital Angular Momentum with Matter

A local, toroidal photon.

When a light beam carrying nonzero angular momentum impinges on an absorbing particle, its angular momentum can be transferred to the particle, thus setting it in rotational motion. This occurs both with SAM and OAM. However, if the particle is not at the beam center the two angular momenta will give rise to different kinds of rotation of the particle. SAM will give rise to a rotation of the particle around its own center, i.e., to a particle spinning. OAM, instead, will generate a revolution of the particle around the beam axis. These phenomena are schematically illustrated in the figure. (Credit: Wikipedia.)

If, instead of plane waves, the light beam is composed of a (nonlinear) superposition of toroidal photons, the SAM and OAM interactions can readily be explained, because one of the possible beam radiation modes is a hollow "vortex light beam". Both interactions can be caused by such a beam. Beams of this kind are possible in our boson space because it possesses a fractal-like structure: large vortices are composed of smaller vortices, which are themselves composed of smaller vortices, and so on. This is the thing that leads to the following:

Anschaulichkeit via Fractality

Erwin Schrödinger claimed that the only way to obtain understanding of nature is to construct theories that possess Anschaulichkeit, meaning that they are visualizable in space and time. To this we agree, and in the following we give an anschaulich description of the emission process and the mathematics involved. This means that we refer to pictures constructed from previous visualizations of physical processes in the world of perceptions. Fractal processes are possible in our theory because the assumption made for this theory is that Stokes' theorem and the Maxwell curl equations can be applied regardless of scale. It also allows us to consider the physics of microscopic phenomena like the surface and interior of a fermion.

Emission Process as the Origin of Field Quantization

A. Einstein wrote in the article On the Quantum Theory of Radiation (1917) the following: "If the molecule undergoes a loss in energy of magnitude hv without external excitation, by emitting this energy in the form of radiation (outgoing radiation), then this process... is directional. Outgoing radiation in the form of spherical waves does not exist." (This is Einstein's "needle radiation".) Earlier in this article we stated that "When a fermion radiates a boson, the energy contained in the volume of a surface layer takes the form of a boson and becomes emitted as a quantum of energy at the speed of light." In the following we give a heuristic and anschaulich discussion of the emission process.

The emission process is described by the Kadomtsev–Petviashvili (KP) equation.

$$\left(-4u_t + 6uu_x + u_{xxx}\right)_x + 3\sigma^2 u_{yy} = 0.$$

In the picture a fermion (of any size) is modelled by a sphere. The emission process takes place at its surface. A quantum of electromagnetic energy propagates as a wave on the surface towards the pole. The wave changes shape on its way from the equator towards the pole, as illustrated by the pictures below. (This is analogous to waves running up an inclined bottom topography and reaching the limit of sharp crests.)

The KP equation has different classes of soliton solutions. A first class is a generalization of the solitons of the KdV equation. These solutions decay exponentially in all but a finite number of directions, along which they tend to a constant. For this reason these solutions are referred to as line solitons. In the simplest case the solitons all propagate in the x-direction, adding a second dimension to the KdV solitons. Solutions of the one-dimensional Korteweg–de Vries equation are the cnoidal waves,

$$\eta(x,t) = H\,\mathrm{cn}^2\!\left(\left[\frac{K(m)}{\pi}\right](kx - \omega t + \varphi);\; m\right),$$

of which the solitary sech² wave is a special case:

$$A(x,t) = 12\,s^2\,\mathrm{sech}^2[\,s(x - ct)\,].$$

Cnoidal waves in the general mathematical and physical sense span the range from linear Airy-shaped waves over weakly nonlinear Stokes-shaped waves up to highly nonlinear solitons. (See the three pictures above.) By applying the cnoidal wave solution and varying the modulus m, all wave shapes of the conventional wave theories can be generated. For m → 0 we have cn → cos, meaning that the nonlinear cnoidal wave approaches the linear Airy wave. At the other end, as m → 1, non-periodic, solitary waves occur. In the emission process the modulus m sweeps through the values 0 to 1, as the numerical sketch below illustrates.

Furthermore, we have reasons to believe that the emission wave is in fact a KP solution of genus 2. The genus g is the number of independent phases in the solution, and the two phases are of course for the electric and magnetic fields separately. But a special feature of these solutions is a spatial hexagonal structure, which will later turn out to be important.
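The two limits of the cn function quoted above are easy to check numerically; here is a sketch using SciPy's Jacobi elliptic routine (the grid and the near-unity modulus are our own choices):

```python
# cn(u, m) -> cos(u) as m -> 0, and cn(u, m) -> sech(u) as m -> 1,
# so cn^2 sweeps from a cosine-squared wave to the sech^2 solitary wave.
import numpy as np
from scipy.special import ellipj

u = np.linspace(-5, 5, 101)
_, cn0, _, _ = ellipj(u, 0.0)        # modulus m = 0
_, cn1, _, _ = ellipj(u, 0.999999)   # modulus m close to 1

print("max |cn(u, 0)  - cos(u) | =", np.max(np.abs(cn0 - np.cos(u))))
print("max |cn(u, ~1) - sech(u)| =", np.max(np.abs(cn1 - 1 / np.cosh(u))))
```

Both deviations come out small (and the second shrinks further as m approaches 1), confirming that a single family of cnoidal solutions interpolates between the linear and the solitary regimes.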

Physics of the Riemann Hypothesis

A century ago Pólya and Hilbert suggested that the imaginary parts of the non-trivial zeros of the Riemann zeta function would be the oscillation frequencies of a physical system. Today's physicists and mathematicians seem to agree; we can read from the book The Music of the Primes: "We have all this evidence that the Riemann zeros are vibrations, but we don't know what's doing the vibrating."

In mathematics, numbers are used to count and measure. Between the numbers 1, 2, 3... there are more numbers. If one decides to seek a physical counterpart for a number-theoretic conjecture, one must first decide how and by which means counting is carried out. We choose the number of electromagnetic waves. Between wave crests there is some distance, and in this theory this distance is variable; it depends on the energy density on the path of the wave. Potentials are in fact expressions of energy density; that is why forces can be derived from them as a gradient. Laplace's equation $\nabla^2 V = 0$ tells us that there are closed surfaces of constant energy density. These surfaces can be nested, and we can assign differing values of energy density to them.

The assumption made for this theory is that the Maxwell curl equations can be applied regardless of scale. It allows us to consider the physics of the surface and the interior of the electron. Intuitively, the field of a fermion may be thought of as being obtained by cutting off a segment of a higher-order Gauss–Laguerre beam and pasting the two ends together in such a way that its central-axis line forms a circle. The form of the surface is determined by some minimum principle; the pictures below are modified from the article Minimum energy magnetic fields with toroidal topology by A. Y. K. Chui and H. K. Moffatt. They probably give us a hint of the structure and form of particles.

These are a boson and a fermion of equal energy. Fermions have a series of nested nodal surfaces inside the outermost one. A boson can be converted into a fermion of equal energy just by a switch of the mode of oscillation.

In order to solve Maxwell's equations inside a closed surface, boundary conditions must be specified to make the solution unique. In our case it is the Dirichlet boundary condition, meaning that the field vanishes on the (free) boundary. This condition is satisfied in our fermion model at every nodal surface. Furthermore, being an electromagnetic TEM standing wave, our fermion model fulfills the Bateman condition (i.e. the vector Dirichlet condition) on the surface (see the section The Effect of Motion on Particle's Energy). The E field is orthogonal to the B field, and the electric and magnetic energy densities are equal. This kind of surface must be associated with minimal surfaces. (On Minimal Surfaces and Physics; R. M. Kiehn.) The Riemann zeta function is at the heart of our fermion model. There is a (perhaps not widely known) contour integral representation of the Riemann zeta function:

$$\zeta(s) = \frac{\pi}{2(s-1)}\int_{-\infty}^{\infty} \mathrm{sech}^2(\pi t)\left(\tfrac{1}{2} + it\right)^{1-s} dt, \qquad s > 0.$$

Another representation of the Riemann zeta as a Mellin transform is the following:

$$\zeta(s) = \frac{2^{s-1}}{\left(1 - 2^{1-s}\right)\Gamma(s+1)} \int_0^\infty t^s\,\mathrm{sech}^2(t)\,dt.$$
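This representation is easily checked numerically; the following sketch (our own sanity check) compares it with the known value ζ(2) = π²/6:

```python
# Verify zeta(s) = 2^(s-1) / ((1 - 2^(1-s)) * Gamma(s+1)) * integral of t^s sech^2(t).
import math
from scipy.integrate import quad

def sech2(t):
    # sech^2(t) written to avoid overflow for large t
    e = math.exp(-2 * t)
    return 4 * e / (1 + e)**2

def zeta_via_sech2(s):
    integral, _ = quad(lambda t: t**s * sech2(t), 0, math.inf)
    return 2**(s - 1) / ((1 - 2**(1 - s)) * math.gamma(s + 1)) * integral

print(zeta_via_sech2(2.0))   # 1.6449340668...
print(math.pi**2 / 6)        # 1.6449340668...
```

That the sech² profile, which is also the soliton profile of this theory, appears as the kernel of a zeta representation is the coincidence the rest of this section builds on.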

Consider now the nonlinear Helmholtz equation:

$$\frac{\partial^2 E}{\partial x^2} + \frac{\partial^2 E}{\partial y^2} + \frac{\partial^2 E}{\partial z^2} + k^2 E + 2g\,|E|^2 E = 0.$$

For this nonlinear wave equation Yang Yong and Yan Zhen-Ya (Commun. Theor. Phys. (Beijing, China) 38 (2002)) have obtained a solution in terms of Jacobi elliptic functions of the form

$$E(x,y,z) = E(\xi)\exp(i\eta),$$

in which ξ and η are functions of x, y and z. For the electric energy density E² the form of the solution is

$$E^2 = \mathrm{sech}^2(\xi)\exp(2i\eta).$$

The nonlinear wave equation above is the same one from which we started to derive the "almost sech²" fermion model in polar coordinates. The following helps to understand what we are going to propose:

"Series solutions to linear separable partial differential equations may be constructed with complex wavenumbers on a periodic domain if a complex separation constant is used and the boundary conditions are modified to include a jump factor. The imaginary part, $k_i$, is then interpreted as the spatial growth rate at the boundary and the actual wavenumber is given by the real part, $k_r$. When $k_i = 0$, the solution becomes a normal Fourier series composed of periodic modes with no growth at the boundary. When $k_i$ is a non-zero constant, the modes are growing (or decaying) in space, which forces a jump at the boundaries." (Analysis of Instabilities in Liquid Sheets, a dissertation by N. S. Barlow.)

The real part (blue) and imaginary part (red) of the Riemann zeta function along the critical line Re(s) = 1/2.

$$\zeta\!\left(\tfrac{1}{2} + it\right) = Z(t)\cos\vartheta(t) - i\,Z(t)\sin\vartheta(t).$$

The function ζ(s) is zero only when both its real and imaginary parts are simultaneously zero. We identify these as some projections of the electromagnetic vectors E and H. The graph is a representation of the field of a fermion along the radial direction.
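The decomposition just used, ζ(½ + it) = Z(t)e^{−iϑ(t)} with the Riemann–Siegel functions Z and ϑ, can be verified directly with mpmath (the test point t = 20 is an arbitrary choice of ours):

```python
# Check zeta(1/2 + it) = Z(t) * exp(-i * theta(t)) using mpmath's built-ins.
import mpmath as mp

t = mp.mpf(20)
lhs = mp.zeta(mp.mpc(0.5, t))
rhs = mp.siegelz(t) * mp.exp(-1j * mp.siegeltheta(t))
print(lhs)
print(rhs)   # agrees with lhs to working precision
```

mpmath also exposes grampoint(n) for the Gram points discussed next, so the whole picture of this section can be reproduced numerically.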

The Gram points (red dots) are at the nodal surfaces. The vectors behave differently between the nodal points because the propagating wave, existent only during emission and absorption, is in fact a cnoidal wave of genus 2. Such waves may possess double- (or half-) period wave components.

In the emission process an electromagnetic cnoidal wave of genus 2 propagates along the surface of the fermion, ends up at the pole, and forms a sech²-shaped wave, a toroidal boson, which then leaves at the speed of light. We have already discussed the emission process as a cnoidal wave. It will now become obvious that the Riemann zeta function is an inseparable part of the description of that process.

In the section Birth of a Fermion a particle model was formulated by deriving an electromagnetic standing wave from the three-dimensional quantum harmonic oscillator:

$$E_N = N\,r^l\,e^{-r^2/2}\,L_n^{\,l+1/2}(r^2)\,P_l^m(\cos\theta)\cos m\phi.$$

Here l + ½ is a separation constant. If we now add an imaginary part but assign to it a momentary existence only, it can be interpreted as a smooth change between two stationary states, as N. S. Barlow explains above.

Here we have a three-dimensional representation of the issue discussed above. The electromagnetic vectors Re ζ(½ + it), Im ζ(½ + it), and the direction vector t form the Poynting vector (faintly seen in the picture). Even though the particle field is a solution of a nonlinear system of equations, a stable particle must have some separated expression of the form (space part) × (time part), because it oscillates in time while maintaining its spatial structure.

We can interpret an increasing t value as the existence of real parts of the Poynting vector and the wave vector. The real part is associated with propagation of the wave. In the transition from one Gram point to another there is a flow of electromagnetic energy, emission or absorption. At a Gram point both the Poynting vector and the wave vector are purely imaginary, pointing along the surface; there is no flow of energy in or out, and t stays constant.

Picture by Jens Bossaert, Curiosa Mathematica.

A stable particle has the Dirichlet condition on its free surface. If the energy density changes at the particle's position, it triggers a bifurcation process. The boundary condition turns into a mixed Dirichlet and Neumann condition, and the separated form is lost for a brief moment. During the transition the Poynting vector is directed outwards (emission) or inwards (absorption). After the transition, as the separated standing-wave condition is recovered, oscillation takes place in a new mode; the fermion has changed both its frequency and volume.

Infinite series expressions are, in most cases, only mathematical abstractions. As such they are not descriptions of nature (as the renormalization theory of QED so clearly indicates). This is the case also with the Riemann zeta function. Later in this article we meet very large fermions, but not infinitely large ones. One might suspect that the Riemann zeta series is somehow "spoiled" by truncating it, but it is not. For the interested reader there is the article Zeta Function Zeros, Powers of Primes, and Quantum Chaos by Jamal Sakhr et al. Here is an edited excerpt from its introduction: "We first verify that Riemann's formula does produce spectral lines at the positions of the primes and their powers, even when the series is truncated.

"...one could ask whether it is also possible to calculate the prime number sequence from a sum of oscillatory terms, with one term for every zero of the zeta function. Although less widely known, such a series was actually given by Riemann himself. Riemann derived an exact formula for the density of the primes (and their integer powers) that can be expressed as the sum of a smooth function and an infinite series of oscillatory terms involving the complex zeros of the zeta function. The smooth part has been thoroughly studied in the context of the prime number theorem, whereas the oscillatory part has been largely ignored. Interestingly, it is the latter that contains the essential information about the location of the primes. ...To our knowledge, Riemann's series has not been studied numerically."

The Riemann zeta function, interpreted as above, is the foundation of the interaction mechanism between fermions in this theory. It is because we make the following obvious assumption: when the energy (volume) of a fermion changes through absorption or emission, the frequency of oscillation also changes, and it adopts the value of the prime between the two outermost Gram points. The frequency in question is the de Broglie (= zitterbewegung) frequency. The picture below is taken from the article Quasicrystals and the Riemann Hypothesis by John Baez.

It shows results of computations (by Matt McIrvin) concerning the zeros of the Riemann zeta function and prime numbers. The result is a series of sharp spikes at logarithms of powers of prime numbers, and it follows from a summation as below:

$$\zeta\!\left(\tfrac{1}{2} + ik_j\right) = 0, \qquad f(x) = \sum_{j=1}^{\infty} e^{ik_j x}.$$

Here $0 < k_1 < k_2 < k_3 < \cdots$ are the imaginary parts of the zeros of the Riemann zeta function. Matt McIrvin used j = 1…10000.
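The computation is easy to reproduce on a smaller scale. A sketch using the first 200 zeros (fetched with mpmath, which takes a moment; the sign convention and test points are ours): prime powers stand out from other integers.

```python
# Spikes of -Re sum_j exp(i k_j x) at x = log(prime power).
import math
import mpmath as mp

zeros = [float(mp.zetazero(j).imag) for j in range(1, 201)]  # first 200 zeros

def spike(x):
    # sign chosen so that prime powers show up as large positive values
    return -sum(math.cos(k * x) for k in zeros)

for p in (2, 3, 4, 5, 6, 7, 8):
    print(f"x = log {p}: sum = {spike(math.log(p)):9.1f}")
```

The prime powers 2, 3, 4, 5, 7 and 8 give values conspicuously larger in magnitude than 6, which is not a prime power; with McIrvin's 10000 zeros the spikes become razor sharp.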

We have already discussed a relevant topic in the section Absorption and Emission, which we will review shortly:

$$\nabla \times \mathbf{E} = -i\omega\mu\,\mathbf{H}, \qquad \nabla \times \mathbf{H} = i\omega\varepsilon\,\mathbf{E}.$$

If these vectors are to form a wave, then the following dispersion relation must be satisfied:

$$\omega^2\mu\varepsilon - \mathbf{k}\cdot\mathbf{k} = 0.$$

The dispersion relation encapsulates all the governing physics of the system in a single equation. It is an expression describing the relationship between the wave number k, the associated frequency ω, and the medium where the wave propagates (or resides, if it is a standing wave).

Both k and ω may be complex. If they are real, the wave is stationary. If an imaginary part emerges, the wave begins to change: it can grow or decay both spatially and temporally. This shows up in the time part of the particle's wave description as follows, where F(t, r) stands for the frequency/volume-dependent energy of a particle:

$$F_1(t,\mathbf{r}) = F_1(\mathbf{r})\,e^{i\omega_1 t} \;\xrightarrow{\text{abs. starts}}\; F(t,\mathbf{r}) = F_1(\mathbf{r})\,e^{i(\omega_1 + i\tilde{\omega}_1)t} \;\xrightarrow{\text{abs. ends}}\; F_2(t,\mathbf{r}) = F_2(\mathbf{r})\,e^{i\omega_2 t}.$$

It is only during the absorption process that a positive imaginary part exists and the energy of a boson accumulates onto the surface of a fermion. Generally, the dispersion relation allows any combination of frequency and wave number. In our particle model we have only one frequency per energy state but a varying wave number. This is possible because in the model the squared wave number k² is replaced with the energy density, that is, με. We must write the dispersion relation differently:

$$\omega^2 - \frac{\mathbf{k}\cdot\mathbf{k}}{\mu\varepsilon} = 0.$$

Now it is manifestly consistent with the particle model: a single frequency, but a dispersion relation that is always satisfied, provided that the wave number and the energy density change at the same rate, as we have assumed.

Let us now review Matt McIrvin's equations:

$$\zeta\!\left(\tfrac{1}{2} + ik_j\right) = 0, \qquad f(x) = \sum_{j=1}^{\infty} e^{ik_j x}.$$

We identify the numbers $k_j$ as variable wave numbers and the function f(x) as a wave structure, namely the field of a fermion composed of those wave numbers. A discrete spectrum is expected for a quantized structure, and the dispersion relation above is satisfied by prime numbers. To the question "what's doing the vibrating?" we now have an answer. It is electromagnetic energy oscillating in a structure represented by a Fourier duality relation between the prime numbers and the zeros of the Riemann zeta function. Assuming resonant repulsive forces, as we have done for this theory, the existence of solid matter (through the interleaved frequencies of fermions) is the physical proof of the Riemann hypothesis.

Connection Between the Emission Process and the Riemann Zeta Function

$$\frac{\partial^2 E}{\partial x^2} + \frac{\partial^2 E}{\partial y^2} + \frac{\partial^2 E}{\partial z^2} + k^2 E + 2g\,|E|^2 E = 0.$$

As noted earlier, for the nonlinear equation above Yang Yong and Yan Zhen-Ya have obtained a solution in terms of Jacobi elliptic functions of the form $E(x,y,z) = E(\xi)\exp(i\eta)$, in which ξ and η are functions of x, y and z. For the electric energy density E² the form of the solution is

$$E^2 = \mathrm{sech}^2(\xi)\exp(2i\eta).$$

The wave equation above is the same one from which we started to derive the "almost sech²" fermion model in polar coordinates. That model turned out to have a stratified (or quantized) structure. Yong and Zhen-Ya have shown that the exact solution is given in terms of Jacobi elliptic functions.

In their work the authors consider the three-dimensional nonlinear Helmholtz equation and find sixteen families of new Jacobi elliptic function solutions. They also find that as the modulus m → 1, the doubly periodic solutions degenerate into solitary wave solutions, including the known bright and dark soliton solutions as well as the new complex line soliton solution.

From all this we conclude that emission of electromagnetic quanta is a surface phenomenon of a fermion. For a stable fermion the modulus m is zero. The emission process starts at the equator as an almost linear wave. As the disturbance nears the pole, surface conditions approach the "extreme shallow-water conditions" in which the modulus m → 1 and the wave develops into a sech² wave, which takes off and starts to propagate as a boson.

Field Quantization

The electromagnetic field is quantized because it is composed of solitons, and these are formed in the emission process on the surface of fermions. Researchers such as Toda, Boyd, Whitham, Korpel, Banerjee and Zaitsev have all proved that the cnoidal wave of the Korteweg–de Vries equation can be written, without approximation and for all amplitudes, as a sum of evenly spaced solitary waves plus a constant:

$$A(x,t) = -\frac{24\,s}{\pi} + \sum_{n=-\infty}^{\infty} 12\,s^2\,\mathrm{sech}^2[\,s(x - ct - n\pi)\,].$$

(The constant merely ensures that the x-average of A(x,t) is 0, as would be true of a sine wave, and has no further significance.) Even in the small-amplitude (s < 1) regime where A(x,t) is well approximated by a cosine function, the cnoidal wave is still the superposition of identical, evenly spaced solitary waves. Thus, solitons and cnoidal waves are not mutually exclusive, as Stokes believed; rather, the cnoidal wave is a sum of solitons. (Nonlinear Equatorial Waves by J. P. Boyd.) A remarkable fact is that the superposition above is exact even in the limit of infinitesimal amplitude, when the function A is a sine wave; the sketch below checks this numerically.

In our particle model a stable fermion has a harmonic (sine or cosine) wave on its surface, and its period is the reciprocal of the de Broglie frequency. The processes of emission or absorption can be triggered by perturbing a stable fermion through changes in energy density at its location. Most natural systems are nonlinear, and are therefore modelled by nonlinear systems of equations. The essential difference between linear and nonlinear systems is that linear systems satisfy a simple superposition principle and are thus incapable of expressing natural phenomena. As a summary, we argue that all electromagnetic waves in nature are superpositions of cnoidal waves, and the equation above constitutes a statement of the fractality of the boson space.
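The claim about the small-amplitude limit can be checked directly. A sketch (the truncation at 50 terms and s = 0.3 are our own choices): the truncated imbricate series is compared against a pure cosine of the same period and amplitude.

```python
# Truncated imbricate series vs. a cosine of the same period (pi) and amplitude.
import numpy as np

def imbricate(x, s, n_terms=50):
    n = np.arange(-n_terms, n_terms + 1)
    return (-24 * s / np.pi
            + (12 * s**2 / np.cosh(s * (x[:, None] - n * np.pi))**2).sum(axis=1))

x = np.linspace(0, 2 * np.pi, 400)
A = imbricate(x, s=0.3)
cosine = A.max() * np.cos(2 * x)   # peaks at x = 0, pi, 2*pi, like the pulse train

print("pulse-train amplitude   :", A.max())
print("max deviation from cosine:", np.max(np.abs(A - cosine)))
```

At s = 0.3 the deviation is a tiny fraction of the amplitude: to the eye, a train of sech² solitons is a harmonic wave, exactly as the imbricate series promises.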

Connection to Other Theories of Quantum Fields

There is none. R. Feynman said the following in his Nobel lecture: "We should look at the question of the value of physical ideas in developing a new theory. … I, therefore, think that a good theoretical physicist today might find it useful to have a wide range of physical viewpoints and mathematical expressions of the same theory (for example, of quantum electrodynamics) available to him... for possibly the chance is high that the truth lies in the fashionable direction. But, on the off-chance that it is in another direction - a direction obvious from an unfashionable view of field theory - who will find it? Only someone who has sacrificed himself by teaching himself quantum electrodynamics from a peculiar and unusual point of view; one that he may have to invent for himself." This seems to be our case. It remains to be seen whether theoretical physicists find our peculiar views worth considering.

Fritz Bopp's Theory

In his lecture Feynman also introduced a modification of classical electrodynamics by Fritz Bopp, an idea he used in his own theory. Bopp's theory is that:

$$A_\mu(1, t_1) = \int j_\mu(2, t_2)\, F(s_{12}^2)\, dV_2\, dt_2.$$

Feynman's idea was to replace the delta function by another function f which is not infinitely sharp. He said: "You have no clue of precisely what function to put in for f, but it was an interesting possibility to keep in mind when developing quantum electrodynamics."

In fact Feynman did more than just keep it in mind: he used it, even while declaring that "You have no clue of precisely what function to put in for f...". In our theory, we have an obvious candidate for the function f, namely the sech² function. Bopp's theory seems to stem from a desire to have a theory founded upon particles rather than waves, since it is this particle aspect (highly localized phenomena) which is most frequently encountered in present-day high-energy experiments (cloud chamber tracks, etc.).

Bopp’s “field mechanics” fits perfectly into our theory for two reasons. Firstly, our particles are waves, but so compact that they can be described as particles. Secondly, the change from the infinitely sharp Dirac delta function to another function f which is not infinitely sharp, brings in the possibility of modulation.

$$\mathbf{A}(1,t) = \int_V \mathbf{j}(2)\,\delta(t - r_{12}/c)\,dV_2 \;\;\Rightarrow\;\; \mathbf{A}(1,t) = \int_V \mathbf{j}(2)\,\mathrm{sech}^2(t - r_{12}/c)\,dV_2,$$

$$\varphi(1,t) = \int_V \rho(2)\,\delta(t - r_{12}/c)\,dV_2 \;\;\Rightarrow\;\; \varphi(1,t) = \int_V \rho(2)\,\mathrm{sech}^2(t - r_{12}/c)\,dV_2.$$

The equations above try to convey the idea that electromagnetic potentials are created at their fermionic sources in a scattering process, and then mediated by modulated (torus-shaped) electromagnetic solitons. A one-dimensional modulated soliton is featured below:

$$A(x,t,\omega_B) = A\,\mathrm{sech}^2(x - ct)\cos(\omega_B t).$$

The angular frequency B refers to the de Broglie -frequency of the scatterer. It is the base frequency of the spectrum of scattered radiation. Moving from Dirac delta to the sech2 function in the equations above means inclusion of the fine structure of electromagnetic interaction.

[For the reader interested in the mathematical details of torus-shaped fields, we recommend the works of Pierre Hillion. He has analyzed the behavior of TE and TM electromagnetic fields in a toroidal space through the Maxwell and wave equations, as well as sech-shaped soliton solutions.]

The Electromagnetic "General Relativity"

Constraints on the Motion of Particles in Space

Feynman lecturing: "Is the same thing true in mechanics? Is it true that the particle doesn't just 'take the right path' but that it looks at all the other possible trajectories? And if by having things in the way, we don't let it look, that we will get an analog of diffraction? The miracle of it all is, of course, that it does just that. That's what the laws of quantum mechanics say. So our principle of least action is incompletely stated. It isn't that a particle takes the path of least action but that it smells all the paths in the neighborhood and chooses the one that has the least action by a method analogous to the one by which light chose the shortest time."

Why this talk about particles looking, smelling and choosing? It is because the least action principle is teleological, literally. (Teleology is defined in the Encyclopedia Britannica as “explanation by reference to some purpose or end”.) It is also because Feynman's space is homogeneous (the same everywhere) and isotropic (the same in every direction). There are quantum fluctuations of the electromagnetic field, virtual particles constantly appearing and disappearing. This is the zero-point field. But one thing is missing: there are no gradients of the zero-point energy density.

In his Lectures (II) Feynman explains how a particle's true path is determined by Hamilton's principle:

“So every subsection of the path must also be a minimum. And this is true no matter how short the subsection. Therefore, the principle that the whole path gives a minimum can be stated also by saying that an infinitesimal section of path also has a curve such that it has a minimum action. Now if we take a short enough section of

path—between two points a and b very close together—how the potential varies from one place to another far away is not the important thing, because you are staying almost in the same place over the whole little piece of the path. The only thing that you have to discuss is the first-order change in the potential. The answer can only depend on the derivative of the potential and not on the potential everywhere. So the statement about the gross property of the whole path becomes a statement of what happens for a short section of the path—a differential statement. And this differential statement only involves the derivatives of the potential, that is, the force at a point. That's the qualitative explanation of the relation between the gross law and the differential law."

“So the statement about the gross property of the whole path becomes a statement of what happens for a short section of the path... that is, the force at a point.” Thus spoke Feynman. How can his teleological views be connected with this?

Anyhow, a force is a gradient of energy density. We bring in the concept of a literally all-embracing energy density that replaces the concept of the zero-point energy of space, and this density has gradients. The electromagnetic potentials V and A are components of the tensorial energy density in question. It also replaces the material parameters (permittivity ε and permeability µ) which describe the interaction between electromagnetic energy and matter.

From these parameters one can calculate the refractive index n. Currently there is great vagueness around this concept: how many atoms constitute the minimum number before one can apply the idea of a refractive index? We think it reasonable to introduce a concept by which the sharp distinction between matter and space fades away. We start by reviewing the eikonal:

$$\mathbf{E} = \mathbf{e}(\mathbf{r})\,e^{ik_0 S(\mathbf{r})}, \qquad (\mathrm{grad}\, S)^2 = n^2 = \varepsilon\mu.$$

This equation is usually interpreted as a wave which is “locally plane.” S(r) is a function of position in space and it determines how the wave vector changes along the path of the propagating light. But there is another interpretation: the locally plane region can be seen as part of a closed surface.

The eikonal equation can also be regarded as the equation that describes the propagation of discontinuities of the solutions of wave equations of certain type, e.g. the one from which we derived the almost sech2 fermion model.

This subject is discussed in Principles of Optics by Born and Wolf, 7th (expanded) edition, Appendix VI. The outcome is that a moving discontinuity surface must obey the eikonal equation. The subject was known already to H. Bateman. He writes in his book Electrical and Optical Wave-Motion: "...the boundary-conditions do not imply that the boundary moves normally to itself with the velocity of light; in fact, the motion of the boundary can be quite arbitrary."

What this all means: we derived our fermion model by replacing the product εμ with energy density. Fermions are regions of very high energy density as compared to the surrounding space. Therefore, they are surrounded by a surface of discontinuity. It is a thin transition layer within which the value of εμ varies rapidly but continuously. Bosons are similar in this respect.

We can make the following conclusion: particles are guided in space by gradients of energy density, but they also affect the energy density of space with their own fields. (We can equally well say that particles are guided by a force, because that is what a gradient of energy density is.)
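A minimal numerical sketch of such guidance (our own illustration; the parabolic index profile is an arbitrary choice): integrating the paraxial ray equation y'' = (dn/dy)/n in a medium whose "energy density", i.e. refractive index, peaks on the axis.

```python
# Paraxial ray in a graded-index medium: y'' = (dn/dy) / n.
def n(y):
    return 1.5 - 0.1 * y**2      # index highest on the axis (assumed profile)

def dndy(y):
    return -0.2 * y

y, slope = 0.5, 0.0              # launch the ray off-axis, parallel to the axis
dx = 1e-3
ys = []
for _ in range(20000):           # march 20 length units along the axis
    slope += dx * dndy(y) / n(y)
    y += dx * slope
    ys.append(y)

print(f"ray oscillates between y = {min(ys):.3f} and y = {max(ys):.3f}")
```

The ray is confined and oscillates about the high-index axis: a gradient, not a curvature of spacetime, does the steering in this picture.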

We now have the condition that "matter tells spacetime how to curve, and spacetime tells matter how to move", but in Euclidean space, with time kept separate and quantum phenomena included. The discreteness of space and matter is measured by the gradient of energy density.

If it so happens that a "perturbation series" is needed, it can be constructed from the same general series expansion from which R. Feynman constructed his perturbation expansion in powers of the fine-structure constant, because this "constant" is central to our theory similarly as in QED. But there will be no divergent integrals as coefficients in the power series!

$$e^{f(x)} = 1 + f(x) + \frac{[f(x)]^2}{2!} + \cdots + \frac{[f(x)]^n}{n!} + \cdots.$$

Our electromagnetic, nonlinear theory explains why “charge” and masses of particles are not constants but have “experimental values”, and why different kinds of “cutoffs” are physically justified.

The Jewel of Physics

Content of Feynman's Theory

The first paragraph in the preface of Relativistic Quantum Mechanics by Bjorken and Drell reads: “The propagator approach to a relativistic quantum theory pioneered in 1949 by Feynman has provided a practical, as well as intuitively appealing, formulation of quantum electrodynamics and a fertile approach to a broad class of problems in the theory of elementary particles. The entire renormalization program, basic to the present confidence of theorists in the predictions of quantum electrodynamics, is in fact dependent on a Feynman graph analysis, as is also considerable progress in the proof of analytical properties required to write dispersion relations. Indeed, one may go so far as to adopt the extreme view that the set of all Feynman graphs is the theory.” [original emphasis.]

Feynman diagrams are based on virtual particles, and the existence of these is possible through the Heisenberg Uncertainty Principle. We explained earlier why this axiom is wrong. In our opinion, a probability theory based on virtual particles (and hence on Heisenberg's principle) has nothing to do with reality. Feynman himself considered his Nobel Prize-winning work more of a virtuoso technical performance than a meaningful insight into nature, and we agree with that. But the connection to reality can be restored, and this has been the general aim of our work. We have now shown how the raw diamond of QED can be cut, ground and polished to make it shine.

Precision Tests of the Present-day QED

According to Dirac's theory, the g-factor of the electron is exactly 2. QED introduces a correction to this, caused by “vacuum polarization”. The calculation of this correction generates an infinite power series with infinitely many divergent integrals as coefficients. It is said that the measurement of the electron's anomalous magnetic moment is the most accurate test of the theory of QED to date. These measurements are carried out using Penning traps.

Experiments with individual electrons suggest stepwise (or quantized) behavior at the microscopic level. For this reason the Geonium Theory has been developed. Geonium is a highly artificial “atom”, and treating the trap and the electron as a bound quantum system opens the possibility of explaining the discreteness of the measured data by the claim that the motion of the electron is quantized. But this line of thought also brings about another consequence: the perpetual motion of electrons, similar to that in atoms. Before discussing the experiments, we try to find out what picture of the electron the researchers have in mind, because they, too, speak of zitterbewegung.

The main idea comes from Kerson Huang, On the Zitterbewegung of the Dirac Electron (1952): “The well-known Zitterbewegung may be looked upon as a circular motion about the direction of the electron spin with radius equal to the Compton wavelength × (π/2) of the electron. The intrinsic spin of the electron may be looked upon as the orbital angular momentum of this motion. The current produced by the Zitterbewegung is seen to give rise to the intrinsic magnetic moment of the electron.”

Ampere's Theory. (From Nature, Dec. 1, 1887.) “The idea that magnetism was nothing more nor less than a whirl of electricity is no new one: it is as old as Ampere. Perceiving that a magnet could be imitated by an electric whirl, he made the hypothesis that an electric whirl existed in every magnet and was the cause of its properties. Not of course that a steel magnet contains an electric current circulating round and round it, as an electro-magnet has: nothing is more certain than the fact that a magnet is not magnetized as a whole, but that each particle of it is magnetized, and that the actual magnet is merely an assemblage of polarized particles. The old and familiar experiment of breaking a magnet into pieces proves this. Each particle or molecule of the bar must have its circulating electric current, and then the properties of the whole are explained. ... How did these currents originate? We may as well ask: How did any of their properties originate? How did their motion originate? These questions are unanswerable. Suffice it for us, there they are.”

In our theory they are not there! Experimenters using Penning traps have adopted Huang's view. From Van Dyck, Schwinberg and Dehmelt, Experiments with single electrons (1991):

“...What remains is a soft quasi-orbital structure of radius about one Compton wavelength × (π/2) formed by the circular Zitterbewegung of the hard “point” electron of dimension smaller than 10⁻¹⁶ cm. It is this structure on which measurements of the intrinsic magnetism of the electron provide information.”

(One of the most embarrassing claims of present-day atomic physics is that the inner-shell electrons in large atoms have velocities approaching the speed of light, and that protons and neutrons inside nuclei move with speeds of the order 0.2c … 0.3c.) The Bohr magneton plays the crucial role in the calculations, but quantum mechanics has no model at all for the origin of the Bohr magneton; it is just taken for granted. Adopting Huang's view, one can derive from it the Bohr magneton:

$$\mu = AI = \frac{e\hbar}{2mc}.$$

Here I is the current caused by the motion of the electron at the speed of light, and A is the area enclosed by the circular orbit of the electron. (The derivation itself was published in The Hadronic Journal in 1985.)

The starting point of the derivation is the assumption that an electron revolves, at the speed of light, in an orbit whose circumference is the Compton wavelength, 2.42631×10⁻¹² m; but that is, of course, impossible. The fact that the Bohr magneton can be derived from such a setup should be seen as a warning that something is wrong.
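As a numerical cross-check of the derivation quoted above, the loop model can be evaluated directly. The sketch below (Python, SI units, CODATA constants) confirms that AI for a charge circulating at c around a Compton-wavelength circumference equals the Bohr magneton eħ/2m (the Gaussian-units form eħ/2mc appears in the formulas above); it is our own check, not part of the published derivation.

```python
# Minimal check: a charge circulating at speed c around a loop whose
# circumference is the Compton wavelength reproduces the Bohr magneton.
import math

e    = 1.602176634e-19      # elementary charge [C]
m_e  = 9.1093837015e-31     # electron mass [kg]
hbar = 1.054571817e-34      # reduced Planck constant [J s]
c    = 2.99792458e8         # speed of light [m/s]

lam_C = 2 * math.pi * hbar / (m_e * c)   # Compton wavelength h/(mc) ~ 2.426e-12 m
r     = lam_C / (2 * math.pi)            # orbit radius
I     = e * c / lam_C                    # current: one charge per revolution
A     = math.pi * r**2                   # area enclosed by the orbit

mu_AI   = A * I                          # magnetic moment of the loop model
mu_Bohr = e * hbar / (2 * m_e)           # Bohr magneton, SI form

print(mu_AI, mu_Bohr)                    # both ~ 9.274e-24 J/T
```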

The product AI (magnetic moment) is familiar from classical electromagnetics. It works well in engineering calculations but, as suggested in this article, the cause of magnetism is not the velocity of electrons but their orientation. In the calculations of magnetic moments it has become customary to break the magnetic moment into two parts:

$$\mu = (1 + a)\,\frac{e\hbar}{2mc}, \qquad a = \frac{g-2}{2}.$$

The first part, predicted by the Dirac equation, is 1 in units of the Bohr magneton. The second part is the anomalous moment, where the dimensionless quantity a is the anomaly. We can see that the anomaly is a measure of how much the Bohr magneton differs from the impossible zitterbewegung condition. There is something fishy in the theoretical setup of the g-factor measurements, and it forces one to seek explanations.

Some Questions and Answers About the One-electron Penning Trap Experiment

There are countless articles on the physics of the Penning trap technique and the experimental setup; we do not repeat them here. The results of the experiment are explained by an analysis of the motion of the electron in the trap. The motion is composed of three harmonic oscillators: the cyclotron, the axial and the magnetron motions, which are well separated on the energy scale (GHz, MHz and kHz, respectively). After the injection of electrons into the trap, the radius of the cyclotron motion is damped quickly and the electron is brought towards the trap center, but it is claimed that the motion never ends.

Now the question arises: what prevents the electron from stopping completely in the trap? The answer is this: an electron in a homogeneous field B will move in a circular orbit with cyclotron frequency and radius of motion as follows:

$$\omega_c = \frac{eB}{m}, \qquad r = \frac{v}{\omega_c}.$$

If the velocity is lowered (by dissipative processes which, in reality, are always present), the radius of the orbit must decrease. In classical theories there is no limit to the decrease of r, but not so in modern physics. Why not? Because of Heisenberg's principle, what else? (This principle is certainly the most destructive idea of modern physics.)

It says: “The smaller the region to which the electron is confined, the smaller is the uncertainty in its position. There must be a corresponding increase in the uncertainty of its momentum. This is brought about by the increase in the kinetic energy which increases the magnitude of the momentum and thus the uncertainty in its value. In other words, the bound electron must always possess kinetic energy as a consequence of quantum mechanics.”

So it is claimed that at the lowest quantum state of the geonium atom the electron oscillates in the middle of the trap. This cyclotron motion is characterized by its frequency and radius, typically 150 GHz and 10 nm.

What are the conclusions that have been drawn from these experiments? This is best seen in the calculation of the g-factor. We take it from the article The Isolated Electron by Philip Ekstrom and David Wineland:

$$\nu_s = \frac{g\,\mu_B B}{h}, \qquad \nu_c = \frac{2\,\mu_B B}{h}, \qquad \nu_a = \nu_s - \nu_c = \frac{g\,\mu_B B}{h} - \frac{2\,\mu_B B}{h} = \frac{(g-2)\,\mu_B B}{h},$$

$$a = \frac{\nu_a}{\nu_c} = \frac{(g-2)\,\mu_B B / h}{2\,\mu_B B / h} = \frac{g-2}{2},$$

$$\frac{g-2}{2} = 1.1596522 \times 10^{-3} \quad \text{[eight significant figures]},$$

$$g = 2 + 2\left[\frac{g-2}{2}\right] = 2.0023193044 \quad \text{[11 significant figures]}.$$
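The cancellation in these formulas can be made concrete with a small sketch; the field value B below is our own illustrative assumption, not a value from the article:

```python
# Sketch of the Ekstrom-Wineland arithmetic: B, mu_B and h all cancel in
# the ratio nu_a / nu_c, leaving exactly (g - 2) / 2.
import math

e, m, hbar = 1.602176634e-19, 9.1093837015e-31, 1.054571817e-34
h    = 2 * math.pi * hbar
mu_B = e * hbar / (2 * m)

g = 2.0023193044            # value quoted in the text
B = 5.0                     # tesla (assumed, for illustration only)

nu_s = g * mu_B * B / h     # spin precession frequency
nu_c = 2 * mu_B * B / h     # cyclotron frequency (~140 GHz at 5 T)
nu_a = nu_s - nu_c          # anomaly frequency

print(nu_a / nu_c, (g - 2) / 2)   # identical, independent of B
```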

In the ratio νa / νc almost all quantities cancel exactly; what remains is the ratio (g − 2)/2, which is equal to the ratio of two frequencies. (The experiment makes use of the fact that frequency is the quantity that can be measured most precisely.)

What is the present scientific justification for these oscillations? “The spin of the particle, however, is intrinsically quantum mechanical; and the cyclotron motion at liquid helium temperatures is also quantum mechanical.” [Monoelectron Oscillator; Gabrielse.]

What do we know about these oscillations, and how? “All information about the electron is gathered through the axial motion. [νa]” [Ibid.]

How is it gathered? From L. S. Brown and G. Gabrielse, Geonium theory:

“The axial resonance is of particular importance in the experiments with one electron or positron. It is at a radio frequency, which is relatively easy to use in the laboratory, while the cyclotron and spin resonances are at very high microwave frequencies, which are much less accessible. As a result, all information about cyclotron and spin excitations is detected via small couplings, which produce corresponding small shifts in the axial resonance frequency.” [our emphasis.]

In our view, the physical reality of the experiment is as follows: The electron is trapped due to the magnetic bottle effect and the repulsive electric forces. Without excitation in the z-direction it moves only slightly, and the motion is Brownian-like. The polar orientation of the electron is controlled by the magnetic field of the trap. The electron has a moment of inertia about the axis through its center, perpendicular to the spin vector. Therefore, it is possible to excite the electron spin resonance and the subsequent spin flip by the use of electromagnetic radiation having a Fourier component at the electron wobble resonance frequency. The spin flip is detected as a small change in the ~60 MHz axial oscillation frequency of the electron in the trap. The situation in the trap is clear: the small change of the frequency of the oscillation in the z-direction is due to changing mass; there are two different masses for the same electron, assigned to the two different orientations, spin up and spin down, in the magnetic field of the trap.

$$\ddot z + \omega_z^2 z = 0, \qquad \omega_{z1}^2 = \frac{2 e V_0}{m_1 d^2}, \qquad \omega_{z2}^2 = \frac{2 e V_0}{m_2 d^2}.$$
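A minimal sketch of this mass-shift reading of the axial frequency; the trap voltage V0 and dimension d below are illustrative assumptions, not values from the experiments:

```python
# If the axial restoring force is fixed, a small mass change dm shifts
# the axial frequency by d(nu)/nu = -dm/(2m), since nu ~ 1/sqrt(m).
import math

e, m = 1.602176634e-19, 9.1093837015e-31
V0, d = 10.0, 3.3e-3          # assumed trap potential [V] and dimension [m]

def nu_z(mass):
    return math.sqrt(2 * e * V0 / (mass * d**2)) / (2 * math.pi)

dm = 1e-9 * m                  # a one-part-per-billion mass change
print(nu_z(m))                                 # axial frequency, tens of MHz
print((nu_z(m + dm) - nu_z(m)) / nu_z(m))      # ~ -0.5e-9, i.e. -dm/(2m)
```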

From the point of view of our theory, calculations including νc (the frequency of cyclotron motion) are meaningless, because it is non-existent in the sense of perpetual motion.

On the Accuracy of the g-2 Measurements

From the article Cavity Control of a Single-Electron Quantum Cyclotron by D. Hanneke: “For angular momentum arising from orbital motion, g/2 depends on the relative distribution of charge and mass and equals 1/2 if they coincide...”

How is it possible that the magnetic moment of the electron can be explained with such accuracy, when we do not possess an accepted explanation for the mass, charge or spin of the electron, not to mention the distributions of such things?

The question is not answered, but the math is given (see the calculations by Ekstrom and Wineland above): there are two frequencies from which the “anomaly frequency” is determined as a difference: the spin frequency and the cyclotron frequency. The spin precession frequency is slightly higher than the cyclotron frequency. To get the value of (g − 2)/2, the anomaly frequency is compared to the cyclotron frequency: νa / νc = (g − 2)/2.

The problem is that νc does not exist at all. Its perpetual being is a free creation of the human mind. The 8-digit accuracy is achieved by making a comparison to a reference value which exists only as an idea.

Then there is an obvious paradox: if (g − 2)/2 is known to eight significant figures, how can the accuracy of the measurement be improved to 11 significant figures just by multiplying by 2 and adding 2? This question is best answered by the following analogy. A surveyor is asked to lay out a baseline of 2,001 meters. He has a one-meter measuring rod, accurate to 8 significant figures. The baseline could be measured meter by meter, but the surveyor happens to have a chain, known to be exactly two kilometers long. The measurement is easily done. The error due to the measuring rod is on the scale of tens of nanometers, and comparing this to the total length of the baseline the surveyor can announce that the baseline is accurate to 11 significant figures. But how does he know that the chain is exactly two kilometers long? He knows, because he has a theory concerning the length of the chain, and it tells him so.
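The bookkeeping behind the analogy can be spelled out in a few lines; the uncertainty assigned to the last quoted digit of a below is our own assumption, for illustration:

```python
# An 8-digit uncertainty in a = (g-2)/2 becomes an 11-digit *relative*
# accuracy in g = 2 + 2a only because the leading "2" is taken to be
# exact by theory, like the surveyor's two-kilometer chain.
a  = 1.1596522e-3
da = 0.5e-10                   # +/- half of the last quoted digit (assumed)

g  = 2 + 2 * a
dg = 2 * da                    # the added 2 carries no uncertainty

print(da / a)    # ~4e-8  -> about 8 significant figures in a
print(dg / g)    # ~5e-11 -> about 11 significant figures in g
```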

What is the g-factor? It determines how large a magnetic moment will be generated by a given amount of spin, charge and mass, all without explanation in today's physics. However, its measurement is used to fortify the aura around QED.

“For the electron spin, the most accurate value for the spin g-factor has been experimentally determined to have the value 2.00231930419922 ± (1.5 × 10−12).” Wikipedia.

This accuracy is pure illusion, as explained above, but the QED series expansion can of course be adjusted to fit the value. The thing to note, however, is that QED is at no point directly involved in the extraction of a g−2 value from the experiments. The familiar expression from the theory of renormalization fits perfectly with our theory:

$$m_{\mathrm{exp}} = m_0 + \Delta m .$$

The experimental mass is a variable, because a fermion is never isolated from its environment and depends on the local tensorial, frequency-dependent energy density. The mass m0 is a reference value, measured e.g. in free space. Δm is a stepwise change of the mass, caused by emission or absorption of bosons. With this understanding, the observations of the precision test experiments can be understood and explained without any reference to the axioms and doctrines under the sign of the “Copenhagen interpretation”, and the present form of quantum electrodynamics is not needed at all.

Picture credit: http://www.mpi-hd.mpg.de/blaum/

“Landau quantization in quantum mechanics is the quantization of the cyclotron orbits of charged particles in magnetic fields. As a result, the charged particles can only occupy orbits with discrete energy values, called Landau levels.” In light of our theory, this quantization should be thought to concern the structure of a fermion and not its orbit. With that revision the energy level sketch below is understandable without any reference to orbital speeds.
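For reference, the level spacing ħωc that the quoted textbook definition refers to is easy to evaluate; the field value below is an illustrative assumption:

```python
# Standard-picture Landau level spacing E_n = (n + 1/2) * hbar * w_c
# for an electron in a magnetic field.
e, m, hbar = 1.602176634e-19, 9.1093837015e-31, 1.054571817e-34

B   = 5.0                      # tesla (assumed, for illustration)
w_c = e * B / m                # cyclotron angular frequency
dE  = hbar * w_c               # spacing between adjacent levels [J]

print(dE / e * 1e3)            # ~0.58 meV between adjacent Landau levels
```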

The results of the Penning trap experiments, interpreted as above, render meaningless Niels Bohr’s assertion, as formulated by Pauli, that “it is impossible to observe the spin of the electron, separated fully from its orbital momentum, by means of experiments based on the concept of classical particle trajectories”.

Some Related Phenomena

In his book Electromagnetic Theory, Julius A. Stratton speaks of the one-to-one correspondence of electrical and mechanical systems and how they can be formulated by the same set of differential equations. He writes: “It would seem that the 'absolute reality', if one dare think of such thing, is an inertia property, of which mass and inductance are only representations or names.” We now understand that the root cause of inductance is the moment of inertia of the electron. It also has its own role in the oscillation discovered in the Davisson–Germer experiment.

The exact mechanism behind electron pairing has been unknown. Now there is an explanation: a spin-up spin-down pair of electrons is possible in an external magnetic field, because the electrons have differing energies. Therefore, their spectra are somewhat interleaved (an effect known as level repulsion) and the repulsive force is diminished. Magnetic force keeps the electrons together.

More Tests of QED

The second-most-accurate test of QED (after the electron's anomalous magnetic dipole moment described above) is the atom-recoil measurement. Comparison of the two is said to provide the most stringent test of QED.

One can, for example, take a look at the article A new photon recoil experiment: towards a determination of the fine structure constant. From its section Basic Idea one can see that the recoil test is based on the Planck–Einstein relations and on the conservation of energy and momentum as expressed by the Compton–Debye law. The authors introduce their work as follows: “This method does not rely on involved QED calculations. Thus, on the one hand, it is not limited by the accuracy of present QED theory. On the other hand, a comparison to α as obtained from ge−2 amounts to a test of QED.”

These experiments could all have been performed and yielded their results without QED having ever been formulated.

“Let me begin by noting that the high precision tests of QED is really a test of the Standard Model on systems in which the electromagnetic interaction plays the dominant role. Of course, non-QED effects cannot be ignored. Before going into detail, it must be noted that the theory at present has several parameters such as mass m of the electron and elementary charge e (or the fine structure constant α) which cannot be fixed within QED. These parameters must be found elsewhere. One problem is that no currently available value of α is accurate enough to test theory and the measurement of the electron magnetic moment anomaly ae. From another perspective, however, this means that ae is the best source of α available at present. Once α is determined from ae, it can be used to examine how well α's based on other physics are measured. Comparisons of α's measured by various methods is really a test of the internal consistency of quantum mechanics on which all these measurements are based.” [Toichiro Kinoshita in The Gregory Breit Centennial Symposium.] [Our emphasis.]

Internally Consistent Theories May Be False

“Most of the essay has argued that the arguments referring to consistency are valid, important, and theories that have passed some (or many) consistency checks should be taken (more) seriously because they are probably teaching us something new, something that we previously didn't know. However, it is also possible to overestimate the power of the consistency. The most fallacious abuse of the power of consistency (or the language that includes this word) is to pick a particular theory and to claim that it must be correct because every new insight or observation seems consistent with the theory while overlooking the existence of some competing theories that are equally (or more) consistent.” [An excerpt from the essay Consistency Arguments in Theoretical Physics by Luboš Motl.]

CONCLUSION

$$e_\Lambda = e_0\left[1 + c_0\!\left(\frac{\Lambda}{\Lambda_0}\right)\right], \qquad m_\Lambda = m_0\left[1 + \tilde c_0\!\left(\frac{\Lambda}{\Lambda_0}\right)\right].$$

This is called the Renormalization Group (RG) approach. One can see that the charge and mass have “running values” (c0 and its tilde-counterpart are proportional to ln(Λ/Λ0)). The Λ's are cutoff energy scales. If the terms containing (Λ/Λ0) are replaced with a function expressing spatial energy density, the equations represent the basic idea of our theory.

An Ideal According to Paul Dirac

The RG approach is saying that different theories are appropriate at different energy scales. We suggest one theory that is suitable at all energy scales; a theory that does not suffer from divergences. It satisfies Paul Dirac's philosophical wish: “From general philosophical grounds one would at first sight like to have as few kinds of elementary particles as possible, say only one kind, or at most two, and to have all matter built up of these elementary kinds.”

We have one kind of particles (≡ all derivable from the same equation). As a byproduct, our theory reduces the number of fundamental forces to one. Quantum phenomena, gravity included, are an integral part of the universal electromagnetic force-field theory.

We have shown that all the most important experiments that are considered to support relativity can be explained without using the four-dimensional framework, the geometrical space-time, which we consider a conceptual disaster.

We have suggested an alternative view of quantum phenomena. We came to this point by first answering “no” to Feynman’s question “Will the electron’s momentum still be the right thing with which to describe nature?”

We think that our 100% electromagnetic, nonlinear theory is able to overcome the Copenhagen quantum physics. The results of atomic spectrometry are what they are; we have only seen them from a completely different perspective, which is in principle not new. A hundred years ago the electromagnetic view of the world was seen as a substitute for the mechanical view, but it faded away due to its incompatibility with the theory of relativity.

The general features of the new (quantized) electromagnetic view have been introduced. We are now in the position to test these ideas. Theories cannot legitimately be judged by the standards of another theory. But they can be compared with respect to their ability to fit a phenomenon into a broad theoretical framework which brings together, under one set of fundamental equations, a wide array of different kinds of phenomena.

In the remaining half of the article we aim to demonstrate the large-scale applicability of our theory. We use it to explain phenomena in nuclear, atomic and molecular physics, in cosmology and biology. We also discuss some miscellaneous interesting experiments and other matters of scientific interest and give them explanations using only the concepts of our approach.

Applications

Atoms and the Nature of Chemical Bonds

In 1913 Niels Bohr introduced his planetary model of the hydrogen atom. He said that the electrons do not spiral into the nucleus because of loss of energy by radiation, and came up with some rules for what does happen:

RULE 1: Electrons can orbit only at certain allowed distances from the nucleus.

RULE 2: Atoms radiate energy when an electron jumps from a higher-energy orbit to a lower-energy orbit. Also, an atom absorbs energy when an electron is boosted from a low-energy orbit to a high energy orbit.

These rules are correct, provided that one word is changed; the word orbit must be replaced with the word stay.

$$E = k_e \,\frac{Z e^2}{2 r_n}.$$

This equation is also correct, provided that it is interpreted as follows. We first review the Schrödinger atom model for hydrogen. The model is a product of azimuthal, polar, and radial wave functions: ψ = ψ(r,θ,φ) = R(r)Y(θ, φ), in which Y(θ, φ) are the spherical harmonic functions and R(r) is expressible in terms of the associated Laguerre function.

The model is a solution of the Schrödinger equation, which is a wave equation. In the present interpretation of this equation we meet the concept of the atomic orbital, which refers to the electron's constant motion. We have here a case of perpetual motion again, and such motion is not possible.

The reason why an electron stays at a quantized radial distance from the nucleus is the repelling force caused by multipole radiation from the constituents of the nucleus. As we have already stated in the section Enhanced Concept of Electric Charge: all particles and their aggregations are sources of multipole radiation. The structure of the radiated field is best seen from the definitions of the electric and magnetic multipole fields:

$$\mathbf E^{(M)}_{lm} = g_l(kr)\,\mathbf L\, Y_{lm}(\theta,\varphi)\, e^{i\omega t}, \quad \text{or} \quad \mathbf H^{(E)}_{lm} = f_l(kr)\,\mathbf L\, Y_{lm}(\theta,\varphi)\, e^{i\omega t}.$$

There are radial functions, the basic angular functions (spherical harmonics) and the time factor, in separated variables. Due to multipole radiation, around every particle there is a space region in which a test particle can encounter a varying energy density and all kinds of forces, including torsion. The space around a particle must be described as a tensor field at a certain frequency band. This tensor replaces the electric permittivity and magnetic permeability tensors.

Schrödinger was not able to interpret his equation; he couldn't tell “what was waving.” We can now see the reason for this by comparing the atom model ψ = ψ(r,θ,φ) = R(r)Y(θ, φ) with the multipole expressions above. Schrödinger's atom is not a standing wave at all; it is a representation of electromagnetic radiation from the nucleus. The “atomic orbitals” are radiation patterns, just those familiar from antenna analysis. It is the electron that is a standing wave, in the radiation field of a nucleus.
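The shared angular factor can be inspected directly. The sketch below (assuming numpy and scipy are available) evaluates |Ylm|², which is at once the angular dependence of the corresponding orbital and the power pattern of the multipole radiator; the indices l, m are arbitrary examples:

```python
# The same spherical harmonic Y_lm supplies the angular shape of a
# hydrogen orbital and of a multipole "antenna" pattern.
import numpy as np
from scipy.special import sph_harm

l, m = 2, 1                                  # example indices
theta = np.linspace(0, np.pi, 181)           # polar angle
phi = 0.0                                    # fixed azimuth

# note scipy's argument order: sph_harm(m, l, azimuthal, polar)
Y = sph_harm(m, l, phi, theta)

pattern = np.abs(Y)**2                       # angular power pattern |Y_lm|^2
print(theta[np.argmax(pattern)])             # direction of maximum "radiation"
```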

Multipole radiation from the nucleus determines how the electrons are situated around it, so it seems indeed possible to acquire information about the electron cloud using the techniques of measurement described in the SciAm article Observing Orbitals.

There is no motion as a source of energy, there is no uncertainty (!) about the positions of electrons, and there is no idea of electrons being all alike. The nearer the electrons are to the nucleus, the larger are their volume and energy, due to the higher energy density. Therefore, the electrons of an atom live in different frequency bands. The spectra of the electrons are interleaved, and this is the origin of the exclusion principle. Repulsive forces are diminished and the arrangement of electrons into several layers or shells becomes possible. So the idea of a quantized radius in the Bohr atom model above is indeed correct.
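A minimal sketch of the quantized radii and the energy expression quoted at the start of this section, evaluated for hydrogen with the textbook values rn = n²a0:

```python
# Evaluate E = k_e * Z * e^2 / (2 * r_n) for the first few quantized radii.
e   = 1.602176634e-19      # elementary charge [C]
k_e = 8.9875517923e9       # Coulomb constant [N m^2 / C^2]
a_0 = 5.29177210903e-11    # Bohr radius [m]

Z = 1
for n in (1, 2, 3):
    r_n = n**2 * a_0
    E_n = k_e * Z * e**2 / (2 * r_n)
    print(n, r_n, E_n / e)   # binding energies 13.6, 3.40, 1.51 eV
```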

The Atom

The magnetic part of our atom model is similar to the following, from A Physical Model for Atoms and Nuclei by Joseph Lucas et al.:

Abstract. A physical geometrical packing model for the structure of the atom is developed based on the physical toroidal ring model of elementary particles proposed by Bergman. From the physical characteristics of real electrons from experiments by Compton this work derives, using combinatorial geometry, the number of electrons that will pack into the various physical shells about the nucleus in agreement with the observed structure of the Periodic Table of the Elements.

Our atom model combines perfectly with the Lucas atom; we just replace the Spinning Ring Model of Elementary Particles with our own fermion model. In the Lucas model it is assumed that the electrons do not orbit the nucleus; rather, they come to some stable equilibrium distance from it due to the balance of electric and magnetic forces. But it is not explained in more detail how this balance comes about. Our model completes the Lucas model by adding the appropriate system of electric repulsive forces. The atom is a system of particles which is held together by compression from the outside. This is the same cosmic pressure that is responsible for gravity, only in the case of atoms and molecules the force is frequency dependent. A system of fermions is subject to Gauss's principle of least constraint, where the quantity Z is to be minimized:

$$Z \cong \sum_{k=1}^{N} m_k \left( \frac{d^2 \mathbf r_k}{dt^2} - \frac{\mathbf F_k}{m_k} \right)^{\!2}, \qquad \mathbf F_k = \mathbf F_k(\nu) = \frac{1}{c} \int_{4\pi} I_\nu(\mathbf r, \mathbf s)\, \mathbf s \, d\Omega .$$

(Real applications of this principle may be somewhat involved, because the constraints are forces caused by beams of polarized and modulated bosons from all frequency bands and from all directions...) Particles move until there is a three-dimensional structure in which the particle positions are determined by local minima of force/energy. During the process the spectra of the particles become interleaved. This is the “logical reason for the exclusion principle” that Pauli was missing.

Gauss's principle is a minimum principle that applies also to open systems which are in a state of dissipative equilibrium, in which a system is driven to order by a constant flux of energy. In a state of equilibrium the time derivatives have vanished, only the balance between forces is left:

$$Z \cong \sum_{k=1}^{N} \left( -\mathbf F_k \right)^{2}, \qquad \mathbf F_k = \mathbf F_k(\nu) = \frac{1}{c} \int_{4\pi} I_\nu(\mathbf r, \mathbf s)\, \mathbf s \, d\Omega .$$

The constraints Fk are the forces that trap each electron into a “potential hole”. The constraints include the modulated boson beams from other electrons of the atom and those from adjacent atoms and molecules, as well as the cosmic pressure.
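A toy illustration of this relaxation picture: four particles interacting through a Lennard-Jones pair potential (the form in which the constraint forces appear later in this article) are relaxed numerically to a minimum-energy structure. Units, starting positions, and the use of a generic optimizer are all illustrative assumptions of ours, not part of the theory:

```python
# Relax a small cluster to a minimum-energy configuration.
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy of a cluster of particles."""
    r = flat.reshape(-1, 3)
    E = 0.0
    for i in range(len(r)):
        for j in range(i + 1, len(r)):
            d = np.linalg.norm(r[i] - r[j])
            E += 4 * eps * ((sigma / d)**12 - (sigma / d)**6)
    return E

# start near (but not at) a tetrahedron and let the optimizer relax it
rng = np.random.default_rng(0)
x0 = np.array([[0, 0, 0], [1.1, 0, 0], [0.5, 1.0, 0], [0.5, 0.4, 0.9]],
              dtype=float).ravel()
x0 += 0.05 * rng.standard_normal(x0.shape)

res = minimize(lj_energy, x0, method="BFGS")
print(res.fun)   # approaches -6*eps: six equal bonds, a regular tetrahedron
```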

Chemical Bonds

What is a Chemical Bond? “The process of combination involves the reunion of two or more atoms through redistribution of electrons in their outer shells by the process of sharing of electrons amongst themselves so that all the atoms acquire the stable noble gas configuration of minimum energy.”

We add only that, in the process of atoms forming minimum-energy structures (= making bonds), the interleaving of electron spectra is the crucial factor, and that after the bond is completed the electrons do not move anymore, because each of them sits in a potential hole.

The Atomic Nucleus

There are more than 30 nuclear structure models reviewed in Nuclear Models by Greiner & Maruhn. Something is still missing from the list, namely models based on Wigner's ideas, which he published in 1937 as On the Consequences of the Symmetry of the Nuclear Hamiltonian on the Spectroscopy of Nuclei. A recent update on the various models, and particularly on the missing cluster and lattice models, can be found in Models of the Atomic Nucleus: Unification through a Lattice of Nucleons by Norman D. Cook. In this book he demonstrates that the lattice reproduces the main features of the other, well-established nuclear models within it. He also demonstrates that the conventional description of the energy states of nucleons has a straightforward geometry in the form of a specific 3D lattice. It is the geometry of the sequential shells and sub-shells of the harmonic oscillator.

Some excerpts from N. D. Cook's book:

For many decades, nuclear structure theory has been dominated by three types of model: the gaseous-phase independent-particle (or shell) model, the liquid-phase liquid-drop (or collective) model, and the semi-solid-phase cluster (or alpha-particle) model. Each model has proven useful in explaining certain phenomena, but clearly they are mutually contradictory in implying very different nuclear textures. As others have commented, the necessity of using many (a total of more than 30!) incompatible “models” is not the hallmark of maturity in a scientific discipline! The first [conclusion], with which most nuclear physicists would agree, is that fundamental problems have remained unsolved for many years and that nuclear structure theory has not yet reached a successful completion.

The independent-particle model (IPM) has been first among equals in nuclear theory for about 60 years because it is fundamentally quantum mechanical. As a realistic model of nuclear structure, the IPM has both strengths and weaknesses, but unlike the liquid-drop and cluster models, it is explicitly built from the Schrödinger wave equation and therefore has a theoretical “purity” that the other models do not have.

Unfortunately, the unarticulated “picture” in the back of the mind of nearly every nuclear physicist alive today is one of protons and neutrons buzzing around inside of the tiny nuclear volume – not unlike the random movements of electrons in overlapping electron orbitals (“Zitterbewegung”). The idea that the shells and subshells of the nucleus have a geometrical basis quite unlike electron clouds is therefore seen as “counter-intuitive.” For this reason, much of the following text is devoted to showing why, on the contrary, a gaseous-phase picture of the nucleus should be considered counter-intuitive, and how a lattice model can reproduce the exact same sequence and occupancy of nucleons found in the independent-particle model on a geometrical basis.

My main argument, in other words, is that nuclear structure can be understood on a geometrical basis that the traditional textbooks simply do not convey.

The Identity between Nuclear States and Lattice Geometry

The successes of the conventional IPM are based on the nuclear Hamiltonian, where all possible nucleon states are given by the nuclear version of the Schrödinger equation:

$$\Psi_{n,\, j(l+s),\, m,\, i} = R_{n,\, j(l+s),\, m,\, i}(r)\; Y_{m,\, j(l+s),\, i}(\theta, \phi).$$

Cook then demonstrates that the identity between the nuclear Hamiltonian and the fcc lattice is precise and unambiguous, and that, as a consequence, the Cartesian coordinates of each nucleon can be used to define its quantum numbers. For example:

$$n = \frac{|x| + |y| + |z| - 3}{2}, \qquad j = \frac{|x| + |y| - 1}{2}.$$
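The coordinate-to-quantum-number map, as reconstructed above, is short enough to state as code; the sample lattice sites below (odd integer coordinates, as in Cook's lattice) are our own illustrative choices:

```python
# Derive shell-model quantum numbers from fcc lattice coordinates.
def quantum_numbers(x, y, z):
    n = (abs(x) + abs(y) + abs(z) - 3) // 2   # principal quantum number
    j = (abs(x) + abs(y) - 1) / 2             # total angular momentum
    return n, j

# a few lattice sites with odd integer coordinates:
for site in [(1, 1, 1), (1, 1, -1), (3, 1, 1), (1, 3, 1)]:
    print(site, quantum_numbers(*site))       # (1,1,1) -> n=0, j=1/2, etc.
```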

The structure of nuclei is exactly analogous to atomic structure. The nucleus can be described using a nuclear shell model in which the nucleons are placed like electrons in atoms, and there are certain configurations of nucleons that have special stability. Further, in analogy, there are “valence nucleons”. In some nuclei, a core of nucleons is surrounded by a cloud of valence nucleons that are weakly bound. This “halo structure” may extend to great distances, analogous to the electrons surrounding the nucleus in an atom.

The form of the wave function Ψ above is the same from which we constructed our electromagnetic fermion model. The Cartesian coordinates used in lattice models can of course be converted into spherical coordinates. The result is a standing-wave model of the nucleus in which nucleons occupy space regions determined by the nodal surfaces of the standing wave.

By the principle of self-similarity we may assume that inside a large nucleus the energy density follows the sech² function, just as inside every individual fermion. The rules that govern the structure of atoms and nuclei are the same, only the energies (frequencies) of nucleons are much higher than those of electrons. The mechanism of nucleon pairing is the same as in electron pairing (see above: Some Related Phenomena).

The Lucas atom model, the Cook nucleus model and our electromagnetic fermion model together form a visualizable picture of fermions, atoms and nuclei, in a way which is not in conflict with spectra and other information obtained from them.

This way of reasoning liberates physics from the foolish assumption that the inner-shell electrons in large atoms have velocities approaching the speed of light, and that protons and neutrons inside nuclei move with speeds of the order (0.2 … 0.3)c, perpetually. It also liberates physics from the irrational reign of the Copenhagen School of Bohr and Heisenberg.

Riemann Zeta Function and Nucleation

A great deal of what is known about the atomic nucleus has been deduced from the analysis of scattering experiments done with charged particles and neutrons. Some experimental results have attracted special interest. The scattering resonances in these experiments have a peculiar distribution: they obey a repulsion law; two adjacent resonances are never close to each other. This phenomenon of “level repulsion” is characteristic of all heavy nuclei, and the distribution of the levels is similar to that of the prime numbers.

Now we see the reason for this peculiarity: it is that the de Broglie frequencies of the nucleons are prime numbers. The atomic nucleons are so close together and the particles so energetic that their spectra must be almost perfectly interleaved. The stability of a nucleus can be disturbed by a colliding neutron of proper energy (frequency). Resonant forces break apart the nucleus and a great amount of energy is released. Less closely packed aggregations of fermions (and perhaps local structures in large nuclei) may allow relatively prime frequencies. However, all these structures are minimum energy configurations.

Quasicrystals

Roger Penrose described them as follows: “Ordinary crystals grow serially, one atom at a time, but the complexity of quasicrystals suggests a more global phenomenon: each atom seems to sense what a number of other atoms are doing as they fall into place in concert.” Quasicrystals possess long-range order, which is confirmed by their diffraction patterns. A geometrical structure that can possess such characteristics is a fractal. One can assume that all quasicrystals are essentially fractal objects. In the following we suggest a general principle which turns out to be Penrose's “global phenomenon.”

Quasicrystals are minimum energy structures. Researchers have noticed that “The quasicrystal spectrum may contain an infinite number of discrete components, possibly arranged arbitrarily dense forming a Cantor set.” (The Dynamics of Patterns, M. I. Rabinovich et al.) This is an exact description of what we have suggested in this article for all particles. The Cantor set is self-similar, just like the radiation spectrum of a fermion (see page 37).

Considering nucleation and crystal growth processes, one might ask why there seems to be an upper limit for the number of nucleons in the nucleus of an atom. We answer this question by a general principle. It follows from the properties of Cantor spectra. The key concept is the Golden Cantor Set, dubbed as such by Roger L. Kraft.

The details of aggregation/crystallization processes depend on the balance of two contributions. A particle (or an atom) nearing the surface of an aggregate feels forces from two sources: the forces exerted by its nearest neighbors (near field) and the force caused by the radiation from the whole aggregate (far field). The far field may be highly non-isotropic in crystalline structures, due to (non-trivial) Bragg scattering.

An example is the water crystal below, the so-called “capped column”. Why does the steady growth of a straight column suddenly stop, and crystallization continue in a completely different manner? The answer is that during growth the near-field effects remain the same, but not the far-field effect. At some point the near-field effects become overwhelmed by the far field.

Picture credit http://www.snowcrystals.com.

As a consequence, the minimum energy conditions on the surface require a new orientation for the incoming molecule. Then, crystallization may continue again, but we see an abrupt change in the structure. If the crystal growth has started from a six-fold structure, the far field quickly becomes very complicated but retains its six-fold configuration.

The continuously changing far field causes a feedback, or butterfly effect. The process is highly sensitive to its initial conditions and is nonlinear as a whole. The spectra of particles and aggregates of particles are, to be precise, middle-α Cantor sets. For these one can define the concept of thickness.

Mathematical analysis of the middle-α Cantor sets includes an important ratio β/α, the thickness of the set. (Self-similarity, or How Even Things Like Cantor Sets and the Golden Ratio are Related by Anna Lakunina.) If the spectrum of the far field becomes too thick, there is no room for new particles or atoms (regardless of orientation) on the surface of the aggregate. This is the reason why large nuclei become increasingly unstable.
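The thickness in question is elementary to compute for a middle-α set; a minimal sketch, with α values chosen only for illustration:

```python
# Removing the open middle fraction alpha of an interval leaves two pieces
# of length beta = (1 - alpha) / 2 each; the thickness is beta / alpha.
import math

phi = (1 + math.sqrt(5)) / 2          # the golden-ratio threshold cited above

def thickness(alpha):
    beta = (1 - alpha) / 2
    return beta / alpha

for alpha in (1/3, 0.2, 0.1):
    t = thickness(alpha)
    print(alpha, t, "above" if t > phi else "below", "the threshold")
```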

The growth principle based on middle-α Cantor sets of particle frequencies is very general; we can see it working in biological structures in a smoothed version. We see it as a fundamental principle. All kinds of nucleation and aggregation processes, including the structures of biological life, hover near the threshold value β/α = φ, the golden ratio. This leads to minimum energy structures and to fractal geometries based on that ratio. This is because the energy (and thereby the spectrum) of a particle depends on its distance to other particles. Interleaved spectra and minimum energy structures emerge hand in hand.

The final result of a crystallization process follows from minimizing the energy between a particle and its nearest neighbors, under the influence of far-field radiation from the whole aggregate. Particles follow the gradient of energy density, in the direction of least resistance. This is how Fibonacci structures are born. S. Douady and Y. Couder have shown this by means of a physical experiment and a numerical simulation. [Phyllotaxis as a Physical Self-Organized Growth Process] The golden ratio φ arises in the geometry of pentagons and icosahedra, which are prototypes of minimum energy structures. Fibonacci structures are best seen in plants, but they are visible also in microscopic structures. The following picture is taken from the article Fibonacci series on microstructures, Phys.org. We can see the typical spirals in a microstructure (a core of silver and a shell of silicon oxide) about 10 micrometres in diameter.
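A standard way to see golden-ratio growth produce Fibonacci spirals is Vogel's sunflower construction, used here only as a stand-in for the Douady-Couder growth process; all parameters are illustrative:

```python
# Place each new element at the golden angle from the previous one;
# plotting the points shows spiral families whose counts are
# consecutive Fibonacci numbers.
import math

golden_angle = math.pi * (3 - math.sqrt(5))   # ~137.5 degrees, set by phi

points = []
for n in range(1, 200):
    r = math.sqrt(n)              # radial position grows with element count
    theta = n * golden_angle
    points.append((r * math.cos(theta), r * math.sin(theta)))

print(points[:3])
```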

The researchers conclude by saying: “The various seemingly different patterns for botanic elements such as sepals, seeds, and florets can be explained by the unique mechanism to minimize the total strain energy under a given geometric constraint, without resorting to any genetic or biochemical factors.”

A new cell is formed on the aggregate of other cells. After mitosis it is an exact copy of the parent cell, so the two must repel. To stay a part of the organism, and not become repelled to a distance, the spectrum of the new cell must become modified (differentiation of cells). All motion during mitosis follows the path of least resistance, and so the Fibonacci structures emerge.

1/f noise

The 1/f noise is a universal and unavoidable phenomenon in any electronic circuitry. The figure below shows the measured noise that originates within a common electronic amplifier. The flat section above 100 Hz is called white noise, and is “well understood”. However, the sloping portion below 100 Hz is not well understood at all. This is 1/f noise, a mystery that has resisted explanation for over 80 years.

The picture is taken from the article An Interesting Fourier Transform - 1/f Noise by Steve Smith. Most of the existing explanations assume local physical effects, such as trapping and release phenomena, electron scattering mechanisms and phonon processes.

1/f noise occurs in almost all electronic devices. It is always related to a direct current. 1/f noise is also found in electrolytic cells, electron beams in vacuum, thin metallic films... Electric 1/f noise is present whenever a current is carried by a few carriers, or when a current bottleneck exists in an electric circuit. It is noteworthy that the circuit must be prepared with a direct current in order for it to feel the 1/f noise. This is the way the winding of any electromagnetic actuator must be prepared to generate forces in a magnetic field. (Thick metal films show no 1/f spectrum because the preparation, i.e. the orientation of electron spins, is best carried out in some sort of bottleneck. If the winding of an electromagnet is made of thin wire, only a small current is required.)

Bosons are carriers of magnetic force, and they arrive to us from space. They scatter from fermions and get polarized. As a result, we observe a magnetic field around those fermions. But the bosons may well be polarized before they hit the electrons of a circuit. This is the case with the low-frequency 1/f noise. Sunspots cause magnetic disturbances on Earth. But, dwarfing sunspots, there are enormous sources of magnetic fields in space (which we discuss later), and they can produce polarized clouds of bosons which finally arrive at our electric circuits. Disturbances from these sources occur across all time-scales, from seconds to aeons.

In measurements of 1/f noise these pre-polarized bosons may produce a seemingly constant pressure (voltage) on the conduction electrons. For this reason experimenters have reported that the 1/f behavior extends over more than 6 frequency decades and that there seems to be still no flattening at low frequencies. The physical conditions are the same as in Hall devices. Conduction electrons are affected by polarized bosons, and it is all the same whether the bosons got polarized in a nearby magnet or in outer space. The phenomenon is best described by speaking of the pressure of the electron gas in a conductor, and how this pressure can be measured with a voltmeter.

The 1/f noise mechanism has features of Brownian motion, which is known to be observable only in particles which are very small. The 'particle' here is a group of electrons that act as a single particle due to their magnetic interaction. If the group is too large (long, in a narrow conductor), it absorbs the higher-frequency noises. The extrinsic origin of 1/f noise can be verified by using two separated 1/f noise measurement setups and determining the spatial coherence between their signals.
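A sketch of such a two-setup coherence test on synthetic data; the "shared component" is a simple random walk standing in for an external common source, and all parameters are illustrative (requires numpy and scipy):

```python
# Two records that share a common low-frequency component plus independent
# local white noise show high coherence at low f and low coherence at high f.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs, N = 1000.0, 2**16

# common slow component: a random walk (steep low-frequency spectrum),
# used only as a simple stand-in for a shared external source
common = np.cumsum(rng.standard_normal(N))
common /= common.std()

a = common + 0.5 * rng.standard_normal(N)    # setup 1: shared + local noise
b = common + 0.5 * rng.standard_normal(N)    # setup 2: shared + local noise

f, Cxy = coherence(a, b, fs=fs, nperseg=4096)
print(f[1], Cxy[1])      # near 1 at the lowest frequencies
print(f[-1], Cxy[-1])    # near 0 where the independent local noise dominates
```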

Spatial Correlation of 1/f Noise

These pictures are from the thesis of R. F. Voss. The experiment was designed to measure the spatial correlation C(f) of 1/f noise between two metal strips of length 2.5 or 7.5 mm and width 12 μm. From the resulting graph one clearly sees the high correlation of the shorter strips. Extrapolating to 0.1 Hz gives 100% correlation, meaning that the cause of the noise resides in the space around the strips, not in the strips themselves, as is presently assumed.

Only a few experiments of this kind have been carried out. J. H. Scofield et al. experimented to show that temperature fluctuations cannot generally be responsible for the 1/f noise observed in substrate-mounted metal films. The results above support our considerations about the mechanism of 1/f noise. But there is still no explanation of why the boson flow from space has this particular 1/f spectrum. It is the most ubiquitous form of noise in nature. Phenomena that have no obvious connection at all exhibit fluctuations with a 1/f character. It is so omnipresent that it must be in some sense fundamental. We return to this subject later.

Sonoluminescence

For 70 years, physicists have puzzled over sonoluminescence, a process in which sound waves create a luminous bubble in the center of a water tank. Sound waves cause bubbles that rapidly expand and then collapse. At the point of collapse, the bubble emits a short pulse of light. The spectrum of the emitted light consists of a broad continuum extending from the near infrared into the deep ultraviolet.

There are some main questions, for example: why is a small amount of dissolved noble gas essential to stable and strong sonoluminescence? “The first experiments in this line of research used pure nitrogen gas, since it comprises 80% of air. To our surprise neither pure N2 nor pure O2 nor an 80:20 mixture of the two yielded a stable or visible signal. After convincing ourselves that there were no problems with the vacuum transfer system, we realized that air is 1% argon, and indeed, as shown in Fig. 22, a small amount of noble gas is essential for the activation of stable bright (i.e. visible to the eye) SL (Hiller et al., 1994).” From the article Defining the unknowns of sonoluminescence by Bradley P. Barber et al. Why is the process so sensitive to temperature? What is the light-emitting mechanism?

We answer these questions as follows: Due to the acoustic field, at the position of the bubble, water molecules oscillate inwards and outwards, causing a region of compression, where molecules are pushed together, and of rarefaction, where water molecules are in a state of tension. Water vapor invades the bubble during its expansion because the local tension (negative pressure) has triggered the transition to non-interleaved (repulsive) spectra.

The expanding water vapor comes not only from the surface of the bubble, but also from the water surrounding the noble gas molecules inside the bubble. The atoms of noble gas act as nucleation centers for water clusters. These are assemblies that cannot exist without interleaved (non-repulsive) spectra of the constituent water molecules. Dissociation of the clusters is a sublimation process.

Sonoluminescence is very sensitive to temperature. If the water is cooled from 30ºC to 0ºC, the intensity of the emission increases by a factor of ~100. This fact suggests to us that water clusters are much more easily formed in cold water. As the clusters form, there is no force resisting the collapse of the bubble; effectively, the interior of the bubble turns into an almost perfect void. In the final stage of the collapse, the supersonic inward radial flow of water is abruptly halted. At this brief moment of extreme stress the elevated energy density inside the small bubble excites all the electrons below some threshold energy. These electrons belong to water clusters (or clathrate hydrates), and therefore represent a wide spectrum. The excitations of the electrons are emitted as photons during a time interval of a few hundred picoseconds.

The collapsing bubble is a fluid-mechanical device, a “water hammer”. The Rayleigh–Plesset equation correctly describes the dynamics of sonoluminescence. The source of the radiation is a group of excited electrons returning to their stable states.
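For completeness, a minimal numerical sketch of the Rayleigh-Plesset dynamics; the parameter values are illustrative, not fitted to any sonoluminescence experiment (requires numpy and scipy):

```python
# Rayleigh-Plesset: R*R'' + (3/2)*R'^2 = (p_gas - p_inf - 2S/R - 4*mu*R'/R)/rho
import numpy as np
from scipy.integrate import solve_ivp

rho, S, mu = 998.0, 0.0725, 1.0e-3     # water density, surface tension, viscosity
p0, R0 = 101325.0, 5e-6                # ambient pressure, equilibrium radius [m]
pa, f = 1.2 * p0, 26.5e3               # acoustic drive amplitude and frequency
kappa = 1.4                            # polytropic exponent of the gas

def rp(t, y):
    R, Rdot = y
    p_gas = (p0 + 2 * S / R0) * (R0 / R)**(3 * kappa)
    p_inf = p0 + pa * np.sin(2 * np.pi * f * t)
    Rddot = ((p_gas - p_inf - 2 * S / R - 4 * mu * Rdot / R) / rho
             - 1.5 * Rdot**2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rp, (0, 3 / f), [R0, 0.0], method="LSODA",
                rtol=1e-8, atol=1e-12, max_step=1e-8)
print(sol.y[0].min(), sol.y[0].max())  # strong expansion, then sharp collapse
```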

Luminescence in General

Luminescence (“emission of light by a substance not resulting from heat”) always has the same origin: electrons that are too energetic to be in equilibrium with their surroundings. They reduce their energy by emitting photons. The reason why the electrons are excited is that they have been exposed to a high energy density, at least for a short period. In an atom, an electron can be excited by forcing it towards the nucleus, as in the sonoluminescence case.

Inside crystalline solids there are electrons situated in regions of high energy density. If the crystal is cleaved, some high-energy electrons find themselves at or near the newly created surfaces, where the energy density is much lower. The electrons cannot maintain their high energy and therefore emit bosons. In some cases these bosons can be observed as X-rays. Also, a number of electrons become ejected from the surface. Impacts of these electrons lead to emission lines of the surrounding gas.

The barometer light and the high-energy emission from peeling tape (sticky-tape X-rays) are versions of the above. In both experiments, three-dimensional structures transform into more or less two-dimensional structures, expanding their surfaces. In the barometer light experiment, dragging glass through mercury forms a thin film of mercury on the glass. The mercury sticks to the glass, gets thinner and thinner, and then drops off. It undergoes a repeated stick-slip motion. We explain this as follows: During the stick phase there is adhesion due to non-repulsive spectra. At the moment of slip the mercury loses its adhesive property, because there appears a sudden re-arrangement of spectra (at the picosecond timescale). In the formation of films and filaments from bulk material some high-energy vacancies for electrons are lost. We see the result as luminescence and as emissions of high-speed electrons.

Luminescence and the accompanying phenomena can be readily understood under the fundamental assumption that the source of photons is the electron which always stabilizes itself according to the local energy density.

A most direct example of this is a crystal of a terbium complex. If it is dipped into mercury, it emits visible light. While submerged, the electrons on the surface of the crystal experience an elevated energy density. When the crystal is withdrawn from the mercury, the energy density on the surface decreases and emission occurs.

Electronic Conductivity of Solids

The Wikipedia article on the “Nearly free electron model” starts with the definition: “In solid-state physics, the nearly free electron model is a quantum mechanical model of physical properties of electrons that can move almost freely through the crystal lattice of a solid.” Then it states that “Free electrons are traveling plane waves. ...These plane wave solutions have an energy of”

$$E_k = \frac{\hbar^2 k^2}{2m}.$$

This expression comes directly from the dispersion relation ω = ħk²/(2m), constructed by Schrödinger, by multiplying with ħ. In the beginning of the section What is wrong with quantum mechanics we criticized this relation as wrong, because it connects the velocity and the energy of a particle incorrectly. It leads to incredible results, such as the “Fermi velocity” of electrons in solids: 10⁶ m/s at temperature T = 0 K, not to mention the idea that inner-shell electrons in large atoms have velocities approaching the speed of light, and that protons and neutrons inside nuclei move with speeds of the order (0.2 … 0.3)c.
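Where the figure of 10⁶ m/s comes from in the standard picture can be checked in two lines; the copper Fermi wavevector below is the usual free-electron-model estimate:

```python
# The free-electron "Fermi velocity" v_F = hbar * k_F / m for copper.
hbar, m = 1.054571817e-34, 9.1093837015e-31
k_F = 1.36e10            # copper's Fermi wavevector [1/m], free-electron model
v_F = hbar * k_F / m
print(v_F)               # ~1.6e6 m/s, even at T = 0 K in that model
```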

According to the classical understanding, at zero kelvin the kinetic energy of a particle is zero. What are these perpetual motions and their unbelievable velocities? In our opinion, they are nothing but artifacts of badly flawed theories about particles and atoms. One must start by considering the most fundamental principles that govern all particles everywhere. They are the following:

$$\text{energy} = \frac{1}{c} \int_\nu \int_{4\pi} I_\nu(\mathbf r, \mathbf s)\, d\Omega\, d\nu, \qquad \text{force} = \frac{1}{c} \int_\nu \int_{4\pi} I_\nu(\mathbf r, \mathbf s)\, \mathbf s\, d\Omega\, d\nu .$$

An individual fermion experiences a flow of energy and momentum from all directions, and responds to that by following the principle of least constraint (introduced by Gauss in 1829).

$$Z \cong \sum_{k=1}^{N} m_k \left( \frac{d^2 \mathbf r_k}{dt^2} - \frac{\mathbf F_k}{m_k} \right)^{\!2}.$$

Since the “constraints” are nothing but forces of the Lennard-Jones form at different frequency bands, the principle of least constraint suggests that the motion of a particle in space depends uniquely on its interaction with all the other particles in the universe.

Considering the conductivity of solids, everything depends on what kind of potential landscape the electron meets as it moves inside the solid. When an electron follows the trajectory of least resistance, the trajectory may prove to be a geodesic, meaning that the electron moves on a surface of constant energy (or frequency). It is in thermal equilibrium (i.e. without the need to absorb or emit energy) and it moves due to an externally applied voltage. Solids that allow an electric current to flow when a very small voltage is applied are called conductors. In this case there are geodesics that stretch all the way through the solid. In the potential landscape of a semiconductor there are ridges and/or valleys between the potential peaks. In a semiconductor a bias voltage is needed to overcome the effects of potential wells and barriers and thereby maintain an electron current. In our theory voltage means the pressure of the electron gas.

Atoms and molecules are energetically open aggregations of particles in a geometric configuration; they are not Hamiltonian systems. We must take a new look at the following expression:

$$H = m_0 c^2 + \left[ \frac{p^2}{2m} - \frac{e^2}{r} \right].$$

The new interpretation is as follows. H is the energy content of an electron. The sum in square brackets [E = T + V] is constant only if the electron is performing geodesic motion; in other words, if the electron is in a state of free fall. Free electrons in a very good conductor move with varying velocity in a varying potential but with constant energy. We have already discussed this in the section Particles in geodesic motion, in the context of gravity. In quantum mechanics one would say that in geodesic motion the radial quantum number of the electron is an adiabatic invariant.

The path of this kind of motion is determined by Jacobi's Principle of Least Action, in which the energy E is to be variationally constant. Most often isoenergetic motion is not possible. In that case the energy H of the electron changes during its motion, and it absorbs or emits quanta.

The energy of an electron is determined continuously by the local properties of the electromagnetic field, given by the frequency-dependent energy-momentum tensor. The equation above concerns the motion and energy of an electron in an electromagnetic field; it is not the energy of a closed atomic system, because such systems do not exist.

Superconductivity and the quantum Hall effect

Superconductivity is not a rare phenomenon. About half the metallic elements and many alloys are known to be superconductors, meaning that electric current can flow through them without any resistance. A common physical property of metallic superconductors is that they contain freely moving conduction electrons, whose spins can be oriented by an external magnetic field.

The phenomena are the same as in the flow of molecules, which we have already discussed in the section On Laminar and Turbulent Flows. What was said about molecules applies also to electrons. Superconductivity means laminar flow of electrons through the conductive material, a flow in which particles move smoothly without turbulence.

Superconductivity is bounded by three critical parameters: the critical temperature (Tc), the critical field (Hc), and the critical current density (Jc). Below these critical values the flow is laminar; otherwise it is turbulent.
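The laminar/turbulent criterion above reduces to a simple three-way comparison. The sketch below encodes it directly; the numerical values are rough, illustrative magnitudes (of the order of those quoted for lead), not data from this article.

```python
def is_laminar(T, H, J, Tc, Hc, Jc):
    """Flow is laminar (superconducting) only while temperature, field,
    and current density all stay below their critical values."""
    return T < Tc and H < Hc and J < Jc

# Illustrative magnitudes only (roughly lead-like): Tc in K, Hc in T, Jc in A/m^2.
print(is_laminar(T=4.2, H=0.01, J=1e6, Tc=7.2, Hc=0.08, Jc=1e7))   # True
print(is_laminar(T=7.5, H=0.01, J=1e6, Tc=7.2, Hc=0.08, Jc=1e7))   # False
```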

Superconductivity can be observed only at cryogenic temperatures. But how do we know that a sample is in the superconductive state? The answer is the Meissner effect, which is the classic hallmark of superconductivity. A question now arises: what is behind the Meissner effect? Researchers have already understood the basic principle: “The conduction electrons themselves must be responsible for the superconducting behavior. A feature which illustrates an important characteristic of these superconducting electrons is that the transition from the normal to the superconducting state is very sharp: in pure, strain-free single crystals it takes place within a temperature range as small as 10⁻³ K. This could only happen if the electrons in a superconductor become condensed into a coherent, ordered state, which extends over long distances compared with the distances between the atoms. … A superconductor is more ordered than the normal metal; ... A crucial conclusion follows. When a material goes superconducting, the superconducting electrons must be condensed into an ordered state. To understand how this happens, we need to know how the electrons interact with each other to form this ordered state.”

Firstly: there is no perpetual motion of electrons in a superconductor. Not there and not anywhere else. The ordered state means that the electrons sit in three-dimensional potential holes. Each of them has taken the minimum-energy position and orientation with respect to the tensorial forces. The region containing ordered electrons may be large at cryogenic temperatures. If a test magnet is brought into the composite field of the cryogenic electrons, these electrons and the test magnet form a minimum-energy structure. If work is done and the test magnet is rotated to a new orientation, the cryogenic electrons also slightly move and rotate, and a new minimum-energy configuration is set up. For this reason the position of the test magnet can be adjusted, and every time it becomes “locked in space”.

The present understanding of these things is forced by the conviction that the magnetic field is created by the mere motion of electrons. In his experiment with mercury at superconducting temperature, Kamerlingh Onnes made a ring of the metal, set up a current of electrons with a battery, and then removed the battery. He observed that the strength of the magnetic field did not decrease in time, proving that the current did not decrease. This was taken as evidence that the ring was a superconductor with a persistent current flowing forever.

Quantum Hall Effect

A general picture of quantum Hall phenomena is best given by a gas/hydrodynamic analogy. The Hall device is composed of two electron-gas containers at the ends and a tank of electron liquid in between. An electron can penetrate the wall separating a gas container and the tank anywhere, provided a certain threshold pressure (voltage) is exceeded. Because there is gravity (the Hall force) towards the bottom of the tank, and therefore higher pressure at the bottom, electron gas comes in at the upper left corner and the electron liquid goes out at the lower right corner. These are the “hot spots” (orange circles above), in which the electrons dissipate most of their energy. It is natural (and verified by experiments) that reversing the magnetic field causes the hot spots to appear in the other two corners. The dissipated energy in question is the excess energy carried along by every electron in the red gas container, due to its higher energy density (pressure).

“Scans of the Hall potential landscape over an area of the Hall bar for different bias voltages VDS and a magnetic field near the high-B edge of a Hall resistance plateau. Obviously, the Hall potential drop happens in the bulk. From (a) to (f) the bias voltage was increased from 20 mV to 120 mV in steps of 20 mV. For low bias, the Hall potential profiles are similar in the different cross sections. At high bias, the profiles change strongly along the sample. Small regions of enhanced electrical fields are visible.” From Current distribution and Hall potential landscape at the breakdown of the Quantum Hall Effect by K. Panos, J. Weis, R. R. Gerhardts, and K. von Klitzing.

The electrons feel a force from the higher-potential (red) region towards the regions of lower potential (blue). They are forced to move (by the current source), and on the way they meet with resistance. The bulk resistance of the device consists of a large number of weak potential barriers, or bottlenecks. (As is hopefully clear from the preceding section, in this theory we do not relate the electrical resistance of a quantum conductor to the scattering properties of the conductor.)

The potential landscape is obtained by an “almost non-invasive” scanning probe technique. A measured value of potential at a point reveals the pressure of electron liquid under the probe. In the landscape scans equipotential contours are clearly visible as borders between colours. From these one can derive the direction of motion; conduction electrons drift approximately perpendicularly to the equipotential lines.
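The geometric fact used here, that a drift down the potential gradient crosses the equipotential contours at right angles, is easy to verify numerically. The landscape V(x, y) below is a hypothetical smooth stand-in for a measured scan.

```python
import numpy as np

# The gradient of a scalar landscape is everywhere perpendicular to its
# level (equipotential) contours, so a drift along -grad(V) crosses them
# at right angles. V(x, y) is a hypothetical stand-in for a scan.
x, y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101), indexing="ij")
V = np.sin(3 * x) * np.cos(2 * y)

dVdx, dVdy = np.gradient(V, x[:, 0], y[0, :])
drift = np.stack([-dVdx, -dVdy], axis=-1)      # drift direction ~ -grad V
tangent = np.stack([dVdy, -dVdx], axis=-1)     # direction along a contour

# The dot product vanishes everywhere: drift is perpendicular to contours.
print(np.abs((drift * tangent).sum(axis=-1)).max())   # ~0
```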

The scans can also be interpreted with respect to the difference between laminar and turbulent flow. In the former case the potential drop happens smoothly as in the upper picture above. If the electrons are forced by elevated bias voltage, a nonlinear effect arises. The resistance-causing bottlenecks become narrower or totally blocked due to increasing energy density. The authors write: “We found that the Hall potential landscape remains stable up to high bias − the local Hall voltage drops are moderate, until a small further increase in bias dramatically changes the Hall potential landscape. Such an abrupt change creates abruptly locally enhanced electric fields that enhance locally inter Landau level transitions or heating, and give rise to an abrupt increase of the longitudinal voltage drop.” The disturbance seems to initiate in the middle region of the Hall bar. This is expected because the energy density caused by the substance of the bar itself is highest at the center. (We discussed this already in the section Luminescence in general.)

The effect is certainly minuscule, but that is all a nonlinear process needs; a slightly narrower bottleneck initiates the transition from laminar to turbulent flow, in other words, the transition between the superconductive and resistive states. The situation at an individual bottleneck may resemble that of a mercury sample at low temperatures. Temperature means energy density (in this theory). There is a “dramatic change” at some threshold energy density. (Picture: Dependence of Resistance on Temperature. Boundless Physics.)

The situation at the surface of the Hall bar is quite different. An electron can cross the potential landscape moving around potential peaks without drifting into bottlenecks. Evidently the Hall bar is a “topological insulator”; it conducts electricity on the surface regardless of the state of the bulk.

The high-conductive state of the surface can be verified by measuring the longitudinal resistance RL between voltage contacts at one edge or the other (see the picture below), but only during the quiescent period of laminar flow in the bulk. During times of turbulence the delicate surface phenomena are overwhelmed and high values of RL are measured.

Picture: Electrons in few dimensions — Semiconductor Physics Group, University of Cambridge.

Why the Sequence 1×e²/h, 2×e²/h, 3×e²/h, … ?

As the energy density felt by the electron increases (due to increased strength of the magnetic field), its energy increases in a stepwise manner. Consequently, the repulsive force at the bottlenecks also increases step by step.

Between the steps of increasing electron energy there are regions of B in which the stability of the electron is at its maximum. In other words, the derivative dE/dB is zero. This gives an opportunity for the conduction electrons to fine-tune and interleave their mutual spectra. The resulting state is that of laminar flow (or super-conductivity), but it doesn't help individual electrons to pass a bottleneck. More pressure (voltage) is needed.

At the plateaus Ohm’s law is in effect in its simplest form: R = U/I = constant. The increasing energy density (B) finally destroys the condition in which the electron spectra are interleaved and the flow of electrons is laminar. The sequence can be repeated by increasing the magnetic field to a level at which interleaving is again possible, but the pressure needed to push electrons through the bottlenecks increases at every step.

In his Nobel lecture von Klitzing connects the numbers i (in the graph above) to cyclotron orbits of electrons. In our opinion these do not exist here either. So we connect the numbers i=1, i=2, i=3... directly to the structure and energy of electrons, as explained in this article.

The classical expression for the Hall voltage U_H of a two-dimensional electron gas with a surface carrier density n_S is
$$U_H = \frac{B}{n_S\, e}\, I,$$

where I is the current through the sample. A calculation of the Hall resistance R_H = U_H / I under the condition that i energy levels are fully occupied (n_S = iN) leads to the expression for the quantized Hall resistance
$$R_H = \frac{B}{i N e} = \frac{h}{i e^2}, \qquad i = 1, 2, 3, \ldots$$

In our theory “fully occupied energy levels” means interleaved spectra, so the two equations above are for the cases of turbulent and laminar flow.
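The quantized values follow from fundamental constants alone. A short check (the 2019 SI values of h and e are exact by definition):

```python
# Quantized Hall resistance R_H = h / (i * e^2); i = 1 gives the
# von Klitzing constant R_K ~ 25812.807 ohm.
h = 6.62607015e-34    # Planck constant, J s (exact in the SI)
e = 1.602176634e-19   # elementary charge, C (exact in the SI)

for i in (1, 2, 3, 4):
    print(f"i = {i}:  R_H = {h / (i * e**2):.3f} ohm")
```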

Fractional Quantum Hall Effect

In the integer quantum Hall effect, the longitudinal resistance tends to vanish while the Hall resistance shows quantized values. A few years after the discovery of the IQHE, researchers discovered quantization of the Hall resistance at fractional values of i. These experiments were performed at even lower temperatures and higher magnetic fields. In these extreme physical circumstances the electron liquid can freeze.

Picture: Electrons in few dimensions — Semiconductor Physics Group, University of Cambridge.

In the graph above one can see mixed states. Turbulence is manifest but plateaus appear all the same. This means that the electric current is composed of minimum energy electron structures, perhaps something like the ones below.

The resistance felt by a current of these molecules at a bottleneck can be thought to be composed of electric and magnetic units. When an electron is added to the structure, the net spin of the “composite fermion” becomes raised or lowered by one unit, but this is not necessarily true with electric units, because the new electron must adjust its frequency to the spectrum of the existing “flake of electron snow”. In any case the changes of resistance are quantized, but they can be observed only in the regime of laminar or only slightly turbulent flow. The width of a plateau is a measure of stability of the structure.

The Laws of Induction

The phenomenon of induction is one of the fundamental concepts of electromagnetic theory. It is still a surprisingly controversial subject, considering the fact that Faraday carried out the basic research in the 1830s. His ideas were later developed by Maxwell, and expressed in the form of a mathematical field theory. There has been much debate about the range of validity of Faraday's Law, particularly in the case of unipolar motors and generators. Richard Feynman discusses the physics of induction in his Lectures on Physics, and finally gives the two basic laws for the correct physics:

$$\mathbf{F} = q(\mathbf{E} + \mathbf{v}\times\mathbf{B}), \qquad \nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \;\Longleftrightarrow\; \oint_{\Gamma} \mathbf{E}\cdot d\mathbf{s} = -\frac{\partial}{\partial t}\int_{S} \mathbf{B}\cdot\mathbf{n}\, da \quad (\text{the “flux rule”}).$$

“We know of no other place in physics where such a simple and accurate general principle [the “flux rule”] requires for its real understanding an analysis in terms of two different phenomena. Usually such a beautiful generalization is found to stem from a single deep underlying principle. Nevertheless, in this case there does not appear to be any such profound implication.” Here Feynman refers to the two separate equations for the two cases of “circuit moves” and “field changes”.
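For the “field changes” branch of the rule, a direct numeric check is straightforward: for a fixed circular loop in a spatially uniform sinusoidal field, the EMF equals −dΦ/dt. Loop radius, field amplitude, and frequency below are arbitrary illustrative values.

```python
import numpy as np

# Numeric check of the flux rule, EMF = -dPhi/dt, for a fixed circular
# loop of radius a in a uniform field B(t) = B0*sin(w*t).
a, B0, w = 0.05, 0.2, 2 * np.pi * 50      # m, T, rad/s (illustrative)
t = np.linspace(0.0, 0.04, 4001)

Phi = np.pi * a**2 * B0 * np.sin(w * t)   # flux through the loop
emf_numeric = -np.gradient(Phi, t)        # finite-difference -dPhi/dt
emf_exact = -np.pi * a**2 * B0 * w * np.cos(w * t)

print(f"max deviation: {np.max(np.abs(emf_numeric - emf_exact)):.2e} V")
```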

In the following we explain how the problem manifests itself in our theory. We first define the limits of applicability of Maxwell's curl equations:
$$\nabla\times\mathbf{E} = -i\omega\mathbf{H}, \qquad \nabla\times\mathbf{H} = i\omega\mathbf{E}.$$

As argued in this article, electric and magnetic fields are coherent regions of (differently) polarized bosons. These bosons are solitons and, therefore, they can polarize each other. The existence of magneto-optic effects supports this assumption. (Maxwell was aware of the fact that magnetism produces a rotatory effect, i.e. the rotation of the plane of polarized light when transmitted along the magnetic lines: the Faraday rotation.) So the Maxwell curl equations must be seen as local interactions of bosons, not as expressions for macroscopic phenomena. (Antennas are of course macroscopic, but in their design retardation effects are included.)

All observable things are solutions of the pair of equations above. The solutions in question are waves, and they are all extended structures. Bosons propagate with finite speed, and for this reason retardation effects must always be taken into account. If one writes down the “integral forms of Maxwell's curl equations”, the result is expressions (the laws of Faraday and Ampère) in which retardation effects are totally absent. These laws have their merits in engineering usage, but they do not explain the observed phenomena.

The basic axiom is that every fermion responds to tensorial energy density at its very position by accelerating linearly or rotationally. The components of the tensor are determined by the radiation from other fermions in the relevant region and these contributions are of course retarded.

This picture (from the book Antenna and Wave Propagation) refers to the present formulation of Faraday's law.

But there are several examples of experiments to which the flux rule cannot be applied. On the right we show the one from André Blondel: In 1914, Blondel performed an experiment in which a solenoidal coil was coaxial with a region of static magnetic field, such that the magnetic field on the conductor of the coil was negligible. The conductor of the coil D could be wound onto an auxiliary drum

D1, such that the magnetic flux Φ through the coil D was time dependent. From the flux-rule equations above it follows that the flux is proportional to the number of coil turns: Φ ∝ NI.

According to the flux rule some induced electromotive force should be observable. Despite this, the voltmeter V gave a zero reading. (From Kirk T. McDonald's article Blondel’s Experiment.) From this and from other experiments of the same sort we deduce that electromagnetic induction must be explained at the particle level, without referring to macroscopic concepts such as the magnetic flux Φ.

This picture shows the same phenomenon based on local interactions only.

The formalism of the calculations can be seen in the article The electromagnetic fields and the radiation of a spatio-temporally varying electric current loop by Markus Lazar, in which the electric and magnetic fields of a spatio-temporally varying electric current loop are calculated using the Jefimenko equations. “The Liénard-Wiechert fields produced by the electric current loop are calculated. The generalized Faraday law and the generalized Biot-Savart law for a spatio-temporally varying electric current loop are found.”

Circuit Moves

Now we return to Feynman's concern about the two separate equations for the two cases of “circuit moves” and “field changes”. In the first case the electromotive force in the secondary circuit is due to the magnetic force:
$$\mathbf{F} = q\,\mathbf{v}\times\mathbf{B}.$$

A noteworthy feature of this force law is that it is a force-versus-velocity relation, instead of force versus acceleration as in Newton's second law. We know another force-versus-velocity relation; it is the Magnus force. Presently these two laws are used in totally different contexts. The Magnus force is exploited in vortex dynamics; we speak of the Magnus force when the moving object is a vortex. As a spinning ball speeds through the air, it creates a Magnus force due to a vortex produced mechanically by rotation.

From empirical evidence we must draw the conclusion that there are two species of fermion fields: the ring form, which experiences the Magnus force, and a true standing wave (superposed circulations), which does not deflect in a homogeneous magnetic field but does deflect due to the gradient of a magnetic field, like the neutron.

Circulation and vorticity are the two primary measures of rotation in a fluid. Circulation, which is a scalar integral quantity, is a macroscopic measure of rotation for a finite area of the fluid. Vorticity, however, is a vector field that gives a microscopic measure of the rotation at any point in the fluid.

The Intrinsic Spin

So far we have regarded the spin vector as the unit direction vector of the symmetry axis of the particle field, but now we associate to this vector not the angular momentum S, but something very similar to it.

In the fields of bosons and fermions (see the picture below) there is electromagnetic energy in motion in the equatorial direction. (As explained in the section Physics of the Riemann Hypothesis, for a stable particle the Poynting vector is purely imaginary, pointing along the equatorial surface.) Assuming orderly motion of energy in a microscopic system is not anything new; it is the whirl hypothesis of Ampère. It is also the Bohr–de Broglie quantum condition: “The circumference of the circular orbit of the electron should be an integral multiple of the wavelength of the de Broglie wave, otherwise the wave cannot be smoothly continuous.”

For a particle's field in its rest frame we have stated the following: “The field is a standing wave structure that can be seen as formed of two counterpropagating waves. From the requirement of single-valuedness of the wave function it follows that the toroidal wave must be a periodic function with period 2π.” But, as we concluded above, a unidirectional process is also possible, similar to that in ring lasers. In that case there is no standing-wave interference pattern in the equatorial direction but a propagating wave. The speed of the wave depends on the distance r to the spin axis. In other words, the optical path length in the toroidal direction (blue in the picture) must be constant at every point of the poloidal cross section. This means that the electromagnetic energy of the particle undergoes “rigid body rotation”. In this case the calculations of angular momentum and circulation are formally similar, but circulation has the advantage over angular velocity that no assumption of a solid body is required!

In this theory we have developed a vortex model of particles. It is consistent to replace the concept of intrinsic angular momentum with intrinsic circulation. In macroscopic experiments of vortex dynamics the Magnus force has a key role, and it is always connected with circulation. So we argue that the physical situation is the same for an electron propagating in a magnetic field as for a spinning ball propagating in air. The magnetic force on particles, F = qv × B, is the microscopic manifestation of the Magnus effect. This ends the “circuit moves” part of the analysis of the flux rule.

Field Changes

In the second case, “field changes”, the situation is more complicated. The force in question is gradient-based. When the switch is closed (in the picture on page 101), a transient period is initiated, during which the currents and fields are brought from zero to their final values. At t = 0 a compression front starts to propagate along the wire. Compression, the pressure of the electron gas, forces the conduction electrons away from their minimum-energy positions into motion, and into a configuration such that the spin vectors are perpendicular to the motion and parallel to circles around the wire. Electrons are magnetically dipole-dipole coupled, and each electron has a moment of inertia about the axis through its center, perpendicular to the spin vector.

The propagation speed of the compression front is determined by the number of magnetic couplings in the relevant region. If the number of couplings is increased by placing soft iron near the wire, the speed of the front is greatly decreased. We say that the inductance of the coil is increased. (We recall here the words of J. A. Stratton, who wrote in his book Electromagnetic Theory: “It would seem that the 'absolute reality', if one dare think of such thing, is an inertia property, of which mass and inductance are only representations or names.”)

The propagating front leaves behind a magnetized region which starts to act as a source of a magnetic field. The result is that during the transient period the elements dl of the secondary circuit are affected by a time-varying gradient of magnetic energy density. For this reason there arises an “electromotive force” which puts electrons into motion and causes pressure, just as a voltage does. In this latter case (“field changes”) the macroscopic analogue is a vertically falling, spinning ball: when the ball hits a flat surface, it acquires velocity in the direction of the surface. In the two cases in question the forces are of different nature, and neither of them is the electromotive force f = F/q, as defined by Coulomb's law.

The Lorentz force law has limited validity; it can be used in applications in which electrons are emitted into a magnetic field “one by one”, such as electron optics, or in situations in which a conductor moves in a magnetic field and the Magnus force affects all individual electrons equally and simultaneously. This is one way to exert pressure on the electron gas in the conductor. Another way to do the same is to expose the electrons to a magnetic gradient. A third way to generate collective motion of conduction electrons is to supply a voltage difference between the ends of the wire. So the phrase “electromotive force” is a general expression, and this is why the “flux rule” requires for its real understanding an analysis in terms of two different phenomena, as Feynman noted. This has consequences.

On Some Aspects of Plasma Physics

“Plasma physics started along two parallel lines. One of them was the hundred-year-old investigations into what was called electrical discharges in gases. To a high degree, this approach was experimental and phenomenological, and only very slowly did it reach some degree of theoretical sophistication. … In short, it was a field which was not well suited for mathematically elegant theories. The other approach came from the highly developed kinetic theory of ordinary gases. It was thought that, with a limited amount of work, this field could be extended to include ionized gases. The theories were mathematically elegant and claimed to derive all the properties of plasma from first principles. In reality, this was not true. Because of the complexity of the problem, a number of approximations were necessary which were not always appropriate. The theories had very little contact with experimental plasma physics; all awkward and complicated phenomena which had been observed in the study of discharges in gases were simply neglected. … The crushing victory of the theoretical approach over the experimental approach lasted only until the theory was used to make experimentally verifiable predictions. From the theory, it was concluded that in the laboratory, plasmas could easily be confined in magnetic fields and heated to such temperatures as to make thermonuclear release of energy possible. … The result was catastrophic. Although the theories were generally accepted, the plasma itself refused to believe in them. Instead, the plasma showed a large number of important effects which were not included in the theory.”

The excerpts above are from the book Cosmic Plasma by Hannes Alfvén. The huge discrepancy between theory and experiment was named by Alfvén the thermonuclear crisis. The theory he refers to is the Bennett pinch, which he introduces as follows: Consider a fully ionized cylindrical plasma column with radius r, in an axial electric field E, which produces an axial current of density i. The axial current is associated with an azimuthal magnetic field.

The current flowing across its own magnetic field exerts a force i × B, which is directed radially inward and causes the plasma to be compressed towards the axis (hence the name pinch effect). For the equilibrium between the compressing electromagnetic force and the sum p of the electron and ion pressures, p_e and p_i, we have

$$\nabla p = \nabla(p_e + p_i) = \mathbf{i}\times\mathbf{B}.$$

… By employing the Maxwell equation ∇ × B = μ₀ i, the equation above, and the perfect gas law, we quickly arrive at the Bennett relation

$$2 N k (T_e + T_i) = \frac{\mu_0}{4\pi} I^2.$$
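For scale, the Bennett relation can be solved for the current needed to confine a column of given line density and temperature. The numbers below are hypothetical fusion-grade values, chosen only to show the order of magnitude the relation implies.

```python
import numpy as np

# Bennett relation solved for the pinch current:
#   I = sqrt(8*pi*N*k*(Te + Ti) / mu0),
# where N is the number of particles per unit length of the column.
k = 1.380649e-23        # Boltzmann constant, J/K
mu0 = 4e-7 * np.pi      # vacuum permeability, H/m
N = 1e19                # line density, particles per metre (hypothetical)
Te = Ti = 1e8           # temperatures, K (hypothetical)

I = np.sqrt(8 * np.pi * N * k * (Te + Ti) / mu0)
print(f"Bennett current I ~ {I/1e6:.2f} MA")
```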

Now, the problem is that the two vectors i in the two equations above are eliminated by assuming that they are the same vector, but they are not. In earlier discussions we tried to make the difference explicit: the first is caused by the Magnus force, the magnetic force component of the Lorentz force; the second is the current density in a conductor in which the electrons are forced to move in the direction of the vector i. Equating the vectors leads to a differential equation whose solutions are not models of any real physical phenomena. This is the reason why the plasma refuses to believe the generally accepted theory, as H. Alfvén put it.

There is no such thing as the Bennett equilibrium, neither in cylindrical coordinates nor in tokamak coordinates. At the particle level such an equilibrium would require that an elastic cylindrical sheet be composed of free electrons, from which the other electrons then bounce and stay inside the cylinder. But electrons are magnetic dipoles and such a sheet pinch is impossible.

Hannes Alfvén observed filamentary structures in many places. For instance in the Sun: “Prominences, spicules, coronal streamers, polar plumes, etc. In all these cases there are more or less convincing arguments for attributing the filaments to field-aligned currents.” But he was cautious: “However, in situ measurements have not yet completely clarified the relation between the visual structures and the electric currents.” Of course there are filamentary structures in Nature, but they are not maintained by the “Bennett equilibrium”. There are also pinch effects caused by strong electric currents, demonstrated e.g. with pinched aluminum cans, but this is different from the “plasma pinch”; the current is constrained to the wall of the can.

A Generally Accepted Theory

Ampère's original circuital law can be expressed in equivalent differential form:
$$\nabla\times\mathbf{H} = \mathbf{J}.$$

This is certainly true within the conductive material of filamentary wires, but is it true in situations in which the said constraints are totally absent? One of the basic laws of physics is the Lorentz force law:

$$\mathbf{F} = q(\mathbf{E} + \mathbf{v}\times\mathbf{B}).$$

In an experimental setup for validating this law, the electron is free to orient itself as it wishes. But are we free to write J = ρv, the current density as a product of charge density and velocity? At least theoretical astrophysicists have taken this liberty; they write Ampère's law as

$$\nabla\times\mathbf{H} = \mathbf{J}_{\text{free}},$$
and have ended up with “thin current sheets”, “plasma ropes”, and “frozen-in magnetic fields”, which in our theory are not possible, because we make a difference between the two vectors i and i, as explained above.

Experimental Evidence for Ampère's Circuital Law?

$$\nabla\times\mathbf{H} = \mathbf{J}_{\text{free}}.$$

In an attempt to find experimental evidence for the differential form of Ampère's law, it comes as a surprise that, besides Henry Rowland's experiment, there are very few. The experiments were repeated and confirmed by W. C. Röntgen and F. Himstedt. But what is proved by Rowland's experiment? Knowing now that electrons have axial symmetry instead of spherical symmetry, we realize that the Rowland experiment must be reviewed as follows: Rowland's experimental apparatus was a rapidly rotating charged disc. But a rotation cannot be considered as a translation. Due to gyroscopic forces, the spin vectors of the electrons became aligned with the axis of the spinning disc. So the spin vectors were perpendicular to the velocity vector of the electrons. This interpretation supports our argument that magnetism is not caused by mere convective motion of electrons, but by collective orientation of their symmetry axes, i.e. spin vectors.

A Modern Version of Rowland's Experiment

This was carried out by Stefan Marinov. He used a Hall effect detector to measure the magnetic field produced by a spinning charged disk. Marinov wanted to check the effect of relative motion between the detector and the disc.

(a) detector stationary, disk spins; (b) detector rotates, disk stationary; (c) detector and disk spin together. Marinov reported the following results for the three cases: (a) B field observed; (b) B field not observed; (c) B field observed. These results refute special relativity once again, equally well and from the same foundation as the Sagnac effect and Newton's bucket experiment. (Newton noted that the absolute rotation of water in a bucket was revealed by the observable curvature of the water's surface.) From the following report (Project No. 279-00-16, funded by NASA) we can conclude that the level of spin orientation in a rotating wheel can be driven to the level of spin-flip. At some threshold limit the principle of local minimum energy takes over and the spins of the conduction electrons start to flip:

An Experimental Investigation To Determine Interaction Between Rotating Bodies

“To detect a possible but unknown field (gravitomagnetic), we used both the Hall probe and a much more sensitive giant magnetoresistive sensor. Within our sensitivity limit (10⁻⁴ G) and a rotation rate of 8,000 rpm, no signal was detected. However, with no magnetic shielding, a strong signal, its magnitude increasing as the rotational speed increased from zero, was measured at the rim of the wheel. Unexpectedly, the signal went through a peak value and began dropping down as the rotation rate increased further, even passing through the zero value.”

Potentials and Force Fields as Seen by a Non-Simplistic Electron

In classical electrodynamics, the vector and scalar potentials were introduced as convenient mathematical aids for calculating the electric and magnetic fields. Only the fields, not the potentials, were regarded as having physical significance. However, the discovery of the Aharonov-Bohm effect (1959) and the recent research on the Maxwell-Lodge paradox reinforce the idea of the vector potential as a physical entity rather than an artificial, auxiliary quantity. Presently, the electromagnetic potentials Φ and A are seen as being more fundamental, at least in quantum mechanics. We can completely eliminate the uncertainty about the importance of force fields versus potentials by reviewing our fundamental integrals, which describe the ubiquitous flow of background radiation from all directions, consisting of bosons at smaller and smaller scales, ad infinitum:
$$\text{energy} = \frac{1}{c}\int_{\nu}\int_{4\pi} I_\nu(\mathbf{r},\mathbf{s})\, d\Omega\, d\nu, \qquad \text{force} = \frac{1}{c}\int_{\nu}\int_{4\pi} I_\nu(\mathbf{r},\mathbf{s})\,\mathbf{s}\, d\Omega\, d\nu.$$

The energy integral expresses a property of space, and if there is a fermion in a small volume at the point r, its energy is determined by the integrated specific intensity. The force integral shows that it is the distribution of radiation pressure which determines if the particle is accelerated or not. It is obvious that there may be regions in space in which the particle is exposed to intense radiation but is not accelerated. Energy density is high but its gradient is zero. We say that the particle is at a high potential P(r).

If the particle moves into a region of non-zero gradient of energy density, it starts to accelerate. We say that it feels a force; the particle is in a force field F(r).
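A minimal numeric sketch of the two integrals as reconstructed above: directions s are sampled on the unit sphere and the solid-angle quadrature is carried out for a hypothetical specific intensity (a single frequency band stands in for the ν integral). An isotropic intensity yields energy but zero net force; a slightly anisotropic one yields a net force along the excess direction.

```python
import numpy as np

# Solid-angle quadrature of the energy and force integrals for a
# hypothetical specific intensity I(s); one band stands in for the
# frequency integral. Isotropic I gives zero net force.
c = 299792458.0
nth, nph = 200, 400
th = (np.arange(nth) + 0.5) * np.pi / nth
ph = (np.arange(nph) + 0.5) * 2 * np.pi / nph
TH, PH = np.meshgrid(th, ph, indexing="ij")
dOmega = np.sin(TH) * (np.pi / nth) * (2 * np.pi / nph)
s = np.stack([np.sin(TH) * np.cos(PH),          # unit direction vectors
              np.sin(TH) * np.sin(PH),
              np.cos(TH)], axis=-1)

for name, I in (("isotropic", np.ones_like(TH)),
                ("anisotropic", 1.0 + 0.3 * np.cos(TH))):
    energy = (I * dOmega).sum() / c
    force = (I[..., None] * s * dOmega[..., None]).sum(axis=(0, 1)) / c
    print(f"{name:12s} energy = {energy:.3e}   force = {np.round(force, 12)}")
```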

Picture from the book Physics by Jim Breithaupt.

In the above setup of bar magnets there appears a special, neutral point between the two N-poles. The magnetic field is zero at this point because the curl of the vector potential vanishes. This is because the spin-oriented electrons of the magnet bars emit torque-carrying bosons and, due to symmetry, all torques on the test electron cancel at the neutral point. The bars can also be thought of as similarly charged, instead of being magnetized. In this case it is the scalar potential that is zero at the neutral point, and again the test electron feels no net force.

But our test electron is not “so simple a thing”; it can tell the difference between points like the one above and a point in free space. Regions of higher energy density are detected by the increased de Broglie frequency of the electron. This is the cause of the phase difference Δϕ between the two electron beams in the Aharonov-Bohm experiment. A region of space (as around the neutral point) into which the same number of modulated bosons flow from two opposite directions is different from free space; electrons feel an increased energy density. The same happens in a region in which half of the bosons have spin up and the other half spin down.

On the Usefulness of the Vector Potential

The magnetic field can be determined directly from its sources using the Biot-Savart law. It is natural to ask why the vector potential is necessary if the magnetic field is already known. In engineering usage it is not, but for understanding magnetic phenomena it is.

Above we see a piece of wire with a current I flowing in it, causing a magnetic field around the wire. As explained earlier, a closer look reveals that the flowing electrons in the wire have turned so that their spin vectors are more or less perpendicular to the motion and parallel to circles around the wire. The current vector I must be understood as a region of space in which electrons have orderly spins, and the degree of coherence depends on the strength of the current. On the right the thick loop represents a “necklace” of electrons and their spin vectors (the red region on the left). The thin loops around the electrons represent the spin vectors of scattered bosons, and this is the vector field A: if a field of spin-polarized bosons has curl, it is the magnetic field,
$$\mathbf{B} = \nabla\times\mathbf{A}.$$
Potentials and force fields are of equal importance in understanding physical phenomena, because they are means to express the content of the two fundamental integrals above.

On Magnetic Forces Between Current-Carrying Wires

In the early 1800s scientists (such as Laplace, Poisson, Biot...) were convinced of the complete independence between electricity and magnetism. But then came Hans Christian Ørsted and his experiment, in which a conducting wire was placed on top of a compass. When the wire was connected to a battery, the needle was deflected. Ampère was fascinated by Ørsted’s discovery and decided that he would try to understand this phenomenon. Ampère’s work resulted in a new branch of electricity: electrodynamics, in which effects are produced by electricity moving in conductors. He developed his own mathematical theory of electromagnetism, and it worked remarkably well.

Ampère current element

The current element i ds is the short red region in the picture above.

$$dF = \frac{i_1\, dl_1\; i_2\, dl_2}{r^2}\,\bigl(2\cos\varepsilon - 3\cos\alpha\,\cos\beta\bigr).$$
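A small sketch of the angle form quoted above, with α and β the angles the two elements make with the line joining them and ε the angle between the elements; the geometry and units are hypothetical. Note that the collinear case (α = β = ε = 0) gives the angular factor 2 − 3 = −1, the longitudinal term behind the “Ampère tension” discussed later in this section.

```python
import numpy as np

# Ampere's force between two current elements, in the angle form
# dF = (i1*dl1*i2*dl2 / r^2) * (2*cos(eps) - 3*cos(alpha)*cos(beta)).
def ampere_element_force(i1, dl1, i2, dl2, r, alpha, beta, eps):
    return (i1 * dl1 * i2 * dl2 / r**2
            * (2 * np.cos(eps) - 3 * np.cos(alpha) * np.cos(beta)))

# Collinear elements (alpha = beta = eps = 0): angular factor = -1,
# a force along the line of current flow ("Ampere tension").
print(ampere_element_force(1.0, 1e-3, 1.0, 1e-3, 0.01, 0.0, 0.0, 0.0))

# Parallel, side-by-side elements (alpha = beta = pi/2, eps = 0): factor = +2.
print(ampere_element_force(1.0, 1e-3, 1.0, 1e-3, 0.01, np.pi/2, np.pi/2, 0.0))
```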

“The experimental investigations by which Ampère established the laws of mechanical action between electric currents is one of the most brilliant achievements in science. The whole, theory and experiment, seems as if it had leaped, full-grown and full armed, from the brain of the ‘Newton of electricity’. It is perfect in form, and unassailable in accuracy, and it is summed up in a formula from which all the phenomena may be deduced, and which must always remain the cardinal formula of electro-dynamics.” This was said by no less a person than James Maxwell in his Treatise on Electricity and Magnetism, Volume 2.

Besides Ampère, Jean-Baptiste Biot investigated Ørsted’s discovery. He also tried to find a mathematical law that would express the magnetic action. Biot and Ampère soon came into conflict. Whereas Ampère treated electromagnetic and magnetic forces in terms of interactions between current-carrying conductors, Biot conceived electromagnetism as a purely magnetic phenomenon, explained by forces between the tiny magnets that he supposed to be arranged in a circular fashion around any current-carrying wire. He proposed to reduce the interaction between a conducting wire and a compass to actions based on the principle that only similar entities can interact. If the conducting wire acted like a magnetic needle, it is because it temporarily became a magnet. He imagined that each “slice” of the conductor underwent “a momentary magnetization of its molecules.” Moreover, the action of the slice was a composite action. We may now notice that our theory incorporates the ideas of both Ampère and Biot: the current of electrons, which are tiny magnets, arranges them in a circular fashion around any current-carrying wire.

The Biot-Savart law established the fact that a current-carrying wire produces a magnetic field. Still, for Biot the problem of the action of a conducting wire on a magnet was far from resolved: “It remains to be found how each infinitely small molecule of the connecting wire contributes to the total action of the slice to which it belongs.” Today, we know what has been put in the place of Biot’s “slice”: the current element i ds. In search of a cardinal law of electrodynamics for individual charges, Hendrik Lorentz derived the modern form of the law based on the previous work of others, including Maxwell. In the derivation he made the following substitution:
$$i\, d\mathbf{s} = q\,\mathbf{v}.$$

In present physics its existence is forced by two things: the empirical evidence that magnetism is caused by electric current, and the assumption of point-like particles.

The current elements of Ampère and Biot are material pieces of conducting wire, while Lorentz’s current element is a mere expression of the velocity of a charge. The present understanding is that charged particles not only respond to the E and B fields but also generate these fields, and that a moving electron creates a magnetic field regardless of any constraints upon its motion.

$$\mathbf{B} = \frac{q\,\mathbf{v}\times\mathbf{r}}{r^3}, \qquad \mathbf{F} = q\,\mathbf{v}\times\mathbf{B}.$$

Now, if we examine the forces in the two-electron system above (considered as current elements), there is no magnetic force on electron 1, because the magnetic field of electron 2 vanishes at positions along its velocity vector, while electron 1 does exert a force on electron 2. Newton's third law is violated! The basic reason for this discrepancy is the wrong idea about the origin of the magnetic field. It should be understood that electrons are sources of magnetic fields whether they are in motion or not. A freely moving electron always orients itself according to the local electromagnetic field, but in the experiments used to validate the Lorentz force law, orientation cannot be observed. The meeting of two electrons is basically a nonlinear dipole-dipole interaction, usually repulsive. But it may also end up in an electron pair; in that case the energies of the two electrons must differ. (A lone pair.)
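The asymmetry is easy to reproduce numerically with the two expressions above (constants dropped, arbitrary units). In the configuration below, charge 2 moves along the line joining the charges, so its field vanishes at charge 1, while charge 1 still pushes on charge 2: action without reaction.

```python
import numpy as np

# Two moving charges, fields and forces from B = q*v x r / r^3 and
# F = q*v x B (constants dropped, arbitrary units).
q = 1.0
r1, v1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
r2, v2 = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])  # v2 along r1->r2

def B_at(r_src, v_src, r_obs):
    r = r_obs - r_src
    return q * np.cross(v_src, r) / np.linalg.norm(r)**3

F_on_1 = q * np.cross(v1, B_at(r2, v2, r1))   # [0, 0, 0]: v2 parallel to r
F_on_2 = q * np.cross(v2, B_at(r1, v1, r2))   # [0, 1, 0]: nonzero
print(F_on_1, F_on_2)                          # action != -reaction
```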

Ampère considered that the laws of electrodynamics should respect the principle of action and reaction. The Biot-Savart/Lorentz law does not obey Newton’s third law, which is the key to understanding the processes of nature.

Picture credit: MRIquestions.com. This detail is seldom discussed in textbooks, in which magnetostatics is introduced as the physics of closed-loop currents. In this way, the invalidity (or at least the inadequacy) of the Lorentz force law can be obscured.

Exploding Wires as the Experimental Verification of the Force Between Current Elements

Exploding a wire is possible by discharging a large amount of energy into the wire in a very short period. The explosion has been found to be a nonlinear phenomenon. No two wire explosions are the same; they are highly dependent upon the local experimental conditions, and there is no absolute repeatability of experiments. Below we show some typical results of exploding wire measurements (resistance is a calculated value):

The graphs above are from the article Experimental Observation of Plasma Formation and Current Transfer in Fine Wire Expansion Experiments by Peter U. Duselis et al. The authors write: “During the expansion phase, while the currents through and voltage across the wire are increasing, a sudden voltage collapse occurs. After this collapse the voltage remains relatively small while the current continues to increase. This voltage collapse has been attributed to a plasma discharge forming around the partially vaporized wire, and a transfer of current from the neutral wire material to the plasma has been postulated. ...the photodiode was responding to ultraviolet light. The response time of the photodiode was < 1 ns, thus the ~8 ns risetime of the photodiode signal shown in Figure 1 is a real effect. After voltage collapse, low level UV radiation was observed for the remainder of the current pulse. The timing of the voltage collapse was coincident with the risetime portion of the UV light. The four signals in Figure 1 were synchronized to better than half a nanosecond and are typical of several single-wire shots.” We have added the red lines to show the exact timing of the decreasing voltage (energy density) and the increasing luminosity (measured with a vacuum photodiode).

The picture is from the article State of the metal core in nanosecond exploding wires and related phenomena in Journal of , August 2004. “All metals show a sharp and narrow first emission peak concurrent with voltage breakdown.”

Here we see the same exact timing: the light emission peak starts at falling voltage. Why? What is behind a continuous spectrum which turns into an absorption/emission spectrum in a few microseconds? In terms of our theory these questions can be answered as follows.

First stages of the wire explosion

Voltage Overshoot

A noticeable feature in all experiments with exploding wires is the overvoltage peak, i.e., the voltage on the wire exceeds the magnitude of the initial capacitor voltage U0. If a usual LRC circuit is broken during its discharge period, a high voltage builds up across the gap. This phenomenon is attributed to the inductance of the circuit. Inductance is caused by the inertia of moving electrons, and that motion can be rotational or linear. But it is not insignificant in which kind of geometry the motion takes place. The shock wave is a most basic and important hydrodynamic phenomenon in high-energy-density physics. In converging geometries a shock wave is cumulatively strengthened towards the narrowing end. The route of the electrons from the capacitor bank to the exploding wire certainly is a case of converging geometry. (The so-called X-pinch assembly includes an extreme end of a converging geometry.)

We have already discussed an experiment involving convergent geometry: the sonoluminescence phenomenon. There it was stated that the Rayleigh–Plesset equation correctly describes the dynamics of sonoluminescence. The picture below shows the radius of the illuminating bubble as a function of time (and thereby the velocity of the surrounding water). If this graph is compared with the graph V(Volts) above, their association is obvious. In both cases high pressure followed by damped oscillations is generated by stopping a moving mass in a very short time. It is only that in electromagnetic theory the Newtonian concepts of inertia and pressure are disguised; they are called “inductance” and “voltage”. The hydraulic analog of the “voltage overshoot” is the hydraulic ram, a water pump powered by the kinetic energy of water. If the water level of a pond is one meter above the pump, the ram can pump water up to ten meters above the pond level. Hydraulic rams, sonoluminescence, voltage overshoots: all of these processes utilize the water hammer effect.
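The overshoot mechanism can be illustrated with a toy circuit model: interrupt an RL discharge so the current must suddenly flow through a large gap resistance, and the voltage across the gap, V = L di/dt, briefly far exceeds the source voltage. All circuit values below are hypothetical.

```python
# Toy model of the inductive voltage overshoot: at t = 0 a steady RL
# current is interrupted and must flow through a large gap resistance.
U0, R, L = 100.0, 1.0, 1e-3        # source (V), wire resistance (ohm), inductance (H)
R_gap = 100.0                      # resistance of the break (ohm)
i, dt = U0 / R, 1e-7               # steady current before the break; time step (s)

for step in range(61):
    if step % 15 == 0:
        print(f"t = {step*dt*1e6:4.1f} us   V_gap = {R_gap * i:8.1f} V")
    i += -(R + R_gap) * i * dt / L   # Euler step of L*di/dt = -(R+R_gap)*i

# Initial spike: V_gap = R_gap * U0 / R = 10 kV >> U0,
# decaying with time constant tau = L / (R + R_gap).
```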

The “Pinch”

In general, electromagnetic waves reflect from discontinuities of the medium in which they propagate. In pinch experiments the most notable discontinuities are the physical connections to the wire. At t = 0 the shock wave arrives at the wire. Energy starts to accumulate into the electrons at the shock front. At the same time they become oriented into the “necklace” form required by the Ampère current element. As the shock wave reaches the other end, part of it is reflected. The energy density (voltage) inside the wire rises rapidly, but it does not distribute evenly along the wire. Due to the standing-wave effect the energy density is highest at the antinodes of the wave. In those regions of high energy density the spectra of electrons become interleaved, repulsion decreases and magnetic forces take over. The spin vectors of the electrons take the orientation shown in the picture above and the electrons try to close the gaps between them. This causes magnetic pinching forces inside the wire, and, due to these pinches, the rest of the conduction electrons are prevented from flowing in the wire in the Z-direction. If the energy being supplied to the wire is exhausted at this moment, the wire survives but is slightly deformed; its surface takes an undulatory form.

The picture is from Jan Nasilowski, Instytut Elektrotechniki, Warsaw, Poland.

Flashover Plasma

There is something going on at the surface of the wire during the appearance of the surface deformations. In their article quoted above, Peter U. Duselis et al. write: “Figure 2a: A picture of the plasma surrounding the exploding wire taken by a framing camera. The camera responds to visible light and had a shutter that stayed open for 8 ns. This picture was taken when the voltage along the wire started to collapse, about 75 ns after the current started.” We can see the undulatory form taking shape, but what is the plasma surrounding the wire? The following has been suggested: “... it was found that nanosecond time-scale wire explosion in vacuum is accompanied by the generation of a plasma shell because of electrical breakdown in the vicinity of the wire surface in the vapors of desorbed gases and evaporated wire material...” But the suggestion is not compatible with the following: “A sudden flash of light is next observed, the spectrum of which is continuous and independent of the material used. Thereafter emission and absorption lines are observed depending upon the materials used.” (our emphasis). [W. M. Conn, Combination of electrically exploded wires and electric arc.]

This phenomenon is known by the name flashover plasma, and it starts to conduct current as soon as it appears. This is no wonder, because the plasma consists of conduction electrons squeezed out of the wire. Immediately after the voltage peak these electrons feel a reducing energy density and “flash”. This is the reason why the light emission peak starts at falling voltage in the two graphs earlier. The luminous phenomenon in question has been known for a long time: it is St. Elmo's fire. It has been explained as “the corona discharge”, an unwanted side effect in high-voltage applications. Electrons are forced out of the wire due to the high pressure inside and because, as explained above, they are prevented from propagating in the Z-direction. This is the “anomalous resistance” and it is the cause of the voltage overshoot.

The picture on the right is from the article A contribution to the theory of skin-effect by Jan Nasilowski. It shows the “Internal structure of the wire after single pulse current flow. Melted outer layer, recrystallized core.” Nasilowski writes: “When transforming copper wires into unduloids the skin melting was observed, confirmed by the examination of the internal structure of the wire. Fig. 4 shows that the wire of the initial diameter of 0.75 mm was not completely melted. The core of the wire was not melted but it was recrystallized from the liquid state (dendritic structure is seen).”

According to our theory, highly excited electrons have been squeezed out from the pinch region to the unduloid region. The electrons emitted part of their energy inside the unduloid (due to the decreasing energy density), heating it to the temperature required for recrystallization. The same electrons delivered energy again at the surface in the flashover process and melted the surface of the wire. Finally, these electrons form the radiating cloud which is presently known as the coronal plasma. Thinking of a high-power version of this experiment, one would expect first a flash of light of continuous spectrum, then high-speed radial showers of electrons from the unduloid regions but not from the pinch regions (electrons are absorbing energy in these regions), and finally molten and vaporized metal following the route of these electrons, but at much lower speed. In some experiments, due to high pressure (voltage) and the Ampère tension (repulsion between current elements), the wire is cracked by brute force without any sign of melting. Electron microscope investigations confirm that the breaks are due to tensile stress. From the book Ampere-Neumann Electrodynamics of Metals by Peter Graneau:

“The fundamental difference between the two is that Ampère’s law predicts that electrodynamic force can have a component in the same direction as the current flow in a current element. The Lorentz force denies the existence of such a longitudinal component. Approximately 20 experiments were performed in the MIT laboratory and all confirmed the existence of the longitudinal force component. These experiments ranged from DC currents of hundreds of amps in liquid mercury and copper circuits to much higher pulsed currents of tens of kilo-amps in railgun and exploding wire configurations.” Graneau’s experiments alone prove the Ampère tension. According to Maxwell, “Ampère's theory is summed up in a formula from which all the phenomena may be deduced, and which must always remain the cardinal formula of electro-dynamics.” Now we need to restrict this statement: Ampère's theory is the cardinal law of electrodynamics in situations in which a current of electrons is forced into filamentary wires.

Plasma Beads

Plasma beads are “beads of light” which form at cracks or fractures of the wire. We have already discussed a relevant issue in the section Luminescence in General. There it was explained that “Inside crystalline solids there are electrons situated in regions of high energy density. If the crystal is cleaved, some high-energy electrons find themselves at or near the newly created surfaces, where the energy density is much lower. The electrons cannot maintain their high energy and therefore emit bosons. In some cases these bosons can be observed as X-rays. Also, a number of electrons become ejected from the surface. … The barometer light and high-energy emission from peeling tape (sticky tape X-rays) are versions of the above. … there are three-dimensional structures ... expanding their surfaces.” Inside an exploding wire there certainly are high-energy electrons. They spread out through the cracks and become accelerated in the direction of least resistance, which in this case is the radial direction. As soon as they feel a reduced energy density, they radiate.

Restrike

Only a few authors have reported plasma beads. They can be found (by the name plasma spots) in an article by Michael J. Taylor.

Picture from the article Current diversion around a fragmenting wire during the voltage spike associated with exploding wires by Michael J. Taylor. The image shows plasma beads forming prior to the onset of restrike. The beads then expand and coalesce. Once a continuous string of beads is complete, the bright restrike channel forms. This means that conduction electrons have been forced out of the wire but they continue their normal duty as current carriers.

This is verified beyond doubt in the article Experimental Observation of Plasma Formation and Current Transfer in Fine Wire Expansion Experiments by Peter U. Duselis et al.:

The authors made use of a special wire holder that simultaneously measures the total current and the current conducted by plasma at a distance from the wire. It is seen that high pressure is needed to transfer the current outside of the wire. According to the authors, the current was transferred “to this highly conducting coronal plasma”, which we identify as conduction electrons. The general trend of the current is to rise; due to the inertia effect it cannot be stopped by the magnetic pinch. The conduction electrons just break out from the wire and cause a high peak of pressure at the moment of penetration.

Dwell Time

In the picture below we see that a small number of conduction electrons have penetrated the surface of the wire, but the current has then ceased. This is because the electric shock wave into the wire has not been strong enough for the plasma beads to expand and coalesce. During dwell time very little expansion of plasma is seen around the wire. But the wire still gains energy and melts. At this point the magnetic pinches have no effect and the conduction electrons are free to flow again. A high-energy shock wave creates a conducting sheath of electrons around the wire immediately, so there is no dwell time. The oscillogram is from the SANDIA REPORT SAND2015-1132, Understanding the Electrical Interplay Between a Capacitive Discharge Circuit and Exploding Metal, by Patrick O’Malley and Christopher J. Garasi.

Striations

This is the reality of an exploding wire experiment, instead of the smooth cylinder predicted by the Bennett equilibrium. A closer look reveals striated disintegration of the wire, associated with streams of plasma shooting out radially. Axially non-uniform ablation occurs in all wire explosion experiments, but its mechanism has been unknown.

Axially non-uniform ablation

This is an example of radiography of tungsten arrays on the Z-generator at the time of wire breakage. The image not only shows the clear gaps that have developed periodically along the wire cores, but also reveals that the ablation streams do not originate from the same axial positions as these breaks.

Picture (modified) and text above are from the thesis of Gareth Hall, in which he studied z-pinches in aluminum wire arrays.

Axial modulation of the ablation rate is seen in all exploding-wire Z-pinch experiments, but the cause of the initiation of striations is not known. It is called the “Bennett pinch instability”. (Wikipedia lists 57 other plasma instabilities. It seems that a whole branch of science is based on a non-existent equilibrium, and all observations are categorized as instabilities of that imagined equilibrium.)

In our theory the explanation is not far-fetched; it is based on the same equation as the particle model. The initiation of the striation-formation process is governed by a nonlinear standing wave. If a normal standing wave is driven to high amplitude, as certainly is the case in exploding wire experiments, the nonlinear effects transfer energy into higher harmonic components.

In the article Simulations of the nonlinear Helmholtz equation: arrest of beam collapse, nonparaxial solitons and counter-propagating beams, the authors G. Baruch, G. Fibich and Semyon Tsynkov discuss the properties of solutions of the nonlinear Helmholtz equation. In the case in which part of the forward-propagating beam is backscattered, |E|² exhibits fast oscillations. As explained in the section The “Pinch”, magnetic pinching is initiated in the high-energy regions, at the antinodes of the wave. Ablation takes place at the nodal regions.

Radiation of the Z-pinch

An exploding wire radiates neutrons. It also radiates at X-ray, UV and visible wavelengths. The neutron radiation can be explained by the ponderomotive force (“a nonlinear force that a charged particle experiences in an inhomogeneous oscillating electromagnetic field”, Wikipedia):

$$\mathbf{F}_p = -\frac{e^2}{4 m \omega^2}\, \nabla\!\left(E^2\right).$$
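A short numeric illustration of the formula above for an electron in a one-dimensional Gaussian field envelope; the field amplitude, frequency, and envelope width are hypothetical values chosen only to exercise the expression.

```python
import numpy as np

# Ponderomotive force F_p = -e^2/(4*m*w^2) * d(E^2)/dx for an electron
# in a 1D Gaussian envelope E(x) = E0 * exp(-(x/s)^2).
e = 1.602176634e-19     # elementary charge, C
m = 9.1093837015e-31    # electron mass, kg
w = 2 * np.pi * 1e14    # field angular frequency, rad/s (hypothetical)
E0, s = 1e8, 1e-5       # peak field (V/m), envelope width (m); hypothetical

x = np.linspace(-5e-5, 5e-5, 2001)
E2 = (E0 * np.exp(-(x / s)**2))**2
Fp = -e**2 / (4 * m * w**2) * np.gradient(E2, x)

# The force points down the intensity gradient, expelling the particle
# from the high-field region regardless of the sign of its charge.
print(f"peak |F_p| ~ {np.abs(Fp).max():.2e} N")
```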

Including in it our idea of resonant forces replacing the concept of charge, the result is strongly nonlinear, collective interactions and resonant dynamics. Neutrons are “shaken loose” from matter at resonant frequencies through an electromagnetic interaction. Electrons and atoms from high-energy, pinched regions are forced towards regions of lower energy density. This is the condition which causes all fermions to radiate, and these sources of radiation are distributed along the wire like the nodal regions of the standing wave. Hence the “axial modulation of the ablation rate”.

Implosion dynamics of wire array Z-pinches

These pictures are taken from the website: http://www.lps.cornell.edu/arrays:

The present interpretation of images like the ones above is given in a nutshell by Gareth Hall in his Z-pinch PhD Thesis: "A sheath of current-carrying plasma begins to implode, ... As it moves inwards towards the axis, this sheath accretes plasma already inside the array; a mechanism referred to as a snowplough implosion. The sheath acts as a piston, both collecting and compressing plasma internal to its position as it implodes, shown in the laser probing images [as the ones above] in which the compressed plasma can be seen as the darker region between the precursor column and the initial wire positions [at the sides]."

Our interpretation: Electrons shoot out from the wires and fill the interior of the array. Ablation of the wire material goes on in the background. Electrons are driven by the jB force towards the array axis (by the same mechanism as in Penning traps). At the axis these electrons form the "precursor column". This is possible because the coulombic repulsion between electrons is diminished due to their interleaved spectra. (This is obvious from the continuous spectrum of the flash emission.) The motion of electrons does not "snowplow" anything; all mass ablated from the wires is "trailing mass".

The picture is an excerpt from the article The Physics of Wire-Array Z-pinches by Gareth Hall.

The 0D model is the equation of motion of a shell driven by the pressure of the magnetic field. It says that there is always acceleration if current flows:

m_0\,\frac{d^2 R}{dt^2} = -2\pi R \cdot \frac{B^2}{2\mu_0} = -\frac{\mu_0 I^2}{4\pi R}.
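A minimal numerical sketch of this 0D model; the current, the mass per unit length, and the initial radius are assumed purely for illustration:

    import numpy as np

    # 0D thin-shell model: m0 * d2R/dt2 = -mu0 * I^2 / (4*pi*R), per unit length.
    mu0 = 4e-7 * np.pi       # vacuum permeability (H/m)
    I = 1.0e6                # drive current (A), illustrative
    m0 = 1.0e-4              # shell mass per unit length (kg/m), illustrative
    R, v, t = 0.01, 0.0, 0.0 # start: radius 10 mm, at rest
    dt = 1e-10               # time step (s)

    while R > 0.001:         # integrate until the shell reaches 1 mm
        a = -mu0 * I**2 / (4 * np.pi * R * m0)
        v += a * dt
        R += v * dt
        t += dt
    print(f"implosion from 10 mm to 1 mm in about {t * 1e9:.0f} ns")

As long as the current flows the acceleration never vanishes, and it grows as 1/R; the model shell always runs away towards the axis.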

When does ablation stop and implosion begin? (Picture modified from one by G. Hall.)


In this picture, just before implosion starts, the seeing is good because the clouds of electrons have been removed; they are forming the precursor column and arcs across the gaps of the wire. Immediately as the current starts to flow along the paths composed of arc gaps and segments of wire, these remaining parts of the wire feel the Lorentz force and start to implode. To answer the question in the heading: we actually observe that there is no pinch effect in plasma, but in metallic wires there is!

On Newton's Third Law and Electromagnetics There are several physical experiments in which the action-reaction principle is clearly violated. Finding the members of the action-reaction pair is the prime key to understanding a phenomenon. Below we introduce the two devices whose workings are at present the most clearly inexplicable within the scope of today's physics: the railgun and the homopolar motor.

Unipolar Motor The unipolar (or homopolar) motor is a ridiculously simple device and, for that reason, it so obviously reveals a major flaw in the foundation of physics. The reader may take a look at it running:

http://www.youtube.com/watch?v=QQRUclDiruw

Thomas Valone writes in his book The Homopolar Handbook: “...[researchers] have sought to explain the Number One Homopolar Mystery: the torque is created within the conducting magnet without an apparent equal and opposite reaction!”

The explanation is that there are fixed and moving electrons in the magnet. Fixed electrons have permanently oriented spins. Bosons scatter from these electrons and become linearly polarized, i.e., the magnetic field appears. Conduction electrons move in the field due to the voltage of the battery. They feel the Magnus force and create a torque on the disc. Conduction electrons do not know where the polarization of the bosons took place, so the device works equally well with a separate conducting disc and external magnets (see Faraday's disc). In a rotationally symmetric magnet the field is like an image reflected from a plane mirror: the mirror can be rotated about an axis normal to the plane, but the reflected image does not change.

For the interested reader we suggest the article Unipolar Induction: An Unsolved Problem of Physics and Scientific Method by Harry H. Ricker III, in which he discusses this problem and its history at length.

Railgun The inexplicability of the railgun is demonstrated beyond doubt in the following work: An Investigation of the Static Force Balance of a Model Railgun by Matthew K. Schroeder.

ABSTRACT "An interesting debate in railgun research circles is the location, magnitude, and cause of recoil forces, equal and opposite to the launched projectile. The various claims do not appear to be supported by direct experimental observation. The goal of this research paper is to develop an experiment to observe the balance of forces in a model railgun in a static state. By mechanically isolating the electrically coupled components of such a model it has been possible to record the reaction force on the rails and compare that force with the theoretical force on a projectile. The research is ongoing but we have observed that the magnitude of the force on the armature is at least seventy times greater than any predicted equal and opposite reaction force on the rails." [Our emphasis.]

Momentum is Conserved in Electromagnetism, But in Which System?

According to classical electromagnetic theory, momentum is conserved in electromagnetic interactions. Why does the theory show this feature? Julius Adams Stratton in his book Electromagnetic Theory gives the answer:

"A direct consequence of this hypothesis [that there is associated with an electromagnetic field a momentum distributed with a density g] is the conclusion that Newton's third law and the principle of conservation of momentum are strictly valid only when the momentum of an electromagnetic field is taken into account along with that of the matter which produces it."

We see that it is an assumption that matter produces the electromagnetic momentum!

\mathbf{g} = \frac{1}{c^2}\,(\mathbf{E}\times\mathbf{H}).

Dimensionally g is a momentum per unit volume.

The conservation of momentum theorem for a system composed of charges and field within a bounded region is, therefore, expressed by

\frac{d}{dt}\left(\mathbf{G}_{\mathrm{mech}} + \mathbf{G}_{\mathrm{electromag}}\right) = \oint_{\Sigma} \mathsf{T}\cdot\mathbf{n}\;da, \qquad \mathbf{G} = \int_V \mathbf{g}\;dv.

If the surface Σ is extended to enclose the entire field, the right-hand side must vanish, and in this case

\mathbf{G}_{\mathrm{mech}} + \mathbf{G}_{\mathrm{electromag}} = \mathrm{constant}.

Stratton ends his discussion of electromagnetic momentum with the statement: “There appears to be associated with an electromagnetic field an inertia property similar to that of ponderable matter.”
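To get a feeling for the magnitudes involved, g can be evaluated for a plane wave; the field strength below is an illustrative value only:

    # Field momentum density g = (E x H) / c^2 for a plane wave in vacuum.
    c = 2.998e8        # speed of light (m/s)
    eta0 = 376.73      # impedance of free space (ohm), so H = E / eta0

    E = 1000.0         # electric field amplitude (V/m), illustrative
    H = E / eta0       # accompanying magnetic field (A/m)

    g = E * H / c**2   # momentum density (kg*m/s per m^3)
    print(f"g = {g:.3e} kg*m/s per cubic metre")  # ~3e-14 here

Even for a fairly strong field the momentum density is minute, which is why the inertia of the field goes unnoticed in everyday mechanics.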

If one clings to the idea that an electromechanical system, including the sources of fields, can reside inside a closed surface and be analyzed from the outside, the unipolar motor and the railgun will never be understood. But if one admits that the forces are caused by bosons passing through the surface, scattering and coming out again, there is no mystery.

We can now recognize the hypothesis on electromagnetic momentum density as our postulate: bosons have inertia. Since g is momentum per unit volume, it is directly compatible with our boson model, which states that the energy and momentum of a boson are proportional to its volume.

Conclusion If one, in the course of building up a theory, makes an assumption which later turns out to be wrong, one should correct one’s assumption.

Practical Electromagnetics

The everyday phenomena inside and in the surroundings of an ordinary electric wire can be represented as follows: In metals there are electrons that are free to move among the atoms of the metal. These electrons form a gas that conceptually does not differ from molecular gases. The pressure of the electron gas is called voltage. By altering the voltage it is possible to regulate the surface charge of the wire.

The wire is negatively charged: On the surface of the wire there is an excess of electrons. They are distributed on the surface subject to the principle of minimum local energy. Their spins are more or less randomly oriented. The boson flow from the wire is dominantly modulated by electrons, and it pushes free electrons away from the surroundings of the wire.

The wire is positively charged: There are fewer free electrons on the surface than in the neutral situation. The wire forms a region from which a reduced amount of resonant background radiation comes into contact with the surrounding electrons, so they are pushed towards the wire.

Resistive heating of the wire: Electrons drift due to the voltage difference between the two ends of the wire. The pressure difference is needed to overcome the resistive effects of the metal. But raising the pressure of the electron gas also affects the electrons themselves. They feel an elevated energy density and their volume increases. Along their way through the wire the pressure gradually decreases, and the volumes of the electrons return to their normal values. These changes of volume we detect as quanta of thermal radiation.

Magnetic field due to electric current in the wire: When the ends of the wire are connected to a battery, free electrons are forced into a collective motion. The randomness of the spin orientations disappears. The electrons turn into such a position that the spin vectors are perpendicular to the motion and parallel to circles around the wire. Background radiation scatters from these electrons, and the final result is a cloud of linearly polarized bosons surrounding the wire, i.e. the magnetic field.

E = k\,\frac{q}{r^2}\,\hat{r}, \qquad dH = k\,\frac{i\,dl\times\hat{r}}{r^2} = k\,\frac{q\,u\times\hat{r}}{r^2}.

According to classical theory, the sources of electric and magnetic fields are charges (q) and charges in motion (qu), which constitute the current i. As we have stated, these concepts are adequate for engineering use. But the simplistic equations above have a limited range of applicability, as we shall soon see.
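As a sanity check of the engineering use, the two formulas can be evaluated numerically. A minimal sketch in SI units (the charge, current, and geometry are illustrative; k is taken as the Coulomb constant for E and as 1/4π for the field H of a current element):

    import numpy as np

    k_e = 8.988e9        # Coulomb constant (N*m^2/C^2)
    q = 1e-9             # 1 nC point charge, illustrative
    i, dl = 1.0, 1e-3    # 1 A current in a 1 mm element, illustrative
    r = 0.1              # field point 10 cm away, perpendicular geometry

    E = k_e * q / r**2                 # electric field (V/m)
    dH = i * dl / (4 * np.pi * r**2)   # field of the element (A/m), |dl x r_hat| = dl
    print(f"E = {E:.1f} V/m, dH = {dH:.3e} A/m")

Such back-of-the-envelope numbers are all that ordinary electrical engineering ever asks of these formulas.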

Pseudoscience: Stretching and Reconnection of Magnetic Field Lines "Scientists have discovered that within violent solar flares, the principle [the classical concept of the magnetic field] does not always hold true. Studies of these flares have determined that their magnetic field lines sometimes do break like stretched rubber bands and reconnect... releasing vast amounts of energy that power the flare."

Beginning in the late 1950s, several authors, including J. W. Dungey, introduced magnetic reconnection as the central process allowing for efficient conversion of magnetic into kinetic energy in solar flares. Scientific hypotheses and theories are inevitably subject to the fixed mindsets of scientists. The manner in which James Dungey envisioned electromagnetism can be seen from a small sample of his book Cosmic Electrodynamics:

"The lines of force may be pictured as thin-walled elastic tubes with water running through them; when the water goes round a curve in the tube the centrifugal force keeps the tube stretched. ...Now Thomson's Theorem, which is the hydrodynamic analogue of the 'freezing-in' theorem, states that \oint \mathbf{u}\cdot d\mathbf{s}, taken round any closed curve moving with the material, is constant for a model with zero viscosity. Consequently, stretching the tube causes a decrease in u and an increase in H and vice versa. If then initially u is parallel to H, but does not satisfy [the equation u = H], the tube will expand or contract towards the steady state..."

Picture credit: Brian Lundberg.

Our vision is that of a fractal quantum field theory, in which all forces are carried by bosons. Current physics does not give the structure of a boson, but in our model, due to its fractality, lines of force may be pictured, too: The smallest circles are bosons. In reality, they are toroidal structures. Large electric or magnetic fields are formed by superpositions of bosons, which always move at the speed of light. The field lines only describe the form of the field, which is determined by the sources of the field, the fermions from which the bosons have scattered. Bosons form a ubiquitous entity which might be called the fractal boson space.

The Classical Electromagnetic Theory has been developed up to its present state by the extensive work and effort of countless researchers and scientists over a time span of 150 years. The theory includes mature concepts of electric and magnetic fields, and these concepts have stood the test of time. The whole of modern technology is based on that theory. How is it possible that, in a field built, at least in principle, in accordance with classical electromagnetics, there came to be a "generally accepted theory of magnetic reconnection"? Perhaps we should take a look at the 'freezing-in' theorem mentioned in J. W. Dungey's book, because the reconnection theory is based on it.

The Flux Freezing Equation: \mathbf{E} + \mathbf{v}\times\mathbf{B} = 0.

Existence of Electromagnetic-Hydrodynamic Waves, H. ALFVÉN If a conducting liquid is placed in a constant magnetic field, every motion of the liquid gives rise to an E.M.F. which produces electric currents. Owing to the magnetic field, these currents give mechanical forces which change the state of motion of the liquid. Thus a kind of combined electromagnetic-hydrodynamic wave is produced which, so far as I know, has as yet attracted no attention.

These are the original equations and the attendant text from Hannes Alfvén's article in Nature 150, 1942. Here we meet the same two vectors, i and i, which we already encountered in the discussion of the Bennett pinch. There we stated that they are not the same vector.

The first vector i (in the rot H equation) is the current density in a filamentary wire in which the electrons are forced to move in the direction of the vector i. The electrons turn into such a position that the spin vectors are perpendicular to the motion and parallel to circles around the wire. The B field loops around the wire in circles, and its curl is proportional to the vector i.

The second vector i (if E = 0, as it must be in a conducting liquid) is caused by the Magnus force, the magnetic force component of the Lorentz force. If electrons move together with a conducting liquid in an external magnetic field, they are forced into motion with respect to the liquid. But the spins of the electrons maintain their orientation, which is that of the vector B, the external magnetic field. The magnetic field produced by these conduction electrons, independent of their velocity, is locally uniform and has zero curl. So the two i vectors are, by all means, not the same vector!

Similarly, as with the Bennett pinch, equating the i vectors leads to a differential equation whose solutions are not models of any real physical phenomena. This is the reason why plasma refuses to believe the generally accepted theory, as H. Alfvén put it.

\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}), \qquad \mathbf{E} = -\mathbf{v}\times\mathbf{B}, \quad \text{in MHD.}
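For reference, the step by which the MHD literature arrives at this induction equation, combining Faraday's law with the ideal-conductor relation (the very step disputed above):

\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \quad \mathbf{E} = -\mathbf{v}\times\mathbf{B} \;\Longrightarrow\; \frac{\partial\mathbf{B}}{\partial t} = -\nabla\times\mathbf{E} = \nabla\times(\mathbf{v}\times\mathbf{B}).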

This equation is seen to imply that field lines are frozen in plasma, which is nonsense.

In his Nobel lecture H. Alfvén said the following:

"The cosmical plasma physics of today is far less advanced than the thermonuclear research physics. It is to some extent the playground of theoreticians who have never seen a plasma in a laboratory. Many of them still believe in formulae which we know from laboratory experiments to be wrong. The astrophysical correspondence to the thermonuclear crisis has not yet come."

With 'thermonuclear crisis' he refers to the failures based on the Bennett pinch. We are now dealing with the astrophysical crisis anticipated by Alfvén, because the same flaw in theorizing is behind the Bennett pinch, flux freezing and magnetic reconnection.

If we start solely from the Lorentz force law and put F = 0, we get an equation similar to the "flux freezing equation",

\mathbf{F} = q\mathbf{E} + q\,\mathbf{v}\times\mathbf{B}, \quad \mathbf{F} = 0 \;\Longrightarrow\; \mathbf{E} + \mathbf{v}\times\mathbf{B} = 0,

but this time it is the velocity selector. All three vectors are independent, and they can be selected so that only charged particles of a certain velocity are allowed to fly straight through the crossed fields. What is going on here is that we have returned to the dilemma of which R. Feynman said: "We know of no other place in physics where such a simple and accurate general principle [the "flux rule"] requires for its real understanding an analysis in terms of two different phenomena." Then he gives the two basic laws for the correct physics:

\mathbf{F} = q\,(\mathbf{E} + \mathbf{v}\times\mathbf{B}), \qquad \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}.
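A minimal numeric reading of the velocity-selector condition; the field values are illustrative:

    # Velocity selector: F = q(E + v x B) = 0  =>  v = E / B in crossed fields.
    E = 1.0e5      # electric field (V/m), illustrative
    B = 0.1        # magnetic field (T), illustrative

    v = E / B      # only particles at this speed pass undeflected
    print(f"selected speed: {v:.1e} m/s")  # 1e6 m/s, independent of the charge q

The selected velocity does not depend on the charge at all, which underlines that the relation E + v × B = 0 constrains the fields and the velocity, not any property of the plasma.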

Alfvén combined these two equations (the Lorentz force law and Faraday's law), and it was not correct physics, and we have now shown why this is so. But perhaps he was misled. Albert Einstein based his relativistic electrodynamics on the following reasoning: "It is well-known that Maxwell's electrodynamics—as usually understood at present—when applied to moving bodies, lead to asymmetries that do not seem to be inherent in the phenomena."

Einstein illustrated the "asymmetry" with a magnet and a conducting loop. A magnet gliding through a conducting loop at rest drives a current thanks to Faraday's law. But to an observer riding on the magnet, the current arises in the moving loop because the charges on it move in a magnetic field. In the first case the changing B induces an electric field E, and the electric force qE acts on particles of charge q. In the second case, the force is identified as qv×B, where the charge q is carried with velocity v past the magnet. Why would the same result arise from apparently different mechanisms? This equivalence could not be a coincidence, reasoned Einstein. But this is exactly the idea that is wrong by Feynman's (and our) standards.

Introducing relativity into electromagnetics messes up simple things completely. From Wikipedia: "An observer at rest with respect to a system of static, free charges will see no magnetic field. However, a moving observer looking at the same set of charges does perceive a current, and thus a magnetic field. That is, the magnetic field is simply the electric field, as seen in a moving coordinate system."

The reason for this mess can be seen in the article On the Electrodynamics of Moving Bodies, in which Einstein introduced his relativistic electrodynamics. He wrote Maxwell's curl equations in the following form (in modern vector notation), replacing the current density J by the product ρv:

\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad \nabla\times\mathbf{B} = \frac{1}{c^2}\frac{\partial\mathbf{E}}{\partial t} + 4\pi k_m\,\rho\mathbf{v},

thereby setting physics on the course of increasing confusion!

[As an aside: there was another piece of electromagnetic reasoning behind the special relativity. Einstein tells in his Autobiographical Notes of a striking thought he had at the age of 16. While recounting the efforts that led to the special theory of relativity, he recalled “...a paradox upon which I had already hit at the age of sixteen: If I pursue a beam of light with the velocity c (velocity of light in a vacuum), I should observe such a beam of light as an electromagnetic field at rest though spatially oscillating. There seems to be no such thing, however, neither on the basis of experience nor according to Maxwell's equations.”]

Picture credit: Induamar Arma Lucis.

Above we have placed a boson into Einstein's rest frame, for him to observe. It is an "electromagnetic field at rest though spatially oscillating".

Alfvén's Reaction Magnetic reconnection is based on Hannes Alfvén's flux freezing equation. Alfvén was an electrical engineer, who in 1940 became professor of electromagnetic theory. After realizing what was going on, he was explicit in his condemnation of the reconnection concept, calling the formalism that had built up around reconnection pseudo-science. Alfvén even went so far as to call his own beliefs in the "frozen-in" concept "absurd" and "pseudo-pedagogical". But Alfvén was too late...

"The solar wind is the continuous flow of plasma outward from the sun. The interplanetary magnetic field is the sun's magnetic field that is frozen into and is carried outward by the solar wind." Definition by the American Physical Society: http://meetings.aps.org/link/BAPS.2015.TSF.H1.25

H. Alfvén, Keynote Address, Proceedings from the NASA Workshop on Double Layers, Huntsville AL, March 17-19, 1986, referring to "magnetic merging [magnetic reconnection]": "I was naïve enough to believe that such a pseudo-science would die by itself in the scientific community, and I concentrated my work on more pleasant problems. To my great surprise the opposite has occurred: 'merging' pseudo-science seems to be increasingly powerful. Magnetospheric physics and solar wind physics today are no doubt in a chaotic state, and a major reason for this is that part of the published papers are science and part pseudo-science, perhaps even with a majority in the latter group."

The flaw in the MHD theory becomes apparent only if it is understood that magnetism is caused not by the motion of electric charges but by the collective orientation of particle spins. The whole theory of magnetohydrodynamics stumbles at the starting line because the sole source of magnetism, the particle spin, is absent from the theory. It cannot explain magnetic phenomena correctly. But who cares? Foundational physics is already ruined to the level of a child who does not know cause and effect. Now the turn of Classical Electromagnetics has come. Solar physicists currently teach the basic principles of magnetism to their students, not by laboratory experiments but with rubber bands and a pair of scissors: http://solar-center.stanford.edu/magnetism/magnetismsun.html

The Absorber Theory After having been trained by John Wheeler, Feynman was ready to act. He took aim at the “radiation reaction”.

What is the problem of radiation reaction? In Feynman’s own words: When we accelerate a charge, it radiates electromagnetic waves, so it loses energy. Therefore, to accelerate a charge, we must require more force than is required to accelerate a neutral object of the same mass; otherwise energy wouldn’t be conserved. The rate at which we do work on an accelerating charge must be equal to the rate of loss of energy by radiation. … it is called the radiation resistance. We still have to answer the question: Where does the extra force, against which we must do this work, come from? When a big antenna is radiating,

the forces come from the influence of one part of the antenna current on another. For a single accelerating electron radiating into otherwise empty space, there would seem to be only one place the force could come from —the action of one part of the electron on another part.

A theory was then created. Yoichiro Nambu commented on Feynman's theory in 1950: "The time itself loses sense as the indicator of the development of phenomena; there are particles which flow down as well as up the stream of time; the eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from past to future, or from future to past." (Progress in Theoretical Physics 5, (1950) 82).

The madness of Modern Physics culminates in this theory, which was based on the following postulates: "This description of nature differs from that given by the usual field theory in three respects: (1) There is no such concept as 'the' field, an independent entity with degrees of freedom of its own. (2) There is no action of an elementary charge upon itself and consequently no problem of an infinity in the energy of the electromagnetic field. (3) The symmetry between past and future in the prescription for the fields is not a mere logical possibility, as in the usual theory, but a postulational requirement." [Our emphasis.]

After the "wonder year" 1905 and the Solvay Conference in 1927, anyone would think that no more harm could be done to the classical ideas of physics, but John Wheeler and Richard Feynman found the way to overkill physics as a rational science. In our opinion, it is a laughable idea to create a theory as described by Y. Nambu above just to solve a minor problem of radiation reaction. But no one is laughing. Why is that?

Paul Feyerabend, philosopher of science, knows the reason: "Scientists have more money, more authority, more sex appeal than they deserve. The most stupid procedures and the laughable results are surrounded by an aura of excellence. It is time to cut them down in size, and to give them a more modest position in society" (Feyerabend 1975, quoted by Theodore Schick in Skeptical Inquirer). Dr. Feyerabend challenged the notion that science is rational and progressive. If there was progress in science, it was because scientists broke every principle in the rationalists' rule book and adopted the principle that "anything goes." By our reckoning, this is obviously true. (See John Wheeler's motto above.)

The following comments of Paul Feyerabend appeared in a 1969 letter to Feyerabend's Berkeley philosophy chair Wallace Matson (For and Against Method, Appendix B): "The withdrawal of philosophy into a 'professional' shell of its own has had disastrous consequences. The younger generation of physicists, the Feynmans, the Schwingers, etc., may be very bright; they may be more intelligent than their predecessors, than Bohr, Einstein, Schrödinger, Boltzmann, Mach and so on. But they are uncivilized savages, they lack in philosophical depth – and this is the fault of the very same idea of professionalism which you are now defending."

How about John Wheeler? Geons, Black Holes, and Quantum Foam: A Life in Physics by John Archibald Wheeler (with Kenneth Ford): "This idea of an effect that precedes its cause, fantastic though it seems, is an idea that Feynman and I found we had to accept as real – if we were to work without fields to duplicate all the successes of past workers with fields. Yet, in nature, we don't see any examples of effects that precede causes. The startling conclusion that Dick Feynman and I reached, which I still believe to be correct, is that if there were only a few chunks of matter in the universe – say only Earth and the Sun, or a limited number of other planets and stars – the future would, indeed, in reality, affect the past. What prevents this violation of common sense and experience is the presence in the universe of a nearly infinite number of other objects containing electric charge, all of which can participate in a grand symphony of absorption and reemission of signals going both forward and backward in time. The mathematics told the two of us that in the real world, the myriad of signals headed back in time, apparently to influence the past, miraculously cancel out, producing no net effect. Common sense and experience are saved. Here, indeed, is the power of mathematics."

The poor man doesn't even know what it means to use common sense. He thinks that a train of thought that includes anything — anything — like moving backwards in time fulfills the requirements of common sense, as long as the insanities do not show in the result. Here, indeed, is the power of fiddling with mathematics. John Wheeler should be the first on Feyerabend's list. ("Common sense" = purely physical causation throughout a process described with the concepts of classical physics.)

Today we are told that John Wheeler was one of the leading theoretical physicists of the 20th century, and that Richard Feynman was a genius, undoubtedly one of the greatest scientists of all time, and that a normal man cannot do anything but throw himself to the ground before the Giants. All their works are based on the theories of relativity, in which the picture of time as a line along which one might travel in one direction or the other is an unparalleled conceptual disaster. The very fact that an "absorber theory" can be formulated on that foundation should be a sure sign that those theories are wrong from start to end.

Time Reversibility of the Wave Equation? From Wikipedia we find the following: "The Wheeler–Feynman absorber theory is an interpretation of electrodynamics derived from the assumption that the solutions of the electromagnetic field equations must be invariant under time-reversal transformation, as are the field equations themselves. Indeed, there is no apparent reason for the time-reversal symmetry breaking, which singles out a preferential time direction and thus makes a distinction between past and future. A time-reversal invariant theory is more logical and elegant." Advanced waves are solutions of the electromagnetic wave equation and other similar wave equations which contain only the second time derivative:

\frac{\partial^2 E}{\partial x^2} - \frac{1}{u^2}\frac{\partial^2 E}{\partial t^2} = 0.

This is the simplest possible wave equation. Its solutions (in 3D) are plane waves that are infinite in space and time, and their amplitudes never change. These waves are the present model of non-localized photons. But we never see anything like this in nature. All waves are in some way local and they dissipate. Why should one base any hypotheses concerning the nature of time on a differential equation whose solutions do not represent anything in nature?

This can be fixed by including a first time derivative term in the equation, for example

\frac{\partial^2 E}{\partial x^2} - \frac{1}{u^2}\frac{\partial^2 E}{\partial t^2} - \frac{\gamma}{u^2}\frac{\partial E}{\partial t} = 0,

which produces a dissipative solution. There is then an additional exponential factor in the solution, e^{\alpha t}, which causes the free oscillation to decrease over time. This term constitutes the arrow of time. The parameter α in the exponent is the damping factor and is negative in value. Only positive values of time t make sense. Time is not a geometric entity. It is a fundamental tool of human understanding, and it must be used as it is used in the Classical Electromagnetic Theory.

This ends the section on pseudoscience. In our view, it must include all direct violations of the fundamental concepts of the classical EM theory (= magnetic reconnection etc.), applications of noncommutative mathematics, and everything based on geometrical time. The sad obituary of rational physics is written by the philosopher Simone Weil:

What is disastrous is not the rejection of classical science but the way in which it has been rejected. It wrongly believed it could progress indefinitely, and it ran into a dead end about the year 1900; but scientists failed to stop at the same time in order to contemplate and reflect upon the barrier, they did not try to describe and define it and, having taken it into account, to draw some general conclusions from it; instead, they rushed violently past it, leaving classical science behind them. And why should we be surprised at this? For are they not paid to forge continually ahead? Nobody advances in his career, or in reputation, or gets a Nobel Prize, by standing still. To cease voluntarily from forging ahead, any brilliantly gifted scientist would need to be a sort of saint or hero, and why should he be a saint or hero? Simone Weil (1909–1943)

Gravity as a Component of the Universal Force In our theory gravity is caused by radiation pressure, the cosmic pressure. This is a very old idea, discovered by many. Its viability is proven in the article Deriving Newton's Gravitational Law from a Le Sage Mechanism by Barry Mingst and Paul Stowe. Here is the abstract:

In this paper we derive Newton’s law of gravity from a general Le Sage model. By performing a general derivation without a specific interaction process model, we can identify generic requirements of, and boundaries for, possible Le Sagian gravitational process models. We compare the form of the interaction found to the “excess” energy of the gas giants and find good agreement.

In the following we show that it fits perfectly with our theory, which is largely based on the equation of radiative energy transfer:

Equation of radiative energy transfer:

\frac{dI_\nu(\mathbf{s})}{ds} = -\alpha_\nu\, I_\nu(\mathbf{s}) + \int_{4\pi} \beta_\nu(\mathbf{s},\mathbf{s}')\, I_\nu(\mathbf{s}')\, d\Omega'.

As noted earlier, like Maxwell equations, this equation is of fundamental nature; it cannot be derived from any other physical principle. The left-hand side represents the rate of change of the specific intensity in the direction s.

The functions αν and βν are known as the extinction coefficient and the differential scattering coefficient, respectively.

The αν -term represents the rate of decrease in energy due to absorption along the s -direction. The integral term represents the rate of increase in energy along the s -direction due to scattering from all s' -directions. In the case of gravity the scattering integral term, the electromagnetic part, drops off.

The equation of radiative energy transfer can be solved analytically only in some special cases. For a simple gravity model we put αν = constant and βν = 0. We then obtain the following solution:

I_\nu(L) = I_\nu(0)\, e^{-\alpha_\nu L}.

The specific intensity decreases following an exponential law along the line L.

e^{f(x)} = 1 + f(x) + \frac{[f(x)]^2}{2!} + \cdots + \frac{[f(x)]^n}{n!} + \cdots

Using the series expansion above we approximate:

I_\nu(L) = I_\nu(0)\,\left[\,1 - \alpha_\nu L\,\right].

According to this last simplification the specific intensity decreases linearly along the line L, and the shadowing effect can be shown, by integration, to be equivalent to Newton's law of gravity. This is shown by B. Mingst and P. Stowe in their article.
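How good the linearization is can be checked in a few lines; the αL values are illustrative:

    import numpy as np

    # Compare exact exponential extinction with the linearized form used above.
    alpha_L = np.array([1e-6, 1e-3, 1e-1, 0.5])
    exact = np.exp(-alpha_L)
    linear = 1.0 - alpha_L
    for aL, ex, li in zip(alpha_L, exact, linear):
        print(f"alpha*L = {aL:g}: exact = {ex:.6f}, linear = {li:.6f}, error = {ex - li:.2e}")

For the minute extinction coefficients a Le Sage mechanism requires, the linear form is numerically indistinguishable from the exponential, which is what lets the shadowing integral reproduce the inverse-square law.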

If gravity is attributed to the shadowing effect (via absorption) of the universal background radiation, there immediately appears a new effect, dipole background radiation from the rotating Sun.

As viewed from the (northern hemisphere of the) Earth, the background radiation coming through the right side of the Sun encounters more mass than that coming through the left side, due to the Sun's rotation. The gravitational radiation from the Sun is multipole radiation, and the effect of rotation is the dipole term. (Dipole gravitational radiation is not present in General Relativity because of the conservation of linear momentum in N-body systems, but in our theory we are not dealing with closed systems at all.) The issue of the long-term stability of the Solar System is of course one of the oldest unsolved problems in Newtonian physics.

This problem is readily solved when it is understood that the Sun drives the planets around their orbits. The Sun and the planets form a system, but it is not a closed system, and it is dissipative. This is the reason why the system is stable. (The spacecraft Pioneer 10 and 11 are not dragged by the Sun, and therefore they suffer a deceleration due to a "viscous drag force", proportional to the approximately constant velocity of the Pioneers.)

We have already discussed a similar electromagnetic problem in the section Momentum is Conserved in Electromagnetism, But in Which System? There we concluded that if one clings to the idea that an electromechanical system, including the sources of fields, can reside inside a closed surface and be analyzed from the outside, the unipolar motor and the railgun will never be understood, and we recognized the hypothesis on electromagnetic momentum density as our postulate: bosons have inertia. The same flow of bosons that causes electromagnetic fields by scattering from the particles of matter causes gravity by becoming absorbed by matter. Both cases are included in the equation of radiative energy transfer.

The concluding section of the physical part of this article was intended to be based on the work and observations of Maurice Allais and his followers, on the subject of the Allais effect. We had this aim because the effect is, in a way, an Experimentum Crucis for the proof of our theory.

But then came, like a gift from heaven, the article Gravitational Lensing of Spiral Vortex Solar Radiation by Venus, which introduces a measuring device called the torsind (torsion indicator). The instrument, developed by Dr. Alexander Pugach, is a fantastic combination of simplicity and sensitivity. From the point of view of our theory it is indispensable, because it directly measures gradients of energy density in free space, the very thing that is the foundation of our theory and which, even as a concept, is missing from both General Relativity and QED. The device is quite similar to the unipolar motor, of which we said earlier that it is a ridiculously simple device and, for that reason, it so obviously reveals a major flaw in the foundation of physics. The torsind does the same. If it feels torsion, it rotates. And the torsion is delivered to it by a gradient of energy density of free space, because it is insensitive to electromagnetic fields. (There are no currents flowing in the rotor as in the unipolar motor.) The only thing to explain is what causes these gradients. Below we show three graphs: one of torsind measurements during the years 2009–2013, one of the sunspot number during the same years, and a sequence of torsind measurements on 25.12.2012, lasting four hours.

Dr. Pugach writes of these kinds of results: "...the torsind very often registers strong bursts, responding to some unknown astro-space phenomena. At these moments the torsind disk can make 5, 10, 20 or more revolutions in a row. Such a sharp reaction of the torsind we will conventionally call spike. ... A double spike is a spike when the disc rotation in one direction is immediately replaced by rotation in the opposite direction. They are relatively rare. More often there are cases when the torsind disc turns round several hundreds or thousands of degrees then ceases its rotation and stops."

We interpret these observations as follows: The spikes are eclipses between sunspot fermions and the core fermion of the Sun. The mechanism is exactly the same as in a normal eclipse, but the partaking objects are much smaller. The shadows are very sharp, and so the gradients of energy density at the surface of the Earth are strong. The graph above says that a large group of sunspot fermions is passing by in front of the core fermion of the Sun, or behind it. Only twice a year, on December 7 and June 7, does Earth cross the Sun's equatorial plane, so twice a year we have periods of fewer or no observations.

Gravitational Beams from the Sun Trigger Earthquakes

Article from Popular Science Monthly 1908: Coincident Activities of the Earth and the Sun by Dr. Ellsworth Huntington. The open circles indicate years when notable earthquakes or eruptions occurred, although not in large numbers, nor of exceptional severity. The solid round dots represent years of greater severity than the preceding. The solid squares indicate extreme severity. This data comes from Sayles (?). Jensen’s data appears in the small rectangles above the horizontal line, and eruptions below. The size of the rectangles indicates Jensen’s estimate of severity and frequency combined. The data of Sayles and Jensen supplement each other admirably.

Inspection of Fig. 1 [above] shows that according to both Sayles and Jensen periods of minimum sunspots are times of maximum seismic and volcanic activity; whereas at periods of maximum sunspots, telluric activity almost ceases. It seems to be impossible to avoid the conclusion that the marked coincidence between telluric and solar activity indicates a relation of some sort between the internal phenomena of the earth and the sun. As to what that relation may be we have as yet no clue.

The motion of tectonic plates causes compression, tension, and shear between the plates. Sometimes the plates stick together, and energy is accumulated as elastic deformations. When the plates eventually move again this energy is released as shock or seismic waves through the Earth's crust causing vibrations, namely earthquakes.

The continuous rattling of the crust, caused by gravitational beams from the Sun, releases accumulated stress that could otherwise build up and result in a major event. This is the relation between seismic activity and sunspots.

Why Does the Torsind Rotate? In the picture below a longitudinal section of a short portion of a gravitational beam is shown (the purple region in the small picture). The green vectors depict the non-absorbed portion of the background radiation. The flow of energy and momentum in the beam has curl, and this is what makes the torsind rotate.

A "double spike" is observed if the axis of the beam sweeps near to the torsind. One might now ask why these waves are not observed by LIGO. There are two reasons. First, LIGO measures oscillating strain, and it is not sensitive to strain below ≈ 10 Hz. Secondly, signals from two sites 3000 km apart are compared to reduce the effects of noise. Effects from eclipses between the Sun's core and the sunspot fermions are so local that they cannot be detected by LIGO.

A Historic Anniversary "In 2015, the twin Laser Interferometer Gravitational-wave Observatories (LIGO) made the first direct detection of gravitational waves created by two black holes that were spinning rapidly around each other before colliding and merging to form a larger black hole. The discovery was announced as unequivocal proof that Einstein's theory of general relativity was valid." This announcement has raised criticism. The technical issues at stake here have to do with the extreme difficulty of the measurements that LIGO attempts to make.

The noise at each detector should be completely uncorrelated, but if a gravitational wave swoops through, it should create a similar signal in both instruments nearly simultaneously. A team of five researchers — James Creswell, Sebastian von Hausegger, Andrew D. Jackson, Hao Liu and Pavel Naselsky — from the Niels Bohr Institute in Copenhagen presented their own analysis of the openly available LIGO data (see the article On the time lags of the LIGO signals). And, unlike the LIGO collaboration itself, they came to a disturbing conclusion: that these gravitational waves might not be signals at all, but rather patterns in the noise that have hoodwinked even the best scientists working on this puzzle. The main claim of Jackson's team is that there appears to be correlated noise in the detectors at the time of the gravitational-wave signal. This might mean that, at worst, the gravitational-wave signal might not have been a true signal at all, but just louder noise.

Another article expressing criticism is On the Signal Processing Operations in LIGO signals by Akhila Raman: "Concluding remarks. Section 7 gives the reasons why weak signals GW151226 and GW170104 should be questioned. Section 8 gives the reasons why weak signals GW170814 and GW170817 should be questioned. It is possible that there was coincident false detection at H1 and L1, due to any combination of sine wave/noise burst/bogus chirp templates/detector noise, in both detectors. We do not know the probability of this false coincident detection, due to unknown external factors. LIGO's false alarm rate calculation is not applicable for this case, which pertains to only coincident false detection due to detector noise." ...

We will soon continue to explain these puzzles. But before we can do that we need to make a note concerning the gravitational constant G.

Why is the Gravitational Constant G so Imprecisely Known?

Among all the physical constants, the gravitational constant G is subject to a particularly high uncertainty. "At present the most accurate reported value of G is (6.673 ± 0.003)·10⁻⁸ cm³ s⁻² g⁻¹ (Heyl and Chrzanowski, 1942), but a reliable error estimate has to be much more pessimistic, since the discrepancies among different series of measurements (performed at different times and/or with modified experimental apparata) are several times as large as the statistical error of each individual experiment (Stephenson, 1967; Sagitov, 1969). This fact suggests that the data have been always affected by uneliminated systematic errors, so that our knowledge of G is probably real only to within some parts in 10³." (Paolo Farinella, Osservatorio Astronomico di Brera, Merate, Italy)

All the above can be explained as follows. The 'uneliminated systematic error' disturbing the measurements of G is the puzzling 'noise' in the LIGO measurements, and its origin is the Sun. The main claim of Jackson's team is that there appears to be correlated noise in the detectors at the time of the gravitational-wave signal, and that the gravitational-wave signal might not have been a true signal at all, but just louder noise. This is correct, and so is A. Raman's suspicion that "It is possible that there was coincident false detection at H1 and L1..." If it was real, then the most probable source of the chirp-like signal is the Sun, in which two spot fermions rotate and form a spin up-spin down pair, as usual. There may be a short period of wobbling beams which transfer energy into the LIGO sensitive band (between 40 and 1000 Hz), perhaps from two pairs of sunspot fermions at the same time, thereby causing "coincident false detection at H1 and L1."

"LIGO is a masterpiece of complex and sophisticated engineering. Super-stabilized lasers, enormous vacuum systems, the purest optics, unprecedented vibration isolation, and servo controls all work symbiotically for one singular purpose: To sense the ephemeral passage of a gravitational wave. ..." (www.ligo.caltech.edu)

Certainly LIGO is a masterpiece of complex engineering, but at the same time it is highly unreliable considering the information present in the data. The unambiguousness of data from LIGO is about zero compared to that of the torsind. The purpose in having two independent detectors is precisely to ensure that, after sufficient cleaning, the only genuine correlations between them will be due to gravitational wave effects. It is now believed that a sufficient level of data cleaning has not yet been achieved. But it is clean enough. It confirms our theory, which can be further supported by the following.

A study of the seismic noise from its long-range correlation properties by L. Stehly, M. Campillo, and N. M. Shapiro: "Results of our analysis show that while the sources of the secondary microseism remain stable in time, the sources of the primary microseism exhibit strong variability very similar to long period noise (hum) and well correlated with sea wave conditions."

The graphs above show the strong seasonality of the primary microseism. The azimuth graph also shows that the direction of the (assumed) background energy flow along the surface of the Earth changes by (225° − 45°) = 180 degrees in a half-year period.

Excitation of Earth's continuous free oscillations by atmosphere–ocean–seafloor coupling, Junkee Rhie & Barbara Romanowicz: "The Earth undergoes continuous oscillations, and free oscillation peaks have been consistently identified in seismic records in the frequency range 2–7 mHz (refs 1, 2), on days without significant earthquakes. The level of daily excitation of this 'hum' is equivalent to that of magnitude 5.75 to 6.0 earthquakes, which cannot be explained by summing the contributions of small earthquakes.... Elucidating the physical mechanism responsible for the continuous oscillations represents an intriguing scientific challenge. ...seasonal variations in the level of energy present in the continuous oscillations have a six-month periodicity, with maxima in January and July..."

As we noted above, only twice a year, on December 7 and June 7, does Earth cross the Sun's equatorial plane, so twice a year we have periods of fewer or no observations caused by eclipses between the Sun's core and the sunspot fermions. June 7 appears as a red line in the graph above. The primary microseism is at its maximum when the Earth is in the equatorial plane of the Sun. The period of the torsind oscillations is 3 minutes, i.e. 5.6 mHz.

We remind the reader that our hypothesis of the structure of the Sun (a core fermion and the sunspot fermions) has already explained the solar cycle in all detail. Now it explains the puzzling LIGO data, the relation between seismic activity and sunspots, the torsind effect, and the origin of the gravitational noise. One thing more: "The complexity of the interplanetary magnetic-field polarity pattern shows a semiannual variation, with more sectors per solar rotation in May and June, and again in November and December, times when the earth is close to the sun's equatorial plane. Complexity varies also through the solar activity cycle, in a manner consistent with contribution to the interplanetary field from both the polar field and long-lived, large-scale fields related to active regions."

Finally, a knowledgeable reader may question all the shielding effects claimed above, knowing that attempts have been made to observe them during total solar eclipses. A review of the literature shows that it is generally accepted that there is no gravitational shielding effect.

With or without general acceptance, solar eclipses can be observed with the simplest of devices: torsinds (see above) and pendulums (Maurice Allais and others). From the article by Stowe and Mingst we read that "...to date the most carefully done of the dozen or so such experiments appears to be that of Slichter, Caputo and Hager. They used a LaCoste-Romberg gravimeter to search for gravity variations before, during and after the total solar eclipse of February 15, 1961. Power spectrum analyses of their data indicate that the mass interaction coefficient [the extinction coefficient αν in our theory] is less than 8.3·10⁻¹⁶ cm²/g." This is four orders of magnitude below Majorana's experimental results! It must be interpreted as a null result, and it is a mystery, because very simple devices of another kind can detect the effect. Let us take a look at the LaCoste-Romberg gravimeter:

The basic mechanism is simple, a weight suspended from a string. The tension forces in the string are of electromagnetic origin, and they are in balance with the gravitational force acting on the mass. But both of these forces have their ultimate origin in the universal flow of energy and momentum. Thus, the percentage changes of the two forces are the same.

Maurice Allais emphasized the dynamic character of the observed effects: "The observed effects are only seen when the pendulum is moving. They are not connected with the intensity of weight (gravimetry)..." The effects are seen with pendulums and torsinds because they are integrators; weak effects become integrated into the motion of the indicators.

In the light of all the above, the null result of the LaCoste-Romberg gravimeter in gravitational shielding measurements can be seen as the experimentum crucis for the proof of the Theory of Universal Force, because the main claim of the theory is that all forces in nature have a common origin, as explained in this article. One might think that the null result of an experiment tells one nothing important, but apparently there is an exception, the Michelson-Morley experiment. But the null result in that experiment was not expected, while in our theory the null result of the LaCoste-Romberg gravimeter measurement is expected. The Theory of Universal Force is thereby confirmed with an accuracy of 8.3·10⁻¹⁶ cm²/g. This theory builds on gradients (not only electromagnetic but also those detected by the torsind) of a universal flow of energy and momentum, and these, even as a concept, are missing from both General Relativity and QED. A most telling point is that gravitational beams can only be the result of a shielding effect, so the whole theory must be based on that effect.

Electromagnetics in Large Scale

1/f Spectrum of the Universe First a piece of relevant history: In 1903 the mathematician Edmund Whittaker published an article entitled "On the Partial Differential Equations of Mathematical Physics" in Mathematische Annalen. In that paper he introduced the general solution of Laplace's potential equation. At the end of the paper he gave a number of deductions from the general solution. One of them was "Gravitation and Electrostatic Attraction explained as modes of Wave-disturbance":

The result... that any solution of the equation

\frac{\partial^2 V}{\partial x^2} + \frac{\partial^2 V}{\partial y^2} + \frac{\partial^2 V}{\partial z^2} = k^2\,\frac{\partial^2 V}{\partial t^2}

can be analyzed into simple plane waves, throws a new light on the nature of those forces, such as gravitation and electrostatic attraction, which vary as the inverse square of the distance. For if a system of forces of this character be considered, their potential (or their component in any given direction) satisfies the equation [above] where k is any constant. It follows...that this potential (or force-component) can be analyzed into simple wave planes in various directions, each wave being propagated with constant velocity. These waves interfere with each other in such a way that, when the action has once been set up, the disturbance at any point does not vary with the time, and depends only on the coordinates (x, y, z) of the point. It is not difficult to construct, synthetically, systems of coexistent simple waves, having this property that the total disturbance at any point (due to the sum of all the waves) varies from point to point, but does not vary with the time. A simple example of such a system is the following. Suppose that a particle is emitting spherical waves, such that the disturbance at a distance r from the origin, at time t, due to those waves whose wave-length lies between 2π/μ and 2π/(μ+dμ), is represented by

\frac{2\,d\mu}{\pi\mu r}\,\sin(\mu v t - \mu r),

where v is the velocity of propagation of the waves. Then after the waves have reached the point r, so that (vt − r) is positive, the total disturbance at this point (due to the sum of all the waves) is

\int_0^\infty \frac{2\,d\mu}{\pi\mu r}\,\sin(\mu v t - \mu r).

Take μvt − μr = y, where y is a new variable. Then this disturbance is

\frac{2}{\pi r}\int_0^\infty \frac{\sin y}{y}\,dy;

or, since \int_0^\infty \frac{\sin y}{y}\,dy = \frac{\pi}{2}, it is \frac{1}{r}.

The total disturbance at any point, due to this system of waves, is therefore independent of the time, and is everywhere proportional to the gravitational potential due to the particle at the point. It is clear from the foregoing that the field of force due to a gravitating body can be analyzed, by a "spectrum analysis" as it were, into an infinite number of constituent fields; and although the whole field of force does not vary with the time, yet each of the constituent fields is of an undulatory character, consisting of simple wave-disturbance propagated with uniform velocity. ... Of course, this investigation does not explain the cause of gravity; all that is done is to shew that in order to account for the propagation across space of forces which vary as the inverse square of the distance, we have only to suppose that the medium is capable of transmitting, with a definite though large velocity, simple periodic undulatory disturbances, similar to those whose propagation by the medium constitutes, according to the electromagnetic theory, the transmission of light.
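Whittaker's key step rests on the value ∫₀^∞ (sin y / y) dy = π/2, which makes the total disturbance (2/πr)·(π/2) = 1/r, independent of time. This is easy to confirm numerically; a minimal sketch using scipy's sine integral:

    import numpy as np
    from scipy.special import sici

    # Si(x) = integral of sin(y)/y from 0 to x; it tends to pi/2 as x grows.
    for x in [10.0, 100.0, 10000.0]:
        Si, _ = sici(x)
        print(f"Si({x:g}) = {Si:.6f}   (pi/2 = {np.pi / 2:.6f})")

The slow, oscillatory approach of Si(x) to π/2 also shows how delicately the cancellation between the constituent waves works.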

Whittaker was criticized for his suggestion. In the same year, in the Monthly Notices of the Royal Astronomical Society, G. Johnstone Stoney published an article: Examination of Mr. Whittaker's "Undulatory Explanation of Gravity" from the Physical Standpoint. The essence of the critique is the following:

"This rather startling announcement seems to require further discussion than that given to it by its author, in order that we may understand what the proposed resolution means physically and what it does not mean... each element [= infinitesimal band of wavelengths] of the series represented by the integral is assumed to be proportional to 1/μ, the wave-length, and when this very special set of values are assigned to the co-efficients of the terms of the series represented by the integral, then r under the sign of integration, which is essential to the physical interpretation, is excluded from the resultant while appearing in the components. ... The physical assumption to be made is that the attraction of each particle of ponderable matter can be of that extraordinary kind described above... But, unfortunately, the improbability of this assumption is so overwhelming that no physicist can feel himself justified in seriously arguing from the supposition that it represents the state of things which exists in nature, however much he may be impressed by the mathematical ingenuity of the investigation."

Here we notice that in the criticized factor 1/μ, μ is a wave number, proportional to frequency. So the factor is of the form 1/f. Whittaker's result shows how the Universe hides its fundamentally wavy nature, provided that the factor 1/f is included in the integral. The calculation concerns the potential V, which we have already interpreted as energy density, so the waves are those of energy density. The destructive interference is almost perfect; only a small amount of noise is left over. How can the waves be observed? We can see the effect of these waves (composed of bosons) everywhere, just by dropping the impossible idea of attractive forces. But we cannot say much about the absolute values of energy density, because forces are derivatives of the universal energy density.

From the above we see that the factor 1/f is needed to explain the apparent emptiness of space, but it is not yet clear why it exists. The physical model of 1/f noise is known: it is the superposition of relaxation processes. The only problem here is to identify the process. In our theory all propagating waves in nature have a tendency to dissipate, and bosons are no exception. We have already introduced a mathematical model for the boson. It was of the form

$A(z,\omega) = A\,\operatorname{sech}(z - c_u t)\,e^{i\omega t}.$

But it is not complete. It must be supplemented with a damping factor, e.g.:

$A(z,\omega) = A\,\operatorname{sech}(z - c_u t)\,e^{i\omega t}\,e^{-\alpha t}.$

Bosons are born in all sizes in emissions. They start to dissipate immediately and this is the relaxation process. This process also marks the ontological “arrow of time”. In other words: photons are not immutable!
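The claim that superposed relaxation processes yield a 1/f spectrum can be checked directly: a process relaxing at rate α contributes a Lorentzian spectrum α/(α² + ω²), and if the rates are spread log-uniformly (density proportional to 1/α, which we assume here for the dissipating bosons), the averaged spectrum goes as 1/ω between the rate cutoffs. A minimal sketch, ours, in Python:

import numpy as np

# Each relaxation process with rate a contributes a Lorentzian a/(a^2 + w^2).
# Log-uniform rates (density ~ 1/a) between the assumed cutoffs 0.01 and 100:
rng = np.random.default_rng(1)
a = np.exp(rng.uniform(np.log(1e-2), np.log(1e2), size=20_000))
w = np.logspace(-1, 1, 5)                      # probe frequencies between the cutoffs
S = np.array([np.mean(a / (a**2 + wi**2)) for wi in w])
print(S * w)   # roughly constant, i.e. S(w) is proportional to 1/w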

Picture: SI current noise power spectral density (A²/Hz). Credit: http://hal.archives-ouvertes.fr/docs/00/76/72/84/PDF/Jangd.pdf

1/f noise manifests the fundamental property of boson space, namely the dissipation of the energy of bosons.

On Catastrophes

$\frac{2}{\pi r}\int_0^{\infty}\frac{\sin y}{y}\,dy = \frac{1}{r}.$

Whittaker's integral includes contributions from both k → 0 and k → ∞. In reality there is a cutoff: bosons are born in emissions from fermions, and fermions are not infinitely large, so their frequencies are not infinitely high. In the former case, k → 0, the wave equation turns smoothly into Laplace's equation. This means that the surfaces of equal energy stop propagating, but they do not disappear. The macroscopic wave stands still, but bosons flow all the time, causing pressure.

To explain, consider the following. If we initiate a longitudinal (“compressional”) wave in a steel bar (by a hammer), a portion of the bar is compressed, and this region of compressed matter moves through the bar. The motion is described by the wave equation, and to it we assign a wave number. If we want to initiate a wave of lower wave number, the stroke of a hammer must be replaced with a force that slowly increases and then decreases. At the limit k → 0 a slowly changing force and a stationary pressure cannot be separated from each other. This is the analog of the seemingly constant pressure (voltage) on the conduction electrons, and the way to interpret the transition from the wave equation to Laplace's equation, at least in the context of 1/f noise.

We remind the reader that the macroscopic (electromagnetic) wave number is only a manifestation of the internal oscillation of the boson. Forces are spatial derivatives of energy density, but if the dimensions of waves are measured in thousands of kilometers there are no practical means to utilize them. Using wave numbers as a measure of energy without any reference to the volume of the wave leads to difficulties, familiar from QED. Quantum field theory is plagued with divergences. The most fundamental divergence concerns the energy density of the vacuum. “But the vacuum energy density calculated by adding field mode energies is much larger than the density observed around us through gravitational phenomena. This “vacuum catastrophe” is one of the unsolved problems at the interface between quantum theory on one hand, inertial and gravitational phenomena on the other hand.” Quantum vacuum fluctuations by Serge Reynaud et al.

The problems stem from the idea that “the physical vacuum is the ground state of a system of quantum fields on the space-time manifold.” If the standard procedure of field quantization, the “box quantization” (explained in the section On the Current Ideas of Quantization), is used, one ends up with the following result, from the article Vacuum catastrophe: An elementary exposition of the cosmological constant problem by Ronald J. Adler et al.: “We suppose that the universe is a large cube of size L, ...

$\rho = \int_0^{\omega_{\max}}\frac{4\pi\,\omega^{3}\,d\omega}{(2\pi)^{3}} = \frac{\omega_{\max}^{4}}{8\pi^{2}}.$

This is our main result, the energy density of the ground state of the electromagnetic radiation field, that is, the electromagnetic vacuum.” If one specifies a cutoff at the frequencies of cosmic-ray photons or higher, the integral constitutes the vacuum catastrophe. The problem disappears once it is understood that the vacuum is not a system itself; it only contains systems of high frequency, namely fermions and bosons of gigantic size. But these are very rare.
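The size of the mismatch is easy to reproduce. In natural units (ħ = c = 1) the boxed result gives ρ = ω⁴max/8π²; taking the cutoff at the Planck energy (an assumption of ours for illustration; the text argues for a much lower cutoff) and comparing with the vacuum energy density inferred from observation, of order (2×10⁻³ eV)⁴, yields the notorious discrepancy of roughly 120 orders of magnitude:

import math

E_cut = 1.22e28    # eV: Planck energy, the conventional (assumed) cutoff
E_obs = 2.3e-3     # eV: energy scale of the observed vacuum density
rho_cut = E_cut**4 / (8 * math.pi**2)   # omega_max^4 / (8 pi^2), natural units
rho_obs = E_obs**4
print(f"ratio ~ 10^{math.log10(rho_cut / rho_obs):.0f}")   # ratio ~ 10^121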

All in all, we conclude that the infrared catastrophe of 1/f noise is only a mathematical catastrophe without any practical consequences, and that the high-frequency catastrophe stems from a bad idea of how to incorporate quantum behavior into electromagnetic field theory. We also conclude that Whittaker's integral, equipped with a proper cutoff at high frequencies, is a correct expression of the fabric of the Universe, and that the 1/f spectrum, due to the dissipation of bosons, is an essential part of that fabric. In the visible spectrum the dissipation of photons is seen as “the cosmological redshift.” The above is consistent with the following: 1. Space is not expanding. 2. There was no Big Bang.

Matter from Electromagnetic Chaos

Today it is widely believed that the Universe is inhomogeneous — and essentially fractal — on the scale of galaxies and clusters of galaxies. However, most cosmologists believe that on larger scales it becomes isotropic and homogeneous. (This is the “cosmological principle”.)

The data now available give a quantitative picture of the gradual transition from small-scale fractal behavior to large-scale homogeneity. It is also believed that the fractality of nature at the smallest scale is at most approximate, being ultimately limited by the atomic nature of matter.

Our theory seems to remove this limitation by assuming that Maxwell's curl equations can be applied regardless of scale. In the following we show how fractality enters our theory.

The Iconic Equation of Fractal Mathematics

$Z_{n+1} = Z_n^{2} + C, \quad\text{where } Z_n = a_n + b_n i \text{ and } C = a_0 + b_0 i.$

It can be separated into its real and imaginary components:

Real part: $a_{n+1} = a_n^{2} - b_n^{2} + a_0,$

Imaginary part: $b_{n+1} = 2 a_n b_n + b_0.$

The results of the iteration process are often shown as points on the complex plane:

Picture credit: Christian Feuersänger, Stefan Kottwitz.

The points (a, b) in the blue region belong to the Mandelbrot set: if the sequence $Z_{n+1}$ does not run away to infinity, the point belongs to the set. For the points that are not inside the set, colors are added according to how many iterations were required before the point was judged not to belong to the set. The iterative sequences inside the main bulb converge to zero.
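For concreteness, the membership test just described can be written out directly in the component form of the iteration (a sketch of ours; the iteration count and bailout radius are the conventional choices):

def mandelbrot_escape(a0, b0, max_iter=100, bailout=2.0):
    """Iterate a_{n+1} = a_n^2 - b_n^2 + a_0, b_{n+1} = 2 a_n b_n + b_0.
    Returns the iteration at which the orbit escapes, or max_iter if it
    stays bounded (the point is then taken to belong to the Mandelbrot set)."""
    a, b = 0.0, 0.0
    for n in range(max_iter):
        a, b = a * a - b * b + a0, 2 * a * b + b0
        if a * a + b * b > bailout * bailout:
            return n       # escaped: colour by n in the usual pictures
    return max_iter

print(mandelbrot_escape(0.0, 0.0))   # 100: inside the set (orbit converges to 0)
print(mandelbrot_escape(1.0, 0.0))   # 2: escapes after a couple of iterations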

The general form of the sequence is the following:

$z_{n+1} = f(z_n), \quad z = x + iy, \quad f(x + iy) = f_1(x, y) + i f_2(x, y),$

$x_{n+1} = f_1(x_n, y_n), \quad y_{n+1} = f_2(x_n, y_n).$

This function belongs to a special class of maps: it satisfies the Cauchy–Riemann equations

$\frac{\partial f_1}{\partial x} = \frac{\partial f_2}{\partial y}, \qquad \frac{\partial f_1}{\partial y} = -\frac{\partial f_2}{\partial x}.$
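One can verify symbolically that the component maps of the Mandelbrot iteration satisfy these equations (a check of ours, using SymPy):

import sympy as sp

x, y, a0, b0 = sp.symbols('x y a0 b0', real=True)
f1 = x**2 - y**2 + a0    # real part of z**2 + C
f2 = 2*x*y + b0          # imaginary part of z**2 + C
print(sp.simplify(sp.diff(f1, x) - sp.diff(f2, y)))    # 0
print(sp.simplify(sp.diff(f1, y) + sp.diff(f2, x)))    # 0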

These equations have the form of Maxwell curl equations. But there is something more. Harry Bateman wrote our fundamental equations (the Maxwell curl equations) compactly like this:

$\nabla\times\mathbf{M} = \frac{i}{c}\,\frac{\partial\mathbf{M}}{\partial t}, \quad\text{where } \mathbf{M} = \mathbf{H} + i\mathbf{E}.$

Each solution M satisfies the equation

$\mathbf{M}\cdot\mathbf{M} = (H^{2} - E^{2}) \pm 2i\,(\mathbf{E}\cdot\mathbf{H}) = I_1 \pm 2i\,I_2.$

The vectors I1 and I2 Bateman called invariants, and he defined: “When the invariants are zero over a given domain the field may be called self-conjugate for this region.” The self-conjugate condition requires that the magnetic energy density equal the electric energy density and that the electric field be orthogonal to the magnetic field. Being an electromagnetic TEM standing wave, our fermion model fulfills the Bateman conditions at the surface. (For a fully developed stable particle the invariants I1 and I2 are zero.)

It is not known whether Bateman recognized that the constraint M·M = 0 is sufficient to guarantee that the complex vector M generates a minimal surface. In any case, the result has its basis in S. Lie's theorem: every holomorphic function generates a minimal surface.

We can now compare the Bateman condition, which is the vector Dirichlet condition, with the Mandelbrot iteration sequence:

$\mathbf{M}\cdot\mathbf{M} = (H^{2} - E^{2}) \pm 2i\,(\mathbf{E}\cdot\mathbf{H}),$

Real part: $a_{n+1} = a_n^{2} - b_n^{2} + a_0,$

Imaginary part: $b_{n+1} = 2 a_n b_n + b_0.$

The relationship between these and the scalar components of the solutions of the Bateman-Maxwell equation above is manifest. As noted earlier (in the section Physics of the Riemann Hypothesis), if the vectors E and H are to form a wave, then the following dispersion relation must be satisfied:

$\omega^{2}\mu\varepsilon - \mathbf{k}\cdot\mathbf{k} = 0.$

Bateman's construction is valid if μ and ε are constants. This is not a restriction in forming a three-dimensional particle: it can be composed of nested minimal surfaces, because we can assign differing values of energy density to them, provided that the wave number k is selected accordingly.

So we have electromagnetic waves composed of vectors E and H, with energy densities E² and H², “guided” by a wave vector k whose real and imaginary parts depend on energy density. This is the hallmark of nonlinear electrodynamics. There is an intimate connection between wave number and energy density, as can be seen in the equation below. It is the nonlinear vector Helmholtz equation from which we derived our particle model.

$\nabla^{2}\mathbf{E} + (k_0^{2} + g E^{2})\,\mathbf{E} = 0.$
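As a consistency check (ours, in a one-dimensional reduction), the sech envelope used for the boson model earlier does solve this cubic-nonlinear equation, provided we assume an evanescent background term k₀² = −b² and a nonlinearity strength g = 2b²/A²; both choices are assumptions made for the sketch:

import sympy as sp

z, A, b = sp.symbols('z A b', positive=True)
E = A * sp.sech(b * z)        # sech envelope of the boson model
k0sq = -b**2                  # assumed evanescent background
g = 2 * b**2 / A**2           # assumed nonlinearity strength
residual = sp.diff(E, z, 2) + (k0sq + g * E**2) * E
print(sp.simplify(residual))  # 0: the envelope satisfies the 1-D equation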

$Z_{n+1} = Z_n^{2} + C, \quad\text{where } Z_n = a_n + b_n i \text{ and } C = a_0 + b_0 i.$

Considering all the above, if we look at the Mandelbrot iteration procedure we can see a scalar complex component of a wave vector, namely $Z_{n+1}$. It depends on the energy density $Z_n^{2}$. C is interpreted as the initial local energy density, which determines the initial wave vector $k_0$ and initiates a process that leads to the formation of an electromagnetic particle, provided that C belongs to the Mandelbrot set.

Everything in Nature tends to a state of minimal energy. This tendency and Maxwell's curl equations work together in shaping the fermion. We have already come to the conclusion that emission and absorption are chaotic events possessing bifurcation properties. As soon as a small “seed vortex” fermion has been shaped, the process continues as absorption, which does not end until the energy density in the region decreases below the critical level. The bifurcation parameter (the triggering condition) is the local (tensorial) energy density.

All the above shows that matter, in the form of electromagnetic standing waves, emerges from electromagnetic chaos!

The Grand Cycle

In the beginning there is a flash of light. An enormous explosion takes place at some location in space. The event may well be called the “primordial fireball”.

The region of the fireball is rapidly expanding, but because the energy is of electromagnetic nature, its motion is slowed down in regions of high energy density. This results in longitudinal compression solitons, propagating outwards as portions of spherical shells or sheets. To these structures we can assign the concepts of gradient, divergence and Laplacian. At the same time Maxwell's curl equations come into effect and the sheets begin to evolve subject to the most fundamental principle of Nature.

Here we see the principle in question shaping the surfaces of a sample of water in free fall. Nature has the tendency to form minimum-energy structures, including minimal surfaces. At the most basic level these are electromagnetic surfaces. Sheets aggregate into filaments, which in turn break into beads, i.e. particles. From this it follows that mass becomes distributed into voids, sheets, and filaments in space.

Soon after the explosion the new-born particles may have a fully developed electromagnetic minimal surface, M·M = 0, but this does not guarantee that they are stable. On the contrary, they are highly unstable, and this has consequences. One thing is of great importance: a primordial explosion may leave behind a single large vortex of electromagnetic energy, a.k.a. a giant fermion. New matter now exists, but in unstable form. This is the starting point of the Grand Cycle.

On the Origin of Small, Medium & Large Jets

In the 1960s astronomers began to make images of extragalactic radio sources using interferometers. They soon learned that some sources are extremely broad and consist of two well-separated lobes of emission lying on opposite sides of a “host galaxy”.

As interferometers improved in resolution, finer details were revealed. Narrow jets were discovered. They extended thousands of light-years from one lobe toward the other. Today we know that the jets extend all the way from a tiny nucleus to two bright “hot spots” in the lobes.

Three questions arise immediately: what are those tiny nuclei, why can the narrow jets travel thousands of light-years without dispersing, and what happens at the hot spots?

The three answers are the following. The nuclei are large-scale fermions, and the jets are beams of large-scale bosons, electromagnetic solitons that finally become unstable and disperse into a cloud of smaller-scale particles at the hot spot. New matter condenses from the energy present at the hot spots, much as in the “primordial fireball” described by the Big Bang theory. Matter first emerges as particles of high-energy plasma, which then cools down.

Mathematics of the Hot Spot

We begin by identifying supernovas and the hot spots of jets as Rolf Hagedorn's “fireballs”. He worked with statistical models of particle production and came up with a novel idea: heavy particles are somehow composed of lighter ones, and these again of still lighter ones, and so on. And by combining heavy ones, one would get still heavier ones. The crucial idea was that the composition law should be the same at each stage. Today we call it self-similarity.

Developing Hagedorn's idea mathematically leads to the Tsallis distribution. It pops up nearly everywhere in high-energy physics and fits almost all particle spectra from collision experiments. It applies to solar flares. It also fits the Crab pulsar, a supernova remnant which emits in radio, optical, X-ray and soft γ-ray wavelengths. We argue that the fireball process is of universal nature and works similarly at all scales. It is described by the Tsallis entropy subject to a single constraint: conservation of electromagnetic vorticity. In other words, when a large boson or fermion disperses into smaller particles, total volume is conserved.
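For reference, the Tsallis (q-exponential) distribution that appears in these fits is f(E) ∝ [1 − (1−q)E/T]^{1/(1−q)}, which reduces to the Boltzmann factor e^{−E/T} as q → 1 and develops the observed power-law tail for q > 1. A minimal sketch, ours, in Python:

import numpy as np

def tsallis(E, T, q):
    """q-exponential; reduces to the Boltzmann factor exp(-E/T) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(-E / T)
    return np.maximum(1.0 - (1.0 - q) * E / T, 0.0) ** (1.0 / (1.0 - q))

E = np.linspace(0.0, 10.0, 6)
print(tsallis(E, T=1.0, q=1.0))    # exponential decay
print(tsallis(E, T=1.0, q=1.1))    # heavier, power-law-like tail (q > 1)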

Both bosons and fermions are electromagnetic “vortex rings”. For this reason one may well infer that Helmholtz's conservation laws for localized vorticity are at work behind conservative processes, such as pair production, emission, absorption, and parametric conversion of photons.

In the opening paragraph of the article Generalizing The Planck Distribution Andre Souza and Constantino Tsallis write the following:

“Although the present generalization is mathematically simple and elegant, we have unfortunately no physical application of it at the present moment. It opens nevertheless the door to a type of approach that might be of some interest in more complex, possibly out-of-equilibrium, phenomena.”

The phenomenon anticipated by the authors is the formation of new matter. Wherever the Tsallis distribution of radiation (or of the energy of fermions) is observed, the same process is going on, regardless of scale: large vortices of electromagnetic energy dispersing into smaller vortices, conserving the initial vorticity. This is of course an out-of-equilibrium phenomenon, and finally the Boltzmann-Gibbs equilibrium is reached. The deviation of the Tsallis parameter q from unity measures how far the process is from equilibrium.

The idea of the Hagedorn-Tsallis process is already included in particle physics!

“The hypothesis of the existence of immutable elementary particles has been abandoned: elementary particles can be transformed into radiation and vice versa. And when they combine into greater units, the particles do not necessarily preserve their identity; they can be absorbed into a greater whole.” The picture and the quotation above are taken from the web page http://abyss.uoregon.edu/~js/glossary/elementary_particles.html.

A Hagedorn-type energy cascade is said to be the most fundamental dynamical feature of turbulence. So we can state that new particles are formed in turbulent regions of high electromagnetic energy density, such as the hot spots of the radio lobes of galaxies and the tiny hot spots of collider experiments.

Nucleons and electrons born in this process are in excited states and radiate as they cool down. Relativistic electrons and the synchrotron mechanism (charged particles spiraling around magnetic field lines at relativistic speeds) are not needed to explain the radiation from hot spots (or non-thermal radiation in general).

Relation Between the Tsallis Parameter q and Temperature Fluctuations

“The parameter q plays a central role in the Tsallis distribution and a physical interpretation is needed to appreciate its significance. ... For q = 1 we have an exact Boltzmann distribution; for values of q which deviate from 1, we have a corresponding deviation. From this point of view the Tsallis distribution describes a distribution of (Boltzmann) temperatures. A deviation from q = 1 means that a spread of temperatures is needed instead of a single value.”

This suggestion is from the article Near-thermal equilibrium with Tsallis distributions in heavy ion collisions by J. Cleymans et al. It is in line with our view. We define the thermal equilibrium of individual particles through their “willingness” to emit or absorb. This degree of instability must then be the temperature of an individual particle. So a spread of temperatures is possible in the dispersion processes, in the hot spots of jets. The final stage of a jetting emission is a region of space (the former hot-spot region) in which a variety of fermions have reached thermal equilibrium, expressed by Planck's law. The variable q has attained the value q = 1. The Hagedorn-Tsallis dispersion process is an integral part of the working of the Universe, just like the 1/f noise and the Riemannian particle spectra.
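The quoted interpretation can be made quantitative. In Beck-style superstatistics (our choice of illustration, not taken from the Cleymans article), averaging Boltzmann factors over a gamma-distributed spread of inverse temperatures reproduces a Tsallis distribution with q = 1 + 1/n, where n controls the width of the spread:

import numpy as np

rng = np.random.default_rng(0)
n, beta0, E = 10.0, 1.0, 3.0
beta = rng.gamma(shape=n, scale=beta0 / n, size=200_000)  # spread of temperatures
avg = np.exp(-beta * E).mean()                            # averaged Boltzmann factor
q = 1.0 + 1.0 / n
tsallis = (1.0 + (q - 1.0) * beta0 * E) ** (-1.0 / (q - 1.0))
print(avg, tsallis)   # agree closely; a single temperature means n -> inf, q -> 1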

Jets at All Scales

Astrophysicists have already noticed that “jets are everywhere, on all scales”, but it is not yet understood that they exist under our noses in the Sun, the planets, and their moons.

As explained above, immediately after the primordial explosion is over, the newly born giant fermions turn out to be overly energetic and therefore unstable. They begin to emit giant bosons as sharp beams. These beams become turbulent after propagating a certain distance. In these turbulent regions smaller-scale fermions are formed. They are also unstable (or excited) and we can detect their gamma and X-ray radiation.

In the light of our theory we expect to see laser-sharp jets at all scales, due to the axial nature of the emission process and the non-spreading, soliton nature of bosons.

It must be remembered that we have founded everything concerning particles and the processes of emission and absorption on Maxwell's curl equations, which are scale invariant. We have a fractal theory: small-scale events are repeated at a large scale. There is no clear distinction between microscopic and macroscopic events.

Jets are said to be the most common and most problematic objects in hadronic collider physics. Jets of particles emerge from collisions, flying away from the collision point. In proton collisions, jets usually appear in pairs, emerging back-to-back. We have already come to the conclusion that the emission process takes place along the symmetry axis of the fermion. If the energy density is uniform enough at the position of the overly energetic fermion, it emits symmetrically from both ends. On the right we see typical hadron tracks reconstructed by the tracking detector.

Physics news: The world's most powerful proton smasher is preparing for its biggest run yet, which scientists hope will uncover new particles that could dramatically change our understanding of the Universe. “We are exploring truly fundamental issues, and that's why this run is so exciting ... Who knows what we will find.”

Our theory predicts the following: more and more intense jets will be observed, and more and more unstable particles will be observed, because the accelerated particles simply get bigger as the energy is increased. The Hagedorn-Tsallis process disperses them into stable particles and bosons as before, only starting from higher energies. (But the presence of the H-T process is perhaps not clearly seen, because the analysis of the collision results uses the simplistic concept of “charge”.)

How do Scientists Measure the Size of an Electron?

The urge to construct more and more powerful particle crashers and smashers originates from the erroneous belief that probing structures at ever smaller scales demands accelerators of ever higher energy. It is believed that the resolution is directly related to the energy of the scattering process, i.e., at higher energies one can observe smaller distances. But the concept of optical resolution cannot be applied to fermions. Considering the structure of the electron, the present understanding is that “...any structure [of the electron] is too small scale for us to probe at the energies available.” In our view this is like trying to resolve the structure of a fly by shooting at it with a monster cannon.

Jet Quenching

If a jetting fermion feels a strong gradient of energy density, emission takes place at one end only. This is the “jet quenching.”

National Radio Astronomy Observatory/National Science Foundation

On the right: radio image of the galaxy M87 clearly shows jet quenching.

Ball lightnings are high-energy fermions. They are sometimes seen as twin fireballs that travel as a pair, separated by a glowing filament. Ever since fireballs were first reported, witnesses have described those that “spit out sparks and sprout fiery jets.” There is an ongoing scientific research program, Project Hessdalen, that is focused on “anomalous light phenomena.” But in our view they are not anomalous at all. If one takes a look at the Hessdalen data, it is all there: narrow jets, double-lobed sources of luminosity... There are images of extremely narrow jets from which the wobbling of the source can be seen, as below. (21 February 1984, Hessdalen, Norway.)

The character of the radiation source has also been studied by aiming electromagnetic radiation at it. Both a radar beam and a laser beam affect the fermion. Its discharging can be stopped for a moment by increasing the energy density at the fermion, exactly analogous to the way the “quantum state” of an electron in the Penning trap can be controlled by the use of a laser beam.

The Hessdalen phenomena are an example of “earth lights”. The sources of these lights, large fermions, are generated in areas of high energy density (= high tectonic strain) underground. These are often seismically active regions of the crust. (http://inamidst.com/lights/earth) Strong electric fields inside thunderclouds give rise to transient luminous events in the atmosphere, such as lightning-induced sprites and upwardly discharging blue jets. The sources are always the same: unstable fermions.

Formation of Spiral Galaxies

Although the formation of galaxies is not presently understood, some key processes have been proposed, such as primordial collapse (the collapse of individual gas clouds early in the history of the Universe) and secular evolution (formation by internal processes, resulting in spiral arms and bars). It has also been noticed that the bulge and halo of Sa and Sb galaxies are composed mostly of old stars. This indicates that the bulges and halos of spiral galaxies probably formed through the primordial collapse of individual gas clouds early in the history of the Universe. One more feature of spiral galaxies is the ongoing star formation evident in their thin disks. Presently it is assumed that all spiral galaxies have supermassive black holes at their centers and that jets of plasma are expelled from the central black hole. But what causes material to shoot outwards from the vicinity of a black hole? By definition, a black hole sucks material (and even light!) into its event horizon, so it is the worst possible explanation. After clearing the galaxy, however, the jets inflate large radio bubbles.

In short, astronomers have no idea how spiral galaxies form, nor can they explain how and why jets are formed. The underlying reason for this inability to explain is the belief that no new matter has been created since the Big Bang.

The basic mechanism is clear. A grand primordial explosion (super-supernova?) may leave behind a single large vortex of electromagnetic energy. New matter now exists, but in unstable form. It is a supermassive fermion: a standing-wave structure of electromagnetic energy subject to Maxwell's nonlinear curl equations, despite its size. After its formation it is immediately unstable and shoots out two beams of bosons, solitons, which do not disperse until they are far from the source of emission. In the picture below one can see the source of two jets. It is a gigantic fermion which radiates gigantic electromagnetic solitons. From the picture it is also obvious that the core fermion rotates.

At the beginning of a grand cycle a giant fermion emerges, emitting large bosons in two opposite directions. The higher the energy of the radiating nucleus (and thereby the energy of the bosons), the longer the “bar” (= 2R). During the process the central fermion diminishes, and likewise the energy (M) of the bosons.

Cygnus A. The rotation of the source of the jets is obvious in the picture.

Appearance of Spiral Galaxies

What is behind Edwin Hubble's classification scheme? In our view, the appearance of spiral galaxies depends on the size and angular velocity of the fermion at their center. If the central fermion is large, the turbulent regions are far from the center and the type of the galaxy is SB. SBa rotates fast, SBb slowly, and SBc still more slowly. As the central fermion becomes smaller, the turbulent regions move closer and closer to the center. The bar (= jets) diminishes and the type of the galaxy becomes Sa, Sb, or Sc.

Rotation Curves of Spiral Galaxies

Let the function M(R) be the simplest possible: M = C·R, in which C is a constant. The core fermion rotates with constant angular velocity and emits at a constant rate (quanta/time). This results in a constant number of quanta on every full cycle of rotation. The length of the bar, 2R, shortens linearly with time. Because the energy of a quantum is M = C·R, this is also the mass distribution of our new spiral galaxy: M(R) = C·R. So the rotation curves are flat (a numeric sketch of the flatness follows at the end of this section). The jet-based theory of formation, morphology, and mass distribution gives the basic features of spiral galaxies so easily that hardly any other explanation is simpler.

The emission process is similar for all fermions regardless of their size, namely quantized. It is not hard to find evidence of the quantum nature of jets. Astronomers call these quanta “bullets” or “bright knots”. There is a movie of HH 30 jet motions over the span of a year; it shows quantized emission from the object. Another video of this kind is the following: http://hubblesite.org/newscenter/archive/releases/2011/20/video/f/. The sources of jets are often so unstable that their emissions seem continuous, but in fact they are quantized, just as the emissions of atomic-scale particles are.
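The promised sketch: the flatness follows in one line from Newtonian circular motion, v(R) = sqrt(G·M(R)/R), and with M(R) = C·R the radius cancels, leaving v = sqrt(G·C). A numeric illustration, ours, with an arbitrary (assumed) value of C:

import numpy as np

G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
C = 1.0e21                     # assumed linear mass gradient M(R) = C*R, kg/m
R = np.logspace(19, 21, 5)     # radii in metres, galactic scales
v = np.sqrt(G * (C * R) / R)   # circular speed; R cancels out
print(v / 1e3)                 # ~258 km/s at every radius: a flat rotation curve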

An interesting example of a medium-scale phenomenon in which a fermion emits a fermion (beta decay) is the Norway Spiral. The explanation of this episode is twofold. The source of the blue corkscrew beam was a wobbling fermion, but it is not visible in the picture.

The parent fermion also ejected a spawn fermion in the beam direction. It dissipated its energy quite rapidly through its two jets, but the blue jet of the parent fermion remained much longer. Videos of the phenomenon show that the matter condensed from the jets has radial momentum. The mechanism here is that of beta decay, only the scale is larger.

https://youtu.be/Hfy2JMcyMyA https://youtu.be/KihpSkLMHr4 In these videos the same phenomenon is viewed from the ground and from an airplane. The parent particle arrives from space. It becomes unstable (perhaps due to its decreasing speed) and starts to output visible (faint blue) jets. The H-T process evolves explosively and leaves behind a spawn fermion which is also unstable. It has its own jets in a new plane of rotation. There are two sequential, explosive H-T processes, and two sonic booms can be heard from the sky. (Spiral over Western Canada)

Beams from wobbling sources have been observed at all scales. In the Hessdalen picture (p. 141) we see a small-scale emission. On the left we have a larger-scale example with continuously shortening jets. There are several sightings of “spiral ufos” from all over the world. They are all the same phenomenon: unstable, jetting fermions in rotational, wobbling motion.

VLA image of the microquasar SS 433. Credit: Blundell & Bowler, NRAO/AUI/NSF

https://youtu.be/a3_4Bc6tRp4 This footage shows a most spectacular event, most likely triggered by a close flyby of the SpaceX Falcon 9 rocket. A parent fermion emits several spawn fermions, all of them rotating and with two jets. A closer look at the emitted spawn fermions can be seen here: https://youtu.be/rsAg8MzwM9A?list=PLahanC-MblA5YVNc9Tx7Mm2eT-1RWC4xl, starting at 8:30. Charged particles are continuously created at the hot spots, but not at the same pace. The hot spots evolve at different potentials, and a continuous lightning discharge can be seen between them.

Starting at 0:50 a “black ring in the sky” can be seen. It is a hot spot producing magnetite (Fe3O4, a black iron oxide).

Why Do We Have the Science of Planetary Radio Astronomy?

In 1955 B. Burke and K. Franklin discovered bursts of radio emissions from Jupiter. This discovery marked the birth of planetary radio astronomy. Planets stopped being mere material bodies following Newton's laws of gravitational attraction. Today we know that all planets emit bursts and other radio signals over a wide band of frequencies. Why?

Jupiter, Saturn, Uranus, and Neptune are the giants of the solar system. They contain more than 99 percent of the solar system's planetary mass, and it has been said that “the giant planet story is the story of the solar system.” http://www.nap.edu/catalog/13117/vision-and-voyages-for-planetary-science-in-the-decade-2013-2022 In our opinion the story is the “unstable stability” of giant fermions at the cores of giant planets and other solar system objects with hot poles. To understand the phenomena of our own planetary system, especially the radiation of the Sun and the giant planets, one must realize that a giant, unstable source of electromagnetic energy is hiding at the core of the “hot pole” planet, and its jets break near or through the surface.

Jupiter Hot Spot

“A pulsating hot spot of X-rays has been discovered in the polar regions of Jupiter's upper atmosphere by NASA's Chandra X-ray Observatory. Previous theories cannot explain either the pulsations or the location of the hot spot, prompting scientists to search for a new process to produce Jupiter's X-rays. Bright infrared and ultraviolet emissions have also been detected from this region in the past. The X-rays were observed to pulsate with a period of 45 minutes, similar to the period of high-latitude radio pulsations detected by NASA's Galileo and Cassini spacecraft.” Chandra X-ray Observatory.

Surface Phenomena

Large bosons propagating inside a celestial object become unstable at the surface, because the energy density abruptly decreases. This causes explosions and craters that are not a result of meteor impacts. Continuous beams cause elongated craters, such as the double ridges on the surface of Europa or the sinuous rilles on the lunar surface. Jupiter is a gas giant, so it has no solid surface, but it is obvious that the hot spots are just below Jupiter's visual surface. On the surface of the Sun these appear as filaments. The cycloidal features of some ridges on Europa are the manifestation of the wobbling motion of the source of the beam. On Enceladus the same process is ongoing, as can be seen in the infrared images of the moon's “tiger stripe” fractures. During the formation of the rille Vallis Schröteri on our own Moon, the source had lost a considerable part of its energy and, consequently, the energy of the boson explosions at the surface had also diminished. On the Earth, these surface phenomena are the cause of the “mysterious craters”. For the interested reader, the article The Craters Are Electric by Michael Goodspeed shows clearly that all craters cannot be of impact origin.

The Winds of Jupiter and Other Giants

The striped cloud bands on the giant planets Jupiter, Saturn, Uranus, and Neptune are divided into belts and zones. In these stripes winds flow at high speeds in opposite directions. These winds are called “zonal winds”. It is obvious that a zonal, high-speed wind system needs a powerful source of energy and some mechanism to direct the flow of gases. Between the stripes there must be turbulent regions. Turbulence is a highly dissipative phenomenon, and the opposite flows would quickly lose their kinetic energy without a high-power driver. On the right we see a visualization of the spherical harmonics function. It is the angular portion of our fermion model solution. Various modes of oscillation of striped appearance are possible, and the directions of the electric fields are opposite on adjacent stripes (a small numerical illustration of such striped modes follows after this section). We do not try to figure out how, exactly, the electromagnetic angular momentum is transferred to the wind system. But we assume, based on the geometrical similarity of the two systems, that the underlying reason behind the zonal wind phenomenon is a giant vortex of electromagnetic energy, a.k.a. a fermion, at the core of the planet. The assumption draws strength from the fact that it explains other mysterious features of the giant planets and of Jupiter itself.

Jupiter's Electromagnetic Emissions

The picture below is selected from a larger sample of data in the article Planetary radio astronomy: Earth, giant planets, and beyond by H. O. Rucker, M. Panchenko, and C. Weber.
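The striped modes promised above: the zonal (m = 0) spherical harmonics have the Legendre polynomials P_l(cos θ) as their angular factor, and adjacent latitude bands carry opposite signs, mirroring the oppositely directed fields assumed on adjacent stripes. A sketch of ours, with an arbitrary choice l = 6:

import numpy as np
from scipy.special import lpmv   # associated Legendre function P_l^m(x)

theta = np.radians(np.arange(5, 180, 15))   # colatitudes from pole to pole
stripe = lpmv(0, 6, np.cos(theta))          # zonal harmonic: l = 6, m = 0
print(np.sign(stripe))   # bands of alternating sign across latitude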

Jupiter's sporadic radio emissions can at times be very intense. They are exceeded in intensity only by strong solar radio bursts. Above we have examples of the periodic bursts observed by STEREO/WAVES, Cassini/RPWS and Wind/WAVES. The periodic bursts are marked by arrows. First we notice that the bursts have the character of whistler radiation: a rapid decrease in frequency. This can be connected to the Tsallis process described earlier in the section Mathematics of the Hot Spot. Secondly, the data in the picture represent a case of jet quenching: the last two bursts come from one jet only.

Voices from Outer Space

That is how “whistler radiation” was first reported in 1953. Whistlers are bursts of very-low-frequency (VLF) electromagnetic energy produced by ordinary lightning discharges. A whistler begins at a high frequency and in the course of time drops in frequency to a lower limit of about 1 kHz. Some whistlers are very short, lasting a fraction of a second; others are long, lasting two or three seconds.

Often whistlers occur in groups. In one type of group the whistler appears to echo several times with an equal time-lapse between different members of the train of echoes. In each whistler of the group the rate of decrease of frequency is less than that in the preceding whistler. These groups are called “echo trains”. Sometimes two or more distinct, similar whistlers appear to overlap in time; these are called multiple whistlers. Many whistlers are preceded by a sharp impulse, produced by a stroke of lightning. The facts above are taken from the book Whistlers and Related Ionospheric Phenomena by R. A. Helliwell.

Explanations of whistlers are based on the existence of “density ducts of plasma”, capable of guiding whistler waves in the earth's magnetosphere. These explanations are, in our opinion, wrong. To understand whistler phenomena one must start from nonlinear quantum electrodynamics. The features of whistlers, given by R. Helliwell above, are all expected consequences of the energy cascade which we described earlier as the Hagedorn-Tsallis process. Bursts of radiation, subsequent whistler phenomena and high-energy observations in general continue to pose outstanding astrophysical problems. The reason is threefold:

1. Non-existent understanding of the structure of particles.
2. Non-existent understanding of the emission process.
3. The unquestioned belief that the fundamental mechanism of electromagnetic radiation is the acceleration or deceleration of electrons.

Modern astrophysics sees the working of the universe as more complex than it is, because research is forced, by the said belief, to focus on finding a mechanism to accelerate electrons to relativistic velocities, and then on discovering why the region of radiation emission is occupied by a magnetic field, required for bremsstrahlung or synchrotron radiation. In our theory fermions of all sizes radiate as they become unstable due to decreasing energy density at their positions. They emit at an exponentially decaying rate until equilibrium is reached. In hot spots fermions are born highly excited. This is why the spectra (and their temporal evolution) of hot spots, solar flares etc. are as they are.

Universality of the Whistler Phenomenon

High-energy solar flares are the most common cause of type III solar radio bursts. On the left we have an example of an interplanetary type III radio burst dynamic spectrum. It is composed of four different frequency bands, observed using four different receivers. The picture is from the article A Review of Solar Type III Radio Bursts by H.A.S. Reid and H. Ratcliffe. X-ray and gamma-ray emissions are frequently observed during solar flares. Terrestrial gamma-ray and X-ray flashes originate from thunderclouds. Below we have a spectrum of a terrestrial whistler, measured by the satellite DEMETER.

Whistler phenomena are the radiative manifestation of the Hagedorn-Tsallis process, which is as fundamental and ubiquitous as the 1/f noise.

Whistler Radiation from the Sun

It has been observed that the X-ray and the centimetric bursts are best correlated for the first events in a flare; for subsequent events they are not well correlated. This type of correlation between the two phenomena can be understood if both emissions are assumed to come from a common source. But this leads to

difficulties because, presently, it is believed that the cm-wavelength emission from flares is through synchrotron radiation, while the X-ray emission is from bremsstrahlung. In our theory we have only one mechanism for emissions. The diagram below fits perfectly the Hagedorn-Tsallis process, starting from a single point:

Picture credit: New Culgoora Radio Technical Report, IPS-TR-93-03, June 1993. The figure is a typical diagram showing the dynamic spectra in frequency vs. time of five different types of solar radio bursts associated with a large solar flare, which are visible manifestations of the process in question.

Whistler Sounds from the Interior of the Earth

The Slow Down spectrogram. “The name was given because the sound slowly decreases in frequency over about 7 minutes. It was recorded using an autonomous hydrophone array. The sound has been picked up several times each year since 1997.” Wikipedia. The explanation of these sounds will be given in the following section.

Strange Sounds from the Atmosphere of the Earth

One can spend hours on YouTube listening to recordings of “mysterious/eerie” sounds from the sky, from all over the world. They may be booms or continuous sounds. The latter sounds have one thing in common: they all sound like a large column of air vibrating in a cylinder. They are sounds that resonate almost like an organ pipe.

The sounds are mysterious because their source is never seen. But sometimes it can be seen. On the left we see the suspected culprit behind all the mysterious humming, booming and whistling in earth, sea and air. (https://www.youtube.com/watch?v=LsgrRmHW8ro starting at 9:16.) From the picture alone this would be explained as a tornado, but it has the peculiar mystery sound.

The physical explanation is as follows. Later in this article it will become clear that a sunspot can be seen as a window into the interior of a jet. According to the Stanford Solar Center, “the magnetic fields in sunspots are extremely strong.” So the jets are narrow columns of extreme magnetism. Nitrogen is slightly diamagnetic and oxygen is slightly paramagnetic. A strong enough magnetic field will repel nitrogen and attract oxygen. The molecules of air become accelerated according to the strength of the magnetic field. They vibrate inside a long, narrow column of air, which becomes a modulated source of sound. If the beam is stable, meaning that it does not lose energy on its way, the only effect is a hum or the sound of an organ pipe from the sky, without a visible source. All instability manifests itself as sonic booms and flashes of light, which are attendant to Hagedorn-Tsallis processes. Here without sounds: https://www.youtube.com/watch?v=EP8V10OiXE0
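The stated asymmetry between the two gases can be checked with their magnetic volume susceptibilities (approximate handbook-order values; the field-gradient term below is an arbitrary assumption). The force per unit volume in a field gradient is F = (χ/2μ₀)∇(B²), so paramagnetic O₂ (χ > 0) is pulled toward the strong field while diamagnetic N₂ (χ < 0) is pushed out:

import math

mu0 = 4e-7 * math.pi                   # vacuum permeability, T m/A
chi = {"O2": +1.9e-6, "N2": -6.8e-9}   # volume susceptibilities (approximate)
grad_B2 = 10.0                         # assumed gradient of B^2, T^2/m
for gas, x in chi.items():
    F = x / (2 * mu0) * grad_B2        # force density, N/m^3
    print(gas, f"{F:+.2e}")            # opposite signs: O2 attracted, N2 repelled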

Stability of Saturn's Rings

James Clerk Maxwell wrote his Adams Prize essay On the Stability of Saturn's Rings in 1859. He investigated the stability of various configurations of solid, liquid and particulate rings orbiting around Saturn. Maxwell's conclusion was that “the only system of rings which can exist is one composed of an indefinite number of unconnected particles, revolving around the planet with different velocities according to their respective distances”. Things haven't changed. “Unconnected particles” is of course a nonphysical hypothesis; material particles cannot be completely neutral with respect to each other.

Waves in Saturn's Rings

Saturn's rings are decorated with several kinds of waves. In the picture the little moon Daphnis orbits within the Keeler gap in Saturn's rings. It perturbs the orbits of the particles forming the gap's edge and sculpts the edge into waves having both horizontal (radial) and out-of-plane components. From this image one can deduce many things. One can see waves that are quickly damped, which means dissipative interactions between the particles of the ring. Considering the obvious stability of the rings, dissipation rules out all explanations that are based solely on gravity. It must be that every particle sits at the bottom of a potential well from which it can be lifted for a moment, but it soon returns to its stable position and the surface of the ring becomes calm again. To be exact, every particle of the ring resides in its own potential groove which stretches around the equator, with its bottom towards Saturn.

On the right is a possible electric multipole field of a fermion. We assume that the particles of the ring system are charged and repelled by the core fermion of Saturn, by means of a field like the one featured in the picture. The field in the picture also possesses sources of two different jets in delicate balance.

“Astronomers originally believed that there wasn't a hexagon at Saturn's south pole, but new research found one there too. So why is the hexagon there?” “The movement of the hexagon could therefore be linked to the depths of Saturn, and the rotation period of this structure, which, as we have been able to ascertain, is 10 hours, 39 minutes and 23 seconds, could be that of the planet itself.” With this we agree, and we support it by explaining that the structure of the field of the core fermion manifests itself not only at the equator but also at the poles.

As we stated in the section Emission Process as the Origin of Field Quantization, the emission process is described by the Kadomtsev–Petviashvili (KP) equation. We assume that the emission wave (the jet) is a KP solution of genus 2, because the special feature of these solutions is a spatial hexagonal structure.

Density Waves in Saturn's Rings

Credit: NASA/JPL/Space Science Institute

Our first impression of this picture is laminar flow. At least on the larger scale the flow is very predictable, which is the hallmark of laminar flow. In the section On Laminar and Turbulent Flows we already discussed these things: “In laminar flow these structures are maintained in a way that prevents molecules of same energy from coming to close interaction. ... This means that the electrons in that region have interleaved spectra, required for laminar flow.”

Charge is a surface effect and gravity is a volume effect. This offers an endless variation of repulsive and cohesive forces between the snowflakes, grains, and larger icy chunks of the rings. These forces are components of our universal, frequency-dependent and quantized force. Nature's tendency to form minimum-energy structures is manifested in the process of interleaving the spectra of particles, and this we have explained to be the prerequisite for laminar flow. The most delicate features of the universal force become visible in the ring system. The unexplained gaps in the rings are quantum effects. They follow from mathematics reminiscent of Bohr's model of quantized radius.

Origin of Saturn's Rings

Universe Today, 2013: “Rain is Falling from Saturn's Rings.” “Astronomers have known for years there was water in Saturn's upper atmosphere, but they weren't sure exactly where it was coming from. New observations have found water is raining down on Saturn, and it is coming from the planet's rings.”

This must be the other way around. There is a class of planetary objects (including the giant planets) known for their hot poles. In the section Mathematics of the Hot Spot we concluded that in hot spots there is always the same process going on, regardless of the scale: large vortices of electromagnetic energy dispersing into smaller vortices, conserving the initial vorticity. This means formation of new matter. Similarly, as in the case of Jupiter, there is a giant unstable source of electromagnetic energy at the core of Saturn, and its jets break near or through the surface. At the hot spots (a.k.a. hot poles) new matter is continuously created. Saturn has fabricated its own rings.

Spokes of the Rings

“Spokes are an aspect of the rings of Saturn not discovered until recently, when advanced pictures of the ring structure were seen. The spokes within the rings are faint, dark areas perpendicular to the rings that seem to grow and shrink. Little is known about the spokes, but they may be charged particles that float above the actual ring plain. The spokes rotate at the same speed as the magnetic field of Saturn, so they are thought to be associated with the planet's electromagnetic forces in some way.” Planetary Geology for Teachers, module 12. On this we agree.

“...we find that spoke activity observed on both sides of Saturn's rings occurs with a period equal to, within all uncertainties, the period of the SKR [Saturn's kilometric radiation] emissions arising from the northern SKR source, though a period equal to that of the southern SKR source seems also to be present.” The Behavior of Spokes in Saturn's B Ring by C. J. Mitchell et al.

In the section Physics of the Riemann Hypothesis we discussed waves existing only during emission and absorption, the nonlinear Stokes or cnoidal waves. During emission these waves propagate on the surface of the fermion and are finally focused at the pole region to form a jet. The perfect form of a stable particle is disturbed during emission. This means that the electric field at the equator plane becomes disturbed and the charged particles become accelerated perpendicularly to the ring plane. This explains why the occurrence of spokes appears to be correlated with the periodicities of the SKR emissions.

Saturn's Kilometric Radiation

Saturn has hot poles and gives off six times the heat it receives from the Sun. Present theories cannot explain this fact. In our view Saturn is one of many planetary objects that have an unstable source of electromagnetic energy at the core, whose jets break near or through the surface at the pole regions. Picture from the article Possible influence of the GWS on SKR periodicity by G. Fischer et al.:

Normalized peak-to-peak power of Saturn kilometric radiation modulation as a function of time from 2004 until early 2013. The ordinate shows the angular velocity (rotation rate) in degrees per Earth day (left side) and the corresponding rotation period in hours (right side). The normalized peak-to-peak power is given by the color bar... The duration of the Great White Spot (GWS) event is indicated. There are two periods related to SKR from different hemispheres until shortly after equinox [when the sun is directly over Saturn's equator], but there is mainly one period until 2013 (black dashed line), except for a few months in early 2011. The upper panel shows the planetocentric latitude of Cassini as a function of time. Narrowband emissions at 5 kHz are more frequently observed from high latitudes, leading to strong modulation signals during Cassini's high-latitude orbits.

This is expected, because the sources of SKR are the radio lobes of the jetting source at the core of Saturn.

Scientists Find That Saturn's Rotation Period Is a Puzzle

Scientists use the rotation rate of radio emissions from the giant gas planets such as Saturn and Jupiter to determine the rotation rate of the planets themselves because the planets have no solid surfaces and are covered by clouds that make direct visual measurements impossible.

“Saturn is unique in that its magnetic axis is almost exactly aligned with its rotational axis. That means there is no rotationally induced wobble in the magnetic field, so there must be some secondary effect controlling the radio emission.” (Space physicist Don Gurnett.) One more puzzle is the discovery of a difference in apparent rotation rate between Saturn kilometric radio emission from the northern and southern hemispheres. It has also been confirmed that the auroras and the radio emissions are physically associated.

The explanation of all this is that the radio bursts from Saturn's hemispheres are jetting, quantized emissions from the core fermion. The period of the rotation of Saturn's core is the same as the rotation period of the polar hexagons. The emission rates of quanta and the rotation of the core field are almost, but not quite, connected. The differences depend on the orientation of the field of the central fermion with respect to the local external magnetic field, which is mostly due to the Sun. These things are shown in the picture (Saturn's kilometric radiation) above. The picture also suggests that the two emission graphs may cross, which is obvious in our model. This has been proved in the article The reversal of the rotational modulation rates of the north and south components of Saturn kilometric radiation near equinox by D. A. Gurnett et al.

It is interesting that the SKR was observed to disappear in the 2 to 3 days following the Voyager 1 encounter. SKR was observed to disappear every 66 hr, coincident with the period of revolution of Dione. Moreover, the Voyager 2 observations showed an increase in SKR activity every 66 hours, rather than a decrease. (Saturn as a Radio Source by M. L. Kaiser et al.) These observations prove the extreme sensitivity of the central fermions of planets to the tensorial energy density at their positions.

Surface of the Sun

“Jets are transient, collimated and fast with respect to the thermal speed of the plasma in the jet. They are seen with innumerable different forms, scales and spectral signatures on the Sun.” Observations of solar X-ray and EUV jets and their related phenomena.

“Bright points (BPs) and their sometimes associated jets have recently been the subject of numerous papers due to the availability of new observations from the Hinode spacecraft.” Bright points and jets in polar coronal holes observed by the extreme-ultraviolet imaging spectrometer on Hinode.

“We now see that jets happen all the time, as often as 240 times a day. They appear at all latitudes, within coronal holes, inside sunspot groups, out in the middle of nowhere; in short, wherever we look on the sun we find these jets. They are a major form of solar activity.” – Jonathan Cirtain, Marshall Space Flight Center.

Sunspots and Coronal Loops

The schematic picture of the Hale-Nicholson law.

Our own Sun has a core fermion, just as the giant planets do. It emits big fermions that are repelled towards the surface. On the way they feel a decreasing energy density and therefore become unstable. Sunspots are surface regions which are hit by the jets from these submerged sources. Usually the jets cause two spots, located on different sides of the equatorial plane. Contrary to the present belief, sunspots are not relatively cool regions on the surface of the Sun. Instead, the high energy density of the umbra prevents particles from radiating until they have flown out of the umbra region. Near the surface local effects become dominant, and the existing sunspot sources determine the orientation of the newcomer. It rotates, and its jets show prominences and filaments.

Sunspots come in pairs with opposite magnetic polarity. This is a case of electron pairing on a large scale. The pairing itself is a realization of a local energy minimum, but the pair is also affected by the core fermion. In the picture above we see a pair in two different setups: one with the core fermion spin up and the other spin down.

The lone pair fermions have differing energies and therefore act differently when the energy-minimum positions are reached. One of the pair is nearer the core than the other, and this has consequences, as we will soon see. (It also gives rise to the “butterfly” pattern.)

Picture modified from the web page: http://ircamera.as.arizona.edu/NatSci102/NatSci102/lectures/suninterior.htm

Above left we see a piece of the surface of the Sun. It is composed of Bénard cells. We discussed a relevant subject earlier in the section On Laminar and Turbulent Flows: “Fluid flows can be divided into two distinct categories: smooth layered flows known as laminar flows, and chaotic flows, also called turbulent flows. Application of the ideas above leads to the following picture: in a quiescent liquid, molecules have formed clusters, which in turn have formed structures of minimum energy. In laminar flow these structures are maintained in a way that prevents molecules of same energy from coming to close interaction. [These structures can actually be seen in the infrared spectrum. The transition region between two Bénard cells has a continuum of temperatures. This means that the electrons in that region have interleaved spectra, required for laminar flow.]”

We already came to the conclusion that solar flares are visible manifestations of the Hagedorn-Tsallis process on the surface of the Sun. Flares are often, but not always, accompanied by a coronal mass ejection. The reason for this is that the H-T process is explosive in nature. In the process large particles (e.g. nucleons) of the same energy are born close together and repel one another violently. A solar flare is a sudden flash of brightness observed near the Sun's surface. If the associated H-T explosion occurs below the surface, a coronal mass ejection follows. The explosion lifts and accelerates an amount of mass, a piece of the surface of the Sun, as in the picture above. The Bénard cell structure remains as the mass is stretched during its motion, which is governed by the strong magnetic fields of the jets, and by gravity. The result is a laminar flow of plasma in the form of loops. Laminarity is visible for the same reason as in the rings of Saturn: interleaved spectra of particles.

The Sunspot Cycle

The Astronomical Journal published a paper in 1965 by Paul D. Jose. He noted that the Sun and the planets orbit around a point called the barycenter of the solar system. He discovered that the Sun returns near its starting position every 179 years, which is 9 times the synodic period of Jupiter and Saturn (the arithmetic is checked below). In the sunspot record from 1610 to 1954 he discovered the same 179-year modulation of the amplitudes of the 11-year cycle. This modulation matched the time derivative dP/dt (the upper curve), which is the rate of change of the angular momentum about the instantaneous center of curvature of the orbit. Jose concluded that “The relationships set forth here imply that certain dynamic forces exerted on the sun by the motions of the planets are the cause of the sunspot activity.”

In our view the cycle is as follows. The Sun has a single core fermion, which is the source of the Sun's permanent dipole magnetic field. Like all fermions, it is sensitive to the energy density at its position, which is affected by the shadowing effect of the planets. If the energy density felt by the core fermion decreases, it becomes unstable and begins to emit both bosons and fermions.
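Jose's numerical coincidence is easy to verify from the sidereal orbital periods of Jupiter and Saturn: their synodic period is 1/(1/T_J − 1/T_S) ≈ 19.86 years, and nine such periods give ≈ 178.7 years. A one-line check, ours:

# Nine Jupiter-Saturn synodic periods vs Jose's 179-year cycle
T_jup, T_sat = 11.862, 29.457                    # sidereal periods, years
synodic = 1.0 / (1.0 / T_jup - 1.0 / T_sat)
print(synodic, 9 * synodic)                      # ~19.86 yr and ~178.7 yr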

The sunspot number for cycles 21 and 22 and the line-of-sight component of the solar magnetic field. (Benevolenskaya, 1998.)

The appearance of polar faculae marks the onset of a new cycle, because the faculae are caused by boson beams from the core fermion or from the newly formed spot fermions, which start to drift towards the equator region. It takes five to six years before the first jets reach the surface of the Sun. During times of quiet Sun there are no sunspots and the magnetic field originates from the core fermion alone. As sunspots come into view, another dipole field is forming. It is the sum of the fields of the individual sunspot fermion pairs, whose magnetic moment is not completely eliminated in the pairing process, because the energies/fields of the two members of a pair are different. Finally, the sunspot sum field is so strong that the global magnetic system, two dipoles of the same polarity, becomes unstable and the core fermion rotates 180 degrees, due to the fundamental drive towards minimum-energy configurations. As this happens, the core fermion feels a reduced energy density and immediately stops emitting. This is clearly visible in the graph above. The quiet period begins, during which the core fermion gains energy until it feels the energy density decreasing again.

For the interested reader, there are two special works on this topic: the dissertation of Oleg Okunev, Observations and modeling of polar faculae on the Sun, and Variations of the Dipole Magnetic Moment of the Sun during the Solar Activity Cycle by I. M. Livshits and V. N. Obridko. From these works one can see that the results of sophisticated data collection and analysis support our explanation.

Temperature of the Solar Corona

One of the major unsolved problems in astrophysics is to explain how the solar corona is heated to temperatures of millions of degrees, while the temperature at the surface of the Sun is less than 6000 K. The alleged high temperature of the corona can be measured in two ways: Fe13+ ions suggest 1.8 MK, but the broadening of their spectral lines suggests more than 7 MK!
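For readers who want to see how a line width turns into a temperature: thermal Doppler broadening obeys Δλ/λ = sqrt(2kT/mc²), so the inferred temperature scales with the square of the relative width. The sketch below uses an illustrative width, not a value from any specific coronal dataset:

```python
# Sketch: mapping a spectral line width to a kinetic temperature via thermal
# Doppler broadening. The relative width used here is illustrative only.
import math

k_B = 1.380649e-23      # J/K
c   = 2.99792458e8      # m/s
u   = 1.66053907e-27    # kg, atomic mass unit
m_fe = 56 * u           # iron ion mass

def doppler_temperature(rel_width):
    """Temperature implied by a 1/e relative Doppler width Delta_lambda/lambda."""
    return m_fe * c**2 * rel_width**2 / (2 * k_B)

# An illustrative relative width of ~1.5e-4 implies a multi-MK temperature:
print(f"{doppler_temperature(1.5e-4):.2e} K")   # ~6.8e6 K
```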

The paradox stems from the idea that atoms formed about 50,000 years after the Big Bang. All the nuclei of atoms received their electrons at that time. Now, afterwards, to remove a number of electrons from an atom, high temperature is required. But there was no Big Bang. Instead, matter is continuously created in the hot spots. Among other particles, newly born iron nuclei become ejected into the corona. The atoms fill their electron shells if they come in contact with free electrons, but this takes time in the thin electron gas of the corona. There is no mystery of high temperature, only a lack of electrons.

Temperature of Sunspots

The apparently lowered temperature of sunspots is an illusion. Energy density is high in the spot and prevents the electrons of the plasma from emitting until they move away from the spot. The sharp boundary between umbra and penumbra is a manifestation of two different energy levels of electrons. In the penumbra region, farther from the region of the highest energy density, some emission can be seen, but the energies of electrons are still higher than normal. Both coronal holes and sunspots are regions of high energy density. Electrons are less unstable in these regions and therefore radiate less. For this reason, they appear dark.

Solar Flares and Their Energy

Solar flares are tremendous explosions on the surface of the Sun. They originate from sunspots and sunspot clusters. A flare appears when a jet of a fermion pierces the surface of the Sun. The spins of the bosons in the jets are oriented along the jet, resulting in a strong magnetic field. The transient outbursts of energy, ranging from major flares down to microflares and even nanoflares, are all produced by the same mechanism: high-energy bosons dispersing into smaller particles. From these hot spots energy is emitted in all possible forms: electromagnetic (gamma rays and X-rays) and energetic particles (protons and electrons). As explained earlier, the turbulent regions of jets, the hot spots, produce new particles of matter. At some stage of the dispersion process there will be newly born fermions of the same energy. Due to resonance, repulsive forces appear. This results in “nuclear explosions” and mass ejections.

Astronomers using Japan's Hinode spacecraft have discovered that the sun is bristling with powerful “X-ray jets.” They spray out of the sun's surface hundreds of times a day. Astronomers have also noticed that the sun is a variable X-ray star. At the wavelengths of extreme ultraviolet light the Sun’s emissions dimmed by 30% during the last solar minimum, a 300% greater dimming than in visible light. This is understandable because the sources of X-rays are highly excited electrons. These are created in Hagedorn-Tsallis processes in the hot spots of jets of all sizes, and there are fewer jets during the quiet Sun.

Magnetic Storms

The basic problem of solar-terrestrial physics is to understand the interaction between the Sun and the Earth, through the emission of particles, magnetic fields and electromagnetic radiation. Chapman and Ferraro (1930) proposed that the Sun periodically ejects huge clouds of plasma which produce magnetic storms when they reach Earth. The current belief is that during the development of so-called geomagnetic storms, charged particles are injected into the Van Allen belts from the outer magnetosphere, giving rise to a sharp increase in the ring current, and a corresponding decrease in the Earth’s equatorial magnetic field. All this is based on the flux-freezing equation,

$$\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B}),$$

which we have shown to be incorrect. All explanations of physical phenomena based on this equation are wrong, including the Interplanetary Magnetic Field (see, e.g., the Wikipedia article of this name). These clouds are accelerated by explosive H-T processes. The jets on and below the surface are highly magnetic, and the clouds of particles in many cases leave the surface magnetized. The magnetism of the cloud is Stoner magnetism. The solar wind clouds become magnetized at the region of ejection, meaning that the spins of the particles of the clouds become oriented in the same direction by the local magnetic field. This, in turn, is determined by the highly magnetic jets of the region. (See the article Magnetized Gas Points to New Physics by Adrian Cho.)

The particles of the clouds are hot, but (in our theory) this doesn’t mean that they are in violent random motion. As the clouds recede from the Sun they expand and cool, but they maintain their spin structure, because it is a minimum energy structure. The cloud has a magnetic field of its own; it is not a matter of “magnetic field lines dragged out by the solar wind” (as phrased by a NASA source). As the cloud slams into the atmosphere of the Earth, the magnetic field of the cloud, the collective spin orientation, disappears. This changing magnetic field induces an electromotive force in all conductors at the surface of the Earth, as stated by Faraday’s law.
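As a rough magnitude check of the Faraday's-law mechanism just described, the sketch below estimates the EMF induced in a surface loop by a collapsing cloud field. The field strength, loop area and collapse time are all illustrative assumptions, not measured values:

```python
# Sketch: order of magnitude of the EMF induced in a ground loop when a
# magnetized cloud's field collapses, via Faraday's law emf = -dPhi/dt.
B_cloud = 2e-8        # T, a few tens of nT, assumed strength of the cloud field
area    = 1e6         # m^2, a 1 km x 1 km conducting loop at the surface
t_drop  = 60.0        # s, assumed timescale for the field change

emf = B_cloud * area / t_drop   # volts, treating the decay as linear
print(f"induced EMF ~ {emf:.2e} V")   # ~3e-4 V over this loop
```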

Sunspot Number vs. Interplanetary Magnetic Field

The graph shows the result of research by Leif Svalgaard. The IMF strength is estimated by a method that does not depend on the sunspot number. http://www.leif.org/research/The%20IDV%20index%20-%20its%20derivation%20and%20use.pdf

In our view a 100% correlation is not surprising, because the solar wind clouds carrying the interplanetary magnetic field are ejected into space by the energy released in the hot spots of jets, and there are no jets if there are no sunspots.
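The statistical claim here is an ordinary correlation between two yearly series. The sketch below shows the kind of computation involved, on synthetic placeholder arrays rather than Svalgaard's actual data:

```python
# Sketch: a Pearson correlation between a sunspot series and an IMF series.
# Both arrays are hypothetical placeholders, not Svalgaard's data.
import numpy as np

sunspots = np.array([10., 60., 110., 150., 120., 70., 30., 15.])   # hypothetical counts
imf      = np.array([4.5, 5.6, 6.8, 7.5, 7.0, 5.9, 5.0, 4.6])      # hypothetical nT

r = np.corrcoef(sunspots, imf)[0, 1]
print(f"Pearson r = {r:.3f}")   # near 1 for strongly co-varying series
```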

Redshift in Quasars and Galaxies

One more unexplained phenomenon is the intrinsic redshift effect. Halton Arp, G. Burbidge, E. M. Burbidge, W. M. Napier and others have been pointing out for many years that there are interacting celestial bodies with widely disparate redshifts, and that there must be a cause for these redshifts other than cosmological expansion. (E.g. Intrinsic Redshifts in Quasars and Galaxies by H. Arp.) Arp writes in his book Seeing Red (which favors a steady-state model of the universe and undermines the accepted big bang model):

“…all the empirical evidence…establishes a pattern whereby a large, old galaxy has ejected younger material which has formed younger smaller companion galaxies around it. The younger galaxies in turn eject material, which forms even younger quasars and BL Lac objects. The age hierarchy is evident from the properties of the objects in the characteristic groups in which galaxies occur. The ejection origin of the younger objects is evident from their pairing across active nuclei, their luminous connections back to active centers… In one-to-one correspondence to the age hierarchy, there is a redshift hierarchy. Every testable line of evidence shows that the younger the object is within the group, the higher its intrinsic redshift is. ….. If the mass of an electron jumping from an excited atomic orbit to a lower level is smaller, then the energy of the photon of light emitted is smaller. If the photon is weaker it is redshifted.”

Arp has also argued that Active Galactic Nuclei (AGN) gradually lose their non-cosmological redshift component and eventually evolve into full-fledged galaxies. All these observations are explainable by our theory.

The capability of acting as a source of intrinsically redshifted radiation is the basic feature of our fermion model. All fermions adjust their energy according to the local energy density. If an electron in space is shaded by a celestial object, its emissions are “weak” as compared to the emissions of an electron in free space. We have already encountered the “intrinsic redshift” in the section named Particles in Geodesic Motion. It is the shadowing effect in the vicinity of a massive body. Redshift indicates reduced intensity of background radiation.

Quasars are Ejected from Active Galaxies

Astronomer Halton Arp’s model of galactic ejection: High-redshift quasars are ejected from an active galactic nucleus, often in pairs in opposite directions along the galaxy’s spin axis. We interpret Arp’s theory as follows: all emitting objects are large fermions. Earlier we remarked that in beta decay a larger fermion ejects a small fermion. We have shown that this mechanism explains all the main features of the sunspot cycle. In our view, Arp argued that this process, a fermion ejecting a smaller fermion, can happen on a still larger scale. Needless to say, our theory gives 100% support to his measurements and his proposals.

The intrinsic redshift reveals the age hierarchy of giant fermions, because they are born naked. They have very little material around them, so molecules can come close to the object and become strongly shaded.

(James W. Brault measured the gravitational redshift of the sun using optical methods in 1962.)

The mass of the electrons in the shadow has considerably diminished, so the photons emitted are redshifted. An older giant fermion has a thick cover of molecules. The radiating surface is far from the core fermion, so the shadowing effect is moderate, and likewise the redshift. This is why celestial objects may be situated near each other in space even though their redshifts are quite different.

In our theory the observed redshift value of any object consists of two components: The initial component, and the “natural aging” component, meaning the exponential damping of the photon energy. Redshift is a measure of distance, but in calculations of distances one must take into account the initial energies of the photons, namely, the intrinsic redshift.
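A minimal sketch of how these two components would combine in practice, assuming redshifts compose multiplicatively and the "aging" is exponential in distance; the damping length L is a free parameter of this theory, not a measured constant:

```python
# Sketch of the two-component redshift described above: an intrinsic component
# at emission plus exponential "aging" of photon energy, E(d) = E0 * exp(-d / L).
import math

def total_redshift(z_intrinsic, distance, damping_length):
    """Redshifts combine multiplicatively: 1 + z_tot = (1 + z_i) * (1 + z_d)."""
    z_distance = math.exp(distance / damping_length) - 1.0
    return (1.0 + z_intrinsic) * (1.0 + z_distance) - 1.0

# Example: an object with intrinsic z = 0.5 seen through one damping length.
print(f"{total_redshift(0.5, 1.0, 1.0):.3f}")   # 1 + z = 1.5 * e, so z ~ 3.08
```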

The End of the Grand Cycle: Eternal Return

At some stage of the grand cycle the central supermassive fermion becomes exhausted. The matter in the spiral arms loses its kinetic energy. It begins to contract and heat up due to the pushing gravity effect (see the article Deriving Newton’s Gravitational Law from a Le Sage Mechanism by Barry Mingst and Paul Stowe). The surrounding cosmic gas clouds also contract and create a large section of empty space, a void, around the mass concentration. The growing mass (perhaps a “galaxy merger”) absorbs more and more energy from the background radiation. Its interior region becomes extremely hot and finally includes a number of giant fermions, resembling a large uranium atom or a group of them. It now only awaits the cosmic triggering event for an atomic explosion of cosmic magnitude. As this occurs, an enormous explosion and flash of light takes place at some location in space. This is the end and the beginning of a grand cycle.

Geology

“I love geology: one of the things I love the best is...No real laws or rules...We borrow from the real sciences and use as we see fit...But none that are really ours...Actually there are several laws, one of which is helpful in this case...Strickler's 1st Law of GeoFantasy...'All good regional theories break down at the local level'...This covers a lot of territory...No time to explore all possible ramifications, but...Part of what this means is that we are free to...Look at the 'big picture' and...Wave our arms as much as we want...'GeoFantasize' at will w/o fear of serious contradiction...Only requirement is that we recognize the limits of our interpretations...And accept them for the 'progress reports' that they are.”

Originally, we didn't intend to write anything about geology; it seemed so foreign to the electromagnetic theory. But, after reading Professor Strickler’s encouraging words about the essence of the science of geology, we decided to view it in the light of our theory. We found quite a lot of things that geology might borrow from our electromagnetic world view, including some big issues.

Geological Structures Without Explanation

The jets are apt to become unstable at strong gradients of energy density, for instance, at the surface of the earth. Three years ago, a mysterious crater suddenly opened up on the Taimyr peninsula in Siberia. The huge explosion that created the giant crater was heard 100 km away and caused a “clear glow” in the sky. There are numerous newly born craters in Siberia.

In the crust of the earth there are geological structures that were created long ago by the jets of giant fermions. These are the pipes. Diamonds are formed in the heat and pressure of the hot spots below ground. Craters like the one in the picture are probably newly formed kimberlite pipes. Siberia is a region of kimberlite pipes, diamonds, and seismic activity.

“The Richat Structure in the Sahara Desert of Mauritania is easily visible from space because it is nearly 50 kilometers across. … The image was captured by the orbiting Landsat 7 satellite. Why the Richat Structure is nearly circular remains a mystery.” This one was formed not by a single blast like the one in Siberia, but by several in sequence, and with much more energy. Furthermore, the hot spot of the jet stayed above ground for a while, forming the extrusive igneous rocks. The Richat Structure is very slightly oval and its major axis is in line with two other but smaller structures. The jets that formed these three “craters” were emitted from the same, slowly rotating source of decreasing energy. Probably all mysterious deposits of vitrified sand and rocks (including many of the “impact craters”) have this same origin.

Impact Craters

Presently, it is believed that collisions of solid bodies played a crucial role in the formation of Earth. This is based on the assumption that minerals show a unique behavior when subjected to shock waves. The ultradynamic loading to high pressures and temperatures causes deformation, transformation and decomposition phenomena in minerals that are unequivocal indicators of impact events. Scientists have developed diagnostic criteria for identifying and confirming impact structures on Earth. In the absence of the extraterrestrial projectile or geochemical evidence thereof, the following characteristics are deemed most important for confirming asteroid impact: evidence of shock metamorphism, crater morphology, and geophysical anomalies. Of these three, only diagnostic shock-metamorphic effects can provide unambiguous evidence of impact origin. Transformation phenomena include phase transitions to glass and/or high-pressure polymorphs, such as coesite, stishovite and ringwoodite. These are liquidus phases, which form upon decompression by crystallization from high-pressure melts. We claim that the impact criteria equally well confirm that the crater was formed by a large, intense flash of electromagnetic energy. Probably all craters without evidence of an extraterrestrial projectile are produced by a Hagedorn-Tsallis process near the ground surface.

Fulgurite Tubes

© 2016 The Thunderbolts Project™ Ironstone fulgurite tubes. Blue Mountains, Australia.

On the right is a sample of a large deposit of fulgurites in Uzbekistan (Youtube, “Not only Sodom and Gomorrah...”). They are of the size of a tree trunk. The largest one of the Blue Mountains fulgurites is almost 500 millimeters in diameter. Fulgurites are formed by electromagnetic jets from underground. Unstable fermions of moderate size, with their jets, create straight, large vitrified pipes, not like those produced by ordinary lightning, which are quite small, with internal diameters in the mm to cm range, and usually branched and forked.

Moqui Marbles. Credit: Brenda Beitler, University of Utah.

Almost everyone has seen the physical experiment in which a ping-pong ball floats in a stream of air; the Coandă effect. We assume that if a hot stream of gas blows through an orifice of a fulgurite pipe from under a bed of iron-bearing dust, assisted by a very strong cylindrical magnetic field, it can create a ball. The ball is sintered together. The big ball above rests on its construction site, on the orifice at the top of a hollow column. The one below left almost certainly is still in its place of origin. The photograph was taken on the surface of the planet Mars. Moqui Marbles are similar to Mars blueberries because they are formed by the same process.

Picture credit: University of Utah. The picture above is from NASA's Mars Exploration Rover mission (2004). The picture on the right is the side view of the same thing. (A sample of Utah Navajo Formations.) Contrary to the present belief, water is not needed in the formation of magnetite or hematite balls.

Craters… They Just Don’t Look Like They Should:

Ceres, Dione and Mimas. Photos courtesy of NASA and JPL.

Who Or What Made Polygon Features On Moons? By Ted Twietmeyer: Logically one would think that craters on celestial bodies in our Solar System are usually round. Our moon has shown this to be true with hundreds of round craters, and laboratory experiments using miniature meteors blasted into dry sand also produce round craters. This would make sense, since the shock wave from an impact perpendicular to a celestial body's surface by a meteor or asteroid would radiate outward in a 360 degree pattern. Even if the impact isn't perfectly straight down, material will still be displaced in all directions to some degree, even creating tear-drop shaped craters. Today we know that out there in our Solar System, on some of our moons, craters and other features are not round but polygon-shaped, usually with six sides. At first glance this may seem insignificant, but it is not. Iapetus is another moon that has numerous craters which clearly appear to have polygon shapes.

Above: A crater chain on Jupiter's moon Ganymede. The hexagonal craters and chains of them are formed by electromagnetic jets from below the surface. These jets have the special feature of spatial hexagonal structure. All larger craters in the picture are hexagonal!

https://commons.wikimedia.org/wiki/File%3ATaiwan_wanli_queenshead_altonthompson.jpg

All columnar, pipe-like or fin-like structures that are produced by differential erosion from seemingly homogeneous sedimentary layers, like in Bryce Canyon, have one thing in common. They are harder than the surrounding stone because they have been sintered. The required local heating was produced by a jet of electromagnetic energy from underground. A hoodoo is formed as follows: first a jet pierces the ground. A hot spot forms above the ground and gives a heat treatment to everything that lies below. The jet shortens and gets weaker, but at the surface it can still form a little crater. A nearby stone then slides into the crater. The sandstone column is now ready to carry the weight of the cap stone because it has been hardened by the sintering process. There are variations of the hoodoo formation process. If the jets form a field, i.e. there are dozens of jets in a small area, the earth underground becomes heated. The pipes become emptied by hot, pressurized air. If a stone from the rim of the crater falls onto the orifice of the pipe, it redirects the stream. If the stream is hot enough, the bottom part of the crater also becomes hardened. The final result is “a stone in a bowl”. These, and the results of turbulent phenomena, can be seen in the Bisti Egg Garden in the Bisti Wilderness Area. There one can see the creations of slowly cooling turbulent hot gas and partly melted sand.

Petrified Wood

These samples are from the same Uzbekistan site, introduced above under the title Fulgurite Tubes. The ones on the left are presently identified as petrified wood. In other words, tree trunks have become silicified, meaning that there is no carbon content.

Petrified wood is a beautiful example of an ad hoc explanation. In reality, these “trunks” are made directly from non-carbon-bearing soil, by electromagnetic energy.

Beams of electromagnetic energy are presently well understood. On the left are graphs of the radial intensity of some modes of Laguerre-Gaussian (LG), Bessel-Gaussian (BG), and helical Mathieu–Gaussian (HMG) beams, (d, h, l), respectively. Red lines refer to theoretical models, blue to experimental results. We consider these models good approximations for the real, high-power jets. The soil is of course heated most at the regions of high intensity. If the process is over in a few seconds, the result is, e.g., a sandstone pipe. If the process takes a little longer, the content of the jet region becomes vitrified. Outcome variations exist. The graphs are reproduced from the article Generation of cylindrically polarized vector vortex beams with digital micromirror device by Lei Gong et al., Journal of Applied Physics, November 2014. Used in accordance with the Creative Commons Attribution (CC BY) license.
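For reference, the LG radial profile used in such comparisons is standard optics and easy to reproduce. The sketch below evaluates the textbook intensity of an LG(p, l) mode at its waist; the mode indices and waist are arbitrary illustrative choices, not values from the cited article:

```python
# Sketch: the standard radial intensity of a Laguerre-Gaussian LG(p, l) mode
# at its waist w. Counts the concentric bright rings of the profile.
import numpy as np
from scipy.special import eval_genlaguerre

def lg_intensity(r, p, l, w=1.0):
    """Relative radial intensity of an LG(p, l) beam at the beam waist."""
    x = 2 * r**2 / w**2
    return x**abs(l) * eval_genlaguerre(p, abs(l), x)**2 * np.exp(-x)

r = np.linspace(0, 3, 301)
profile = lg_intensity(r, p=2, l=1)

# Bright rings are interior local maxima of the radial profile.
rings = int((np.diff(np.sign(np.diff(profile))) < 0).sum())
print(f"LG(2,1) shows {rings} concentric bright rings")   # p + 1 = 3
```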

There are some other misleading signs of life: stromatolites. They are laminated, accretionary structures, which are commonly regarded to have formed by the sediment-binding or precipitating activities of ancient microbial mats or biofilms. Stromatolites are thus considered to be a proxy for early life on Earth. But the concentric-circles structure from Morocco on the right is a sample of sand formed and concretized by an electromagnetic beam.

The “Twin” structure from the Sahara proves that these structures are not formed by water or gas bubbling up from the sand. The azimuthal structure of the beam can be seen in this concretion. It is hexagonal, the same form that can be seen at the poles of Saturn and in a number of “impact” craters.

Permineralization

“Permineralization commonly proceeds according to the permeability variations, causing color patterns to reflect the tree’s original anatomy. (A) Miocene wood from Saddle Mountain, Grant County, Washington, USA.” Text and figure taken from the article Origin of Petrified Wood Color by George Mustoe and Marisa Acosta.

It’s perfectly understandable that, seeing a structure like that in stone, one immediately decides that this must be the trunk of a tree turned to stone. But then a second thought comes, and one realizes that if the tree grew in a tropical climate there should not be annual rings. (In the tropical rain forest, relatively few species of trees, such as teak, have visible annual rings. The difference between wet and dry seasons is, for most trees, too subtle to make noticeable differences in the cell size and density between wet- and dry-season growth.)

The alternative conclusion would be that the “annual rings” represent the radial energy density of an electromagnetic beam at the moment of birth of the “trunk”, and that it has followed a cylinder function of sufficiently large index (see the sketch after this paragraph). This is the line of thought we mean to develop in the following. The process of petrification still remains somewhat of a mystery, and scientists are not in agreement as to what actually takes place. A number of mineral substances can cause petrification, but the most common is silica. Silica in ground water infiltrates the buried wood and by some complex chemical process is precipitated within the plant tissues. Cellulose and lignin are often still present in the silicified wood. If this is the case, then the process is called permineralization. But we want to call into question the idea that the process can continue to the point where cellulose and lignin, the two components of wood, are completely replaced with silica, so that the resulting “petrified wood” is extremely hard (>7 on the Mohs scale) and can only be cut with specially designed diamond saws.
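The "cylinder function" reading mentioned above is straightforward to illustrate: the squared Bessel function of a high order produces concentric maxima with nearly uniform spacing, visually similar to a ring pattern. The order used below is an arbitrary illustrative choice:

```python
# Sketch: ring pattern of a high-order cylinder (Bessel) function, |J_n(r)|^2,
# whose bright-ring spacings become nearly uniform away from the axis.
import numpy as np
from scipy.special import jn

r = np.linspace(0, 60, 6001)
intensity = jn(8, r)**2                 # order-8 cylinder function, squared

# Radii of the bright rings: interior local maxima of the intensity.
idx = (np.diff(np.sign(np.diff(intensity))) < 0).nonzero()[0] + 1
ring_radii = r[idx]
print(np.round(np.diff(ring_radii)[:5], 2))   # spacings tend toward ~pi
```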

“Petrified wood can preserve the original structure of the stem in all its detail, down to the microscopic level. Structures such as tree rings and the various tissues are often observed features. Petrified wood is a fossil in which the organic remains have been replaced by minerals in the slow process of being replaced with stone.” (Wikipedia).

“In a more detailed study, Scurfield and Segnit (1984) examined 75 fossil wood specimens from Australia using X-ray diffraction, differential thermal analysis, electron probe techniques, optical and scanning electron microscopy. Their study found that replacement of the cell walls of tracheids and vessels occurred in addition to permineralization.” (Our emphasis.) “Studies of silicified wood from Florissant and Chemnitz demonstrate that cell walls and lumina can be preserved in different forms of silica. Thus, forms of silica that serve as the initial replicating material for cell walls and open spaces may be different from what initially permineralized the wood.” It says above that petrified wood includes cellular details, but they are not similar to those observed in real trees. Petrified wood also contains the opal form of silica, which is a gemstone. What does it look like?

(c) Euhedral microcrystals of quartz, common in the unaltered [innermost] zone, and (d) spherical opaline bodies characteristic of the altered [outermost] zone. Replacement of Quartz by Opaline Silica During Weathering of Petrified Wood by A. L. Senkayi et al.

Thanks to the electron microscope, we now know that opal is made up of small spheres of silica. Opal is an amorphous form of silica, but so is glass. Small spheres of glass are manufactured in commercial quantities using flame pyrolysis, but how about opal? “Opal is formed from a solution of silica and water. As water seeps through sandstone, it picks up tiny particles of silica. Millions of years ago, the solution flowed into cracks and voids in sedimentary as well as volcanic areas of inland Australia. Estimates suggest this solution had a rate of deposition of approximately one centimetre thickness every five million years at a depth of forty metres. Over a period of approximately 1 to 2 million years after this period, solidification occurred as the climate changed. The opal therefore remained soft and un-cemented for long periods before becoming hardened. As the silica in solution was deposited, and the water content gradually decreased, spheres formed in the gel. The spheres are formed by the particles of silica spontaneously adhering to other particles which form around them.” http://www.opalsdownunder.com.au An interesting question is, of course, how uniformly the opal spherules are distributed through the petrified wood. This is shown in the picture above.
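Taking the quoted deposition rate at face value gives a feel for the timescales involved; the seam thickness below is an illustrative assumption:

```python
# Sketch: the arithmetic implied by the quoted rate of ~1 cm per 5 million years.
rate_cm_per_myr = 1.0 / 5.0      # cm per million years, from the quote above
thickness_cm = 3.0               # an illustrative boulder-opal seam thickness
print(f"{thickness_cm / rate_cm_per_myr:.0f} million years")   # 15 Myr
```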

All the above boils down to this: geology has the formation of opal completely wrong. Microspherules of opal are produced by heat, similarly as those of glass.

On the right is a boulder opal in its ironstone matrix. Its cross section matches the helical Mathieu–Gaussian (HMG) beam profiles perfectly. Opal has formed in the region of highest energy density.

There is one additional factor, as compared with the formation of glass spherules: opal is formed in an extreme magnetic field. Large spherules of molten silica are dispersed into small ones by high-frequency electromagnetic forces and become arranged in regular, closely packed structures for the same reasons.

Andy Knoll’s Law

The value of a biomarker is directly proportional to the difficulty of making it through inorganic processes.

Petrified wood is formed if some regions of a cylindrical column of sandstone, occupied by a high-energy electromagnetic field, become melted. This results in a foam, but not one resembling the Kelvin structure. The foam is ordered in the directions of the cylindrical coordinates (due to electromagnetic forces on diamagnetic silica), and thereby resembles the structure of wood. But this “wood” contains no organic matter. It has never had anything to do with wood. Petrified trees are always found cut into logs, without exception. That is what happens to a hot glass bar if it is cooled down starting from one (surface) end only.

Permineralization is possible but petrification is not. “Petrified forests” are forests of fulgurites.

The term stromatolite means “layered rock”. In our theory they are cylindrical samples of sedimentary soil, solidified by heat as explained above. Stromatolite textures on surfaces seem to be cross sections of two types of three-dimensional objects: the concentric-circles structure and the stromatolite column.

Picture above: silicified ring-in-ring structures on a bedding surface in the Dales Gorge Member BIF, Hamersley Basin, Western Australia. Wavelet width about 1 cm (photo by H. Dalstra). These have no value as biomarkers. International Journal of Geosciences, 2013, 4, 1382-1391.

Thrombolites

A single large thrombolite (stromatolite) formed around a tree stump. On the left we see a fulgurite with a molten interior. On the right is a round hole left behind by a beam of electromagnetic energy. It has vaporized the rock on its way and deposited some molten material on the rim. There are variations in which molten rock has filled the hole, resulting in a pillow thrombolite. Sandpipes, ring-in-ring structures, fulgurites, stromatolites, thrombolites... they all have the same origin. They result from interactions between high-energy beams of electromagnetic energy and the surface of the crust. These structures cannot be considered biomarkers, because they can be produced by inorganic processes.

Tafoni, Still Not Understood

Glass foam. Tafoni. Credit: Wikipedia.

Tafoni are also known by the name honeycomb weathering. The formation of tafoni has been ascribed to the following physical processes: scouring by wind, beating of rain, insolation, freezing and thawing, differential expansion and contraction of curved surfaces, differential humidity between the exterior and interior of cavities, and burrowing by animals. Foam is a minimum energy structure. Honeycomb is a minimum energy structure (see the sketch after this paragraph). The only thing that prevents one from making further geological conclusions from this simple recognition is that the science of geology lacks the concept of fast, local heating of sand and rock. This should not be limited to surface phenomena; heating of the internal structures of the crust and rocks must be included. (A closer look at the picture of the Queen's Head hoodoo (on page 159) reveals that the top stone has partly melted from hot gases from the orifice it rests on.)
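The honeycomb claim can be made concrete: among the regular polygons that tile the plane, hexagons minimize the shared wall length per unit of enclosed area (the Honeycomb Conjecture, proved by Thomas Hales). A minimal sketch of that comparison:

```python
# Sketch: shared-wall length per unit area for tilings by regular n-gons of
# unit area. Hexagons need the least wall, hence "minimum energy structure".
import math

def wall_per_unit_area(n):
    """Wall length per unit area for a tiling by regular n-gons of area 1."""
    # Regular n-gon of area A has side s = sqrt(4 * A * tan(pi/n) / n).
    s = math.sqrt(4 * math.tan(math.pi / n) / n)
    return n * s / 2          # each edge is shared between two cells

for n in (3, 4, 6):           # the only regular polygons that tile the plane
    print(n, round(wall_per_unit_area(n), 4))
# 3 -> 2.2795, 4 -> 2.0, 6 -> 1.8612 : hexagons win
```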

On Some Big Issues in Geology

The science of geology is subordinate to the theory of the Big Bang, because it deals with matter, which, according to reputable sources, started forming about three minutes after the bang, through the process called Big Bang nucleosynthesis. We prefer the suggestion of Fred Hoyle. He explained the production of all heavier elements starting from hydrogen. Hoyle proposed that hydrogen is continuously created in the universe from vacuum and energy, without need for a universal beginning. Supernova nucleosynthesis is in our theory replaced with the Hagedorn-Tsallis process which, due to its fractal nature, works at all scales. It produces all matter, including hydrogen.

Scale Invariant Distribution of Matter in the Solar System

All started with a supernova explosion. A single, large, rotating fermion was left alone in space. It was immediately unstable and developed two jets and their hot spots. In these regions large amounts of iron were produced, because large hot spots, by the Hagedorn-Tsallis process, create a lot of neutron/proton-size stable particles. These particles nucleate into compositions as large as possible, which in practice is the iron nucleus. Compositions heavier than 56Fe are possible but are produced in far smaller amounts. The limits are set by the thickening of the spectra of the nuclei, as discussed earlier in the section Quasicrystals. The large parent fermion also ejected a number of spawn fermions which, directed by the magnetic field of the parent, finally settled at the equatorial plane. The parent was losing energy all the time and the spawns had to follow. Some of them reduced their energy by repeating the same sequence: by emitting a spawn fermion. The solar system was now in place. (This is exactly what Halton Arp saw on a still larger scale, and he was punished for saying what he saw.) All this happened inside a dense, rotating cloud of gas and dust, rich in iron oxides. (Oxygen is also formed in hot spots.) The emitted giant fermions started to act as nucleation (or condensation) centers. As mass gathers around an emitting fermion, it becomes less unstable, because it feels increasing energy density. So it is possible to establish equilibrium in a system composed of a core fermion and the surrounding mantle. Formation of the cores of a planetary system is a scale invariant process. For this reason the condensation process, i.e. developing a mantle, becomes scale invariant. This leads to the Titius-Bode “law”.
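For reference, the Titius-Bode rule itself is a one-line formula. The sketch below compares it with standard observed semi-major axes (Neptune, famously, does not fit and is omitted here):

```python
# Sketch: the Titius-Bode rule a = 0.4 + 0.3 * 2**n (AU), with n = -inf for
# Mercury (2**-inf evaluates to 0.0). Observed values are standard.
observed = {"Mercury": (float("-inf"), 0.39), "Venus": (0, 0.72),
            "Earth": (1, 1.00), "Mars": (2, 1.52), "Ceres": (3, 2.77),
            "Jupiter": (4, 5.20), "Saturn": (5, 9.54), "Uranus": (6, 19.19)}

for name, (n, a_obs) in observed.items():
    a_tb = 0.4 + 0.3 * 2**n
    print(f"{name:8s} rule: {a_tb:5.2f} AU   observed: {a_obs:5.2f} AU")
```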

In the book Non-extensive Entropy: Interdisciplinary Applications, edited by Murray Gell-Mann and Constantino Tsallis, the authors show that scale invariance can be seen in all structures of the universe, from molecular clouds through galaxies to superclusters of galaxies. At some time, nearing the state of equilibrium, Earth was a ball of molten iron, covered with lighter material. Still, there were large jets piercing the floats of solid crust. The hot spots of these jets rained down matter in pulverized form. These rains covered areas of continental size. Mudrocks make up fifty percent of the sedimentary rocks in the geologic record, and are the most widespread deposits on Earth. Mudrocks are made of fine-grained clasts (silt- and clay-sized).

Above: a vertical section of the Earth's crust. On the right: the surface of Mars as seen from above. It first sedimented similarly to (and from the same minerals as) Earth, and then suffered wind erosion. (Ceti Mensa region.) One hexagonal crater can be seen in the picture.

Banded iron formations are distinctive units of sedimentary rock that are almost always of Precambrian age. They consist of repeated, thin layers (a few millimeters to a few centimeters in thickness) of iron oxides, either magnetite (Fe3O4) or hematite (Fe2O3), alternating with bands of iron-poor shales and cherts of similar thickness, and containing microbands (sub-millimeter) of iron oxides. The chert layers are dominated by cryptocrystalline granular quartz (opal-CT). In our theory opal-CT is not formed in a solidification process lasting millions of years, but from aggregations of silica dust in a flash of electromagnetic energy. The 3800-million-year-old banded iron formations are presently seen as the oldest direct evidence for the presence of water. In our theory sedimentation does not need water, and fine-grained rock is not necessarily produced by erosion.

Steve Smith: Mars -- The Great Desert in 3-D | EU2014, Thunderbolts Project. Ceti Mensa region on Mars. Hematite dust on the surface acts like a ferromagnetic fluid. As a jet penetrates the surface, its extreme magnetic field arranges the dust particles into hexagonal structures, as can be seen in the magnification. See Parametric stabilization of the Rosensweig instability.

First and Second Sedimentation

During the first sedimentation water existed, but it was in the form of gas. The first seas formed from rain that began when Earth had cooled enough for water in the atmosphere to condense. At this point water erosion and the second sedimentation started. Oxygen was present in the atmosphere, and also in the seas, from the beginning. Oxygen-based lifeforms started to evolve immediately, but geologists emerged so late that all the large jets had ceased long before. Instead, there were rivers with sedimentary deltas. Currently, all sedimentation is believed to take place only in water. Geology studies water erosion, which presently is the most dominant surface process. Material broken loose by erosion grinds into smaller and smaller pieces. Rivers then carry away the smallest bits (clay and silt) in suspension, leaving sand and gravel deposits in calm waters. Banded iron formations are worldwide and can be hundreds of meters thick. They are 1.5 to 3.8 billion years old. The formations in Western Australia are the thickest and most extensive, some 150,000 km². These cannot be explained by the concepts of water-based geology. But by realizing that there first was the major dry sedimentation in the time of the great electromagnetic jets, and after that the second, water-based sedimentation, large-scale geological formations become understandable. Upheaval of the seabed is evident only if fossils of marine life are present in the sediments.

Out-of-Place Artifacts (OOPARTS)

“All over the world, enigmatic artifacts have been found that do not fit the accepted geologic or historical timeline. Do they offer a radically different view of our world?” Yes, they do. Shales and mudstones make up about 80% of the sedimentary record, but are still the most poorly understood sedimentary rock type. They are made up of very fine particles, only microns in diameter. Geology starts at the time when the rocks of Earth first became solid, and then started to erode by the effects of weather and water. It is believed that extreme erosion, over hundreds of millions of years, leads to shales and mudstones. According to our theory, these fine-grained minerals rained down from the sky, from the hot spots of jets. Large jets have become fewer and fewer, but the time elapsed since the latest jet eruptions must be so short that ooparts can be explained. All modern geochronology is based on measuring the stable daughter atoms produced during the radioactive decay of naturally occurring elements. Dating a rock means dating some event or process that the rock experienced, but direct dating methods for mudstones do not exist.

On Physical Phenomena Behind Indian and Other Mythologies

Probably the first person who saw a jetting fermion in the sky (the fermion and the two hot spots, on the left below) and had in mind the shape of a Sumerian chariot interpreted the vision as an “aerial chariot”. Like the one on earth, it had two supporting wheels and the chariot-box in between. But these supports worked in air, and that is why those vehicles had to be “vimanas”, cars or chariots of the gods. From this observation started the flight of fancy. Vimana in the sky of Afghanistan.

In the following, we quote an original Sanskrit text (as given in the standard Ganguli translation):

When the next day came [during the Mahabharata war], Camva actually brought forth an iron bolt through which all the individuals in the race of the Vrishnis and the Andhakas became consumed into ashes.

Indeed, for the destruction of the Vrishnis and the Andhakas, Camva brought forth, through that curse, a fierce iron bolt that looked like a gigantic messenger of death. The fact was duly reported to the king. In great distress of mind, the king (Ugrasena) caused that iron bolt to be reduced into fine powder. (Mausala Parva, sec. 1) [An event as described above took place and exposed the Vrishnis and the Andhakas to burning radiation. Camva (son of Krishna?) is credited with that. After the emission, the site was covered with black dust. It was probably examined and found to be magnetic. Hence the story of an “iron bolt”. King Ugrasena was credited with pulverizing the iron bolt. Those divine deeds are component parts of a physical phenomenon described in the geology section above. Jets produce oxides of iron in pulverized form.] Day by day strong winds blew, and many were the evil omens that arose, awful and foreboding the destruction of the Vrishnis and the Andhakas. The streets swarmed with rats and mice. Earthen pots showed cracks or broke from no apparent cause. At night, the rats and mice ate away the hair and nails of slumbering men. […] That chastiser of foes commanded the Vrishnis to make a pilgrimage to some sacred water. The messengers forthwith proclaimed at the command of Kecava that the Vrishnis should make a journey to the sea-coast for bathing in the sacred waters of the ocean. (Mausala Parva, sec. 2) [The jets left craters at the site. Rats started to eat the dead bodies of those who died of radiation sickness.] The triple city then appeared immediately before that god of unbearable energy [Maheswara, or Siva], that Deity of fierce and indescribable form, that warrior who was desirous of slaying the Asuras. The illustrious deity, that Lord of the universe, then drawing that celestial bow, sped that shaft which represented the might of the whole universe, at the triple city. Upon that foremost of shafts, O thou of great good fortune, being shot, loud wails of woe were heard from those cities as they began to fall down towards the Earth. Burning those Asuras, he threw them down into the Western ocean. (Sec. 34) [In the section ‘Strange Sounds from the Atmosphere of the Earth’ we introduced a video clip of a “triple city” (a large fermion with two jets) and a recording of its loud wail.] We beheld in the sky what appeared to us to be a mass of scarlet cloud resembling the fierce flames of a blazing fire. From that mass many blazing missiles flashed, and tremendous roars, like the noise of a thousand drums beaten at once. And from it fell many weapons winged with gold and thousands of thunderbolts, with loud explosions, and many hundreds of fiery wheels (Mahabharata). Translation by Protap Chandra Roy.

[This is an eyewitness account of a Hagedorn-Tsallis process. The event seen in the sky included hundreds of fiery wheels. The earlier video clip (Spiral over Western Canada) shows two such fiery wheels. One just has to multiply the event by hundreds to get an idea of what was witnessed.]

Sodom and Gomorrah

Between Mt. Masada and the coast of the Dead Sea there are regions covered with white ash. They can be clearly seen on Google Maps. What is not visible is the following:

The picture on the right is a crop from a larger image in the article Strike-Slip Basin – Its Configuration and Sedimentary Facies by Atsushi Noda. It shows the border between two tectonic plates. The Dead Sea is a “pull-apart basin” formed by the relative motion of two active fault segments, the Araba Fault and the Jordan Valley Fault. The pull-apart effect means lowering energy densities deep inside the crust, and this is the triggering condition for emissions from fermions, be they large or small.

In this theory it is assumed that the fault regions between tectonic plates form and harbor large fermions, and these stay calm until the energy density around them changes.

The regions on the surface, “burned to ashes”, have suffered from emissions originating from these fault fermions. From what can be seen at the burned sites, one can infer that a Hagedorn-Tsallis process can proceed relatively calmly. A surfacing fermion emits several smaller ones, and these again decay in a similar manner, in a cascade. The event in front of Mt. Masada was in fact the same as that described in the Mahabharata. The only difference is that in this latter case the event took place at the ground surface.

Ron Wyatt found these sites during the 1980s. He was aware of the biblical story that “God rained fire and brimstone from the sky and destroyed Sodom and Gomorrah.” He immediately assumed that these were the lost cities. Throughout the sites he found the remnants of that rain.

The sulphur found at these sites is the white, monoclinic form, not the usual yellow, naturally occurring rhombic form found elsewhere around the world. X-ray fluorescence analysis on a sample has revealed the composition to be 98.4% sulfur, combined with 0.22% magnesium. The findings are usually in the form of a ball, and surrounding each one is a shell of vitrified ash. The normal fulgurite structure can be seen better in other pictures from the site. It is inevitable that this becomes interpreted as a rain of brimstone. The Mahabharata fermion (or vimana) surfaced from the deeps and rose up to the skies until it became seriously unstable. Near the cities of Sodom and Gomorrah a similar event took place (partly) just below the surface. Thousands

of emitting fermions (probably of the same size as those sometimes seen as “ball lightnings”) spread around the site and, by their emission, caused the small structures shown in the pictures above. Finally, the parent fermion itself rose above the site and exploded. It happened high enough that no crater was formed. Instead, the extreme heat vitrified the ground below (by the process known from nuclear explosions) and burned everything into ashes.

Here we have an object that was boiling on top, due to immense heat from above. These are not found in any other areas of the country. Also, the wall-like structures surrounding the regions of ash-like material have signs of ultra-high temperatures. This picture and the two above are from the video “Ron Wyatt - Sodom and Gomorrah”.

Lot and his wife and daughters were on the way out of the city. Lot had seen the first symptoms, the earthquake lights. He understood the dangers of radiation (from similar but smaller, earlier events around the Dead Sea region) and warned his wife. But she looked back at the moment of a major flash. Immediately her face turned white as salt. (The skin turns white and loses sensation with third degree burns.)

The Tunguska Event

In the following we quote two sources: The Tunguska Meteorite: A Dead-Lock or The Start of a New Stage of Inquiry? by Academician N.V. Vasilyev, and The Tectonic Interpretation Of The 1908 Tunguska Event by Andrei Yu. Ol'khovatov. There are over 900 testimonies given by witnesses of the Tunguska phenomenon. They were received from people in almost all parts of Central Siberia. In many cases they are so contradictory that N.V. Vasilyev, writing on some peculiarities of the evidence of eye-witnesses who were close to the Tunguska explosion epicenter, had to conclude: As regards the testimonies of the Evenks who had been on the Avarkitta and the Yakukta, those were published much later and contain some strange details that seem to deserve special attention. These details are definitely queer, and the reader finds himself before an alternative: either deny them as obviously absurd, or, if they are to be believed to a certain degree, assume that our ideas of the physics of the Tunguska explosion are wrong. (Academician N.V. Vasilyev.) We select some observations which show that the Tunguska event cannot be explained by simply stating that “a meteor exploded above Tunguska in 1908.”

Three Versions of Trajectory

(V): The first investigators of the Tunguska meteorite, who analyzed comparatively fresh evidence of the flight of the TSB on the Angara river, did not doubt that it had moved generally from the south to the north, though there were three versions of its trajectory (the southern one, proposed by L.A. Kulik, the south-eastern by E.L. Krinov and the south-western by I.S. Astapovich). By the early 60s it was Krinov’s trajectory, namely 135º east of the true meridian, that was considered the most realistic.

Strange Effects of Pressure Waves

The structure of the forest fall area in the immediate vicinity of the epicenter also proved strange. Firstly, the assumption of the absence of radial tree falling here is not true. Surface observations show that there are some leveled trees in this area as well, and the general radial character of the forest falling is seen up to a “special point”, viz. the geometric center of the fallen forest area, as calculated by V.G. Fast. Secondly, Kulik’s interpretation of the fallen forest area on the basis of the large-scale aerophotography of 1938 not only corroborated the complex vector structure of the epicentral area, but also enabled the assumption of the existence there of at least two or three subepicenters. [Here (V) probably refers to the following observation. Text and picture below are taken from (O):]

(O): Another puzzle for the meteorite interpretation is the area of forest fall on the ridge Chuvar (23 km, 279°), which according to the local Evenks formed the same morning as the general (Kulikovskii) one. It was discovered by the 1959 expedition. Its area is 30-40 sq. km, and the tree damage was found to have occurred in about 1908. The peculiarity of that forest fall is that trees were uprooted with their tops to the east (i.e. opposite to what would be expected from the meteorite fall direction).

(V): Thirdly, the vector structures of the forest falling on hill-sides facing the epicenter and the opposite ones are essentially different, which is in poor agreement with the assumption of the center of generation of the blast wave located high above the earth.

Luminous Phenomena

The explanation of the “light nights” of late June and early July 1908, with recourse to dispersal of particles of a comet’s tail in the upper atmosphere of the Earth, is not at all convincing. Firstly, at least at 10 places in Eurasia there were anomalous light effects on the night of June 29-30, 1908, i.e. practically simultaneously with (and even somewhat before) the Tunguska explosion, which makes it impossible to explain the optical effects of June 30, 1908, as due to mechanical transport of space aerosols from the site of the Tunguska event. Secondly, in discord with this explanation is also the sharp exponential decrease in the intensity of the atmospheric anomalies after the 1st of July, which well conforms to the assumption of the dominating contribution of photochemical reactions to their formation. If, alternatively, the main contribution were given by refraction and scattering by aerosol particles, it would be more reasonable to expect a gradual decrease in the effects, as in the case of volcano-induced optical anomalies.

Geomagnetic Effects Before the Event

Professor Weber of Kiel University observed unusual regular periodic deviations of the compass needle. This effect was repeated each evening from 27 June through 30 June 1908. The recordings looked like geomagnetic storms, usually associated with solar electrical activity.

Several Explosions at Different Spots

There were five explosions, the second seeming to have been the most powerful. Light flashes followed at an interval of a few seconds and were seen at different spots of the sky. The last, fifth explosion took place far in the north, somewhere near the Taymura river. Trees began to fall and the fire began after the first explosion, while the Evenks were in their “chums” (tents of skin or bark), the latter being thrown down. … The data communicated by I.M. Suslov are quite detailed and enable the whole phenomenon to be estimated as lasting no less than 20-25 seconds.

“Vimana Observation”

“As I came to myself, …, I saw it was all falling around me, burning. You don’t think, …, that was god flying, it was really devil flying. I lift up my head - and see - devil’s flying. The devil itself was like a billet, light color, two eyes in front, fire behind. I was frightened, covered myself with some duds, prayed (not to the heathen god, I prayed to Jesus Christ and Virgin Mary).”

The unusual glow in the sky was first observed days before the event. Beginning on June 23, 1908, atmospheric optical anomalies were observed in many places of Western Europe, the European part of Russia and Western Siberia. They gradually increased in intensity until June 29 and then reached a peak in the early morning of July 1st. … Later on, after July 1, these effects decreased exponentially. The area involved in these phenomena was limited by the Yenisey river in the East, by the Tashkent - Stavropol - Sevastopol - Bordeaux line in the South, and by the Atlantic coast in the West.

For nine days Kulik explored, finally realizing that the prostrate forest was splayed out radially from an epicenter that the guides referred to as the Southern Swamp. It was here that Kulik believed he would find his crater. What he found instead was a bizarre landscape where, in his words, “the solid ground heaved outward from the spot in giant waves, like waves in water.”

In addition, he saw numerous depressions in the peat marsh that he described as “peculiar flat holes.” These holes (each some 10 to 50 metres across), he believed, must have been formed by pieces of the meteorite striking the ground. In a later expedition Kulik undertook the excavation of one of these holes but was disappointed to find no trace of a meteorite. Kulik had expected to find the evidence of a giant meteorite in the central part of the basin, but found that the area was dotted with dozens of holes “exactly like craters”. These funnel-shaped holes ranged from 10 to 50 meters across and up to 4 meters deep. Their edges were mostly steep, the bottoms flat and swampy.

Observations From Far Distances

The general features of the event were best described by those who saw it from afar. Around 1935 Kulik received a letter from an eyewitness who observed the event from the village of Kezhma, about 140 miles south of the blast. The day was unusually clear and not a single cloud was to be seen. No wind stirred, and there was absolute silence. Suddenly, far off, still hardly audible, was heard the sound of thunder. It made us look up involuntarily in every direction. The sound seemed to come from beyond the River Angara and became louder rapidly. There was something extraordinary about it. The first fairly faint crash resounded, but when I turned quickly in the direction of the crash I saw that the Sun’s rays were crossed by a broad fiery-white band on the right side of its rays. On the left side, towards the north, an irregularly-shaped, brilliantly white, somewhat elongated mass was flying into the taiga…with a diameter far greater than the Moon’s. Approximately two to three seconds later came a second crash, like that generally heard during a storm. After the second crash, the “ball” was no longer visible, but its tail, or rather the streamer, was now completely on the left side of the Sun’s rays... Then, after a shorter interval of time than that between the first and second crashes, the third thunder crash occurred. This was so loud (as though there were several crashes all mingled together within it) that the whole ground trembled. An echo, like a continuous deafening roar, resounded through the taiga, indeed it seemed through the whole taiga of vast Siberia.

People 465 km from the site saw quite high above the horizon a body shining very brightly with a bluish-white light. The body was in the form of a pipe and too bright for the naked eye. It moved vertically downwards for about ten minutes before it approached the ground and pulverized the forest. A huge cloud of black smoke was formed and a loud crash, not like thunder, but as if from the fall of large stones or from gunfire was heard. All the buildings shook and at the same time a forked tongue of flame broke through the cloud. (Also described as “a fire of indefinite shape gushed out...”).

People in a village 420 km away saw “a fiery body like a beam” shoot from south to northwest before they heard the thunder. The fiery body disappeared immediately after the bang and a “tongue of fire” appeared in its place.

Observations Near the Epicenter

An eyewitness (S. Semyonov), 70 km south-southeast from the site, was facing north. “Suddenly the sky in the north split apart, and there appeared a fire that spread over the whole northern part of the firmament. … I felt intense heat... at this moment a powerful blast threw me down from the steps. I fainted... After that we heard very loud knocking...” Two Evenk brothers, still nearer the epicenter, missed the main event because they were, luckily, sleeping in their chum (a tent of skin or bark). They woke to tremors and the noise of the wind. Then a great clap of thunder was heard. It was followed by a strong wind which leveled their chum. The brothers crawled out from under the chum. After that, they saw another flash of light while thunder crashed overhead, followed by a gust of wind that knocked them down. This was followed by still more lightning and thunder, but with decreasing strength.

Explanation

The reason for the Tunguska event is that the wobbling motion of the Earth went into fast acceleration. In our opinion, A. Ol'khovatov’s basic idea of a geophysical (tectonic) interpretation of the 1908 Tunguska event is correct. In the following we give details of the event as they follow from our theory. The two graphs (left, below) are from Ol'khovatov.

The time-sequence of reports about earthquakes (vertical lines) in the Lake Baikal region from March 1908 (3) to November 1908 (11) taken from a catalog of earthquakes. The seismic manifestations attributed to the Tunguska event are marked by enlarged height of the vertical line. Graph credit: Smylie & Mansinha 1968.

The correlation between discontinuities of the rotational pole path of the Earth and large earthquakes is shown in the graph above. The authors (S&M) showed that large redistributions of material can excite the wobble. On the left is the “momentary polhode radius”, which in the graph above is “the momentary radius of curvature.” The time span of the polhode graph is the year 1908 and the time of the Tunguska event is shown by a vertical line. The maximum time derivative of the polhode radius during the time 1907 to 1910 was at the exact time of the event.
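The underlying operation here is simply locating the peak of |dR/dt| in a sampled series. The sketch below shows it on a synthetic placeholder curve; the real 1907-1910 series is in Smylie & Mansinha (1968):

```python
# Sketch: find the time of maximum |dR/dt| in a sampled polhode-radius series.
# The curve below is a synthetic placeholder, not the Smylie & Mansinha data.
import numpy as np

t = np.linspace(1907.0, 1910.0, 361)                         # time, years
radius = 0.15 + 0.02 * np.sin(2 * np.pi * (t - 1907) / 1.2)  # placeholder radius

dR_dt = np.gradient(radius, t)
print(f"max |dR/dt| at t = {t[np.argmax(np.abs(dR_dt))]:.2f}")
```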

From these data we draw the conclusion that there was a redistribution of material inside the crust, and thereby tension, and that it started before the event itself. We already have, from Red Sea tectonics (Sodom & Gomorrah): “The pull-apart effect means lowering energy densities deep inside the crust, and this is the triggering condition for emissions from fermions, be they large or small.” The pull-apart effect is in this case caused by acceleration of major tectonic plates. The effects started to show up as earthquake lights and magnetic phenomena, because the beams that excite electrons in the atmosphere are highly magnetic. Prof. Weber observed magnetic anomalies from 27 June through 30 June 1908.

For a detailed description of the event we need one more collection of observations, namely the sound record (Krasnoyaretz newspaper, July 13, 1908):

Kezhemskoe village. On the 17th an unusual atmospheric event was observed. At 7:43 the noise akin to a strong wind was heard. Immediately afterwards a horrific thump sounded, followed by an earthquake which literally shook the buildings, as if they were hit by a large log or a heavy rock.

The first thump was followed by a second, and then a third. The intervals between the first and the third thumps were accompanied by an unusual underground rattle, similar to a railway upon which dozens of trains are traveling at the same time. Afterwards, for 5 to 6 minutes, an exact likeness of artillery fire was heard: 50 to 60 salvoes at short, equal intervals, which got progressively weaker. After 1.5 to 2 minutes after one of the “barrages” six more thumps were heard, like cannon firing, but individual, loud, and accompanied by tremors. The sky, at first sight, appeared to be clear. There was no wind and no clouds. However, upon closer inspection to the north, i.e. where most of the thumps were heard, a kind of ashen cloud was seen near the horizon, which kept getting smaller and more transparent, and possibly by around 2-3 p.m. completely disappeared.

The Event

One large fermion started to surface from below ground in Tunguska. On its way, near the surface, it felt decreasing energy density, became unstable, and emitted three times in sequence. The hot spots, high above ground, were explosive, and the forest below was flattened. The number of these major flashes is in line with what was heard, and with the following:

Kulik’s interpretation of the fallen forest area on the basis of the large-scale aerial photography of 1938 not only corroborated the complex vector structure of the epicentral area, but also enabled the assumption of the existence there of at least two or three sub-epicenters.

After the major emissions, the surfacing fermion continued emitting at a lower power. Most of the time the jet was stable and extended high above ground, but on 50 to 60 occasions energy was released at the surface of the ground. These bursts of energy created the “peculiar flat holes” which Kulik found instead of a meteor crater. Kulik’s flat holes are “mystery craters”. See the picture on page 156.

The hot spot in the sky formed black smoke (magnetite, Fe3O4, black iron oxide). The jet then shortened, and the smoking region was seen as moving vertically downwards for about ten minutes. As the body neared the ground (forest), the bright body seemed to smudge, and then turned into a giant billow of black smoke. (Expeditions sent to the area in the 1950s and 1960s found microscopic silicate and magnetite spheres in siftings of the soil. Here we have a small-scale example of how banded iron formations (BIFs) are created. Magnetite obviously falls down much faster than silicates. After both have reached the ground, they are “sedimented”.)

Then, inside the black cloud, the fermion rose above ground. The extreme magnetic field of the jet had momentarily magnetized the wet soil, which now had the magnetic field of a long solenoid. Orienting itself along these field lines, the fermion rotated into an almost horizontal direction. At the same time it became highly unstable due to the abruptly decreased energy density and immediately lengthened its jets. The jets shot out from the black cloud almost horizontally. They had their own hot spots, and these are the ones from which the Evenki brothers suffered. These jets also explain the forest fall on the ridge Chuvar, far from the epicenter. The jets shortened quite rapidly, and that is why eyewitnesses closer to the explosion reported the source of lightning and thunder moving from east to north. These are the six more thumps that were heard, “like cannon firing, but individual, loud, and accompanied by tremors.”

The jetting source quickly diminished out of sight, but it didn’t vanish completely. It was later seen some 40 km to the south of the catastrophe epicenter, flying down the river Chamba. The eyewitness called the object a “devil”: “...two eyes in front, fire behind.” While flying, the devil was saying “troo-troo...”. In the section Strange Sounds from the Atmosphere of the Earth we introduced a video clip of an object like that. It is a jetting fermion.

To explain the Tunguska event we didn’t have to make a single additional assumption. It fits seamlessly into our theory, which explains the event in all its details.

Earthquake lights are a perfectly natural phenomenon. Piezoluminescence occurs when certain materials are mechanically stressed but not broken. Rubbing together two quartz stones causes light emissions. Similarly, when two tectonic plates rub together, there are excitations of electrons and emissions from electrons, only at a larger scale. In the high energy densities of tectonic faults large fermions are formed, and their emissions we see as earthquake lights. The most obvious small-scale “earthquake light” is triboluminescence. In the picture a hard candy is broken by a hammer. Blue emissions come from regions in which a crack opens a few tens of microseconds later.

The New Madrid Event

After first understanding what happened in Tunguska and then reading the report The New Madrid Earthquake by Myron L. Fuller, the similarities between the two events are clear. Fuller reports the observations made by L. Bringier, a well-known engineer and surveyor, who was in the midst of the disturbance. He saw something acting on the ground surface from below:

It rushed out in all quarters, blowing up the earth with loud explosions, bringing with it an enormous quantity of carbonized wood, reduced mostly into dust, which was ejected to the height of from 10 to 15 feet, and fell in a black shower, mixed with the sand which its rapid motion had forced along; at the same time, the roaring and whistling produced by the impetuosity of the air escaping from its confinement, seemed to increase the horrible disorder of the trees which everywhere encountered each other, being blown up, cracking and splitting, and falling by thousands at a time. In the meantime, the surface was sinking, and a black liquid was rising up to the belly of my horse, who stood motionless, struck with terror. These occurrences occupied nearly two minutes; the trees, shaken in their foundation, kept falling here and there, and the whole surface of the country remained covered with holes, which, to compare small things with great, resembled so many craters of volcanoes, surrounded with a ring of carbonized wood and sand, which rose to a height of about seven feet. I had occasion, a few months after, to sound the depth of several of these holes, and found them not to exceed twenty feet; but I must remark the quicksand had washed into them [in Tunguska these were called “peculiar flat holes”]. The country here was formerly perfectly level, and covered with numerous small prairies of various sizes, dispersed through the woods. Now it is covered with slaches [ponds] and sand hills or monticules [subordinate volcanic cones]. (We have emphasized the words which are frequently missing from quotations.)

Similar observations of black smoke were made from far and near: a “dense black cloud of vapor overshadowed the land” after the severe shocks. The water of the river, after it was fairly light, appeared to be almost black, with something like the dust of stone coal. Besides the darkness observed in the area of principal disturbance, similar manifestations were recorded in other localities. For instance, at Columbia, Tenn., a very large volume of something like smoke was declared to have risen in the southwest, from which direction the sound appeared to have come...

Speaking of the New Madrid shock, Bringier said the following: Several authors have asserted that earthquakes proceed from volcanic causes, but although this may be often true, the earthquake alluded to here must have had another cause. Time perhaps will give us some better ideas as to the origin of these extraordinary phenomena. It is probable that they are produced in different instances by different causes, and that electricity is one of them...

The main difference between the events in Tunguska and New Madrid is that in the latter case the surfacing of a large fermion didn’t happen.

Conclusion

From the similarities of all the natural catastrophes discussed above, starting from Sodom and Gomorrah, it is obvious that the same physical reason is responsible for these events. Scientists have taken the first steps towards a mathematical model which in the future can be used to predict these kinds of events and earthquakes. See the article Gravitational body forces focus North American intraplate earthquakes:

Abstract. Earthquakes far from tectonic plate boundaries generally exploit ancient faults, but not all intraplate faults are equally active. The North American Great Plains exemplify such intraplate earthquake localization, with both natural and induced seismicity generally clustered in discrete zones. Here we use seismic velocity, gravity and topography to generate a 3D lithospheric density model of the region; subsequent finite-element modelling shows that seismicity focuses in regions of high-gravity-derived deviatoric stress. Furthermore, predicted principal stress directions generally align with those observed independently in earthquake moment tensors and borehole breakouts. Body forces therefore appear to control the state of stress and thus the location and style of intraplate earthquakes in the central United States, with no influence from mantle convection or crustal weakness necessary. These results show that mapping where gravitational body forces encourage seismicity is crucial to understanding and appraising intraplate seismic hazard.

Levandowski and colleagues used gravity data together with seismic images of the Earth’s interior to estimate the density of the crust and upper mantle across the central U.S. from the surface to 100 miles deep, and then calculated the pressures and stresses associated with this 3-D model. Their calculations show that the majority of natural earthquakes happen in areas with unusually high crustal pressures and tensions, and that these pressure variations appear to be the controlling factor in where natural seismicity occurs on the Great Plains. Pressure and tension are what determine the local energy density. The core claim of our theory is that changes of local energy density trigger fermions to emit or absorb, be they large or small. So the success of this model, for its part, proves our theory correct.
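To give a rough feeling for the kind of computation a density model makes possible, the sketch below evaluates lithostatic pressure from a layered one-dimensional density profile, P(z) = ∫ ρ(z′)g dz′. This is our own minimal illustration, not Levandowski's finite-element model; the layer thicknesses and densities are placeholder values only.

```python
# Minimal sketch: lithostatic pressure from a layered 1-D density model.
# The layer values below are illustrative, not from the actual study.

G = 9.81  # gravitational acceleration, m/s^2

# (thickness in m, density in kg/m^3): upper crust, lower crust, mantle lid
layers = [(20_000, 2700.0), (20_000, 2900.0), (120_000, 3300.0)]

def lithostatic_pressure(depth_m):
    """P(z): integrate rho*g over depth, layer by layer."""
    pressure, z = 0.0, 0.0
    for thickness, rho in layers:
        dz = min(thickness, max(0.0, depth_m - z))
        pressure += rho * G * dz
        z += thickness
    return pressure  # pascals

for depth_km in (10, 40, 100):
    print(f"{depth_km:>3} km: {lithostatic_pressure(depth_km * 1000) / 1e9:.2f} GPa")
```

Lateral variations of such pressures, computed point by point across the region, are the “gravity-derived deviatoric stresses” the abstract refers to.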

Life: Almost Impossible

Firstly, there must be a planet composed of a large variety of stable atoms and their aggregates. Secondly, there must be water, because it is the “universal solvent”; it dissolves more substances than any other liquid. Thirdly, there must be a stable source of radiation which keeps the temperature on the planet near the triple point of water. This requires a very thin atmosphere (low pressure). However, this atmosphere must be able to stop the radiation affecting DNA molecules. The source of radiation must be stable enough; it must not send out huge blasts of energy which would destroy the fragile seeds of life.

Scientists have listed other kinds of restrictions: “Planets like the earth, with large amounts of both water and land, are virtually impossible to form. Large planets do not form continents because the increased gravity prevents significant mountain and continent formation. Earth-sized planets completely flood, and any land formed is eroded by the seas in a short period (in the absence of tectonic activity).” The planet’s crust must be thin, because tectonic processes cannot happen with thick plates. We may add that the source of radiation, like our Sun, must rotate; otherwise the planet would soon spiral down to the source and vanish. This stable state of affairs should then last billions of years. It goes without saying that the above is virtually impossible. But, if the impossible happens, we get abiogenesis: “The theory that life can arise spontaneously from non-life molecules under proper conditions.”

From a Caltech Press Release: “Water really is everywhere. Two teams of astronomers, each led by scientists at the California Institute of Technology (Caltech), have discovered the largest and farthest reservoir of water ever detected in the universe. ….. the researchers have found a mass of water vapor that’s at least 140 trillion times that of all the water in the world’s oceans combined, and 100,000 times more massive than the sun.”

It seems that water is created quite easily in the Hagedorn-Tsallis process. Saturn has a ring of water of its own making. Enceladus has an active central fermion. This means ongoing H-T processes, and the entire surface of the moon is covered with ice. It is possible that Mother Earth once had a water ring of its own. The core fermion gradually diminished, and at some stage the ring collapsed onto the Earth. Or else the water simply condensed from the atmosphere. In any case, hot water dissolved all possible substances from the crust. This solution, after it cooled sufficiently, became the “primordial soup”.

The fundamental hallmark of life is self-replicating molecular systems, but how such systems may emerge simply cannot be explained by modern physics. Biologists are completely on their own, with no help from physics, and this is why cell biology is forced to be only a descriptive science.

In the Beginning

Universal force took over in the primordial sea. Its most delicate features became visible, including the effect of thickness of spectra, as was described in the section Quasicrystals. We quote from there: “All kinds of nucleation and aggregation processes, including the structures of biological life, hover near the threshold value β/α = φ, the golden ratio. This is because minimum energy structures often have a fractal geometry based on that ratio.”
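As a small numerical aside (our own illustration, not from the Quasicrystals section): the golden ratio has the continued fraction [1; 1, 1, 1, …], so its rational approximants, the ratios of consecutive Fibonacci numbers, converge more slowly than those of any other irrational number. In this sense φ is the number hardest to lock into a resonance, which is one standard way of seeing why it keeps appearing in minimum-energy, quasiperiodic structures.

```python
# Ratios of consecutive Fibonacci numbers approach phi, but only slowly:
# phi is the "most irrational" number in the continued-fraction sense.

phi = (1 + 5 ** 0.5) / 2

fib = [1, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])

for a, b in zip(fib[1:], fib[:-1]):
    print(f"{a:>3}/{b:<3} = {a / b:.6f}   error = {abs(a / b - phi):.2e}")
```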

Scientists have differing views of what molecules formed first (see, for instance, Protein-like structures from the primordial soup by Fabio Bergamin). But the views have one thing in common: the structures are said to be formed by molecular self-assembly, defined as “the process by which molecules adopt a defined arrangement without guidance or management from an outside source”, for which scientists have no further explanation. A microscopic sample of the primordial sea must have been much like what we see inside cells today, a region of moving molecules... but why do they move?

BIOLOGY

“The discovery that cells are filled with molecular motors is one of the major achievements of late 20th-century molecular biology.”

“Molecular motors are biological molecular machines that are the essential agents of movement in living organisms. In general terms, a motor may be defined as a device that consumes energy in one form and converts it into motion or mechanical work.”

“A common theme is that motor proteins may generate forces and vectorial motion by rectifying thermal fluctuations. In such “fluctuation ratchet” models, chemical energy does not produce force directly. Rather, the motor diffuses along its track (or some other position coordinate) by random walk, and the chemical reaction merely biases the walk so that steps in the forward direction are more probable than backward steps.”

“Most organisms have many different motors that are specialized for particular purposes such as cell division, cell crawling, maintaining cell shape, movements of internal organelles, etc. A large number of biological motors and motorlike proteins have been discovered and characterized in recent years.”

Cell biologists have been forced to discover these motors (which are clearly a case of deus ex machina), because modern physics is totally useless for molecular biology. It is more than obvious that today's physics lacks a force complex enough to explain the inner workings of the living cell.

From the book ESSENTIAL CELL BIOLOGY:

“Diffusion only works well for very short distances. To move molecules quickly over larger distances, cells need to rely on more active and directed methods of transport—processes that inevitably require an expenditure of cellular energy.”

“The cytoskeleton extends throughout the cytoplasm. This system of protein filaments is responsible for cell shape and movement and for the transport of organelles and molecules from one location to another in the cytoplasm.”

TRANSPORT PROTEIN: Carries small molecules or ions.
MOTOR PROTEIN: Generates movement in cells and tissues.
SIGNAL PROTEIN: Carries signals from cell to cell.
RECEPTOR PROTEIN: Detects signals and transmits them to the cell's response machinery.

The ways by which cells can move (under the heading cell motility) are various: a cell can crawl, glide, swim, and twitch. And for each of these types of motility, there are many different mechanisms that achieve the same end. Recent technological advances have allowed scientists to look at molecules and cells in much more detail. It is now possible to follow, at subcellular and even molecular levels, the spatiotemporal dynamics of molecules and processes inside cells.

“Adrenaline stimulates glycogen breakdown in skeletal muscle cells. The hormone activates a GPCR that turns on a G protein (Gs), which activates adenylyl cyclase, boosting the production of cyclic AMP. Cyclic AMP, in turn, activates PKA, which phosphorylates and activates an enzyme called phosphorylase kinase. This kinase activates glycogen phosphorylase, the enzyme that breaks down glycogen.”

Cell biology produces long chains of “cause and effect” expressed by the words “stimulate, activate, or boost”, but these are not causes and effects in the physical sense. By definition, molecular motors are biological molecular machines that are the essential agents of movement in living organisms.

In our view, real physical understanding requires that all of “cell motility” follows from the universal force.

Membrane Traffic

The living cell is not a closed system. A variety of molecules pierce the cell membrane all the time. It has been noticed that two traffic currents can be perceived in the cell, one towards the inner parts of the cell and the other outwards from there. This is called membrane traffic. The reason for this traffic is not known. But now we have the means to explain it: the same way of reasoning must be applied to the cell as we have used for a fermion. That is, background radiation from all directions pierces the cell and then disperses in all directions. The state of modulation of the outgoing radiation has changed through the effect of the inner structures of the cell. Due to this, some radiation causes forces when propagating inward, some interacts only when coming out.

When two molecules bind to each other in a cell, the direction of the force and the resulting motion depend on the spectrum of the combination. The spectrum of the combination is not the sum of the two initial spectra but a new one (thus showing nonlinearity). This newly born molecule has its own actions and interactions. As it moves, it affects its surroundings with modulated radiation, which in turn can cause, for example, the opening of the DNA chain. All binding forces are functions of position inside the cell, and moving molecules may later disengage from each other at some other location. The final result is three-dimensional traffic inside a cell. Without knowing the underlying cause, one inevitably gets the impression that there are molecules carrying cargo.
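The rule at work here can be caricatured in a few lines of code. The sketch below is our own toy illustration, not a quantitative model from the theory: a “spectrum” is reduced to a set of occupied frequency bins, overlap means repulsion, disjointness means attraction, and binding produces a genuinely new spectrum rather than the union of the old ones.

```python
# Toy caricature of the stated rule (not a quantitative model):
# overlapping "force spectra" repel, disjoint spectra attract, and a bound
# pair acquires a new spectrum of its own (the claimed nonlinearity).

def interaction_sign(spectrum_a: set, spectrum_b: set) -> int:
    """+1 = repulsion (spectra overlap), -1 = attraction (spectra disjoint)."""
    return +1 if spectrum_a & spectrum_b else -1

def combine(spectrum_a: set, spectrum_b: set) -> set:
    # The text stresses that the combined spectrum is NOT the sum of the
    # two initial spectra; here we fake that by shifting every bin by one.
    return {f + 1 for f in spectrum_a | spectrum_b}

mol_a, mol_b = {3, 5, 8}, {2, 7, 9}     # disjoint bins: attraction
print(interaction_sign(mol_a, mol_b))    # -1: they are pushed together
bound = combine(mol_a, mol_b)            # the "newly born molecule"
print(interaction_sign(bound, mol_a))    # +1: it now repels molecule a
```

A molecule that binds, acquires a new spectrum, and is thereby repelled from its former neighbours would indeed look, to an observer, like cargo being picked up and carried away.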

“Intracellular cargo transport requires microtubule-based motors, kinesin and cytoplasmic dynein, and the actin-based myosin motors....the mechanism by which molecular motors exchange cargo while traveling between filamentous tracks and deliver it to its destination when going from the cell center to the periphery and back again.”

A living cell is a dissipative system, far from equilibrium. The constituents of the cell move in certain directions because they are subject to Gauss's principle of least constraint,

$$Z = \sum_{k=1}^{N} m_k \left( \frac{d^{2}\mathbf{r}_k}{dt^{2}} - \frac{\mathbf{F}_k}{m_k} \right)^{2}, \qquad \mathbf{F}_k = \frac{1}{c} \oint_{4\pi} F_k(\mathbf{n})\, I_u(\mathbf{r}, \mathbf{s}, \sigma)\, d\Omega .$$

This principle constitutes the concept of “vis viva”, through the non-vanishing time derivative.
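For readers unfamiliar with Gauss's principle, the following standard textbook example (our own sketch, nothing cell-specific; the bead-on-a-wire setup is chosen purely for illustration) shows what minimizing the constraint means: the realized acceleration is the one closest, in the mass-weighted sense, to the impressed acceleration F/m while still satisfying the constraint.

```python
# Gauss's principle of least constraint for a bead on a circular wire:
# minimize Z(a) = m*|a - g|^2 subject to the acceleration-level constraint
# r.a = -|v|^2 (obtained by differentiating |r|^2 = const twice).

import numpy as np

g = np.array([0.0, -9.81])  # impressed (gravitational) acceleration

def constrained_acceleration(r, v):
    """Minimizer of Z via a Lagrange multiplier: a = g - lambda * r."""
    lam = (np.dot(r, g) + np.dot(v, v)) / np.dot(r, r)
    return g - lam * r

theta = 0.3
r = np.array([np.sin(theta), -np.cos(theta)])        # position on unit wire
v = 0.5 * np.array([np.cos(theta), np.sin(theta)])   # tangential velocity

a = constrained_acceleration(r, v)
print("a =", a)
print("constraint residual:", np.dot(r, a) + np.dot(v, v))  # ~ 0
```

In the picture proposed above, the forces entering the principle would themselves be the radiative forces F_k, which is what makes the cellular problem nonlinear and, in practice, intractable.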

A living cell is a collection of nonliving molecules, but in a living cell the time derivatives never vanish. If they vanish, the cell has died. Life is a process of nonlinear crystallization, and it continues as long as the living thing, whatever it is, stays alive.

This molecular motion is, in the present parlance of cell biology, called molecular self-assembly and trafficking. It is defined as “the process by which molecules adopt a defined arrangement without guidance or management from an outside source.” In reality, all motion during mitosis follows the path of least resistance, and so the Fibonacci structures emerge. The expression “self-assembly” belongs, of course, to the category of descriptive research, into which biologists are forced due to the non-existent support from modern physics. In our approach there is guidance, in the form of Gauss's principle inside the cell, and also management from an outside source, in the form of modulated radiation from the rest of the organism. This leads to differentiation of the cells.

Conclusion. Gauss's principle of least constraint is simple and clear. Every particle of the cell follows it all the time and doesn’t do anything else, except that it fine-tunes its energy (frequency) according to the local energy density. Applying Gauss’s principle in the case of cells is practically impossible, because the constraints in the equation are forces caused by beams of polarized and modulated bosons from all frequency bands and from all directions. The whole system is nonlinear; there are spectrum-dependent forces and force-dependent spectra in effect simultaneously. But, with the nonlinear Gauss principle of least constraint, a scientific explanation can be given for the functioning of a living cell.

Protein Folding

“At the heart of computational chemistry lies the elucidation of structures. Knowing the exact positions of atoms in a molecular system is the prerequisite for any further investigation. This is most pronounced in computational biochemistry, where the protein folding problem, i.e. finding the natural structure of a protein through computer simulations, is considered the “holy grail” of the field.”

The only things required to understand biological phenomena are knowledge of the true complexity of the electromagnetic force, and of the particle’s ability to adapt itself to all aspects of the electromagnetic field at its position. We remind the reader that the fundamental interaction is this: particles (or molecules) of identical spectra repel each other. If the spectra are non-overlapped, the particles (or molecules) are pushed towards each other. We also remind the reader that these spectra concern the force-causing radiation. They are much more complex, and at higher frequencies, than the optical emission spectra of the same molecules.

A good example is a chain of molecules, a polymer, which is made up of monomer units. The molecules within a monomer unit have different spectra, so they attempt to move towards each other. This force tries to bend the chain. Further along the chain, molecules of identical spectrum are again found. When the chain bends, identical molecules come to see each other; they repel, and the bending stops. The chain takes a fractal form called a self-avoiding random walk. If we now force a continuous spectrum onto the molecules of the chain by exposing it to a heat gradient, it folds together. Researchers of polymers refer to this phenomenon as the coil-globule transition. (It has also been called “protein folding, that tantalizing mystery at the core of biology.”)
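For the geometry, here is a minimal sketch of a self-avoiding random walk on a square lattice (our own illustration; the naive restart-on-trap scheme is the simplest possible generator, not what polymer researchers actually use):

```python
# Generate a short self-avoiding random walk on the 2-D square lattice.
# If the walk traps itself, start over; this is fine for short chains.

import random

def self_avoiding_walk(n_steps, max_tries=10_000):
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(max_tries):
        path, visited = [(0, 0)], {(0, 0)}
        while len(path) <= n_steps:
            x, y = path[-1]
            options = [(x + dx, y + dy) for dx, dy in moves
                       if (x + dx, y + dy) not in visited]
            if not options:          # the walk trapped itself: restart
                break
            site = random.choice(options)
            path.append(site)
            visited.add(site)
        else:
            return path
    raise RuntimeError("no self-avoiding walk found")

print(self_avoiding_walk(30))  # 31 lattice sites, none visited twice
```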

The spectrum transition from the self-avoiding state to the folded state is the same, and happens for the same reasons, as the transition between the optical emission spectra of gases and compressed gases. The molecules tune their overlapped spectra into non-overlapped spectra, because this is how the principle of local minimum energy realizes itself.

The DNA molecule is a polymer. It is said that “DNA is a molecule encoding the genetic instructions used in the development and functioning of all known living organisms”. But we see things differently:

The fundamental character of the DNA molecule is that it acts as a site of assembly (in analogy to a crystallizing center) for the system of organelles. There is no process of encoding/decoding going on in a dividing cell. Biological membranes are crucial to the fabrication process because they have a polarizing structure. If we watch the process of cytokinesis, we see chromosomes coming into sight and then disappearing again, because enclosing a polymer within a polarizing membrane is another way to regulate the spectra of the chain, and thereby control the DNA folding.

Chromosomes

What do we see when we “see the chromosomes”? We see molecules of some dye forming banding patterns. The dye molecules have bonded to some parts of the cylinder-shaped chromatid body but not to all parts. A pattern of alternating light and dark bands is produced, but the physical basis of the banding patterns is not understood. What we see is the visual manifestation of electronic band structure. The DNA folds because its spectrum has turned from discrete to continuous. During folding the polymer chain is subject to Gauss's principle of least constraint, and the final result is a system of forces of minimum energy. The more densely fermions are packed together and must share the same space, the more they must differ from each other in terms of their spectrum.


The constituents of condensed chromatin (D) can move relatively freely in the axial direction of the chromatid body. The principle of local minimum energy (= non-overlapped spectra) manifests itself as a continuous wave-like distribution of valence electron energy in that direction of free motion. The molecules of dye bind themselves to chromatin's valence electrons within a specific energy band.

Mitosis

In the beginning the helicase molecule slides along the DNA, separating its two strands. The double helix is unwound, and each strand of the original molecule acts as a template for the new DNA. As the DNA replication proceeds, the two new identical chains repel each other. They also immediately start acting as sites for crystallization. The forming structures of a cell have identical spectra; they repel and finally separate from each other. In other words: the cell has divided.

Cellular Differentiation

The zygote of a multicellular organism is a general-purpose cell. All the greatly differing cells which are needed in the organism originate from the zygote by multiple divisions into more and more specialized cells. At present, the general mechanisms underlying cell differentiation (which is also the mechanism underlying epigenesis and differentiation into species) are still widely unknown. We explain them as follows: all the events of the cell result from the radiation pressure caused by beams of bosons which arrive at the cell from its environment. Because the state of modulation of these boson beams is crucial, it is clear that the processes in the cell are affected by the modulating/polarizing structures around the cell, i.e. the other cells.

Differentiation is required for the following reason: a new cell is formed on the aggregate of other cells. After mitosis it is an exact copy of the parent cell, so the two must repel. To stay part of the organism and not be repelled to a distance, the spectrum of the new cell must become modified, or differentiated.

“Every growing organism; and every part of such a growing organism, has its own specific rate of growth, referred to this or that particular direction; and it is by the ratio between these rates in different directions that we must account for the external forms of all save certain very minute organisms. … It may sometimes be a very constant ratio, in which case the organism while growing in bulk suffers little or no perceptible change in form; ...and when the ratios tend to alter, then we have the phenomenon of morphological “development,” or steady and persistent alteration of form. …. The developing organism is very far from being homogeneous and isotropic...our coordinate systems may be no longer capable of strict mathematical analysis, they will still indicate graphically the relation of the new coordinate system to the old, and conversely will furnish us with some guidance as to the “law of growth,” or play of forces, by which the transformation has been effected.”

The excerpt and the picture above are from D'Arcy Thompson's monumental book “On Growth And Form”, first published in 1917. It detailed the many ways in which mathematics can illuminate our understanding of biological form. Thompson showed how differences in the forms of related animals could be described by means of relatively simple mathematical transformations.

Obviously the most common non-fatal genetic mutation must be the one that affects only the rate of cellular differentiation in a non-isotropic way, leading to metamorphosis, exactly as Thompson explains. By this type of mutation the entire organism morphs at one time. What drives the whole process of evolution is natural selection operating on the results of these random genetic mutations.

Contraction of a Muscle Cell

The present explanation is as follows: “Myosin is a motor that contracts our muscle cells. Myosin (brown) exists as little heads on a brushy fiber that spans the length of a muscle cell, and this fiber is connected to both ends of the muscle cell by a springy connector called titin. The myosin heads are stacked against two other pieces of fiber, called actin (black), that extend from either end of the cell.

When the chemical fuel ATP floods into the cell, the myosin heads tug at the two pieces of actin. The cumulative tugging of thousands of myosin motors pulls the two pieces of actin towards the middle of the cell, and the muscle cell is compressed.”

Cell biologists have discovered the structure of the muscle cell (above). It bears a great resemblance to the electromagnetic actuator below. If cell biologists were using the concepts of electromagnetic theory, they would examine the magnetic properties of muscle cell molecules. They would then determine the (ionic or electronic) current distribution in the cell during contraction.

If the magnetic fields turned out to be like those of the actuator, the cause of muscle cell contraction would be identified. It would be the most primitive electromagnetic effect. http://en.wikipedia.org/wiki/Sliding_filament_model
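For reference, the most primitive electromagnetic effect alluded to here is the attraction of parallel currents. A short sketch of the textbook formula F/L = μ0 I1 I2 / (2πd), with arbitrary illustrative values:

```python
# Force per unit length between two long parallel current filaments.
# Currents in the same direction attract: the elementary effect a
# sliding-filament-like geometry would exhibit if currents were present.

import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def force_per_length(i1_amps, i2_amps, d_metres):
    return MU0 * i1_amps * i2_amps / (2 * math.pi * d_metres)

print(force_per_length(1.0, 1.0, 1.0))     # 2e-7 N/m (classic ampere definition)
print(force_per_length(10.0, 10.0, 0.01))  # 2e-3 N/m at 1 cm spacing
```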

The Brain and Consciousness

A purely electromagnetic description of the basic brain function is almost trivially simple.

The technical equivalent is a cavity resonator. Inside the cavity there is a loop of wire. Sending an electric current through the loop results in a standing-wave pattern in the cavity. The pattern of the standing wave can be controlled by the shape and location of the loop. This is called excitation of fields. In this model the brain is the cavity, and the excitation loop corresponds to the nerves from the sensory organs.
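The analogy can be made concrete with the simplest possible resonator. The sketch below (our own illustration; the box dimensions are arbitrary) lists the lowest resonant frequencies of an ideal rectangular cavity, f_mnp = (c/2)·√((m/a)² + (n/b)² + (p/d)²). A real brain, as argued below, would be a dielectric fractal resonator with a vastly richer mode structure.

```python
# Lowest resonant modes of an ideal air-filled rectangular cavity.
# A valid cavity mode needs at least two nonzero indices.

import math
from itertools import product

C = 2.998e8                    # speed of light in vacuum, m/s
a, b, d = 0.10, 0.08, 0.06     # cavity dimensions, metres (arbitrary)

modes = []
for m, n, p in product(range(4), repeat=3):
    if [m, n, p].count(0) > 1:
        continue               # skip combinations with two or more zero indices
    f = (C / 2) * math.sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2)
    modes.append((f, (m, n, p)))

for f, mnp in sorted(modes)[:5]:
    print(f"mode {mnp}: {f / 1e9:.2f} GHz")
```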

The physical realization of the brain takes this model to its extreme. The simpler the form of a resonator, the more stable it is. The brain is a dielectric, fractal resonator. Therefore the standing wave pattern can be almost infinitely complex. The wave is formed by the positions, wave-like energy distributions, and orientations (spin waves) of more or less free electrons.

The essential feature of the brain resonator is that it is unstable. Thus we connect the pattern recognition abilities, and the whole concept of consciousness, of the brain to its function as an unstable fractal resonator. The momentary consciousness of an organism is a pattern of standing wave. The streams of impulses from sensory organs regulate the pattern all the time. When a sensation excites a certain wave pattern in the resonator, it serves as a seed for a series of momentary patterns. At least in the human being, the forming of new patterns doesn’t stop even for a moment; we call this a “stream of consciousness” or a “train of thought”. This is a manifestation of the instability of the brain resonator.

Memory, Learning and Pattern Recognition are Closely Connected

If a certain wave pattern is repeatedly set up in the brain (e.g. by seeing an object repeatedly), it becomes a memory. It is a region of the brain in which the spins of electrons become more or less permanently oriented in an orderly manner; the brain learns. The view becomes familiar. If, after the learning process, the brain receives even an incomplete view or sense of the same object, it will excite the same wave pattern as the original view did. This is manifested when we say: “The view / voice / smell brought to my mind…”.

The wave pattern, i.e. the momentary consciousness of an animal, is determined by three different factors:

1. Inherited structures of the brain.
2. Learned properties, the development of the original resonator during the lifetime of the animal.
3. The electric impulses arriving from the sensory organs.

Contrary to current belief, the information transmitted by the nerve cells is not “processed” in the brain. It directly regulates the pattern of a bewilderingly complex standing wave.

If the brain is not capable of learning or remembering (no “memory molecules”), the wave patterns in the brain resonator are determined only by inherited properties and sensations from the environment. This results in behavior we call instinctive. Insects do not need to learn how to walk or fly, “it is in the genes”.

The state of human consciousness may be altered by magnetic stimulation of the cortex. This is achieved by transcranial magnetic stimulation, or TMS. It produces functional brain lesions, small regions of cortex that are temporarily paralyzed. TMS also produces visual or audible phantom sensations, it may affect the person’s speech, it may cause movements of limbs… All this is very understandable if we recognize consciousness as an unstable electromagnetic standing field.

The brain field of human beings is so complicated that it often operates at the edge of chaos and coherence. Due to this, totally new wave patterns and streams of consciousness are easily created. This is manifested in the ability to innovate, the ability to solve problems. There is a delicate demarcation line between high creative skills and mental disorders; they are frequently paired. The distinction between the two states is determined by the degree of instability of the brain resonator. The brain of an insect is a system which is far from the edge of chaos and coherence. The function of such a brain is always stereotypical. No new wave patterns appear, no new “thoughts”, except when a mutation takes place.

The behavior of an animal depends merely on the wave patterns of the brain. Thus we know of cases in which a human being has several whole personalities, which can switch from one to another in a second. In such a case the wave patterns change at one time in a large section of the cortex. The result is a new stream of consciousness, a new personality.

All resonators tend to gravitate from higher modes to lower modes of oscillation. In other words, they prefer wider and more comprehensive wave patterns. This is equally true for short-lived particles in accelerator experiments and for the brain resonator. In conclusion, to take our physicalistic view to its extreme, we argue that in the human brain this tendency towards less fractured wave patterns manifests itself in the urge to explain. Religious and scientific explanations aim at the same goal: enlightenment and peace of mind. The goal is achieved when all the essential questions have been answered in a coherent way. The standing wave pattern in the brain resonator has then reached its simplest possible form. Then the mind ceases to ask more questions.

Jouko Rautio, B.S.E.E. 2018-01-12. This work is licensed under a Creative Commons Attribution 4.0 International License.
