

Physica A 348 (2005) 505–543 www.elsevier.com/locate/physa

Econophysics: from and to Quantum Mechanics

Edward Jimenez a,b,c,*, Douglas Moya d

a Experimental Economics, Todo1 Services Inc., Miami, FL 33126, USA
b GATE, UMR 5824 CNRS, France
c Research and Development Department, Petroecuador, Paul Rivet E11-134 y 6 de Diciembre, Quito, Ecuador
d Physics Department, Escuela Politecnica Nacional, Ladrón de Guevara E11-253, Quito, Ecuador

Received 11 March 2004; received in revised form 20 September 2004; available online 27 October 2004

Abstract

Rationality is the universal invariant shared by human behavior, the physical laws of the universe, and ordered, complex biological systems. Econophysics is both the use of physical concepts in Finance and Economics, and the use of economic concepts in Physics. In particular, we will show that it is possible to obtain the Quantum Mechanics principles using Information Theory and Game Theory.
© 2004 Elsevier B.V. All rights reserved.

PACS: 02.30.Cj; 02.50.Le; 03.65.Ta

Keywords: Econophysics; Optimal laws; Quantum mechanics laws; Energy; Information

1. Introduction

At the moment, Information Theory is studied or utilized from multiple perspectives (Economics, Game Theory, Physics, Mathematics, and Computer Science). Our goal

* Corresponding author. Research and Development Department, Petroecuador, Quito, Ecuador; Todo1 Services, Miami, USA.
E-mail address: [email protected] (E. Jimenez).


in this paper is to present the different approaches to information, to select the main ones, and to apply them to the Economics of Information and Game Theory.

Economics and Game Theory1 are interested in the use of information and the ordered state, in order to maximize the utility functions of the rational2 and intelligent3 players that are parties to a conflict. The players gather and process information. The information can be perfect4 and complete5; see Refs. [1–4] and Sorin (1992). On the other hand, Mathematics, Physics and Computer Science are all interested in information representation, entropy (a measure of disorder), the optimality of physical laws and the internal order of living beings; see Refs. [5–7]. Finally, information is stored, transmitted and processed by physical means. Thus, the concepts of information and computation can be formulated not only in the context of Economics, Game Theory and Mathematics, but also in the context of physical theory. Therefore, the study of information ultimately requires experimentation and multidisciplinary approaches such as the introduction of the Optimality Concept [8].

The Optimality Concept is the essence of the economic and natural sciences [9,10]. Economics introduces the optimality concept (maximum utility and minimum cost) as the equivalent of rationality, and Physics understands the minimum action principle and maximum entropy (maximum information) as the explanation of the laws of nature [11,36]. If the two sciences have a common backbone, then they should allow certain analogies and share other elements such as: equilibrium conditions, evolution, measurement and the entropy concept. In this paper, the contributions of Physics (Quantum Information Theory)6 and Mathematics (Classical Information Theory)7 are used in Game Theory and Economics, making it possible to explain mixed

1 Game Theory can be defined as the study of mathematical models of conflict and cooperation among intelligent, rational decision-makers.
2 A decision-maker is rational if he makes decisions in pursuit of his own objectives. It is normal to assume that each player's objective is to maximize the expected utility of his own payoff.
3 A player in the game is intelligent if he knows everything that we know about the game and can make any inference about the situation that we can make.
4 Perfect information means that at each time only one of the players moves, that the game depends only on their choices, that they remember the past information (strategies), and that in principle they know all possible futures of the game [1].
5 In games of incomplete information the state of nature is fixed but not known to all players. In repeated games of incomplete information, what changes in time is each player's knowledge about the other players' past actions, which affects his beliefs about the (fixed) state of nature. Games of incomplete information are usually classified according to the nature of the three important elements of the model, namely players and payoffs (within two-person games: zero-sum games and non-zero-sum games), prior information on the state of nature (within two-person games: incomplete information on one side and incomplete information on two sides), and signaling structure (full monitoring and state-independent signals) [2,3,37].
6 Quantum Information Theory is fundamentally richer than Classical Information Theory, because quantum mechanics includes many more elementary classes of static and dynamic resources: not only does it support all the familiar classical types, but there are entirely new classes, such as the static resource of entanglement (correlated equilibria), which make life even more interesting than it is classically.
7 Classical Information Theory is mostly concerned with the problems of sending classical information (letters in an alphabet, speech, strings of bits) over communication channels which operate within the laws of classical physics.

strategy Nash equilibria using Shannon's entropy [12–15]. According to [16, p. 11], ''quantities of the form $H = -\sum p_i \log p_i$ play a central role in information theory as measures of information, choice and uncertainty. The form of H will be recognized as that of entropy as defined in certain formulations of statistical mechanics, where $p_i$ is the probability of a system being in cell i of its phase space; ...''

In Quantum Information Theory, correlated equilibria in two-player games mean that the associated probabilities of each player's strategies are functions of a correlation matrix. Entanglement which, according to the Austrian physicist Erwin Schrödinger, is the essence of Quantum Mechanics, has long been known to be the source of a number of paradoxical and counterintuitive phenomena. Of those, the most remarkable one is the so-called non-locality, which is at the heart of the Einstein–Podolsky–Rosen (EPR) paradox; see Ref. [17, p. 12]. Einstein et al. [18] considered a quantum system consisting of two particles separated by a long distance. ''EPR suggests that measurement on particle 1 cannot have any actual influence on particle 2 (locality condition); thus the property of particle 2 must be independent of the measurement performed on particle 1.'' The experiments verified that the two particles in the EPR case are always part of one quantum system, and thus measurement on one particle changes the possible predictions that can be made for the whole system and therefore for the other particle [5].

It is evident that physical and mathematical theories have been of great utility for the economic sciences, but it is necessary to highlight that Information Theory and Economics also contribute to the explanation of Quantum Mechanics laws. Will the strict incorporation of Classical and Quantum Information Theory elements allow the development of Economics and Game Theory? The definitive answer is yes. Economics has carried out its own developments around information theory; in particular it has demonstrated both that the asymmetry of information causes errors in the definition of an optimal negotiation and that the assumption of perfect markets is untenable in the presence of asymmetric information; see Refs. [19,20]. The asymmetry of information, according to the formalism of Game Theory, can have two causes: incomplete information and imperfect information. As we will see in the development of this paper, Information Economics has not yet formally incorporated either elements of Classical Information Theory or Quantum Information concepts.

The creators of Information Theory are Shannon and von Neumann; see Ref. [15, Chapter 11]. Shannon, the creator of Classical Information Theory, introduces entropy as the heart of his theory, endowing it with probabilistic characteristics. On

(footnote continued)
''The key concept of Classical Information Theory is the Shannon Entropy. Suppose we learn the value of a random variable X. The Shannon Entropy of X quantifies how much information we gain, on average, when we learn the value of X. An alternative view is that the entropy of X measures the amount of uncertainty about X before we learn its value. These two views are complementary; we can view the entropy either as a measure of our uncertainty before we learn the value of X, or as a measure of how much information we have gained after we learn the value of X.'' [15, p. 500].

the other hand, von Neumann, also a creator of Game Theory, uses the probabilistic elements taken into account by Shannon but defines a new mathematical formulation of entropy using the density matrix of Quantum Mechanics. Both entropy formulations, developed by Shannon and von Neumann, respectively, permit us to model pure states (strategies) and mixed states (mixed strategies).

Eisert et al. [21,35] not only give a physical model of quantum strategies but also express the idea of identifying moves with quantum operations and quantum properties. This approach appears to be fruitful in at least two ways. On the one hand, several recently proposed applications of quantum information theory can already be conceived as competitive situations, where several parties with opposing motives interact. These parties may apply quantum operations using a bipartite quantum system. On the other hand, generalizing towards the domain of quantum probabilities seems interesting, as the roots of game theory are partly in probability theory. In this context, it is of interest to investigate what solutions are attainable if superpositions of strategies are allowed [22–24].

As we have seen, from a historical perspective we can affirm that the advances of Game Theory and Information Theory are related to Quantum Mechanics, especially regarding nanotechnology. Therefore, it is necessary to use Quantum Mechanics for five reasons:

- The origin of quantum information and its potential applications (encryption, quantum networks and error correction) has awakened great interest, especially in the scientific community of physicists, mathematicians and computer scientists.
- Quantum Game Theory is the first proposal to unify Game Theory and Quantum Mechanics, with the objective of finding synergies between both.
- In this paper we present an immediate result, a product of using these synergies (the possibility theorem). The possibility theorem allows us to introduce the concept of rationality in time series.
- A perfect analogy exists between correlated equilibria, which fall inside the domain of Game Theory, and entanglement, which falls inside the domain of Quantum Information; see Refs. [25,15].
- The raison d'être of this paper is to demonstrate theoretically and practically that Information Theory and Game Theory permit us to obtain the Quantum Mechanics Principles.

This paper is organized as follows. Section 1 reviews the existing literature. In Section 2, we show the main theorems of quantum games. Section 3 is the core of this paper; here we present the Quantum Mechanics Principles as a consequence of the maximum entropy and minimum action principles. In Section 4 we present the conclusions of this research.
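As a purely illustrative sketch (not part of the original paper), the two entropy formulations mentioned above can be compared numerically; the function names and example states below are our own choices, not the authors':

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H = -sum_i p_i log2 p_i of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 log 0 is taken as 0
    return -np.sum(p * np.log2(p))

def von_neumann_entropy(rho):
    """von Neumann entropy S = -Tr(rho log2 rho) of a density matrix."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log2(w))

p = [0.5, 0.5]                          # a mixed strategy of one player
rho_mixed = np.diag(p)                  # classical mixture as a density matrix
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus)         # a pure superposition state

print(shannon_entropy(p))               # 1.0 bit
print(von_neumann_entropy(rho_mixed))   # 1.0 bit (agrees with Shannon)
print(von_neumann_entropy(rho_pure))    # ~0 bits (pure state)
```

The diagonal density matrix reproduces Shannon's value exactly, while a pure superposition state has zero von Neumann entropy, which is the distinction between pure and mixed states used above.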

2. Elements of Quantum Game Theory

Let $G = (K, S, v)$ be an $n$-player game, with $K$ the set of players $k = 1, \ldots, n$. The finite set $S_k$ of cardinality $l_k \in \mathbb{N}$ is the set of pure strategies of each player


where $k \in K$, $s_{k j_k} \in S_k$, $j_k = 1, \ldots, l_k$, and $S = \prod_K S_k$ is the set of pure strategy profiles, with $s \in S$ an element of that set and $l = l_1, l_2, \ldots, l_n$ the cardinalities of $S$ [26,4,9]. Furthermore, in physics the word strategy represents a state of the system. The vector function $v : S \to \mathbb{R}^n$ associates with every profile $s \in S$ the vector of utilities $v(s) = (v^1(s), \ldots, v^n(s))^{\mathrm{T}}$, where $v^k(s)$ designates the utility of player $k$ facing the profile $s$. For ease of calculation we write the function $v^k(s)$ in an explicit way: $v^k(s) = v^k(j_1, j_2, \ldots, j_n)$. The matrix $v_{n,l}$ represents all points of the Cartesian product $\prod_{k \in K} S_k$; the vector $v^k(s)$ is the $k$th column of $v$. If mixed strategies are allowed, then we have
$$\Delta(S_k) = \left\{ p^k \in \mathbb{R}^{l_k} : \sum_{j_k=1}^{l_k} p^k_{j_k} = 1 \right\}$$
the unit simplex of the mixed strategies of player $k \in K$, where $p^k = (p^k_{j_k})$ is the probability vector. The set of profiles in mixed strategies is the polyhedron $\Delta$ with $\Delta = \prod_{k \in K} \Delta(S_k)$, where $p = (p^1, p^2, \ldots, p^n)$ and $p^k = (p^k_1, p^k_2, \ldots, p^k_{l_k})^{\mathrm{T}}$. Using the Kronecker product $\otimes$ it is possible to write:8

$$p = p^1 \otimes p^2 \otimes \cdots \otimes p^{k-1} \otimes p^k \otimes p^{k+1} \otimes \cdots \otimes p^n\,,$$
$$p^{(-k)} = p^1 \otimes p^2 \otimes \cdots \otimes p^{k-1} \otimes l^k \otimes p^{k+1} \otimes \cdots \otimes p^n\,,$$
$$l^k = (1, 1, \ldots, 1)^{\mathrm{T}}_{[l_k, 1]}\,, \qquad o^k = (0, 0, \ldots, 0)^{\mathrm{T}}_{[l_k, 1]}\,.$$
The $n$-dimensional function $u : \Delta \to \mathbb{R}^n$ associates with every profile in mixed strategies the vector of expected utilities $u(p) = (u^1(p, v(s)), \ldots, u^n(p, v(s)))^{\mathrm{T}}$, where $u^k(p, v(s))$ is the expected utility of each player $k$. Every $u^k_{j_k} = u^k_{j_k}(p^{(-k)}, v(s))$ represents the expected utility of each player's strategy, and the vector $u^k$ is written $u^k = (u^k_1, u^k_2, \ldots, u^k_{l_k})^{\mathrm{T}}$:

$$u^k = \sum_{j_k=1}^{l_k} u^k_{j_k}(p^{(-k)}, v(s))\, p^k_{j_k}\,, \qquad u = v' p\,, \qquad u^k = (l^k \otimes v^k)\, p^{(-k)}\,.$$
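For concreteness, here is a small numerical sketch (ours, with hypothetical payoffs) of how the Kronecker product assembles the profile probabilities and the expected utilities for a two-player game:

```python
import numpy as np

# Hypothetical payoff matrix of player 1: v1[j1, j2].
v1 = np.array([[3.0, 0.0],
               [1.0, 2.0]])

# Mixed strategies of the two players.
p1 = np.array([0.25, 0.75])
p2 = np.array([0.60, 0.40])

# Profile probabilities via the Kronecker product: p = p1 (x) p2.
p = np.kron(p1, p2)

# Expected utility of player 1 summed over all profiles s,
# which equals the bilinear form p1' v1 p2.
u1_from_profiles = v1.flatten() @ p
u1_bilinear = p1 @ v1 @ p2
print(u1_from_profiles, u1_bilinear)   # identical (1.5)

# Strategy-level expected utilities u1_{j1} = sum_{j2} v1[j1, j2] p2[j2],
# i.e. the utilities of each pure strategy against the opponent's mixture.
print(v1 @ p2)
```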

The triplet $(K, \Delta, u(p))$ designates the extension of the game $G$ to mixed strategies. We get a Nash equilibrium (the maximization of utility) if and only if, for all $k$ and all $p^k$, the inequality $u^k(p^*) \geq u^k(p^k, p^{*(-k)})$ is respected. Moreover, we can write the original problem as a decision problem in which $u^k(p^*) \geq u^k(p^k, p^{*(-k)})$. Another way to calculate the Nash equilibrium [27, 9, pp. 96–104] is by equating the values of the expected utilities of each strategy when it is

8 We use bold fonts in order to represent vectors and matrices.

possible:

$$u^k_1(p^{(-k)}, v(s)) = u^k_2(p^{(-k)}, v(s)) = \cdots = u^k_{l_k}(p^{(-k)}, v(s))\,,$$

$$\sum_{j_k=1}^{l_k} p^k_{j_k} = 1 \quad \forall k = 1, \ldots, n\,,$$
$$\sigma_k^2 = \sum_{j_k=1}^{l_k} \left( u^k_{j_k}(p^{(-k)}, v(s)) - u^k \right)^2 p^k_{j_k} = 0\,.$$
If the resulting system of equations does not have a solution $(p^{(-k)})^*$, then we propose the minimum entropy theorem. This method is expressed as $\mathrm{Min}_p\left( \sum_k H_k(p) \right)$, where $\sigma_k^2(p^*)$ is the standard deviation and $H_k(p^*)$ the entropy of each player $k$:
$$\sigma_k^2(p^*) \leq \sigma_k^2\!\left((p^k)^*, p^{(-k)}\right)$$
or

$$H_k(p^*) \leq H_k\!\left((p^k)^*, p^{(-k)}\right).$$

Theorem 1 (Minimum entropy theorem). The game entropy is minimum only in the mixed strategy Nash equilibrium. The entropy minimization program $\mathrm{Min}_p\left(\sum_k H_k(p)\right)$ is equal to the standard deviation minimization program $\mathrm{Min}_p\left(\sum_k \sigma_k(p)\right)$ when $(u^k_{j_k})$ has a Gaussian density function or multinomial logit; see the proof in Ref. [14].

Theorem 2 (Minimum dispersion). The Gaussian density function $f(x) = |\phi(x)|^2$ represents the solution of the equation
$$\left( \frac{x - \langle x \rangle}{2\sigma_x^2} + \frac{\partial}{\partial x} \right)\phi(x) = 0$$
related to the minimum dispersion of a linear combination of the variable $x$ and the Hermitian operator $\left(-i\frac{\partial}{\partial x}\right)$; see the proof in Ref. [14].
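As a minimal numerical sketch (ours, not from the paper), the equal-expected-utility conditions above can be solved directly for a hypothetical 2x2 bimatrix game; the resulting fully mixed profile is the point at which the minimum entropy theorem would be evaluated:

```python
import numpy as np

# Hypothetical 2x2 bimatrix game: v1[i, j], v2[i, j] are the utilities of
# players 1 and 2 when player 1 plays row i and player 2 plays column j.
v1 = np.array([[2.0, 0.0],
               [0.0, 1.0]])
v2 = np.array([[1.0, 0.0],
               [0.0, 2.0]])

def entropy(p):
    """Shannon entropy -sum p ln p of a probability vector."""
    p = np.asarray(p)
    return -np.sum(p[p > 0] * np.log(p[p > 0]))

# Equal-expected-utility conditions for a fully mixed equilibrium:
# player 1's row probability x makes player 2 indifferent between columns,
x = (v2[1, 1] - v2[1, 0]) / (v2[0, 0] - v2[0, 1] - v2[1, 0] + v2[1, 1])
# and player 2's column probability y makes player 1 indifferent between rows.
y = (v1[1, 1] - v1[0, 1]) / (v1[0, 0] - v1[0, 1] - v1[1, 0] + v1[1, 1])

p1, p2 = np.array([x, 1 - x]), np.array([y, 1 - y])
print("p1* =", p1, "p2* =", p2)                       # (2/3, 1/3) and (1/3, 2/3)
print("sum of entropies at equilibrium:", entropy(p1) + entropy(p2))
```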

3. Information Theory towards Quantum Mechanics

In this section, a presentation of Information Theory is made. Moreover, we study how this subject, along with the variational principles of Theoretical Mechanics, can be used as a frame to construct the Quantum Mechanics Principles. Also, a relation of equivalence between the variation of information and mass–energy is found.

3.1. Information Theory elements

Let $n$ be the length of a binary word with symbols $w_i \in \{1, 0\}$. The total number of words which can be built with $n$ symbols is $2^n$. Each word can be an instruction, a number, an alphanumeric character, and so on. Therefore, the number $2^n$ indicates the total amount of information, which is:
$$N = 2^n\,; \quad (1)$$

$n$ can be a very large number, so we choose (following Shannon [16]) to count logarithmically. We define the amount of information:

$$I_i = -\log_2 P_i\,, \quad (2)$$
where $P_i$ is the probability of finding the word $i$. The mean information is defined as
$$I_m = -\sum_i P_i \log_2 P_i$$
or equivalently
$$I_m = -\sum_i P_i \ln P_i\,. \quad (3)$$
Now, our purpose is to find the maximum amount of information, which implies that:

$$\mathrm{d}I_m = 0 \quad (4)$$
with the constraint
$$\sum_i P_i = 1\,. \quad (5)$$
In order to maximize Eq. (3), we use Lagrange multipliers:
$$\mathrm{d}I_m = -\sum_i [\ln P_i + 1]\,\mathrm{d}P_i = 0\,, \qquad \lambda \sum_i \mathrm{d}P_i = 0\,. \quad (6)$$
Using Eq. (6), we can obtain
$$\sum_i [\ln P_i + 1 + \lambda]\,\mathrm{d}P_i = 0\,, \quad (7)$$
whose solution is
$$P_i = e^{-(1+\lambda)}\,, \quad (8)$$
i.e., each word has the same probability of being chosen. A process of this type is called a random process. Such a process allows information to be saved or loaded in an efficient or optimal manner; this is why computers use random access memories (RAM). Considering
$$\sum_i P_i = \sum_i e^{-(1+\lambda)} = 2^n e^{-(1+\lambda)} = 1\,,$$
we have
$$P_i = \frac{1}{2^n} \quad (9)$$
and
$$\lambda = n \ln 2 - 1\,. \quad (10)$$


Using logarithms of base 2, the amount of information is
$$I = -\log_2 P_i = -\log_2 \frac{1}{2^n} = n \text{ bits}\,. \quad (11)$$
Consequently, the mean information is
$$I_m = -\sum_i P_i \log_2 P_i \quad (12)$$
and when the process is completely random we can write
$$I_m = -\sum_i \frac{1}{2^n} \log_2 \frac{1}{2^n} = \sum_i \frac{n}{2^n} = \frac{n\,2^n}{2^n} = n \text{ bits}\,. \quad (13)$$
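A short sketch (ours, not from the paper) checks numerically that the uniform distribution obtained above is indeed the one that maximizes the mean information; the word length and random perturbations are arbitrary choices:

```python
import numpy as np

def mean_information(P):
    """I_m = -sum_i P_i log2 P_i, in bits."""
    P = np.asarray(P)
    return -np.sum(P[P > 0] * np.log2(P[P > 0]))

n = 3                        # word length in bits
N = 2 ** n                   # number of distinct words
uniform = np.full(N, 1.0 / N)
print(mean_information(uniform))            # exactly n = 3 bits, Eq. (13)

rng = np.random.default_rng(0)
for _ in range(3):
    # any non-uniform distribution over the 2^n words carries less information
    P = rng.dirichlet(np.ones(N))
    print(mean_information(P) <= n)         # True
```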

Example 1. Let us consider a two-state system

$$I_m = -P \log_2 P - (1 - P)\log_2(1 - P)\,. \quad (14)$$
The maximum is obtained when $P = \frac{1}{2}$ and $I_m = 1$ bit; see Fig. 1. If we have a multiplet of spins, then it has a degeneracy of $2s + 1$. We can form words of $2s + 1$ symbols; therefore, the total number of words must be
$$N = 2^{(2s+1)} \quad (15)$$
and the amount of information will be $2s + 1$ bits.


Fig. 1. Behavior of a two-state system described by Eq. (14).


Because of the Mixed State Spin Theory, the spin would naturally fluctuate randomly. In a quantum computer, we would therefore not have to program complex random number generators (which are in fact quasi-random algorithms).

3.2. Measurement of physical magnitudes

Let us consider an experiment in which we obtain data $u_i$ for $i = 1, 2, \ldots, n$. The mean value of $u_i$ is given by
$$\bar{u} = \sum_i u_i P_i\,. \quad (16)$$

We are interested in the error. Evidently, $\Delta u_i = u_i - \bar{u}$ has mean value
$$\overline{\Delta u_i} = \sum_i (u_i - \bar{u}) P_i = 0\,,$$
which is not interesting at all. A valuable and interesting definition, in this case, is the quadratic mean value
$$\overline{\Delta u_i^2} = \sum_i (u_i - \bar{u})^2 P_i \geq 0\,, \quad (17)$$
which is greater than or at least equal to zero. Now we face the problem of how to measure a physical parameter exactly, when we want to have the minimum possible quadratic error (minimum standard deviation, the minimum dispersion theorem) and maximum information (maximum average). The maximum average is in accordance with von Neumann's formalization of Game Theory; see Ref. [26]. We have to maximize $I_m$ under the constraints $\sum_i P_i = 1$ and $\overline{\Delta u_i^2}$ fixed. The function $\overline{\Delta u_i^2}$ must take its minimum value according to the minimum dispersion theorem.

Theorem 3. The Gaussian density function permits us both to maximize $I_m$ and to minimize $\overline{\Delta u_i^2}$.

Proof. Consider the following maximization program:
$$\max_{P_i} I_m = \max_{P_i}\left( -\sum_i P_i \ln P_i \right),$$
subject to
$$1 = \sum_i P_i$$
and
$$\overline{\Delta u^2} = \sum_i \Delta u_i^2\, P_i\,,$$

evaluating and setting the first derivatives to zero in order to find the maximum:
$$\mathrm{d}I_m = -\sum_i (\ln P_i + 1)\,\mathrm{d}P_i = 0\,, \qquad \alpha \sum_i \mathrm{d}P_i = 0\,, \qquad \beta \sum_i \Delta u_i^2\,\mathrm{d}P_i = 0\,, \quad (18)$$
which brings us to
$$\mathrm{d}I_m = -\sum_i (\ln P_i + 1 + \alpha + \beta \Delta u_i^2)\,\mathrm{d}P_i = 0 \quad (19)$$
and it has the solution

$$P_i = A e^{-\beta \Delta u^2}\,. \quad (20)$$
In order to determine the constants $A$ and $\beta$, let us consider the following conditions in the continuous limit:
$$\int_{-\infty}^{\infty} A e^{-\beta \Delta u^2}\,\mathrm{d}\Delta u = 1\,, \qquad \int_{-\infty}^{\infty} \Delta u^2\, A e^{-\beta \Delta u^2}\,\mathrm{d}\Delta u = \sigma_u^2\,. \quad (21)$$
Integrating, we find

$$P = \frac{1}{\sqrt{2\pi}\,\sigma_u}\, e^{-\frac{\Delta u^2}{2\sigma_u^2}}\,. \quad (22)$$
Eq. (22) is a Gaussian density function, which also satisfies the minimum dispersion theorem; see Ref. [14]. □
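The content of Theorem 3 can be checked numerically: at a fixed variance, the Gaussian density carries more differential entropy than other common densities. The following sketch is ours (the comparison densities and the grid are arbitrary choices, not from the paper):

```python
import numpy as np

sigma = 1.0                               # common standard deviation
x = np.linspace(-20, 20, 400_001)
dx = x[1] - x[0]

def entropy(p):
    """Differential entropy -integral of p ln p, approximated by a Riemann sum."""
    p = np.where(p > 0, p, 1e-300)
    return -(p * np.log(p)).sum() * dx

gauss = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
a = np.sqrt(3) * sigma                    # uniform on [-a, a] has variance a^2/3 = sigma^2
uniform = np.where(np.abs(x) <= a, 1 / (2 * a), 0.0)
b = sigma / np.sqrt(2)                    # Laplace with scale b has variance 2 b^2 = sigma^2
laplace = np.exp(-np.abs(x) / b) / (2 * b)

for name, p in [("gaussian", gauss), ("uniform", uniform), ("laplace", laplace)]:
    print(name, entropy(p))
# The Gaussian gives the largest entropy at fixed variance, as Theorem 3 states.
```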

3.3. Information Theory and uncertainty relations

The action is the most important magnitude in Physics. It satisfies Hamilton's principle; see Ref. [28, p. 38]:
$$\delta S = 0\,. \quad (23)$$
This equation deserves additional analysis. Let $E$ be a physical state of a particular system,

$$E = (q_i, \dot{q}_i, t)\,, \quad (24)$$
where $q_i$ are the generalized coordinates of a reference system, $\dot{q}_i$ the generalized velocities, $t$ the instant at which they are determined, and $i$ runs over all generalized coordinates. Let us define the configuration space $C = \{q_i\}$. A point in that space represents the system's physical configuration. Considering time evolution, we observe that this point describes a trajectory in the configuration space. If we fix instants $t_1$ and $t_2$,

with $t_2 > t_1$, between $q_i(t_1)$ and $q_i(t_2)$, then the system's time evolution describes its history during $[t_1, t_2]$. Let us define a dynamic function of the state of the system,

$$L = L(q_i, \dot{q}_i, t)\,. \quad (25)$$
The Lagrangian $L$ is different for each possible trajectory. Let us postulate that knowing this function allows us to know the set of differential equations that describes the system's real history, because using the ''virtual work method'' we can build $L$. It is clear that for every trajectory between the points $A(q_i(t_1))$ and $B(q_i(t_2))$ in the configuration space, the action integral has the form
$$S = \int_{t_1}^{t_2} L(q_i, \dot{q}_i, t)\,\mathrm{d}t\,, \quad (26)$$
which takes a different numerical value depending on the chosen trajectory. $S$ is called the system's action. What we know is only the initial configuration $A$ and the final configuration $B$ of the system. Therefore, there exists an infinite number of trajectories that begin at $A$ and reach $B$, but only one of them is real. The question is how to find it. Let us suppose that the real trajectory has an action $S$. Any other trajectory will be a virtual one, even if it is infinitesimally close. Such trajectories have an action $S + \delta S$. In this way, the real trajectory satisfies
$$\delta S = 0\,. \quad (27)$$
Eq. (27) represents an optimum in the functional space of all trajectories, and it is also stationary because $t_2$ and $t_1$ are fixed numbers and the $\delta$ operator is time independent. This leads us to deduce that $L(q_i, \dot{q}_i, t)$ satisfies the Euler–Lagrange equation
$$\frac{\partial L}{\partial q_i} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}_i} = 0\,, \quad (28)$$
where it can be shown (see Ref. [28]) that
$$L = T - U\,, \quad (29)$$
with $T$ the kinetic energy and $U$ the potential energy. When the system is non-conservative, Eq. (28) takes the form
$$\frac{\partial L}{\partial q_i} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}_i} = \frac{\partial F}{\partial \dot{q}_i}\,, \quad (30)$$
where $F$ is the Rayleigh dissipation function. Eq. (30) can be deduced from Eq. (28) by considering the universe as a whole. In fact, total energy is conserved in the universe. Furthermore, we can consider a subsystem described by a Lagrangian $L$. Let $L_{AB}$ be the Lagrangian of interaction between the borders of the system, and let $L_0$ be the Lagrangian of the rest of the universe. Because the Lagrangian is a scalar, it satisfies the addition property. Therefore, the Lagrangian of the whole universe will be the sum of $L$, $L_{AB}$

and $L_0$:

$$L_U = L + L_{AB} + L_0\,, \quad (31)$$
where $L_U$ is the Lagrangian of the universe, and it satisfies
$$\frac{\partial L_U}{\partial q_i} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L_U}{\partial \dot{q}_i} = 0\,, \quad (32)$$
$$\frac{\partial L}{\partial q_i} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}_i} = -\frac{\partial}{\partial q_i}(L_0 + L_{AB}) + \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial \dot{q}_i}(L_0 + L_{AB})\,; \quad (33)$$
the right-hand side represents an interaction with the rest of the universe, and we will call it the generalized force $Q_i$:
$$\frac{\partial L}{\partial q_i} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}_i} = Q_i\,. \quad (34)$$

$Q_i$ can represent dissipative processes as well as energy sources, so more generally (see Ref. [28])
$$Q_i = \frac{\partial F}{\partial \dot{q}_i} \quad (35)$$
with
$$F = \tfrac{1}{2} f_i \dot{q}_i^2 - \dot{q}_i \tilde{Q}_i\,. \quad (36)$$
The first term on the right represents dissipative processes and the other represents energy sources or external energy supplies. In short, it is clear that isolated systems cannot exist in the universe. In fact, let us consider a so-called ''thermodynamically isolated system'', which means that its energy is constant. Due to the Hamilton–Jacobi equation, although its energy is constant, its action is not:
$$E = -\frac{\partial S}{\partial t} = \mathrm{const}\,. \quad (37)$$
Therefore, the system is interchanging action with the rest of the universe. The action is a physical magnitude to which all theoretical physicists, except R. Feynman, have not given importance. But it is the most fundamental physical magnitude, because it is the origin of all other magnitudes, for instance the energy (37) or any other:
$$p_i = \frac{\partial S}{\partial q_i}\,, \quad (38)$$
where $p_i$ and $q_i$ are conjugate variables. It is known that interactions travel at a maximum speed (that of light). For this type of interaction with our system, such as radiation or heat,
$$E = pc\,, \quad (39)$$

but
$$E = -\frac{\partial S}{\partial t} \qquad \text{and} \qquad \vec{p} = \nabla S\,. \quad (40)$$
Because $\vec{c}$ is perpendicular to the surfaces $S = \mathrm{cte}$ and parallel to $\vec{p}$, Eq. (39) can be written as
$$-\frac{\partial S}{\partial t} = \nabla \cdot S\vec{c}$$
or equivalently
$$\nabla \cdot S\vec{c} + \frac{\partial S}{\partial t} = 0\,, \quad (41)$$
which is a conservation law. Therefore, if we consider a region $R$ of the universe, action will be a random input/flux in this region, because natural phenomena satisfy the condition of maximum mean information. Also, Eq. (39) can be written as
$$\frac{E^2}{c^2} - p^2 = 0\,, \quad (42)$$
$$\frac{1}{c^2}\left(\frac{\partial S}{\partial t}\right)^2 - (\nabla S)^2 = 0\,, \quad (43)$$
which enables us to define the Lagrangian density
$$\mathcal{L} = \frac{1}{c^2}\left(\frac{\partial S}{\partial t}\right)^2 - (\nabla S)^2\,, \quad (44)$$
which satisfies the following Euler–Lagrange equation:
$$\frac{1}{c^2}\frac{\partial^2 S}{\partial t^2} = \nabla^2 S \quad (45)$$
and

$$S(\vec{x}, t) = S_0\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\,, \quad (46)$$

whose mean value is
$$\bar{S} = 0 \quad (47)$$
and whose quadratic mean value is
$$S S^* = |S_0|^2 \neq 0\,. \quad (48)$$
This completely justifies Eq. (56). On the other hand, the Lagrange equation has the form
$$\frac{\partial L}{\partial q_i} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}_i} = 0\,,$$

where
$$L = \sum_i \tfrac{1}{2} m_i \dot{q}_i^2 - U(q_i) \quad (49)$$
satisfies the condition
$$\frac{\partial^2 L}{\partial \dot{q}_i^2} \geq 0\,. \quad (50)$$
In the book by Elsgoltz [29], Eq. (50) is the condition under which $\delta S = 0$ implies a strong minimum. Hamilton's principle is another optimal principle that natural laws must satisfy, and it is the origin of the following physical magnitudes:
$$H = -\frac{\partial S}{\partial t}\,, \qquad p_i = \frac{\partial S}{\partial q_i}\,. \quad (51)$$
In a free-particle system, the classical energy and momentum are conserved; $H = E$ and $q_i = x$:
$$E = -\frac{\partial S}{\partial t}\,, \qquad S = -Et + W(x)\,, \quad (52)$$

$$p = \frac{\partial S}{\partial x} = \frac{\mathrm{d}W}{\mathrm{d}x} \quad (53)$$
and

$$W = \int p\,\mathrm{d}x + S_0\,; \quad (54)$$
therefore, $S = px - Et + S_0$ and
$$\Delta S = px - Et\,. \quad (55)$$
From Eq. (22), considering all the given arguments and the fact that the action is not a directly measurable quantity, we postulate
$$P(S) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{\Delta S^2}{2\sigma^2}}\,, \quad (56)$$
which contains all the information of the system. From Hamilton's equation, where there is an interaction, we can write
$$\dot{p} = -\frac{\partial H}{\partial x} = -\frac{\partial E}{\partial x}\,. \quad (57)$$
Considering $p, x, E, t$ as independent conjugate variables, from Eq. (57) we have
$$\mathrm{d}p\,\mathrm{d}x = -\mathrm{d}E\,\mathrm{d}t\,, \quad (58)$$
$$\int_0^p\!\!\int_0^x \mathrm{d}p\,\mathrm{d}x = -\int_0^E\!\!\int_0^t \mathrm{d}E\,\mathrm{d}t\,,$$


$$px = -Et \quad (59)$$
and
$$\Delta S = px - Et = 2px\,. \quad (60)$$

$px$ is an area in phase space (see Fig. 2), and it is a Poincaré invariant; therefore it is conserved:
$$px = \Delta p\,\Delta x\,.$$

From this equation we obtain
$$\Delta S = 2\,\Delta p\,\Delta x\,.$$

Now let us maximize the information
$$\mathrm{d}I_m = \sum_i \ln P_i\,\mathrm{d}P_i = 0 \quad (61)$$
with the constraints
$$\mathrm{d}\overline{\Delta x^2} = \sum_i \Delta x_i^2\,\mathrm{d}P_i = 0\,,$$


Fig. 2. Evolution of a Poincaré invariant.

$$\mathrm{d}\overline{\Delta p^2} = \sum_i \Delta p_i^2\,\mathrm{d}P_i = 0\,, \qquad \mathrm{d}\overline{\Delta S^2} = \sum_i \Delta S_i^2\,\mathrm{d}P_i = 0\,, \qquad \sum_i \mathrm{d}P_i = 0\,. \quad (62)$$
Using Lagrange multipliers we obtain:
$$\mathrm{d}I_m = \sum_i \left[\ln P_i + 1 + \lambda + \alpha \Delta x_i^2 + \beta \Delta p_i^2 - \gamma \Delta S_i^2\right]\mathrm{d}P_i = 0\,. \quad (63)$$
Let us represent the universe as a set (see Fig. 3), and let us consider a region $R$ where we perform physical measurements. Information satisfies the addition property, so we have

$$I_{mU} = I_{mR} + I_{mRU}\,, \quad (64)$$
where $I_{mU}$ is the total amount of information of the universe, $I_{mR}$ is the information in the place where the measurement is performed, and $I_{mRU}$ represents the information of the rest of the universe. Because the total information is optimal, then

$$\Delta I_{mU} = 0\,, \quad (65)$$


Fig. 3. The universe and a region R where we perform physical measurements.

which brings us to

$$\Delta I_{mR} + \Delta I_{mRU} = 0\,, \quad (66)$$
so

$$\Delta I_{mR} = -\Delta I_{mRU}\,, \quad (67)$$
$$\Delta \sum_i P_i \ln P_i = -\Delta \sum_{j \neq i} P_j \ln P_j\,, \quad (68)$$
$$\sum_i [\ln P_i + 1]\,\Delta P_i = -\sum_j [\ln P_j + 1]\,\Delta P_j\,,$$
$$\sum_i \ln P_i + \sum_i \Delta P_i = -\sum_j \ln P_j - \sum_j \Delta P_j\,, \quad (69)$$
and taking into account
$$\sum_i P_i = 1\,, \qquad \sum_j P_j = 1\,, \quad (70)$$
$$\sum_i \Delta P_i = 0\,, \qquad \sum_j \Delta P_j = 0\,, \quad (71)$$
we obtain
$$\sum_i \ln P_i = -\sum_j \ln P_j\,, \quad (72)$$
which will be satisfied only if
$$\sum_i \ln P_i = -\sum_j \ln P_j = 0\,, \quad (73)$$
in other words

$$P_i = P_j = 1\,. \quad (74)$$
The last equations indicate that the perturbation induced by an experiment would be felt at every point of space instantaneously, which would break the randomness of the universe; this makes no sense because it contradicts the Special Theory of Relativity. Therefore, there must exist an additional term of information,

$$I_{mU} = I_{mR} + I_{mRU} + I_{md}\,; \quad (75)$$

from here

$$0 = \Delta I_{mR} + \Delta I_{mRU} + \Delta I_{md}\,. \quad (76)$$
The term

$$\Delta I_{mRU} = 0 \quad (77)$$
and

$$\Delta I_{mR} + \Delta I_{md} = 0\,. \quad (78)$$
In this way, the randomness of the rest of the universe is preserved and the theory of relativity continues to be satisfied, because information cannot be transmitted faster than the speed of light. To sum up, if we compute the probabilities in the whole universe and consider that space and time are globally homogeneous, we can infer that the probability does not depend on $x_i, p_i, t_i, E_i$. Therefore
$$\alpha \Delta x_i^2 + \beta \Delta p_i^2 = \gamma \Delta S_i^2 \quad (79)$$
with
$$P_i = e^{-(1+\lambda)}\,. \quad (80)$$
Eq. (79) implies that statistical interaction processes cover all of phase space.

Theorem 4. Planck's constant has a statistical nature. Furthermore, $\sigma = \hbar$ and $\Delta p\,\Delta x \geq \frac{\hbar}{2}$.

Proof. The action distribution function, which is local, due to Eq. (56), is

$$e^{-\gamma \Delta S^2} = e^{-\alpha \Delta x^2}\, e^{-\beta \Delta p^2}\,. \quad (81)$$
Using the conditions
$$\sigma^2 = \frac{\int_{-\infty}^{\infty} \Delta S^2\, e^{-\gamma \Delta S^2}\,\mathrm{d}\Delta S}{\int_{-\infty}^{\infty} e^{-\gamma \Delta S^2}\,\mathrm{d}\Delta S} = \frac{\iint_{-\infty}^{\infty} 4\Delta p^2 \Delta x^2\, e^{-\alpha \Delta x^2} e^{-\beta \Delta p^2}\,\mathrm{d}\Delta x\,\mathrm{d}\Delta p}{\iint_{-\infty}^{\infty} e^{-\alpha \Delta x^2} e^{-\beta \Delta p^2}\,\mathrm{d}\Delta x\,\mathrm{d}\Delta p}\,, \quad (82)$$
$$\sigma^2 = 4\;\frac{\int_{-\infty}^{\infty} \Delta x^2\, e^{-\alpha \Delta x^2}\,\mathrm{d}\Delta x}{\int_{-\infty}^{\infty} e^{-\alpha \Delta x^2}\,\mathrm{d}\Delta x}\;\frac{\int_{-\infty}^{\infty} \Delta p^2\, e^{-\beta \Delta p^2}\,\mathrm{d}\Delta p}{\int_{-\infty}^{\infty} e^{-\beta \Delta p^2}\,\mathrm{d}\Delta p}\,, \quad (83)$$

$$\sigma^2 = 4\,\sigma_x^2\,\sigma_p^2 \;\Rightarrow\; \sigma_x \sigma_p = \frac{\sigma}{2}\,, \quad (84)$$
we obtain the minimum Heisenberg uncertainty relation
$$\Rightarrow\; \sigma = \hbar\,; \quad (85)$$
$\sigma_x \sigma_p$ represents the minimum area in phase space. More generally, any other area will satisfy
$$\Delta p\,\Delta x \geq \frac{\hbar}{2}\,. \quad (86)$$


We can conclude this because $\mathrm{d}\overline{\Delta x^2} = 0$ and $\overline{\Delta x^2} \geq 0$ are true only for a minimum. The same arguments are valid for $\overline{\Delta p^2}$. □

More generally, any action can be written as in (79):
$$\Delta S^2 = (px - Et)^2 = p^2 x^2 + E^2 t^2 - 2pExt\,. \quad (87)$$
We can perform a coordinate rotation, so that we obtain
$$\Delta S^2 = \lambda_1 \bar{x}^2 + \lambda_2 \bar{t}^2\,. \quad (88)$$
We can see that the action is a quadratic function of $\bar{x}, \bar{t}$; in fact, it is proportional to the relativistic interval $\Delta S^2 \propto c^2\bar{t}^2 - \bar{x}^2$, which represents a wave front.
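The Gaussian-moment step used in Eqs. (82)-(84) can be verified symbolically. The following sketch is ours (the symbol names are arbitrary); it only checks the factorization of the second moments, not the identification of σ with ħ:

```python
import sympy as sp

dx, dp, a, b = sp.symbols('Delta_x Delta_p a b', positive=True)

# Second moments of the Gaussian weights appearing in Eqs. (82)-(83).
sigma_x2 = sp.integrate(dx**2 * sp.exp(-a * dx**2), (dx, -sp.oo, sp.oo)) \
         / sp.integrate(sp.exp(-a * dx**2), (dx, -sp.oo, sp.oo))
sigma_p2 = sp.integrate(dp**2 * sp.exp(-b * dp**2), (dp, -sp.oo, sp.oo)) \
         / sp.integrate(sp.exp(-b * dp**2), (dp, -sp.oo, sp.oo))

print(sp.simplify(sigma_x2))                  # 1/(2*a)  -> sigma_x^2
print(sp.simplify(sigma_p2))                  # 1/(2*b)  -> sigma_p^2

# Eq. (84): sigma^2 = 4 sigma_x^2 sigma_p^2, hence sigma = 2 sigma_x sigma_p,
# and with sigma = hbar one gets sigma_x sigma_p = hbar / 2.
print(sp.simplify(4 * sigma_x2 * sigma_p2))   # 1/(a*b)
```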

3.4. Quantum mechanics

The analysis we have made is very general, and we want to insist on this idea in a different way. In order to do that, it is necessary to recall
$$E^2 = p^2 c^2 + m_0^2 c^4\,, \quad (89)$$
$$E = \pm c\sqrt{p^2 + m_0^2 c^2} = \pm cP\,, \quad (90)$$
which has the same form as the photon equation. We also know that
$$\Delta S = \vec{p}\cdot\vec{x} - Et\,, \quad (91)$$
which shows that at $t = 0$, $\Delta S = \vec{p}\cdot\vec{x}$ represents the equation of a plane when $\Delta S = \mathrm{cte}$. Every point in this plane has the same time, $t = 0$; this is a characteristic of a wave front. Moreover, $\Delta S = px - Et = \mathrm{cte}$ brings us to
$$p\dot{x} - E = 0 \;\Rightarrow\; \dot{x} = \frac{E}{p} = v_f\,, \quad (92)$$
where $v_f$ is the phase speed. Or also, from $x\,\mathrm{d}p - t\,\mathrm{d}E = 0$,
$$\dot{x} = \frac{\mathrm{d}E}{\mathrm{d}p} = v_g \quad (93)$$
is the group speed. Given that $E = \sqrt{p^2 c^2 + m_0^2 c^4}$,
$$\frac{\mathrm{d}E}{\mathrm{d}p} = \frac{pc^2}{E}\,. \quad (94)$$
If we multiply Eqs. (92) and (93), then we have
$$v_g v_f = \frac{E}{p}\,\frac{p}{E}\,c^2 = c^2\,, \quad (95)$$
which represents a dispersion relation of an electromagnetic wave in Optics.
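A brief symbolic check of Eqs. (92)-(95), ours and not from the paper (the symbol names are arbitrary):

```python
import sympy as sp

p, m0, c = sp.symbols('p m_0 c', positive=True)
E = sp.sqrt(p**2 * c**2 + m0**2 * c**4)   # relativistic energy, Eq. (89)

v_f = E / p                                # phase speed, Eq. (92)
v_g = sp.diff(E, p)                        # group speed, Eqs. (93)-(94)

print(sp.simplify(v_g - p * c**2 / E))     # 0, i.e. dE/dp = p c^2 / E
print(sp.simplify(v_f * v_g))              # c**2, Eq. (95)
```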


Therefore, it is possible to think about action waves. We can propose the following integral equation:
$$e^{-\frac{\Delta S^2}{2\sigma^2}} = \int A(p, E, k, \omega)\, f(kx - \omega t)\,\mathrm{d}k\,\mathrm{d}\omega\,. \quad (96)$$

On the other hand, we do not assume any relation between $k$ and $\omega$; we will find it from Eq. (96).

Theorem 5. The optimal wave superposition (Eq. (96)), which satisfies the Gaussian density function (Eq. (22)), permits us to obtain Planck's and de Broglie's equations.

Proof. Let $\mathrm{d}_c$ be the derivative with respect to the space coordinates. Then
$$-\mathrm{d}_c\frac{\Delta S^2}{2\sigma^2}\, e^{-\frac{\Delta S^2}{2\sigma^2}} = \int A(p, E, k, \omega)\,\mathrm{d}_c f\,\mathrm{d}k\,\mathrm{d}\omega\,,$$
$$-\mathrm{d}_c\frac{\Delta S^2}{2\sigma^2} \int A(p, E, k, \omega)\, f\,\mathrm{d}k\,\mathrm{d}\omega = \int A(p, E, k, \omega)\,\mathrm{d}_c f\,\mathrm{d}k\,\mathrm{d}\omega\,.$$
From here
$$\int A(p, E, k, \omega)\left[ f\,\mathrm{d}_c\frac{\Delta S^2}{2\sigma^2} + \mathrm{d}_c f \right]\mathrm{d}k\,\mathrm{d}\omega = 0\,,$$
and this implies the differential equation

$$f\,\mathrm{d}_c\frac{\Delta S^2}{2\sigma^2} + \mathrm{d}_c f = 0\,, \quad (97)$$

$$\frac{\Delta S^2}{2\sigma^2} + \ln f = \ln A$$

$$\Rightarrow\; f = A\, e^{-\frac{\Delta S^2}{2\sigma^2}}\,, \quad (98)$$
where $f$ is an exponential function of $x$ and $t$,

$$f = A\, e^{-F(kx - \omega t)}\,. \quad (99)$$
From here,

$$F(kx - \omega t) = \frac{\Delta S^2}{2\sigma^2} - \ln A\,. \quad (100)$$
Expanding $F(kx - \omega t)$ in a Taylor series, we have

$$F(kx - \omega t) = F(0) + \frac{\mathrm{d}F}{\mathrm{d}u}\bigg|_0 (kx - \omega t) + \frac{1}{2}\frac{\mathrm{d}^2F}{\mathrm{d}u^2}\bigg|_0 (kx - \omega t)^2 + \cdots = \frac{\Delta S^2}{2\sigma^2} - \ln A\,. \quad (101)$$


All terms are zero except $F(0)$ and the second-order term:
$$F(0) = -\ln A\,,$$

$$\frac{a^2}{2}(kx - \omega t)^2 = \frac{(px - Et)^2}{2\sigma^2}\,, \quad (102)$$
and we obtain

$$p = a\sigma k = \sigma k'\,,$$

$$E = a\sigma\omega = \sigma\omega'\,. \quad (103)$$
The integral equation is
$$e^{-\frac{\Delta S^2}{2\sigma^2}} = \int A(p, E, k', \omega')\,\frac{1}{a^2}\, e^{-\frac{(k'x - \omega' t)^2}{2}}\,\mathrm{d}k'\,\mathrm{d}\omega'\,; \quad (104)$$
$k'$ and $\omega'$ are dummy variables, so we can set $k' \to k$ and $\omega' \to \omega$:
$$e^{-\frac{\Delta S^2}{2\sigma^2}} = \int A(p, E, k, \omega)\,\frac{1}{a^2}\, e^{-\frac{(kx - \omega t)^2}{2}}\,\mathrm{d}k\,\mathrm{d}\omega\,;$$
therefore $a = 1$ and

$$p = \sigma k\,, \qquad E = \sigma\omega\,, \quad (105)$$
which are de Broglie's and Planck's equations, respectively. Again we obtain that
$$\sigma = \hbar\,. \quad (106)\;\; \square$$

The new integral equation is
$$e^{-\frac{(px - Et)^2}{2\sigma^2}} = \int A(p, E, k, \omega)\, e^{-\frac{(kx - \omega t)^2}{2}}\,\mathrm{d}k\,\mathrm{d}\omega\,;$$
this implies that
$$A(p, E, k, \omega) = \delta\!\left(\frac{E}{\sigma} - \omega\right)\delta\!\left(\frac{p}{\sigma} - k\right) \quad (107)$$
and obviously
$$e^{-\frac{(px - Et)^2}{2\sigma^2}} = \int \delta\!\left(\frac{E}{\sigma} - \omega\right)\delta\!\left(\frac{p}{\sigma} - k\right) e^{-\frac{(kx - \omega t)^2}{2}}\,\mathrm{d}k\,\mathrm{d}\omega\,. \quad (108)$$
Let us analyze every term in Eq. (108):
$$\delta\!\left(\frac{E}{\sigma} - \omega\right) = \frac{1}{2\pi\sigma}\int_{-\infty}^{\infty} e^{-\frac{i}{\sigma}(E - \sigma\omega)t}\,\mathrm{d}t\,, \qquad \delta\!\left(\frac{p}{\sigma} - k\right) = \frac{1}{2\pi\sigma}\int_{-\infty}^{\infty} e^{\frac{i}{\sigma}(p - \sigma k)x}\,\mathrm{d}x\,; \quad (109)$$

from there,
$$\frac{1}{(2\pi\sigma)^2}\int e^{\frac{i}{\sigma}(px - Et)}\, e^{-\frac{i}{\sigma}(\sigma kx - \sigma\omega t)}\,\mathrm{d}x\,\mathrm{d}t = A(p, E, k, \omega)\,. \quad (110)$$
Now we are able to define the wave functions

$$\Psi_{p,E} = \frac{1}{2\pi\sigma}\, e^{\frac{i}{\sigma}(px - Et)}\,, \qquad \Psi^*_{p',E'} = \frac{1}{2\pi\sigma}\, e^{-\frac{i}{\sigma}(p'x - E't)}\,. \quad (111)$$
Or, considering only $p$ (and the fact that $\sigma = \hbar$),

$$\Psi_p = \frac{1}{\sqrt{2\pi\hbar}}\, e^{\,i\frac{px}{\hbar}}\,, \quad (112)$$
$$\int \Psi^*_p \Psi_{p'}\,\mathrm{d}x = \delta(p - p')\,, \quad (113)$$
which is Dirac's normalization condition. If we consider only $E$, we have

$$\Psi_E = \frac{1}{\sqrt{2\pi\hbar}}\, e^{-i\frac{Et}{\hbar}}\,, \quad (114)$$
$$\int \Psi^*_E \Psi_{E'}\,\mathrm{d}t = \delta(E - E')\,, \quad (115)$$
which is an energy conservation law. From Eq. (112) we see that
$$-i\hbar\frac{\partial}{\partial x}\Psi_p = p\,\Psi_p \quad (116)$$
and from Eq. (114) we obtain
$$i\hbar\frac{\partial}{\partial t}\Psi_E = E\,\Psi_E\,. \quad (117)$$
We see the way in which all the concepts of Quantum Mechanics appear naturally. In general, the wave function is

$$\Psi = A\, e^{\,i\frac{\Delta S}{\hbar}}\,. \quad (118)$$
Given that
$$\Delta S = \sum_i p_i q_i - Et\,, \quad (119)$$
where $p_i$ and $q_i$ are physically conjugate observables,
$$-i\hbar\frac{\partial \Psi}{\partial q_i} = p_i\,\Psi\,; \quad (120)$$
in this way, every physical observable $p_i$ has an associated Hermitian operator ($\hat{p}_i = -i\hbar\frac{\partial}{\partial q_i}$), such that its mean values are the eigenvalues of that operator.
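The eigenvalue relations (116) and (117) can be verified symbolically; the short sketch below is ours (symbol names arbitrary):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
p, E, hbar = sp.symbols('p E hbar', positive=True)

# Momentum and energy wave functions of Eqs. (112) and (114).
psi_p = sp.exp(sp.I * p * x / hbar) / sp.sqrt(2 * sp.pi * hbar)
psi_E = sp.exp(-sp.I * E * t / hbar) / sp.sqrt(2 * sp.pi * hbar)

# Eq. (116): -i hbar d/dx acting on psi_p returns p psi_p.
print(sp.simplify(-sp.I * hbar * sp.diff(psi_p, x) - p * psi_p))   # 0
# Eq. (117): i hbar d/dt acting on psi_E returns E psi_E.
print(sp.simplify(sp.I * hbar * sp.diff(psi_E, t) - E * psi_E))    # 0
```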


Let us consider a photon, for which $E = pc$, so

$$\frac{E^2}{c^2} - p^2 = 0\,; \quad (121)$$
given that $E = -\frac{\partial S}{\partial t}$ and $p = \frac{\partial S}{\partial x}$, then
$$\frac{1}{c^2}\left(\frac{\partial S}{\partial t}\right)^2 - \left(\frac{\partial S}{\partial x}\right)^2 = 0\,. \quad (122)$$
This brings us to define the Lagrangian density
$$\mathcal{L} = \left(\frac{\partial S}{\partial x}\right)^2 - \frac{1}{c^2}\left(\frac{\partial S}{\partial t}\right)^2\,. \quad (123)$$
From the Euler–Lagrange equation
$$\frac{\partial}{\partial x}\left[\frac{\partial \mathcal{L}}{\partial\left(\frac{\partial S}{\partial x}\right)}\right] + \frac{\partial}{\partial t}\left[\frac{\partial \mathcal{L}}{\partial\left(\frac{\partial S}{\partial t}\right)}\right] = 0\,, \quad (124)$$

$$\frac{\partial^2 S}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2 S}{\partial t^2} = 0\,, \quad (125)$$
from which we obtain

$$S = S_0\, e^{i(kx - \omega t)}\,; \quad (126)$$
the energy is
$$E = \frac{\partial S}{\partial t} = -i\omega S_0\, e^{i(kx - \omega t)}\,. \quad (127)$$
Its square modulus is
$$|E|^2 = \omega^2 |S_0|^2\,. \quad (128)$$
The average is
$$\langle |E|^2 \rangle = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty} \omega^2 |S_0|^2\, e^{-\frac{\Delta S_0^2}{2\sigma^2}}\,\mathrm{d}\Delta S_0 = \omega^2\hbar^2\,, \quad (129)$$
i.e.
$$E_{\mathrm{photon}} = \sqrt{\langle |E|^2 \rangle} = \omega\hbar\,. \quad (130)$$
What we measure, in fact, is the mean quadratic value of the random fluctuations of the energy. Given that
$$\frac{\Delta S_0^2}{2\sigma^2} = \frac{\Delta E^2}{2\sigma_E^2} + \frac{\Delta t^2}{2\sigma_t^2}\,, \quad (131)$$

we find
$$\sigma_E\,\sigma_t = \frac{\sigma}{2} = \frac{\hbar}{2}\,, \quad (132)$$
or in general
$$\Delta E\,\Delta t \geq \frac{\hbar}{2}\,, \quad (133)$$
which is the other Heisenberg uncertainty relation. Eq. (131) explains why there is a Gaussian dispersion of energy in a monochromatic LASER and not a Dirac delta (see Fig. 4).

Remark 1. Every eigenvalue of a physical observable is a mean quadratic value of its random fluctuations:
$$p_i = \sqrt{\langle |p_i^2| \rangle}\,; \quad (134)$$
if we measure its conjugate observables,
$$\frac{\Delta S^2}{2\sigma^2} = \frac{\Delta p_i^2}{2\sigma_p^2} + \frac{\Delta q_i^2}{2\sigma_q^2} \quad (135)$$
and
$$\Delta p_i\,\Delta q_i \geq \frac{\hbar}{2}\,, \quad (136)$$


Fig. 4. Spectrum of dispersion of a minimum uncertainty LASER.


$$\langle p_i^2 \rangle = \frac{\int p_i^2\, e^{-\frac{\Delta S^2}{2\sigma^2}}\,\mathrm{d}\Delta S}{\int e^{-\frac{\Delta S^2}{2\sigma^2}}\,\mathrm{d}\Delta S} = \frac{\int p_i^2\, e^{-\frac{\Delta p_i^2}{2\sigma_p^2}}\,\mathrm{d}\Delta p}{\int e^{-\frac{\Delta p_i^2}{2\sigma_p^2}}\,\mathrm{d}\Delta p}\,; \quad (137)$$
on the other hand,

$$\langle p_i \rangle = 0\,. \quad (138)$$
Therefore, $p_i = \sqrt{\langle p_i^2 \rangle}$ is the only magnitude that we can measure. Quantum Mechanics considers it as the eigenvalue of the observable $p$. Because $p$ is a number, it has standard deviation zero:
$$\sigma_p^2 = \langle p_i^2 \rangle - \langle p_i \rangle^2 = \int \Psi^*\hat{p}_i^2\,\Psi\,\mathrm{d}^3x - \left( \int \Psi^*\hat{p}_i\,\Psi\,\mathrm{d}^3x \right)^2\,,$$

$$\sigma_p^2 = p_i^2 - p_i^2 = 0\,. \quad (139)$$
The Copenhagen school concludes that eigenvalues do not have standard deviation, and it then has to find other arguments (for example, the uncertainty principle) to explain the standard deviation observed in experiments. That is not scientifically acceptable, because Occam's razor (see [30,31]) exhorts us to introduce the minimum number of arbitrary postulates in order to choose the best of the competing theories. We maintain instead that in experimental measurements a maximum will be obtained at $p_i = \sqrt{\langle p_i^2 \rangle}$, and around this value there will be a statistical dispersion.

3.4.1. Spontaneous decay transitions
Optimal configurations are those that satisfy the condition of maximum mean information. Let us consider a number of atoms in their stationary states. Let $E_0$ be a stationary state, i.e., one which satisfies

$$I_{m\,\mathrm{op}} = I_m(E_0)\,.$$

Let us suppose a transition $E_0 \to E_0 + \Delta E$:

$$I_m(E_0 + \Delta E) = I_m(E_0) + \frac{\mathrm{d}I_m}{\mathrm{d}E}\bigg|_{E_0}\Delta E + \frac{1}{2}\frac{\mathrm{d}^2 I_m}{\mathrm{d}E^2}\bigg|_{E_0}\Delta E^2 + O(\Delta E^3)\,,$$
where $O(\Delta E^3)$ is negligible. If $\Delta E = 10\ \mathrm{eV}$, then $\Delta E$ is of the order of $10^{-18}\ \mathrm{J}$; therefore, $\Delta E^2$ is of the order of $10^{-36}\ \mathrm{J}^2$, which is a very small quantity.

$$\frac{\mathrm{d}I_m}{\mathrm{d}E}\bigg|_{E_0} = 0 \quad (\text{maximum})\,,$$

$$\frac{\mathrm{d}^2 I_m}{\mathrm{d}E^2}\bigg|_{E_0} = -a^2 < 0 \quad (\text{maximum})\,,$$


$$I_m(E_0 + \Delta E) = I_m(E_0) - \frac{a^2}{2}\Delta E^2\,.$$
Therefore, the amount of information decreases from its maximum value, which is not a stable condition. The system will evolve in such a way that it will try to remove the energy $\Delta E$ in order to reach the maximum amount of information, going from
$$\Delta I_m = -\frac{a^2}{2}\Delta E^2$$

to
$$\Delta I_m = 0\,. \quad (140)$$
Now, let us suppose that the probability varies by $\epsilon \ll 1$ around $P = \frac{1}{2}$, where the maximum of the distribution is produced. The variation of $P$ by a small quantity $\epsilon$ is equivalent to studying the replicator dynamics; see Ref. [32].

$$I_m = P\ln P + (1 - P)\ln(1 - P)\,,$$
$$I_m = \left(\tfrac{1}{2} + \epsilon\right)\ln\left(\tfrac{1}{2} + \epsilon\right) + \left(\tfrac{1}{2} - \epsilon\right)\ln\left(\tfrac{1}{2} - \epsilon\right) = -4\epsilon^2\,. \quad (141)$$

In the next sections we will show that (Eq. (187))
$$\Delta I_m = -\left(\frac{2\Delta S}{\sqrt{2\pi}\,\sigma}\right)^2\,; \quad (142)$$
therefore
$$\epsilon = \pm\frac{\Delta S}{\sqrt{2\pi}\,\sigma}\,. \quad (143)$$

But $\epsilon = \frac{\Delta N}{N}$, where $N$ is the number of excited atoms and $\Delta S = \Delta E\,\Delta t$:
$$\frac{\Delta N}{N} = -\frac{\Delta E\,\Delta t}{\sqrt{2\pi}\,\sigma}\,.$$
In order to give these arguments physical sense, we define the decay speed as
$$\frac{\mathrm{d}N}{\mathrm{d}t} = -\frac{N\,\Delta E}{\sqrt{2\pi}\,\sigma}$$
or
$$\frac{\mathrm{d}N}{N} = -\frac{\Delta E}{\sqrt{2\pi}\,\sigma}\,\mathrm{d}t\,, \quad (144)$$
$$\ln\frac{N}{N_0} = -\frac{\Delta E\, t}{\sqrt{2\pi}\,\sigma}\,, \quad (145)$$

$$N = N_0\, e^{-\frac{\Delta E\,t}{\sqrt{2\pi}\,\sigma}}\,. \quad (146)$$


Defining
$$\tau = \frac{\sqrt{2\pi}\,\sigma}{\Delta E}\,, \quad (147)$$

$$N = N_0\, e^{-\frac{t}{\tau}}\,. \quad (148)$$
Therefore, the number of excited atoms will decrease exponentially. We cannot know which atom will be affected by a transition $E_0 + \Delta E \to E_0$, emitting a photon in the process; the only thing we can say is that the number of atoms will evolve as stated in (148). $\tau$ measures the mean de-excitation time of the population of atoms:
$$\langle t \rangle = \frac{\int_0^{\infty} t\, e^{-\frac{t}{\tau}}\,\mathrm{d}t}{\int_0^{\infty} e^{-\frac{t}{\tau}}\,\mathrm{d}t} = \tau\,; \quad (149)$$
because $\Delta E = \hbar\omega$ and $\sigma = \hbar$, we have
$$\tau = \frac{\sqrt{2\pi}}{\omega} \approx \frac{1}{\omega}\,, \quad (150)$$
which is the expected result. This theoretical argument gives a solid base to Fermi's calculation of the transition rate of an excited atom.
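The exponential decay law (148) and the mean lifetime (149) can be illustrated by a small simulation; the sketch is ours, and the population size and lifetime are hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical numbers: N0 excited atoms, each with mean lifetime tau,
# where tau = sqrt(2*pi)*sigma / Delta_E as in Eq. (147).
N0, tau = 100_000, 2.0
lifetimes = rng.exponential(tau, size=N0)

t = np.linspace(0, 10, 50)
N_t = np.array([(lifetimes > ti).sum() for ti in t])        # survivors at each time

print(np.allclose(N_t / N0, np.exp(-t / tau), atol=5e-3))   # Eq. (148)
print(lifetimes.mean())                                      # close to tau, Eq. (149)
```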

3.4.2. Schrödinger's equation
Let a quantum mechanical system be confined in a potential $U$. We can expect the wave function to be scattered an infinite number of times by the potential, forming a wave packet which is described by
$$\Psi(\vec{x}, t) = \int_{-\infty}^{\infty} A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\,\mathrm{d}^3k \quad (151)$$
with
$$\omega = \omega(k) \quad (152)$$
and
$$p = \hbar k\,, \qquad E = \hbar\omega\,; \quad (153)$$
these are the momentum and the total energy. Applying the Laplace operator to Eq. (151),
$$\nabla^2\Psi(\vec{x}, t) = \int_{-\infty}^{\infty} -k^2 A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\,\mathrm{d}^3k\,,$$
$$-\frac{\hbar^2\nabla^2\Psi}{2m_0} = \int_{-\infty}^{\infty} \frac{p^2}{2m_0}\, A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\,\mathrm{d}^3k\,. \quad (154)$$


In the same way,
$$\frac{\partial\Psi}{\partial t} = \int_{-\infty}^{\infty} -i\omega\, A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\,\mathrm{d}^3k\,, \qquad i\hbar\frac{\partial\Psi}{\partial t} = \int_{-\infty}^{\infty} E\, A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\,\mathrm{d}^3k\,, \quad (155)$$
but
$$E = \frac{p^2}{2m_0} + U\,, \quad (156)$$
$$i\hbar\frac{\partial\Psi}{\partial t} = \int_{-\infty}^{\infty} \frac{p^2}{2m_0}\, A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\,\mathrm{d}^3k + \int_{-\infty}^{\infty} U(\vec{x})\, A(\vec{k})\, e^{i(\vec{k}\cdot\vec{x} - \omega t)}\,\mathrm{d}^3k\,.$$
Thus we obtain Schrödinger's equation,
$$i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m_0}\nabla^2\Psi + U(\vec{x})\Psi\,, \quad (157)$$
because $U(\vec{x})$ depends on the position and the integral is calculated with respect to $k$. Eq. (157) is the non-relativistic Schrödinger equation.
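As a minimal symbolic check (ours, not from the paper), a single plane-wave component of the packet (151) satisfies Eq. (157) when the potential is taken constant; the constant-potential simplification is our assumption for the check only:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
p, m0, U0, hbar = sp.symbols('p m_0 U_0 hbar', positive=True)

E = p**2 / (2 * m0) + U0                       # Eq. (156), constant potential assumed
psi = sp.exp(sp.I * (p * x - E * t) / hbar)    # one plane-wave component of Eq. (151)

lhs = sp.I * hbar * sp.diff(psi, t)
rhs = -hbar**2 / (2 * m0) * sp.diff(psi, x, 2) + U0 * psi

print(sp.simplify(lhs - rhs))                  # 0: Eq. (157) is satisfied
```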

3.4.3. Quantum mechanics postulates
1. We have seen that to every physical observable there corresponds a Hermitian operator. The measured values are the eigenvalues of this operator:

$$\hat{p}_i\Psi = p_i\Psi \quad (158)$$
with

$$\hat{p}_i = \frac{\hbar}{i}\frac{\partial}{\partial q_i} \quad (159)$$
and also
$$p_i = \sqrt{\langle p_i^2 \rangle}\,. \quad (160)$$
2. Every physical system is described by a complex function $\Psi$ which satisfies Schrödinger's equation,
$$i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi + U(\vec{x})\Psi\,. \quad (161)$$
3. The wave function $\Psi$ contains all the information of the physical system (because it is a function of the action of the system), and its square modulus is the probability density of measuring an interaction in the element $\mathrm{d}^3x$. In fact, multiplying (161) by $\Psi^*$, and taking the complex conjugate of (161) and multiplying it by $\Psi$, we have:
$$i\hbar\Psi^*\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\Psi^*\nabla^2\Psi + U(\vec{x})\Psi^*\Psi\,,$$


$$-i\hbar\Psi\frac{\partial\Psi^*}{\partial t} = -\frac{\hbar^2}{2m}\Psi\nabla^2\Psi^* + U(\vec{x})\Psi^*\Psi\,.$$
Subtracting them,
$$i\hbar\Psi^*\frac{\partial\Psi}{\partial t} + i\hbar\Psi\frac{\partial\Psi^*}{\partial t} = -\frac{\hbar^2}{2m}\left(\Psi^*\nabla^2\Psi - \Psi\nabla^2\Psi^*\right)\,. \quad (162)$$

$$i\hbar\frac{\partial}{\partial t}(\Psi^*\Psi) = -\frac{\hbar^2}{2m}\left(\Psi^*\nabla^2\Psi - \Psi\nabla^2\Psi^*\right)$$
or
$$\frac{\partial}{\partial t}|\Psi|^2 = \frac{i\hbar}{2m}\nabla\cdot\left(\Psi^*\nabla\Psi - \Psi\nabla\Psi^*\right)\,. \quad (163)$$
Integrating with respect to the volume and applying Gauss' theorem, we have
$$\frac{\mathrm{d}}{\mathrm{d}t}\int |\Psi|^2\,\mathrm{d}^3x = \frac{i\hbar}{2m}\oint_s \left(\Psi^*\nabla\Psi - \Psi\nabla\Psi^*\right)\cdot\vec{n}\,\mathrm{d}s\,. \quad (164)$$
The system is confined in a potential, so $\Psi(\pm\infty) = 0$ and the value of the integral at infinity is zero:
$$\frac{\mathrm{d}}{\mathrm{d}t}\int_U |\Psi|^2\,\mathrm{d}^3x = 0\,, \quad (165)$$
where $U$ represents the whole space. Therefore, this integral does not depend on time; it is a constant whose value we choose to be 1:
$$\int_U |\Psi|^2\,\mathrm{d}^3x = 1\,. \quad (166)$$
Therefore, $|\Psi|^2$ is a probability density. On the other hand, if we define
$$\vec{J} = -\frac{i\hbar}{2m}\left(\Psi^*\nabla\Psi - \Psi\nabla\Psi^*\right)\,, \quad (167)$$
then for a volume $V$ limited by a surface $A$,
$$\frac{\mathrm{d}}{\mathrm{d}t}\int_V |\Psi|^2\,\mathrm{d}^3x + \oint_A \vec{J}\cdot\vec{n}\,\mathrm{d}s = 0 \quad (168)$$
or
$$\frac{\partial}{\partial t}|\Psi|^2 + \nabla\cdot\vec{J} = 0\,, \quad (169)$$
which is the law of conservation of probability. Because probability is conserved, information will be conserved too. Conservation of the amount of information is associated with physical symmetries. If a symmetry is broken, the amount of information decreases and mass is generated, as we will see later in this paper.
4. Given that the wave functions are eigenfunctions of a Hermitian operator, they form a Hilbert space, and if they are not degenerate then the space is complete. Any

other function can be expressed as a linear combination of the elements of the base:
$$\Psi = \sum_i C_i\Psi_i\,. \quad (170)$$
This is what is called the superposition principle.
5. The mean value of a physical magnitude is
$$\bar{f} = \sum_i f_i P_i\,,$$
where $P_i$ is the probability of measuring $f_i$. Given that
$$\hat{f}\,\Psi_i = f_i\Psi_i \quad (171)$$
and
$$\int \Psi_i^*\Psi_j\,\mathrm{d}^3x = \delta_{ij}\,, \quad (172)$$
and also
$$P_i \equiv C_i C_j^*\,\delta_{ij}\,, \quad (173)$$
therefore
$$\bar{f} = \sum_i f_i C_i C_j^*\,\delta_{ij} = \sum_{i,j} f_i C_i C_j^* \int \Psi_j^*\Psi_i\,\mathrm{d}^3x\,, \quad (174)$$
$$\bar{f} = \sum_{i,j} C_i C_j^* \int \Psi_j^*\,\hat{f}\,\Psi_i\,\mathrm{d}^3x\,,$$
$$\bar{f} = \sum_{i,j} \int \Psi_j^*\,\hat{f}\,\Psi_i\,\mathrm{d}^3x\; C_i C_j^* = \int \sum_j C_j^*\Psi_j^*\;\hat{f}\;\sum_i C_i\Psi_i\,\mathrm{d}^3x\,, \quad (175)$$
$$\bar{f} = \int \Psi^*\hat{f}\,\Psi\,\mathrm{d}^3x\,. \quad (176)$$

This is called the expectation or mean value. All of this is in perfect agreement with Landau's hypothesis (see Ref. [33]).
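Postulate 5 can be illustrated numerically. The sketch below is ours: the particle-in-a-box eigenfunctions and the observable eigenvalues are hypothetical choices used only to compare the two expressions for the mean value:

```python
import numpy as np

# Particle-in-a-box eigenfunctions on [0, L] as a hypothetical orthonormal base:
# psi_n(x) = sqrt(2/L) sin(n pi x / L). f is a hypothetical observable with
# eigenvalue f_n on psi_n, so f psi_n = f_n psi_n as in Eq. (171).
L = 1.0
x = np.linspace(0, L, 200_001)
dx = x[1] - x[0]
psi = lambda n: np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

c1, c2 = 0.6, 0.8                        # |c1|^2 + |c2|^2 = 1, Eq. (170)
f1, f2 = 1.0, 4.0                        # eigenvalues of the observable

Psi = c1 * psi(1) + c2 * psi(2)          # superposition, Eq. (170)
f_Psi = c1 * f1 * psi(1) + c2 * f2 * psi(2)

mean_direct = (Psi * f_Psi).sum() * dx           # integral of Psi* f Psi, Eq. (176)
mean_weights = c1**2 * f1 + c2**2 * f2           # sum_i P_i f_i, Eqs. (173)-(174)

print(mean_direct, mean_weights)                 # both close to 2.92
```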

3.5. Information and matter

Theorem 6. If symmetry is broken, then the variation of information implies a variation of action according to Eq. (187).

Proof. The distribution function of the action is

$$F(S) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{\Delta S^2}{2\sigma^2}}$$

and the probability is
$$P(|S| \leq S_0) = \int_{-S_0}^{S_0} F(S)\,\mathrm{d}\Delta S\,,$$
whose diagrams are represented in Fig. 5. $F(S)$ reaches its maximum value $\frac{1}{\sqrt{2\pi}\,\sigma}$ when $\Delta S = 0$, which corresponds to $P = \frac{1}{2}$, so
$$P(+c) = P(-c) = \tfrac{1}{2}\,. \quad (177)$$
Considering that $\Delta S \propto \Delta s$ (the relativistic interval) and $\Delta s^2 = c^2\Delta t^2 - \Delta x^2 = 0$ for light, then $\Delta S = 0$ corresponds to the action for light; therefore
$$v = \pm c\,. \quad (178)$$
When matter exists, $\Delta S \neq 0$ and $P \neq \tfrac{1}{2}$: the symmetry is broken. Let us consider:

$$0 < \frac{\Delta S}{\sigma} \ll 1\,, \quad (179)$$
$$P(\Delta S) = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{0} e^{-\frac{\Delta S^2}{2\sigma^2}}\,\mathrm{d}\Delta S + \frac{1}{\sqrt{2\pi}\,\sigma}\int_{0}^{\Delta S} e^{-\frac{\Delta S^2}{2\sigma^2}}\,\mathrm{d}\Delta S\,,$$


Fig. 5. Distribution function and accumulated probability of the action.


$$P(\Delta S) = \frac{1}{2} + \frac{\Delta S}{\sqrt{2\pi}\,\sigma}\,. \quad (180)$$
In the same way,
$$P(-\Delta S) = \frac{1}{2} - \frac{\Delta S}{\sqrt{2\pi}\,\sigma}\,,$$

$$P(\Delta S) + P(-\Delta S) = 1 \quad (181)$$
and the amount of information is
$$-I_m = \left(\frac{1}{2} + \frac{\Delta S}{\sqrt{2\pi}\,\sigma}\right)\ln\left(\frac{1}{2} + \frac{\Delta S}{\sqrt{2\pi}\,\sigma}\right) + \left(\frac{1}{2} - \frac{\Delta S}{\sqrt{2\pi}\,\sigma}\right)\ln\left(\frac{1}{2} - \frac{\Delta S}{\sqrt{2\pi}\,\sigma}\right)\,. \quad (182)$$
Let us analyze the terms
$$\ln\left(\frac{1}{2} + \frac{\Delta S}{\sqrt{2\pi}\,\sigma}\right) = \ln\left[\frac{1}{2}\left(1 + \frac{2\Delta S}{\sqrt{2\pi}\,\sigma}\right)\right] = \ln\frac{1}{2} + \ln\left(1 + \frac{2\Delta S}{\sqrt{2\pi}\,\sigma}\right) \approx \ln\frac{1}{2} + \frac{2\Delta S}{\sqrt{2\pi}\,\sigma}\,, \quad (183)$$
$$\ln\left(\frac{1}{2} - \frac{\Delta S}{\sqrt{2\pi}\,\sigma}\right) \approx \ln\frac{1}{2} - \frac{2\Delta S}{\sqrt{2\pi}\,\sigma}\,. \quad (184)$$
From here,
$$-I_m = \left(\frac{1}{2} + \frac{\Delta S}{\sqrt{2\pi}\,\sigma}\right)\left(\ln\frac{1}{2} + \frac{2\Delta S}{\sqrt{2\pi}\,\sigma}\right) + \left(\frac{1}{2} - \frac{\Delta S}{\sqrt{2\pi}\,\sigma}\right)\left(\ln\frac{1}{2} - \frac{2\Delta S}{\sqrt{2\pi}\,\sigma}\right)\,,$$
$$-I_m = \ln\frac{1}{2} + \frac{4\Delta S^2}{2\pi\sigma^2}\,. \quad (185)$$
The optimal information is
$$-I_{m\,\mathrm{op}} = \frac{1}{2}\ln\frac{1}{2} + \frac{1}{2}\ln\frac{1}{2} = \ln\frac{1}{2}\,; \quad (186)$$
from here we obtain
$$\Delta I_m = -\frac{4\Delta S^2}{2\pi\sigma^2} \quad (187)$$
and
$$\sqrt{\Delta I_m} = i\,\frac{2\Delta S}{\sqrt{2\pi}\,\sigma}$$
or
$$i\,\frac{\Delta S}{\sigma} = \sqrt{\frac{\pi\,\Delta I_m}{2}}\,. \quad (188)\;\;\square$$


Mass is generated due to the interaction
$$\dot{p} = -\frac{\mathrm{d}E}{\mathrm{d}x} \;\Rightarrow\; px = -Et \;\Rightarrow\; \Delta S = px - Et = 2px\,. \quad (189)$$
For a wave that travels at the speed of light, we have $\Delta S = 0 = px - Et$,

$$\Psi_{p\,\mathrm{light}} = A\, e^{\,i\frac{px}{\hbar}}\,, \quad (190)$$

$$\Delta\Psi_{p\,\mathrm{light}} = i\,\frac{\Delta(px)}{\hbar}\,\Psi_{p\,\mathrm{light}} \quad (191)$$
and the dispersion relation
$$\Delta\Psi_{p\,\mathrm{mass}} = i\,\frac{\Delta(2px)}{\hbar}\,\Psi_p = i\,\frac{\Delta S}{\hbar}\,\Psi_p \quad (192)$$
with
$$i\,\frac{\Delta S}{\hbar} = \sqrt{\frac{\pi\,\Delta I_m}{2}}\,. \quad (193)$$
For the wave that travels at the speed of light, we have to consider the statistical weight $\frac{1}{2}$, which is the probability
$$P(+c) = P(-c) = \tfrac{1}{2}\,,$$

$$\Delta\Psi_{p\,\mathrm{light}} = \frac{1}{2}\, i\,\frac{\Delta S}{\hbar}\,\Psi_p\,, \quad (194)$$

$$\Delta\Psi_{p\,\mathrm{light}} = i\,\frac{\Delta S_L}{\hbar}\,\Psi_p \quad (195)$$
with

$$\Delta S = 2\,\Delta S_L\,. \quad (196)$$
Note that the angular momentum of the interaction that generates mass is
$$\frac{\Delta S}{\Delta\phi} = 2\,\frac{\Delta S_L}{\Delta\phi}\,, \quad (197)$$

$$L_{z\,\mathrm{grav}} = 2\hbar\,, \quad (198)$$
which is the spin of the graviton. Let us consider the first problem proposed in Feynman's book ''Addition of paths and Quantum Mechanics'' (see Fig. 6). A free Dirac particle satisfies the following Hamiltonian:
$$\hat{H} = c(\hat{\alpha}\cdot\hat{p}) + m_0 c^2\hat{\beta} \quad (199)$$



Fig. 6. Contribution of virtual Dirac’s particles to the process of generation of mass.

with

$$\hat{\alpha}_i = \begin{pmatrix} 0 & \hat{\sigma}_i \\ \hat{\sigma}_i & 0 \end{pmatrix} \quad (200)$$
and

$$\hat{\beta} = \begin{pmatrix} \mathbf{1} & 0 \\ 0 & -\mathbf{1} \end{pmatrix}\,, \quad (201)$$
where $\mathbf{1}$ is the identity matrix and $\hat{\sigma}_i$ are the Pauli spinors. Calculating the speed in one direction, for instance $x$, we have
$$\frac{i}{\hbar}\left[\hat{H}, \hat{x}\right] = \dot{\hat{x}}\,, \quad (202)$$

$$\frac{i}{\hbar}\left[c(\hat{\alpha}\cdot\hat{p}) + m_0 c^2\hat{\beta},\, \hat{x}\right] = \frac{i}{\hbar}\left[c(\hat{\alpha}\cdot\hat{p}),\, \hat{x}\right]\,,$$

$$c\,\frac{i}{\hbar}\left[\hat{\alpha}_x\hat{p}_x + \hat{\alpha}_y\hat{p}_y + \hat{\alpha}_z\hat{p}_z,\, \hat{x}\right] = c\,\frac{i}{\hbar}\left[\hat{\alpha}_x\hat{p}_x,\, \hat{x}\right] = c\,\frac{i}{\hbar}\,\hat{\alpha}_x\left[\hat{p}_x,\, \hat{x}\right] = c\,\frac{i}{\hbar}\,\hat{\alpha}_x(-i\hbar) = \hat{\alpha}_x c\,, \quad (203)$$


$$\dot{\hat{x}} = \hat{\alpha}_x c\,,$$
which, applied to a state function, gives
$$\dot{\hat{x}}\begin{pmatrix} \Psi_1 \\ \Psi_2 \end{pmatrix} = c\,\hat{\alpha}_x \begin{pmatrix} \Psi_1 \\ \Psi_2 \end{pmatrix}\,. \quad (204)$$

Let us calculate the eigenvalues of $\hat{\alpha}_x$:
$$\hat{\alpha}_x\begin{pmatrix} \Psi_1 \\ \Psi_2 \end{pmatrix} = \lambda\begin{pmatrix} \Psi_1 \\ \Psi_2 \end{pmatrix}\,; \quad (205)$$
given that $\hat{\alpha}_x^{\dagger} = \hat{\alpha}_x$,
$$\left(\hat{\alpha}_x - \lambda\mathbf{1}\right)\begin{pmatrix} \Psi_1 \\ \Psi_2 \end{pmatrix} = 0\,. \quad (206)$$
Its determinant is
$$\lambda^2\mathbf{1} - \hat{\sigma}_x^2 = 0\,, \qquad \hat{\sigma}_x^2 = \mathbf{1}\,, \quad (207)$$

$$(\lambda^2 - 1)\,\mathbf{1} = 0\,, \quad (208)$$

$$\lambda = \pm 1\,;$$
therefore
$$\dot{\hat{x}}\,\Psi = \pm c\,\Psi \quad (209)$$
and the eigenvalue of the speed is $\pm c$. Feynman calls such a particle a ''Dirac particle'' and gives it the propagator $i\frac{\Delta S_L}{\hbar}$, so
$$\Delta\Psi_p = i\,\frac{\Delta S_L}{\hbar}\,\Psi_p\,.$$
He divides the segment $OA$ into $n$ parts and builds different paths for the propagator; this is shown in Fig. 6, which represents a particle propagating along the $x$-axis with $+c$ or $-c$. Actually, this is a very clever application of Huygens' principle, because every change of direction represents a point from which a wave propagates, as indicated at point $C$ in Fig. 6. The propagator which goes from $O$ to $A$ is
$$K(A, 0) = \sum_{p=0}^{n}\binom{n}{p}\left(\frac{i\,\Delta S_L}{\hbar}\right)^{p} K(0)\,. \quad (210)$$
The number of changes of direction is $N = 2^n$, because of the sum
$$\sum_{p=0}^{n}\binom{n}{p} = 2^n\,, \quad (211)$$
which is deduced from the binomial
$$(1 + x)^n = \sum_{p=0}^{n}\binom{n}{p}\, x^{p} \quad (212)$$

for $x = 1$. The derivative of the binomial is
$$n(1 + x)^{n-1} = \sum_{p=0}^{n}\binom{n}{p}\, p\, x^{p-1}\,,$$
so that
$$n\,2^{n-1} = \sum_{p=0}^{n} p\binom{n}{p}\,. \quad (213)$$
The action gained in $p$ changes of direction is
$$S_p = 2p\binom{n}{p}\Delta S_L \quad (214)$$
in order to preserve the mirror symmetry between $x$ and $-x$ at $0$. The total gained action is
$$S_{\mathrm{total}} = \sum_{p=0}^{n} 2p\binom{n}{p}\Delta S_L = 2n\,2^{n-1}\Delta S_L = n\,2^n\Delta S_L\,. \quad (215)$$
The mean action along the $ct$ axis is
$$\bar{S} = \frac{S_{\mathrm{total}}}{N} = \frac{n\,2^n}{2^n}\Delta S_L = n\,\Delta S_L\,; \quad (216)$$
therefore
$$\Delta S_L = \frac{\bar{S}}{n} \quad (217)$$
and
$$K(A, 0) = \left(1 + \frac{i\bar{S}}{\hbar n}\right)^{n} K(0)\,. \quad (218)$$
When $n \to \infty$,

$$K(A, 0) = K(0)\, e^{\,i\frac{\bar{S}}{\hbar}} \quad (219)$$
with

$$\bar{S} = -m_0 c^2\,\Delta t_p\,, \quad (220)$$
where $\Delta t_p$ is the proper time. Therefore, particles do not exist; they are dispersion processes of waves. Consider an observer moving with speed $v$; then
$$\bar{S} = -m_0 c^2\sqrt{1 - \frac{v^2}{c^2}}\,\Delta t\,, \quad (221)$$
where $\Delta t$ is not the proper time.


For $v \ll c$,
$$\bar{S} \approx \left(-m_0 c^2 + \frac{m_0 v^2}{2}\right)\Delta t$$
and
$$v = \frac{x_A - x_0}{t_A - t_0}\,,$$

$$\Delta t = t_A - t_0\,.$$
Therefore,

$$K(A, 0) = K(0)\, e^{-i\left(m_0 c^2/\hbar\right)\Delta t}\, e^{\,i\,\frac{m_0 (x_A - x_0)^2}{2\hbar (t_A - t_0)}} \quad (222)$$
and, ''smoothing'' the function because $\frac{m_0 c^2}{\hbar} = \omega$ is a high frequency,

$$K(A, 0) = K(0)\, e^{\,i\,\frac{m_0 (x_A - x_0)^2}{2\hbar (t_A - t_0)}} \quad (223)$$
or

$$K(A, 0) = K(0)\, e^{-\frac{(x_A - x_0)^2}{2\left[\frac{i\hbar (t_A - t_0)}{m_0}\right]}}$$
with
$$\sigma^2 = \frac{i\hbar (t_A - t_0)}{m_0}\,; \quad (224)$$
given that
$$\int_{-\infty}^{\infty} K(A, 0)\,\mathrm{d}x = 1\,, \quad (225)$$
$$K(0)\sqrt{\frac{2\pi\hbar (t - t_0)\, i}{m_0}} = 1 \quad (226)$$
and
$$K(0) = \sqrt{\frac{m_0}{2\pi\hbar\, i\,(t - t_0)}}\,,$$
which gives Feynman's non-relativistic propagator
$$K(A, 0) = \sqrt{\frac{m_0}{2\pi\hbar\, i\,(t - t_0)}}\; e^{\,i\,\frac{m_0 (x_A - x_0)^2}{2\hbar (t_A - t_0)}}\,. \quad (227)$$
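The combinatorial steps of this derivation can be checked numerically. The sketch below is ours; the value of the mean action is hypothetical, and it only verifies Eqs. (211), (213) and the limit behind Eqs. (218)-(219), not the propagator itself:

```python
import numpy as np
from math import comb

# Combinatorial identities used in Eqs. (211)-(215).
n = 40
print(sum(comb(n, p) for p in range(n + 1)) == 2**n)                 # Eq. (211)
print(sum(p * comb(n, p) for p in range(n + 1)) == n * 2**(n - 1))   # Eq. (213)

# Eqs. (218)-(219): (1 + i*S_bar/(hbar*n))^n tends to exp(i*S_bar/hbar).
S_bar_over_hbar = 1.3        # hypothetical value of S_bar / hbar
for m in (10, 1_000, 100_000):
    approx = (1 + 1j * S_bar_over_hbar / m) ** m
    print(m, abs(approx - np.exp(1j * S_bar_over_hbar)))   # error shrinks with m
```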

4. Conclusions

1. The Universe is structured by optimal laws. Random processes are those that maximize the mean information, and they are strongly related to symmetries and, therefore, to

conservation laws. Random processes have to do with optimal processes for managing information, but they do not have anything to do with its contents; this is the anthropic principle: laws and physical constants are designed to produce life and consciousness; see Ref. [34].
2. Every physical process satisfies that the action is a local minimum. The action is the most important physical magnitude after information, because through it it is possible to obtain the energy, angular momentum, momentum, charge, etc. Every physical theory must satisfy the necessary (but not sufficient) condition $\delta S = 0$, because it is an objective principle.
3. We have demonstrated that the concept of information and the principle of minimum action $\delta S = 0$ lead us to develop the concepts of Quantum Mechanics and to explain spontaneous decay transitions. Also, we have understood that all physical magnitudes are quadratic mean values of random fluctuations.
4. Information connects everything in nature; each phenomenon is an expression of the totality. Moreover, in quantum gravity this fact is expressed by the Wheeler–DeWitt equation.
5. Mass appears where certain symmetries are broken or, equivalently, when the mean information decreases. A very small decrease in information produces enormous amounts of energy. Also, we have shown that there exists a relation between the amount of information and the energy of a system (Eq. (187)). Information, mass and energy are conserved quantities which are able to transform into one another.
6. An interdisciplinary scope is possible only if we suppose that General System Theory is valid, because games and information can be seen as particular cases of complex systems. Quantum Mechanics is obtained in the form of a generalized complex system where entities, properties and relations conserve their primitive form and add some other ones related to the nature of the physical system.

References

[1] J. Mycielski, Games with perfect information, in: Handbook of Game Theory, vol. I, 1992 (Chapter 3).
[2] S. Zamir, Repeated games of incomplete information: zero-sum, in: Handbook of Game Theory, vol. I, Wiley, New York, 1992 (Chapter 5).
[3] F. Forges, Repeated games of incomplete information: non-zero-sum, in: Handbook of Game Theory, vol. I, 1992 (Chapter 6).
[4] H. Varian, Analyse Microéconomique, Troisième édition, De Boeck Université, Bruxelles, 1995.
[5] S. Braunstein, Quantum Computing, Wiley-VCH, Weinheim, Germany, 1999.
[6] N. Boccara, Modeling Complex Systems, Springer, Heidelberg, 2004.
[7] Y. Bar-Yam, Dynamics of Complex Systems, Addison-Wesley, Reading, MA, 1997.
[8] D. Bouwmeester, A. Eckert, A. Zeilinger, The Physics of Quantum Information, Springer, London, UK, 2001.
[9] R. Myerson, Game Theory: Analysis of Conflict, Harvard University Press, Cambridge, MA, 1991.
[10] L. Landau, Mecánica Cuántica, Editorial Reverté, 1978.
[11] D. Moya, El Campo de Acción, una nueva interpretación de la Mecánica Cuántica, Memorias de Encuentros de Física IV, V, VI, VII y VIII, Politécnica Nacional, 1993.


[12] E. Jiménez, Preemption and attrition in credit/debit cards incentives: models and experiments, in: Proceedings, 2003 IEEE International Conference on Computational Intelligence for Financial Engineering, Hong Kong, 2003.
[13] E. Jiménez, Quantum games and minimum entropy, in: Lecture Notes in Computer Science, vol. 2669, Springer, Canada, 2003, pp. 216–225.
[14] E. Jiménez, Quantum games: mixed strategy Nash's equilibrium represents minimum entropy, J. Entropy 5 (4) (2003) 313–347.
[15] M. Nielsen, I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, Cambridge, United Kingdom, 2000.
[16] C.E. Shannon, A mathematical theory of communication, Bell System Tech. J. 27 (1948) 379–656.
[17] C. Machiavello, G. Palma, A. Zeilinger, Quantum Computation and Quantum Information Theory, World Scientific, London, UK, 2000.
[18] A. Einstein, B. Podolsky, N. Rosen, Phys. Rev. 47 (1935) 777.
[19] J. Stiglitz, La Grande Désillusion, Fayard, Le Livre de Poche, Paris, 2003.
[20] J. Stiglitz, Quand le Capitalisme Perd la Tête, Fayard, Paris, 2003.
[21] J. Eisert, M. Wilkens, M. Lewenstein, Quantum Games and Quantum Strategies, Working paper, University of Potsdam, Germany, 2001.
[22] Du Jiangfeng et al., Experimental realizations of quantum games on a quantum computer, Mimeo, University of Science and Technology of China, 2002.
[23] D. Meyer, Quantum strategies, Phys. Rev. Lett. 82 (1999) 1052–1055.
[24] D. Meyer, Quantum games and quantum algorithms, in: AMS Contemporary Mathematics Volume: Quantum Computation and Quantum Information Science, USA, 2000.
[25] R. Myerson, Communication, correlated equilibria and incentive compatibility, in: Handbook of Game Theory, vol. II, 1994 (Chapter 24).
[26] D. Fudenberg, J. Tirole, Game Theory, The MIT Press, Cambridge, MA, 1991.
[27] E. Rasmusen, Games and Economic Information: An Introduction to Game Theory, Basil Blackwell, Oxford, 1989.
[28] H. Goldstein, Mecánica Clásica, Editorial Aguilar, 1966, Secciones 1.4, p. 25; 2.3, p. 48.
[29] L. Elsgoltz, Ecuaciones Diferenciales y Cálculo Variacional, Editorial MIR, Moscú, 1969, p. 370.
[30] F. Mora, Diccionario de Filosofía, Occam (Cuchilla o Navaja de Occam), Tomo III, p. 2615, Editorial Ariel, Barcelona, 1994.
[31] F. Mora, Diccionario de Filosofía, Economía, Tomo II E-J, p. 967, Editorial Ariel, Barcelona, 1994.
[32] P. Hammerstein, R. Selten, Game theory and evolutionary biology, in: Handbook of Game Theory, vol. 2, Elsevier BV, Amsterdam, 1994.
[33] L. Landau, Mecánica Cuántica no Relativista, Reverté SA, 1967, Capítulo I, p. 3.
[34] S. Hawking, El Universo en una Cáscara de Nuez, Crítica/Planeta, Barcelona, edición española, 2002, p. 86.
[35] J. Eisert, S. Scheel, M.B. Plenio, Distilling Gaussian states with Gaussian operations is impossible, Phys. Rev. Lett. 89 (2002) 137903.
[36] D. Moya, P. Alvarez, A. Cruz, Elementos de la Computación Cuántica, I+D Innovación, vol. 7(13), Quito, Ecuador, 2004.
[37] S. Sorin, Repeated games with complete information, in: Handbook of Game Theory, vol. 1, 1992 (Chapter 4).