Advances in Three Hypercomputation Models
EJTP 13, No. 36 (2016) 169-182    Electronic Journal of Theoretical Physics

Advances in Three Hypercomputation Models

Mario Antoine Aoun∗

Department of the PhD Program in Cognitive Informatics, Faculty of Science, Université du Québec à Montréal (UQAM), QC, Canada

Received 1 March 2016, Accepted 20 September 2016, Published 10 November 2016

Abstract: In this review article we discuss three hypercomputation models: the Accelerated Turing Machine, the Relativistic Computer and the Quantum Computer, in light of three new discoveries: superluminal particles, slowly rotating black holes and adiabatic quantum computation, respectively. © Electronic Journal of Theoretical Physics. All rights reserved.

Keywords: Hypercomputing Models; Turing Machine; Quantum Computer; Relativistic Computer; Superluminal Particles; Slowly Rotating Black Holes; Adiabatic Quantum Computation

PACS (2010): 03.67.Lx; 03.67.Ac; 03.67.Bg; 03.67.Mn

1 Introduction

We discuss three examples of hypercomputation models and their recent advances. The first example is based on a new theory that advocates the use of superluminal particles to bypass the energy constraint of a physically realizable Accelerated Turing Machine (ATM) [1, 2]. The second example draws on contemporary developments in relativity theory, based on recent astronomical observations and the discovery of huge, slowly rotating black holes in our universe, which in turn can provide a space-time structure suitable for operating a relativistic computer [3, 4, 5]. The third example is based on the latest advances and research in quantum computing modeling (e.g. the Adiabatic Quantum Computer and quantum morphogenetic computing) [6, 7, 8, 9, 10, 11, 12, 13]. Copeland defines the terms hypercomputation and hypercomputer as follows: "Hypercomputation is the computation of functions or numbers that [...
] cannot be computed with paper and pencil in a finite number of steps by a human clerk working effectively. A hypercomputer is any information-processing machine, notional or real, that is able to achieve more than the traditional human clerk working by rote. Hypercomputers compute functions or numbers, or more generally solve problems or carry out tasks, that lie beyond the reach of the Universal Turing Machine" [1].

∗ Email: [email protected], [email protected]

Up until the year 2010, hypercomputation did not have a solid grounding in the realm of engineering. This has since changed, and indeed improved, with the development of the "Infinity Machine" [7] (technically, the Adiabatic Quantum Computer, whose first commercial version was released in 2011 by the D-Wave company based in Vancouver, Canada), which can find solutions in a matter of days for optimization problems that would require years if executed on standard, even parallel-processing, computer models [7, 8, 10]. Besides, in science, every theory has exceptions and limitations that bind it to its domain of application. In the computing discipline, we should look at where limitations can be exploited in order to open the doors to new developments and give rise to possible new theories. For instance, what if the energy consumption needed to perform an operation on an Accelerated Turing Machine (ATM) does not grow exponentially with the number of operations to be executed [2]? Thus, by reflecting on the hypotheses upon which we base our conclusions (e.g. that a physical realization of an ATM is impossible) and by expanding the frontiers of the domain of applicability (e.g. the machine upon which we implement our attempt), finding solutions to unsolvable problems could become possible.
It may seem unreasonable to consider such efforts and conjectural challenges: rejecting pre-set hypotheses and objecting to old theories while building new ones, with the aim of solving problems whose solutions are already regarded as unattainable. However, this is not the case in scientific practice, whose virtue relies on questioning, exploration and discovery. Consider, for instance, the proof of Fermat's Last Theorem. The theorem remained open for more than 350 years. Its proof by Andrew Wiles, published in 1995, gave rise to new developments in mathematics linking Galois representations, modular forms and L-functions, which are distinct areas of number theory [14]. A proof of Fermat's Last Theorem had long been deemed impossible. The mathematician Kenneth Ribet said that with the proof of Fermat's Last Theorem "the mathematical landscape has changed. You discover that things that seemed completely impossible are more of a reality" [15]. Moreover, a super task is a task that requires an infinite number of operations to be performed in a finite amount of time [16]. For instance, imagine a computer model that can execute an infinite number of statements (translated into logical and arithmetical operations) in a finite amount of time. This seems impossible. However, one theoretical model was envisaged in this direction: the Accelerated Turing Machine (ATM) [1], described next (Section 2). In Section 3, we discuss a new insight for physically realizing ATMs using superluminal particles. We then tackle new advances in relativistic computing (Section 4) and quantum computing modelling (Section 5) that attempt to execute super tasks. Section 6 concludes the article with criticisms by Martin Davis.
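The super-task intuition can be made concrete with a small numerical sketch (our own illustration, not from the article). It assumes the ATM schedule the article describes in Section 2, in which each operation takes half the time of the previous one, so the total time for infinitely many operations converges to two time units:

```python
def atm_total_time(steps):
    """Total time for the first `steps` operations of an idealized ATM.

    Step t takes half the time of step t-1, with ET(0) = 1,
    so step t takes 2**(-t) time units (a geometric series).
    """
    return sum(0.5 ** t for t in range(steps))

# Partial sums approach the limit of 2 time units as steps grow.
for n in (1, 2, 10, 50):
    print(n, atm_total_time(n))
```

The function name and the unit time scale `ET(0) = 1` are our own choices for illustration; only the halving rule comes from the ATM model.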
2 Accelerated Turing Machines (ATM)

One theoretical model that can achieve a super task is the Accelerated Turing Machine (ATM) [1]. An ATM executes each operation in half the time taken to execute the previous one [1]. This can be expressed by the following formula:

$$ET(t) = \frac{ET(t-1)}{2} \quad (1)$$

where $ET$ denotes execution time and $t$ is the time step, which can be given in any unit of time (e.g. milliseconds, seconds, minutes), with initial condition $ET(0) = 1$. In the scenario of executing a super task, the total execution time ($TET$) of the task, which requires $t$ to go to infinity, is the limit of the following sum:

$$TET = \sum_{i=0}^{t} ET(i) = ET(0) + ET(1) + ET(2) + ET(3) + \cdots = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \to 2 \quad (2)$$

Since this series converges to 2, the time required to execute a super task consisting of an infinite number of operations converges to two time units. In this mode of operation, an ATM executes an infinite number of operations in a finite amount of time, which can be considered a model of a hypercomputer in a purely theoretical sense [1]. Next, we study a possible physical realization of this theoretical ATM.

3 ATM using Superluminal Particles

In this section, we first discuss why the ATM was considered not to be physically realizable, and then how it could become physically realizable through the recent theory of superluminal particles [2]. According to Takaaki Musha [2], the ATM was considered not to be physically realizable "because the energy to perform the computation will be exponentially increased when the computational step is accelerated" [2]. This is based on the energy-time uncertainty relation [17, 18], given next:
$$\Delta H \, \Delta T \sim \hbar \quad (3)$$

where $\Delta H$ is "the standard of energy, $\Delta T$ is a time interval" [17] and $\hbar$ is the Planck constant $h$ divided by $2\pi$. In order to realize hypercomputation by means of an ATM, Musha [2] suggests using superluminal particles instead of subluminal particles (e.g. photons) to escape the energy-time uncertainty relation (Equation 3). Musha bases his work on the energy consumption of the reversible computer (e.g. Richard Feynman's model), which is a quantum computer. He refers to Lloyd's influential paper on the physical limits of computation [19], showing that a quantum system, based on the energy consumption per computational step as given by Feynman, requires an infinite amount of time to complete infinitely many computational steps when it uses subluminal particles such as photons [2]. Hence, Musha proceeds by studying the uncertainty principle for superluminal particles. Superluminal means traveling faster than light. By means of quantum tunneling, where a photon tunnels through a material or a barrier, it was shown that a photon can travel faster than light when tunneling through a specific barrier or material; that is, the speed of a photon can surpass the celerity of light ($c$) inside such a barrier or material. In this regard, Musha quotes the studies of [20] and the experiments of [21, 22, 23], which demonstrate superluminal behavior based on quantum tunneling. Musha arrives at a new energy-time uncertainty relation for superluminal particles:

$$\Delta H \, \Delta T \sim \frac{\hbar}{\beta(\beta - 1)} \quad (4)$$

where $\beta = v/c$, $c$ is the celerity of light and $v$ is the speed of the particle after it has been measured (note that the speed of the particle is equal to $c$ before the measurement). Last, Musha derives the equation of the total computational time required by a quantum system, based on a Feynman model that uses superluminal particles, when the system performs a number of computational steps.
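As a rough numerical illustration (a toy sketch of our own, not Musha's calculation; the per-step energy scale $\Delta H$ is an arbitrary value chosen only for scale), one can compare the minimum time interval per step permitted by the subluminal relation (3) with that permitted by the superluminal relation (4) for $\beta = v/c > 1$:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def dt_subluminal(dH):
    """Minimum time per step from dH * dT ~ hbar (relation 3)."""
    return HBAR / dH

def dt_superluminal(dH, beta):
    """Minimum time per step from dH * dT ~ hbar / (beta*(beta-1))
    (relation 4), for beta = v/c > 1 (superluminal)."""
    return HBAR / (beta * (beta - 1) * dH)

dH = 1e-19  # arbitrary per-step energy in joules, for illustration only
for beta in (1.5, 2.0, 10.0):
    # the superluminal bound shrinks by the factor beta*(beta-1),
    # so larger beta permits shorter computational steps
    print(beta, dt_superluminal(dH, beta) / dt_subluminal(dH))
```

The ratio printed is simply $1/(\beta(\beta-1))$, independent of $\Delta H$; the sketch only illustrates how relation (4) relaxes the per-step time bound as $\beta$ grows.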
He finds that the total computational time converges as the number of computational steps tends to infinity. This means that "an accelerated Turing machine can be realized by utilizing superluminal particles instead of subliminal particles for the Feynman's model of computation" [2].

4 Relativistic Computers

In the context of hypercomputation, and of the time required for a computer to carry out an infinite number of operations in a finite amount of time, Copeland cites Hogarth, who says "there is no reason why a computer user must remain beside the computer" [1].