
VOLUME 83, NUMBER 7  PHYSICAL REVIEW LETTERS  16 AUGUST 1999

Computational Complexity for Continuous Time Dynamics

Hava T. Siegelmann and Asa Ben-Hur
Faculty of Industrial Engineering and Management, Technion, Haifa 32000, Israel

Shmuel Fishman
Department of Physics, Technion, Haifa 32000, Israel

(Received 22 March 1999)

Dissipative flows model a large variety of physical systems. In this Letter the evolution of such systems is interpreted as a process of computation; the attractor of the dynamics represents the output. A framework for an algorithmic analysis of dissipative flows is presented, enabling the comparison of the performance of discrete and continuous time analog computation models. A simple algorithm for finding the maximum of n numbers is analyzed, and shown to be highly efficient. The notion of tractable (polynomial) computation in the Turing model is conjectured to correspond to computation with tractable (analytically solvable) dynamical systems having polynomial complexity.

PACS numbers: 05.45.-a, 89.70.+c, 89.80.+h

0031-9007/99/83(7)/1463(4)$15.00 © 1999 The American Physical Society

The computation of a digital computer, and of its mathematical abstraction, the Turing machine, is described by a map on a discrete configuration space. In recent years scientists have developed new approaches to computation, some of them based on continuous time analog systems. The most promising are neuromorphic systems [1], models of human memory [2], and experimentally realizable quantum computers [3]. Although continuous time systems are widespread in experimental realizations, no theory exists for their algorithmic analysis. The standard theory of computation and computational complexity [4] deals with computation in discrete time and in a discrete configuration space, and is inadequate for the description of such systems. This Letter describes an attempt to fill this gap. Our model of a computer is based on dissipative dynamical systems (DDS), characterized by flow to attractors, which are a natural choice for the output of a computation. This makes our theory realizable by small-scale classical physical systems (where dissipation is usually not negligible) [5]. We define a measure of computational complexity which reflects the convergence time of a physical implementation of the continuous flow, enabling a comparison of the efficiency of continuous time algorithms with discrete ones. On the conceptual level, the framework introduced here strengthens the connection between the theory of computational complexity and the field of dynamical systems.

Turing universality of dynamical systems is a fundamental issue; see [6] and a recent book [7]. A system of ordinary differential equations (ODEs) which simulates a Turing machine was constructed in [8]. Such constructions retain the discrete nature of the simulated map, in that they follow its computation step by step with a continuous equation. In the present Letter, on the other hand, we consider continuous systems as is, and interpret their dynamics as a process of computation.

The view of the process of computation as a flow to an attractor has been taken by a number of researchers. The Hopfield neural network is a dynamical system which evolves to attractors that are interpreted as memories; the network is also used to solve optimization problems [2]. Brockett introduced a set of ODEs that perform various tasks such as sorting and solving linear programming problems [9]. Numerous other applications can be found in [10]. An analytically solvable ODE for the linear programming problem was proposed by Faybusovich [11]. Our theory is, to some extent, a continuation of their work, in that it provides a framework for the complexity analysis of continuous time algorithms.

Our model is restricted to exponentially convergent autonomous dissipative ODEs

    dx/dt = F(x) ,   (1)

for x ∈ R^n and F an n-dimensional vector field, where n depends on the input. For a given problem, F takes the same mathematical form, and only the length of the various objects in it (vectors, tensors, etc.) depends on the size of the instance, corresponding to "uniformity" in computer science [12]. We discuss only systems with fixed point attractors, and the term attractor will be used to denote an attracting fixed point. We study only autonomous systems, since for these the time parameter is not arbitrary (contrary to nonautonomous ones): under any nonlinear transformation of the time parameter the system is no longer autonomous, as will be explained in what follows.

The restricted class of exponentially convergent vector fields describes the "typical" convergence scenario for dynamical systems [13]. Structural stability of exponentially convergent flows is an important property for analog computers. As a further justification we argue that exponential convergence is a prerequisite for efficient computation, provided the computation requires reaching the asymptotic regime, as is usually the case. Asymptotically, |x(t) - x*| ~ e^(-t/t_ch) [see Eq. (5)]. When a trajectory is close to its attractor, within a time t_ch ln 2 an additional binary digit of the attractor is computed. Thus the computation of L digits requires a time which scales as t_ch L. This is in contrast with polynomially convergent vector fields: suppose that |x(t) - x*| ~ t^(-beta) for some beta > 0; then in order to compute x* with L significant digits, we need to have |x(t) - x*| < 2^(-L), or t > 2^(L/beta), an exponential time complexity.

Last, we concentrate on ODEs with a formal solution, since for these, complexity is readily analyzed, and it is easy to provide criteria for halting a computation. Dynamical systems with an analytical solution are an exception. But despite their scarcity, we argue later that a subclass of analytically solvable DDS's which converge exponentially to fixed point attractors is a counterpart of the classical complexity class P. This then suggests a correspondence between tractability in the realm of dynamical systems and tractability in the Turing model.

The input of a DDS can be modeled in various ways. One possible choice is the initial condition. This is appropriate when the aim of the computation is to decide to which attractor, out of many possible ones, the system flows. This approach was pursued in [14]. The main problem within this approach is related to initial conditions in the vicinity of basin boundaries: the flow in the vicinity of a boundary is slow, resulting in very long computation times. In the present Letter, on the other hand, the parameters on which the vector field depends are the input, and the initial condition is a function of the input, chosen in the correct basin and far from basin boundaries so as to obtain an efficient computation. For the gradient vector field of Eq. (7), designed to find the maximum of n numbers, the n numbers c_i constitute the input, and the initial condition given by Eq. (11) is untypically simple. More generally, when dealing with the problem of optimizing some cost function E(x), e.g., by a gradient flow ẋ = grad E(x), an instance of the problem is specified by the parameters of E(x), i.e., by the parameters of the vector field.

The vector x(t) represents the state of the corresponding physical system at time t. The time parameter is thus time as measured in the laboratory, and has a well-defined meaning. Therefore we suggest it as a measure of the time complexity of a computation. However, for nonautonomous ODEs that are not directly associated with physical systems, the time parameter seems to be arbitrary; a nonlinear transformation of it effectively changes the system itself. Therefore we suggest autonomous systems as representing the intrinsic complexity of the class of systems that can be obtained from them by changing the time parameter.

The evolution of a DDS reaches an attractor only in the infinite time limit. Therefore for any finite time we can compute it only to some finite precision. This is sufficient, since for combinatorial problems with integer or rational inputs the set of fixed points (the possible solutions) will be distributed on a grid of some finite precision. A computation will be halted when the attractor is computed with enough precision to infer a solution to the associated problem by rounding to the nearest grid point.

The phase space evolution of a trajectory may be rather complicated, and a major problem is to decide whether a point approached by the trajectory is indeed the attractor of the dynamics, and not a saddle point. An attractor is certified by its attracting region, a subset of the trapping set of the attractor in which the distance from the attractor is monotonically decreasing in time. The convergence time to an attracting region U, t_c(U), is the time it takes a trajectory starting from the initial condition x0 to enter U.

When the computation has reached the attracting region of a fixed point, and is also within the precision required for solving the problem, eps_p, the computation can be halted. We thus define the halting region of a DDS with attracting region U and required precision eps_p as H = U ∩ B(x*, eps_p), where B(x*, eps_p) is a ball of radius eps_p around the attractor x*. The computation time is the convergence time to the halting region, t_c(H), given by

    t_c(H) = max[t_c(eps_p), t_c(U)] ,   (2)

where t_c(eps_p) is the convergence time to B(x*, eps_p). In general, we cannot calculate t_c(H) for a DDS algorithm. Thus we resort to halting by a bound on the computation time over all instances of size L:

    T(L) = max_{|P| = L} t_c(H(P)) ,   (3)

where P denotes the input, and L = |P| is its size in bits. The definition of the input size depends on the input space considered.