Mechanism of Organization Increase in Complex Systems
Georgi Yordanov Georgiev 1,2,3* , Kaitlin Henry 1, Timothy Bates 1, Erin Gombos 1,4 , Alexander Casey 1,5 , Michael Daly 1,6 , Amrit Vinod 1,7 and Hyunseung Lee 1
1Physics Department, Assumption College, 500 Salisbury St, Worcester, MA, 01609, USA 2Physics Department, Tufts University, 4 Colby St, Medford, MA, 02155, USA 3Department of Physics, Worcester Polytechnic Institute, Worcester, MA, 01609, USA 4Current address: National Cancer Institute, NIH, 10 Center Drive, Bethesda, MD 20814, USA 5Current address: University of Notre Dame, Notre Dame, IN 46556, USA 6Current address: Meditech, 550 Cochituate Rd, Framingham, MA 01701, USA 7Current address: University of Massachusetts Medical School, 55 Lake Avenue North, Worcester, MA 01655, USA
*Corresponding author. Emails: [email protected]; [email protected];
Abstract
This paper proposes a variational approach to describe the evolution of organization of complex systems from first principles, as increased efficiency of physical action. Most simply stated, physical action is the product of the energy and time necessary for motion. When complex systems are modeled as flow networks, this efficiency is defined as a decrease of action for one element to cross between two nodes, or endpoints of motion - a principle of least unit action. We find a connection with another principle, that of most total action, or a tendency for increase of the total action of a system. This increase provides more energy and time for minimization of the constraints to motion in order to decrease unit action, and therefore to increase organization. Also, with the decrease of unit action in a system, its capacity for total amount of action increases. We present a model of positive feedback between action efficiency and the total amount of action in a complex system, based on a system of ordinary differential equations, which leads to an exponential growth with time of each and a power law relation between the two. We present an agreement of our model with data for central processing units (CPUs) of computers. This approach can help to describe, measure, manage, design and predict future behavior of complex systems to achieve the highest rates of self-organization and robustness.
Keywords: self-organization; complex system; flow network; variational principles; principle of least unit action; principle of most total action; positive feedback mechanism; ordinary differential equations.
1. Introduction
We study the processes of self-organization in nature, seeking one set of rules or one universal law that causes all of them to occur. The importance of such an endeavor has been recognized, particularly when it comes to the optimization of energy flows in a system (Hubler and Crutchfield, 2010; Chaisson, 2004, 2011a,b). The appearance of elementary particles from radiation energy after the Big Bang started a chain of events that is the object of our study. Those particles assembled into atoms, which in turn formed molecules that gave rise to organisms, and eventually our human civilization and beyond. We see this chain of hierarchical events unified by the same underlying natural laws, leading to the rise of all of them. The search for a unifying theory of self-organization is an exciting prospect that we hope will continue to motivate others.
There are many crucial questions that urgently need an answer. What principle determines the motions in complex systems? Which motions are preferred? What does it mean for a system to be organized and to self-organize? Why do systems self-organize? How do we measure organization? What does self-organization depend on? What is the rate of self-organization and when does it stop or reverse? How does the increase of the size of a system affect the increase of its organization? Why do some complex systems continue to self-organize for billions of years but others are temporary? What is special about systems that reach the highest levels of organization? What is driving them forward in their increased levels of organization? These and other similar questions have been staring at us since we became conscious, but we do not yet know all of the answers. We address some of them in this paper.
Our approach is to study the minimization of physical action per unit motion and the maximization of the total action for all motions in systems. Self-organization in complex systems can be described as an increased efficiency of physical action, which provides a means to define what exactly organization is and how it is achieved and measured (Georgiev and Georgiev, 2002; Georgiev, 2012; Georgiev et al., 2012). This approach stems from the principle of least action, which underlies all branches of physics and all motion in nature. Complex systems are comprised of individual elements. Each element is the smallest mobile unit in the system and moves, most often, in a flow of other elements along a network of paths (edges) between starting and ending points in order to build, recombine, or change the system. In CPUs, one unit of motion (event) is a single computation in which electrons flow from the start node to the end node. In our model, the flow consists of events, not of energy or matter per se, even though both participate in the events. It has been shown that in complex systems the nodes of the network representation need to be well defined, so that the elements can traverse deterministic walks instead of random walks (Boyer and Larralde, 2005). Random walks characterize equilibrated, non-self-organizing systems.
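To make the flow-network picture concrete, the following is a minimal Python sketch. The node names, per-edge action costs, and element counts are hypothetical; each event is one element crossing one edge along a deterministic least-action path, as described above:

```python
from collections import defaultdict

# Directed edges with the action (in quanta) one element spends crossing them.
# These values are illustrative assumptions, not measured data.
edge_action = {("source", "a"): 5.0, ("a", "b"): 3.0, ("b", "sink"): 4.0}

# A deterministic walk: every element follows the same least-action path.
path = [("source", "a"), ("a", "b"), ("b", "sink")]
crossings = defaultdict(int)

n_elements = 100
for _ in range(n_elements):
    for edge in path:
        crossings[edge] += 1          # one event = one element crossing one edge

total_action = sum(edge_action[e] * c for e, c in crossings.items())
events = sum(crossings.values())
avg_action_per_event = total_action / events   # average unit action, in quanta
```

In this toy network the total action grows with the number of elements, while the average action per event stays fixed by the constraints encoded in the edge costs; organizing the system means lowering those costs.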
In complex systems, elements cannot move along their least possible action paths that characterize their motion outside of systems because of obstacles to the motion (constraints). The
principle of least action, expanded for complex systems, states that systems are attracted toward a state with the least average action per one motion given those constraints (Georgiev and Georgiev, 2002; Georgiev, 2012; Georgiev et al., 2012). Similarly, Hertz's principle states that objects move along paths of least curvature (Hertz, 1896; Goldstein, 1980), and Gauss's principle states that they move along paths of least constraint (Gauss, 1829). We extend these two principles to complex systems: the elements do work on the constraints to minimize them, reducing the curvature and the amount of action spent per unit motion. The new geodesic, in the space curved by the constraints to motion, is the path with the lowest amount of action. When the elements do work to minimize the constraints, they form paths of least constraint, which are the flow paths in the system (Georgiev and Georgiev, 2002; Georgiev, 2012; Georgiev et al., 2012). Because the action along those paths is lowest compared to all neighboring paths, the rest of the elements traversing the same nodes move along the same paths. As the constraints are minimized further and the action decreases along a certain path, the probability that more elements move along it increases; such paths become attractors for other elements, which further minimizes action along them. Indeed, it has been recognized that in complex systems the major control parameter is the throughput (Hübler, 2005). Therefore, in our work, organization is defined as the state of the constraints to motion, which determines the average action per one element of the system and one of its motions. As the constraints are minimized, the same motion is done more efficiently, i.e. the same two nodes are connected using less action, and organization increases.
We use a flow network representation of a complex system, where the trajectories of the elements are along flow paths of least action, compared to neighboring paths. A flow network implies an inflow and outflow of energy and can exist only in open systems far from equilibrium. The sources and sinks define the start and end points of the elements and flows in the system, which are the nodes of the network. As a result of the "principle of minimum dissipation per channel length", natural network formation "in an open, dissipative system" occurs as "branching, hierarchical networks" (Smyth and Hubler, 2003), which means that flow networks maximize their energy efficiency if they have a hierarchical, fractal structure. This points to a first-principles explanation of the formation of hierarchical fractal flow networks, such as the Internet, transportation networks, and respiratory or cardiovascular systems, which "share the scale-free property … of self-similarity or fractality" (Rozenfeld-11). We see evidence of fractality in scale-free systems in nature everywhere, from snowflakes and coastlines to data and molecules, and recent research has begun to quantify these observations (Rozenfeld-11). The network model of a complex system has gained importance in recent years (Alain et al., 2008; Ángeles et al., 2007; Dangalchev, 1996; Liu et al., 2013; Mark et al., 2011; Wu et al., 2006; Xulvi-Brunet and Sokolov, 2007). The scaling laws of transport properties of scale-free networks have been found (Goh et al., 2001) and their betweenness centrality has been measured (Kitsak-07). The self-similar scaling of density was found to be important in complex real-world networks (Blagus-12). It was found that scale-free networks allow routing schemes to self-adjust in order to overcome congestion (Zhang-07; Tang-09). Congestion is a jamming transition which
decreases the flow, which in turn lowers the action efficiency of the system and therefore its level of organization. This leaves us with a question: why do these fractal, scale-free flow networks exist in the first place, as opposed to elements moving by diffusion or in a different network pattern? When a system's size is below a certain threshold, diffusion must minimize the action for the motions in the system given their constraints. When systems are above a certain size, it should take less action to move along flow channels, where the constraints to motion are minimized, than to move by diffusion. This hints at a size dependence of the complexity and efficiency of flow networks. As networks grow, the structure that is apparently most efficient in terms of unit action, and most ubiquitous, is the scale-free one.
Size-Complexity relation: Self-organizing systems have two attractors: an increase in the level of complexity and an increase in the size of the system (Bell and Mooers, 1997; Bonner, 2004; Carneiro, 1967). Representing systems as flow networks, we measure those two quantities as a decrease in average unit physical action (the average action necessary for one element of the system to cross one edge) and an increase in total physical action (the sum of the actions of all elements in the system in a certain interval of time). The positive feedback between the least unit action and the maximum total action leads to exponential growth in time of both of them and a power law relation between them; this pattern characterizes developing systems and is ubiquitous in nature (Bell and Mooers, 1997; Bertalanffy, 1968; Bonner, 2004; Carneiro, 1967). The effect of a system's size on its efficiency also supports this observation (Bejan, 2011; Kleiber, 1932; West, 1999), as do the processes of growth and efficiency in information and other technologies (Moore, 1965; Kurzweil, 2005; Nagy, 2011). Development and evolution in different systems have been studied through the dynamics of complex systems in great detail, and some projections about the future of this trajectory have been made (Bar-Yam, 1997; Gershenson and Heylighen, 2003; James et al., 2011; Salthe, 1993; Smart, 2002; Vidal, 2010). In describing the emergence of order in random systems, simulations have proved particularly useful (Kassebaum and Iannacchione, 2009); simulation will be the next step in this research, after the mechanisms of self-organization are understood sufficiently.
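The pattern claimed here, exponential growth of each quantity in time together with a power law between them, can be illustrated numerically. The sketch below assumes simple exponential growth laws with hypothetical rates a and b; it is not the paper's fitted model, only a demonstration that any two quantities growing exponentially at rates a and b obey a power law with exponent a/b:

```python
import math

# Illustrative sketch: if action efficiency (alpha) and total action (Q) each
# grow exponentially in time, a power-law relation alpha ~ Q^(a/b) follows.
# The rates and initial values below are hypothetical, not fitted to data.
a, b = 0.3, 0.6
alpha, Q = 1.0, 1.0
dt, steps = 0.01, 2000

log_alpha, log_Q = [], []
for _ in range(steps):
    alpha += a * alpha * dt   # Euler step for d(alpha)/dt = a * alpha
    Q += b * Q * dt           # Euler step for dQ/dt = b * Q
    log_alpha.append(math.log(alpha))
    log_Q.append(math.log(Q))

# least-squares slope of log(alpha) vs log(Q) estimates the exponent a/b
n = len(log_Q)
mx, my = sum(log_Q) / n, sum(log_alpha) / n
slope = sum((x - mx) * (y - my) for x, y in zip(log_Q, log_alpha)) / \
        sum((x - mx) ** 2 for x in log_Q)
```

The recovered slope is close to a/b = 0.5, the power-law exponent relating the two growing quantities under these assumed growth laws.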
In this paper we strive to show that the quality of a system depends on its quantity: when one increases or decreases, the other changes in the same direction, i.e. they are in a positive feedback loop. We also aim to show that this dependence is a major driving force and mechanism of progressive development, measured as an increase in action efficiency in complex systems. This is important for understanding self-organization and can be used in designing more efficient complex systems. We hope that researchers in various disciplines, from chemistry and biology to engineering and social science, will be able to apply these results.
2. Theory
2.1 Overview
This manuscript proposes a variational approach: maximizing the total action and minimizing the average unit action as the level of organization in complex systems increases. The total quantity of action is the sum of all quanta of action occurring in a system per unit time. Quanta of action are obtained by dividing the total action by Planck's constant, the smallest unit (quantum) of action in nature. The average unit action is the average action necessary for one event in a system, measured as the total action divided by the total number of events in a certain interval of time. Lower average unit action means higher action efficiency of a system, which is our definition for quality and organization, as inversely proportional to the average number of quanta of action per unit motion. The importance of variational (extremization) principles in describing systems' behavior, and the extension of the principle of least action to dissipative systems, have been noted widely (Sieniutycz and Farkas, 2004). Eric Chaisson's work points to maximization of the free energy rate density (FERD), i.e. the flow of energy per unit mass and time in systems, as a function of time (Chaisson, 1998, 2001). He describes how the evolution of systems of different nature, such as physical, chemical, biological and social, correlates with an exponential increase of the FERD, which indicates that as systems become more organized they move further away from thermodynamic equilibrium. This supports the observation of an increase of quantity in systems, as increased energy gradients and therefore flows through more organized systems. Many other variational principles have been applied to self-organization, such as maximum entropy production (Paltridge, 1979), minimum entropy production (Nicolis and Prigogine, 1977) and least dissipation (Onsager and Machlup, 1953).
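As a numerical illustration of these definitions, the following sketch computes the total action in quanta, the average unit action, and the action efficiency. The event energies and durations are hypothetical values, and the action of an event is taken as the product of its energy and time, as stated in the abstract:

```python
PLANCK_H = 6.62607015e-34  # J*s, the quantum of action

# hypothetical events: (energy in joules, duration in seconds)
events = [(2.0e-15, 1.0e-9), (1.5e-15, 2.0e-9), (3.0e-15, 0.5e-9)]

total_action = sum(e * t for e, t in events)     # total action, in J*s
total_quanta = total_action / PLANCK_H           # total action, in quanta
avg_unit_action = total_quanta / len(events)     # quanta per event
action_efficiency = 1.0 / avg_unit_action        # higher means more organized
```

Lowering the energy or time per event lowers the average unit action and raises the efficiency, which is the measure of organization used throughout this paper.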
The minimization of constraints to the motion of flows of elements is also supported by the constructal law, which states that complex systems' configurations evolve in a way that provides easier access to the currents that flow through them (Bejan, 2005); this is also an extremization principle. The utility of the principle of least action for describing self-organization has been shown (Annila and Salthe, 2010; Annila, 2010; Mäkelä and Annila, 2010; Chatterjee, 2012, 2013), and it has been used to describe a "natural selection for least action" (Pernu and Annila, 2012; Hartonen and Annila, 2012). Hubler and Crutchfield noted a "tendency in systems with a constant flow to minimize energy consumption" (Hubler and Crutchfield, 2010) and introduced a "principle of minimum dissipation per channel length" (Smyth and Hübler, 2003). This is not far from our view of a minimization of time use and energy dissipation per unit motion, connected to a maximization of total energy dissipation and time of existence of the entire system. This leads to a minimum entropy production per unit motion, related to a maximum total entropy production of the system. A minimization of an economy function has also been used (Boyer and Larralde, 2005). It has been suggested that applying the principle of least action to biological systems could be very helpful in solving the mystery of how complex patterns emerge during organism development and differentiation (Vandenberg et al., 2012).
To clarify why we minimize the action, rather than just the energy or the time per unit motion, let us suppose that we minimized them separately. If only free energy is minimized, motion ceases and flows stop; crystallization is one example. If free energy alone were minimized, it would take an infinite amount of time for an element to bridge two nodes, i.e. complex systems could not exist and function, since there would be no flow of energy through them. Similarly, minimizing only time would lead to a paradox, necessitating infinite amounts of energy to be spent by the system as the time interval for an event approaches zero and the speed of the elements approaches the speed of light. Time and energy must therefore be in balance with each other, which is achieved by the principle of least action, i.e. the least product of time and energy, not the least amount of each of them separately.
To specify the systems studied, we separate self-organizing complex systems into two classes: passive and active. Passive systems are those that exist until external energy gradients (differences) are equilibrated. Those energy gradients occur independently of the system. Temporary dissipative structures, such as Bénard cells or vortices, belong to this type. They minimize unit action to accommodate flows while energy differences exist, but do not exhibit further self-organization, and fall apart as the energy gradients are equilibrated. Active systems are those that increase their energy gradients and drive themselves further out of equilibrium, actively increasing their size and action efficiency further, evolving in time. This active type of system exhibits continuous self-organization, growth and increased robustness, as in biological and social systems (Bar-Yam, 1997; Bertalanffy, 1968; Chaisson, 2001; Gershenson and Heylighen, 2003).
Outside of complex systems, the physical action has a single minimum for the motion of a particle compared to all other paths, given by its equations of motion: the particle moves along a geodesic. In complex systems, due to the constraints to motion, moving along that same geodesic has higher action compared to a large set of symmetric, longer trajectories. For infinitely long paths, the action rises to infinity. In phase space, this forms a well-known "Mexican hat" surface with a circular minimum around the geodesic of a particle, which describes its motion when it is not part of a system. On this surface the elements of the system spontaneously choose one of the infinitely many minimum-action trajectories, which signifies a phase transition from a simple to a complex system, or from one level of organization to another. The initial geodesic paths of the elements become the "vacuum" or "ground state" of the complex system. There is a body of work on phase transitions and symmetry breaking in open dissipative systems with local interactions and global constraints, causing bottom-up and top-down sequences of symmetry breaking (Hübler, 2005). Lower symmetry is generally connected to more order, and higher symmetry to more randomness and entropy. An order parameter approach for describing these symmetries has been proposed to measure self-organization (Haken, 1982, 2006).
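A toy numerical picture of this landscape can be sketched as follows. The quartic profile and all constants below are assumptions for illustration only, chosen to reproduce the qualitative "Mexican hat" shape described in the text, not derived from any system's dynamics:

```python
# Action as a function of radial deviation r from the unconstrained geodesic.
# Constraints raise the action at r = 0, so the minimum lies on a ring r = r0:
# a "Mexican hat" profile with a circle of degenerate least-action paths.
def action(r, r0=1.0, c=1.0, base=2.0):
    return base + c * (r**2 - r0**2)**2

rs = [i * 0.01 for i in range(301)]   # sample deviations 0.00 .. 3.00
best_r = min(rs, key=action)          # least-action deviation, near r = r0
```

The least-action path lies at a nonzero deviation from the original geodesic, and every path on the ring r = r0 has the same action; an element picking one of them is the symmetry-breaking step described above.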
2.2 Basic Principles
The principle of least action determines the motions of all objects in the universe; therefore it must determine the motions of elements in complex systems. We explore the idea that self-organization is driven by the principle of least action for systems, which states that the variation of the average unit action is zero in the most organized state:
\delta \left( \frac{\sum_{ij} I_{ij}}{nm} \right) = 0
where \sum_{ij} I_{ij} is the total amount of action in the system per unit time, with I_{ij} the action of element i crossing edge j; n is the total number of elements in the system, and m is the number of edge crossings of one element per unit time.
When the unit action decreases, the system becomes more efficient, obeying the principle of least action, i.e. self-organizes:
\delta \left( \frac{\sum_{ij} I_{ij}}{nm} \right) < 0
As this is a minimization problem, the limit is a minimum, which approaches zero when the system is infinitely organized:
\lim_{t \to \infty} \frac{\sum_{ij} I_{ij}}{nm} = \min
When the unit action increases, its level of organization decreases:
\delta \left( \frac{\sum_{ij} I_{ij}}{nm} \right) > 0
When a system is regressing, it can reach a limit which approaches infinity in the maximally random entropic state:
\lim_{t \to \infty} \frac{\sum_{ij} I_{ij}}{nm} = \max
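The criteria above can be applied to observed data. The following is a minimal sketch with hypothetical numbers, where each observation epoch records the total action in quanta, the number of elements n, and the crossings per element m:

```python
def avg_unit_action(total_action, n, m):
    # average action per one element per one edge crossing
    return total_action / (n * m)

# hypothetical epochs: (total action in quanta, n elements, m crossings)
epochs = [(1.2e6, 10, 100), (1.8e6, 20, 120), (2.6e6, 40, 150)]
series = [avg_unit_action(*e) for e in epochs]

# decreasing unit action across epochs indicates self-organization;
# an increasing series would indicate regression toward randomness
organizing = all(later < earlier for earlier, later in zip(series, series[1:]))
```

Note that in this example the total action grows across epochs while the unit action falls, which is exactly the combination of most total action and least unit action described in the text.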
The total action in a system is the sum of the actions of all elements over all edges that they cross, i.e. it is the action necessary for the total flow in the system, as measured by all events occurring along the flow network. Quality and quantity are proportional in our model; therefore, when the total action increases, the system self-organizes, as inferred from the literature and our data:
\delta \sum_{ij} I_{ij} > 0
When the total action decreases, the organization decreases:
\delta \sum_{ij} I_{ij} < 0

Self-organization is a maximization problem, therefore the limit is a maximum during self-organization, which approaches infinity when the system is infinitely organized:
\lim_{t \to \infty} \sum_{ij} I_{ij} = \max
In that state, the action cannot increase further and its variation is zero:
\delta \sum_{ij} I_{ij} = 0
This is the final state, where a system cannot organize anymore, i.e. cannot grow in quality or quantity.
The reverse is also true: when a system becomes less organized, it falls apart and decreases in size, ultimately until no elements are left in it and it is no longer defined as a system, i.e. its total action decreases to a minimum, which is zero when the system ceases to exist.
This leads us to a measure for organization which is inversely proportional to the average number of quanta for an element in a flow network to cross one edge – its action efficiency. For identical elements (Georgiev, 2012):