Mechanism of Organization Increase in Complex Systems
Georgi Yordanov Georgiev 1,2,3*, Kaitlin Henry 1, Timothy Bates 1, Erin Gombos 1,4, Alexander Casey 1,5, Michael Daly 1,6, Amrit Vinod 1,7 and Hyunseung Lee 1

1 Physics Department, Assumption College, 500 Salisbury St, Worcester, MA 01609, USA
2 Physics Department, Tufts University, 4 Colby St, Medford, MA 02155, USA
3 Department of Physics, Worcester Polytechnic Institute, Worcester, MA 01609, USA
4 Current address: National Cancer Institute, NIH, 10 Center Drive, Bethesda, MD 20814, USA
5 Current address: University of Notre Dame, Notre Dame, IN 46556, USA
6 Current address: Meditech, 550 Cochituate Rd, Framingham, MA 01701, USA
7 Current address: University of Massachusetts Medical School, 55 Lake Avenue North, Worcester, MA 01655, USA

*Corresponding author. Emails: [email protected]; [email protected]

Abstract

This paper proposes a variational approach to describe the evolution of organization of complex systems from first principles, as increased efficiency of physical action. Most simply stated, physical action is the product of the energy and time necessary for motion. When complex systems are modeled as flow networks, this efficiency is defined as a decrease of the action for one element to cross between two nodes, or endpoints of motion: a principle of least unit action. We find a connection with another principle, that of most total action, or a tendency for the total action of a system to increase. This increase provides more energy and time for minimization of the constraints to motion, in order to decrease unit action and therefore to increase organization. Conversely, with the decrease of unit action in a system, its capacity for total action increases. We present a model of positive feedback between action efficiency and the total amount of action in a complex system, based on a system of ordinary differential equations, which leads to exponential growth of each with time and a power-law relation between the two. We present an agreement of our model with data for central processing units (CPUs) of computers. This approach can help to describe, measure, manage, design and predict the future behavior of complex systems in order to achieve the highest rates of self-organization and robustness.

Keywords: self-organization; complex system; flow network; variational principles; principle of least unit action; principle of most total action; positive feedback mechanism; ordinary differential equations.

1. Introduction

We study the processes of self-organization in nature, seeking one set of rules or one universal law that causes all of them to occur. The importance of such an endeavor has been recognized, particularly when it comes to the optimization of energy flows in a system (Hubler and Crutchfield, 2010; Chaisson, 2004, 2011a,b). The appearance of elementary particles from radiation energy after the Big Bang started a chain of events that is the object of our study. Those particles assembled into atoms, which in turn formed molecules that gave rise to organisms, and eventually to our human civilization and beyond. We see this chain of hierarchical events as unified by the same underlying natural laws, which lead to the rise of all of them. Finding a unifying theory for self-organization is an exciting prospect that we hope will continue to motivate others.

There are many crucial questions that urgently need an answer. What principle determines the motions in complex systems? Which motions are preferred?
What does it mean for a system to be organized and to self-organize? Why do systems self-organize? How do we measure organization? What does self-organization depend on? What is the rate of self-organization, and when does it stop or reverse? How does an increase in the size of a system affect the increase of its organization? Why do some complex systems continue to self-organize for billions of years while others are temporary? What is special about systems that reach the highest levels of organization? What drives them forward to ever higher levels of organization? These and other similar questions have confronted us since we became conscious, but we do not yet know all of the answers. We address some of them in this paper.

Our approach is to study the minimization of physical action per unit motion and the maximization of the total action for all motions in a system. Self-organization in complex systems can be described as an increased efficiency of physical action, which provides a means to define what exactly organization is and how it is achieved and measured (Georgiev and Georgiev, 2002; Georgiev, 2012; Georgiev et al., 2012). This approach stems from the principle of least action, which underlies all branches of physics and all motion in nature.

Complex systems are composed of individual elements. Each element is the smallest mobile unit in the system and moves, most often, in a flow of other elements along a network of paths (edges) between starting and ending points, in order to build, recombine, or change the system. In CPUs, one unit of motion (an event) is a single computation in which electrons flow from the start node to the end node. In our model, the flow is a flow of events, not of energy or matter per se, even though both participate in the events. It has been shown that in complex systems the nodes of the network representation need to be well defined, so that the elements traverse deterministic walks instead of random walks (Boyer and Larralde, 2005). Random walks characterize equilibrated, non-self-organizing systems.

In complex systems, elements cannot move along the least-action paths that characterize their motion outside of systems, because of obstacles to the motion (constraints). The principle of least action, expanded for complex systems, states that systems are attracted toward a state with the least average action per one motion given those constraints (Georgiev and Georgiev, 2002; Georgiev, 2012; Georgiev et al., 2012). Similarly, Hertz's principle states that objects move along paths of least curvature (Hertz, 1896; Goldstein, 1980), and Gauss's principle states that they move along paths of least constraint (Gauss, 1829). We extend these two principles to complex systems: the elements do work on the constraints to minimize them, reducing the curvature and the amount of action spent per unit motion. The new geodesic, in the space curved by the constraints to motion, is the path with the lowest amount of action. When the elements do work to minimize the constraints, they form paths of least constraint, which are the flow paths in the system (Georgiev and Georgiev, 2002; Georgiev, 2012; Georgiev et al., 2012). Because the action is lowest along those paths, compared to all neighboring paths, the rest of the elements traversing the same nodes move along them as well.
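To state this compactly, and in notation that we introduce here only for illustration (consistent with the definition above of action as the product of the energy and the time needed for a motion), the average unit action and the action efficiency of a system with n crossing events can be written as

\[
\langle a \rangle \;=\; \frac{1}{n}\sum_{i=1}^{n} a_i \;\approx\; \frac{1}{n}\sum_{i=1}^{n} E_i\, t_i ,
\qquad
\alpha \;\propto\; \frac{1}{\langle a \rangle},
\]

where a_i is the action for the i-th element to cross between its two nodes, E_i and t_i are the energy and time that the crossing requires, ⟨a⟩ is the average unit action, and α is the action efficiency. In these terms, the principle of least unit action says that the system tends toward the attainable minimum of ⟨a⟩ given its constraints, so that doing work on the constraints lowers ⟨a⟩, raises α, and thereby increases organization.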
As the constraints are minimized further and the action decreases along a certain path, the probability that more elements move along it increases, and such paths become attractors for other elements, which further minimizes the action along them. Indeed, it has been recognized that in complex systems the major control parameter is the throughput (Hübler, 2005). Therefore, in our work organization is defined as the state of the constraints to motion, which determines the average action per one element of the system and one of its motions. As the constraints are minimized, the same motion is done more efficiently, i.e. the same two nodes are connected using less action, and organization increases.

We use a flow network representation of a complex system, in which the trajectories of the elements lie along flow paths of least action, compared to neighboring paths. A flow network implies an inflow and outflow of energy and can exist only in open systems far from equilibrium. The sources and sinks define the start and end points of the elements and flows in the system, which are the nodes of the network. As a result of the “principle of minimum dissipation per channel length”, natural network formation “in an open, dissipative system” occurs as “branching, hierarchical networks” (Smyth and Hubler, 2003), which means that flow networks maximize their energy efficiency if they have a hierarchical, fractal structure. This points to an explanation from first principles of the formation of hierarchical fractal flow networks, such as the internet, transportation networks, respiratory or cardiovascular systems, and many others, which “share the scale-free property … of self-similarity or fractality” (Rozenfeld, 2011). We see evidence of fractality in scale-free systems everywhere in nature, from snowflakes and coastlines to data and molecules, and recent research has begun to quantify these observations (Rozenfeld, 2011).

The network model of a complex system has gained importance in recent years (Alain et al., 2008; Ángeles et al., 2007; Dangalchev, 1996; Liu et al., 2013; Mark et al., 2011; Wu et al., 2006; Xulvi-Brunet and Sokolov, 2007). The scaling laws of transport properties for scale-free networks have been found (Goh et al., 2001), and their betweenness centrality has been measured (Kitsak, 2007). The self-similar scaling of density has been found to be important in complex real-world networks (Blagus, 2012). Scale-free networks have also been found to allow routing schemes to self-adjust in order to overcome congestion (Zhang, 2007; Tang, 2009). Congestion is a jamming transition that decreases the flow, which in turn lowers the action efficiency of the system and therefore its level of organization. This leaves us with the question of why these fractal, scale-free flow networks exist in the first place, as opposed to elements moving by diffusion or in a different network pattern. When the system's size is below a certain threshold, diffusion must minimize the action for the motions in the system given their constraints.
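The abstract describes a positive feedback between action efficiency and the total amount of action, modeled by ordinary differential equations whose solutions grow exponentially in time and obey a power-law relation with each other. The numerical sketch below illustrates one such coupling; the linear form of the equations, the coefficients c1 and c2, and the initial values are our illustrative assumptions, not the fitted model developed later in the paper.

import numpy as np

# Illustrative positive-feedback sketch (hypothetical coefficients, not fitted values):
#   d(alpha)/dt = c1 * Q      # more total action -> more work done on constraints -> higher efficiency
#   dQ/dt       = c2 * alpha  # higher efficiency -> larger capacity for total action
c1, c2 = 0.02, 0.03
dt, steps = 0.01, 100_000

alpha, Q = 1.0, 1.0
alphas, Qs = [alpha], [Q]
for _ in range(steps):
    # forward-Euler step using the previous values of both variables
    alpha, Q = alpha + c1 * Q * dt, Q + c2 * alpha * dt
    alphas.append(alpha)
    Qs.append(Q)
alphas, Qs = np.array(alphas), np.array(Qs)

# Late-time behavior: both variables grow exponentially at a rate ~ sqrt(c1 * c2).
rate = np.log(alphas[-1] / alphas[-1001]) / (1000 * dt)
print(f"late-time growth rate of alpha ~ {rate:.4f}; expected ~ {np.sqrt(c1 * c2):.4f}")

# Power-law relation: log(alpha) vs log(Q) is a straight line at late times
# (for this simplest linear coupling the exponent is ~1; other couplings give other exponents).
m, _ = np.polyfit(np.log(Qs[steps // 2:]), np.log(alphas[steps // 2:]), 1)
print(f"power-law exponent m in alpha ~ Q^m: {m:.3f}")

Running this sketch prints a late-time growth rate close to sqrt(c1*c2) for both variables and a power-law exponent close to 1; the point is only to show how mutual reinforcement of action efficiency and total action yields exponential growth of each and a power-law relation between the two.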