
Emergent Behavior in Systems of Systems


John S. Osmundson, Thomas V. Huynh, and Gary O. Langford
Department of Systems Engineering, Naval Postgraduate School
1 University Circle, Monterey, CA 93943, USA
[email protected], [email protected], [email protected]

Copyright © 2008 by John Osmundson, Thomas Huynh and Gary Langford. Published and used by INCOSE with permission.

Abstract. As the development of systems of systems becomes more important in global ventures, so do the issues of emergent behavior. The challenge for systems engineers is to predict and analyze emergent behavior, especially undesirable behavior, in systems of systems. In this paper we briefly discuss definitions of emergent behavior, describe a large-scale system of systems that exhibits emergent behavior, review previous approaches to analyzing emergent behavior, and then present a system modeling and simulation approach that has promise of allowing systems engineers to analyze potential emergent behavior in large-scale systems of systems.

Introduction

Recently there has been increased interest in what has become known as systems of systems (SoS) and SoS engineering. Examples of SoS are military command, control, communications, computers, and intelligence (C4I) systems (Pei 2000); intelligence, surveillance and reconnaissance (ISR) systems (Manthrope 1996); intelligence collection systems (Osmundson et al. 2006a); electrical power distribution systems (Casazza and Delea 2003); and food chains (Neutel 2002), among others. Much of this current interest in SoS is focused on the desire to integrate existing systems to achieve new capabilities that are unavailable by operation of any of the existing single systems. Our focus in this paper follows a similar vein; we address the issue of emergent behavior of an SoS that is formed by combining existing systems into a new, larger system to achieve some new operational capability. System complexity is a challenge to the systems engineering architectural design of many SoS, particularly large networked systems such as transportation logistics networks, communications networks, and power delivery networks (Motter and Lai 2002). These systems are usually comprised of a large number of component systems and subsystems, human operators, and software agents.
Developing systems engineering techniques to analyze the effects of system complexity and resulting emergent behavior is essential to understanding how design and policy choices will affect the behavior of complex, evolutionary systems, and to designing against unwanted behavior. In this paper we discuss definitions of emergent behavior, discuss an example of an SoS that exhibits emergent behavior, discuss possible approaches to analyzing SoS emergent behavior and the limitations of these approaches, and then present a modeling and simulation approach that is applicable in certain cases to analysis of SoS emergent behavior. We conclude that in cases where the modeling and simulation approach is applicable, it is possible to predict and analyze the effects of SoS emergent behavior.

Systems of Systems

A system of systems is described by Maier and Rechtin (Maier and Rechtin 2002) as a system whose component systems are operationally independent, managerially independent, evolutionarily developed, and geographically distributed, and which exhibits emergent behavior. Maier and Rechtin go on to define three different types of managerial control: directed, collaborative, and virtual. Directed systems are those in which the integrated SoS is built and managed to fulfill specific purposes. Such a system is centrally managed during long-term operation to continue to fulfill those purposes, and any new ones the system owners may wish to address. The component systems maintain an ability to operate independently, but their normal operational mode is subordinated to the centrally managed purpose. For example, an integrated air defense network is usually centrally managed to defend a region against enemy systems, although its component systems may operate independently. Collaborative systems are distinct from directed systems in that the central management does not have coercive power to run the system. The component systems must, more or less, voluntarily collaborate to fulfill the agreed-upon central purposes. The Internet is a collaborative system.
Standards are developed for the Internet, but there is no power to enforce them. Agreements among the central players on service provision and rejection provide what enforcement mechanism there is to maintain standards. The Internet began as a directed system, controlled by the Advanced Research Projects Agency, to share computer resources. Over time it has evolved away from central control through unplanned collaborative mechanisms (Muajumdar 2007). Virtual systems lack a central management authority. Indeed, they lack a centrally agreed-upon purpose for the SoS. Large-scale behavior emerges, and may be desirable, but the super system must rely upon relatively invisible mechanisms to maintain it. A virtual system may be deliberate or accidental. A virtual system is controlled by the forces that motivate cooperation and compliance with the core standards. The standards do not evolve in a controlled way; rather they emerge from the market success of various innovators (Muajumdar 2007). Boardman and Sauser (Boardman and Sauser 2006) describe the differentiating characteristics of an SoS as: autonomy exercised by the constituent systems in order to fulfill the purpose of the SoS; constituent systems choosing to belong to the SoS for the greater good; constituent systems typically connected dynamically to enhance SoS performance; and a characteristic diversity of systems. Also, an SoS may seem unbounded. The levels of connectivity, platform diversity, and degree of associated interoperability point to the risk of whether the SoS is unbounded. Bounded systems are characterized by centralized data and control, in which systems and their linkages are known a priori and are specific to the connection and interoperability specified. Unbounded systems are characterized by protocols which provide a loose coupling, are omnipresent, and allow for dynamic spontaneous connections (DiMario 2006).
There is not universal agreement on a definition of the term 'system of systems', but many definitions have basic thoughts in common. Sage and Cuppan (Sage and Cuppan 2001) describe an SoS as having operational and managerial independence of the individual systems as well as emergent behavior. Other definitions include operational and managerial independence and geographical separation of the component systems. In this paper we will consider an SoS to consist of separately developed systems that are usually operated by separate organizational entities and are usually geographically dispersed.

Emergent Behavior

In addition to many definitions of systems of systems, there are many definitions of emergent behavior. Dyson states that: "Emergent behavior is that which cannot be predicted through analysis at any level simpler than that of the system as a whole. Emergent behavior, by definition, is what's left after everything else has been explained" (Dyson 1997). Emergent behavior has also been defined as the action of simple rules combining to produce complex results (Rollings and Adams 2003). While simple rules might be those applied to a single system or to a single system element in an SoS, the resulting behavior of the system of systems might turn out to be quite complex and unpredictable. Parunak and VanderBok (1997) point out that a distributed control approach can produce system of systems behavior that is more complex than the behavior of the individual component systems or system elements. They call such behavior emergent behavior because it emerges from the overall interactions, often in ways not intended by the original designers. Fisher (2006) calls emergent behavior the "cumulative effects" of the actions and interactions among constituents of complex systems, a constituent being any automated, mechanized, or human participant.
The Wikipedia (2007) definition states that emergent phenomena can occur due to the pattern of interactions between the elements of a system over time. Emergent phenomena are often unexpected, nontrivial results of relatively simple interactions of relatively simple components. A common theme of these definitions is that emergent behavior is the result of complex interactions. In an SoS the component systems may, by interacting with other component systems, achieve goals that they could not achieve otherwise. Operators and software agents, considered here as SoS elements, can also interact with component systems as well as with one another. Emergent behavior occurs as a result of these interactions. The definitions disagree, however, on whether emergent behavior is understandable and, to some degree, predictable. As systems get larger and more complex, emergent behavior may become more probable. As systems engineers, can we anticipate emergent behavior, and can we design to mitigate undesired effects and encourage desired effects? In this paper we do not attempt to resolve the issue of whether emergent behavior is always predictable and therefore amenable to systems engineering analysis; we restrict ourselves to discussing cases in which emergent behavior may be analyzable.

An Example of Emergent Behavior in Systems of Systems

A large-scale, complex engineered system of systems is the North American power grid, which consists of 680,000 miles of backbone transmission lines and about 2.5 million miles of local transmission lines. An electric power transmission system contains many components such as generators, transmission lines, transformers, and substations. Each component experiences a certain loading each day, and when all the components are considered together they exhibit some pattern of loadings. The pattern of component loadings is determined by the system operating policy and is driven by the aggregated customer demands at the substations (Chassin et al. 2004).
The power system operating policy includes short-term actions such as power dispatch. The operating policy seeks to satisfy the customer demands at least cost. Short-term customer demand is typically driven by daily and seasonal cycles, while long-term demand is driven by geographic shifts in population, population changes, development of alternative energy sources, and industry growth and changes. Events can occur on the power grid that either limit the loading of a component to a maximum or zero the component loading if that component trips or fails. Events that can occur include transformer failure, the probability of which generally increases with loading, and operator re-dispatching to limit power flow on a transmission line to its maximum rating. These events can cause a redistribution of power flow in the network and hence an increase in the loading of other system components, and the events can cascade. If a cascade of events includes limiting or zeroing the load at substations, then a blackout results. Such systems can undergo non-periodic major cascading disruptions that have serious consequences, such as the collapse of the Canadian system in the province of Quebec in 1989 and the wide-scale power outages affecting about 50 million people in the eastern US in 2003. The dynamics of the North American power grid can create conditions under which major disruptions from a wide variety of sources can occur. Large-scale disruptions can be intrinsic to some SoS, especially those that display self-organized criticality (Carreras et al. 2000). A self-organized criticality system is one in which the nonlinear dynamics, in the presence of perturbations, organize the overall average system state near to, but not at, the state that is marginal to major disruptions. The requirement of the power grid SoS is to reliably generate and supply electricity, wherever and whenever it is needed on the grid.
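The cascade mechanism described above can be sketched in a few lines of code. The following is a minimal, illustrative model (not from the paper): each component carries a load; when one trips, its load is redistributed evenly among the surviving components, and any survivor pushed past its capacity trips in turn. The component names, loads, and capacities are made-up numbers.

```python
def cascade(loads, capacities, initial_failure):
    """Return the set of components that fail after an initial trip.

    loads/capacities: dicts mapping component name -> value.
    Shed load is redistributed evenly among survivors (a deliberate
    simplification of real power-flow redistribution).
    """
    loads = dict(loads)          # don't mutate the caller's data
    failed = {initial_failure}
    frontier = [initial_failure]
    while frontier:
        comp = frontier.pop()
        shed = loads.pop(comp)
        survivors = [c for c in loads if c not in failed]
        if not survivors:
            break
        # Redistribute the shed load evenly among surviving components.
        for c in survivors:
            loads[c] += shed / len(survivors)
        # Any survivor now above its capacity trips as well.
        for c in survivors:
            if loads[c] > capacities[c]:
                failed.add(c)
                frontier.append(c)
    return failed

# A heavily loaded 4-transformer example: one failure takes down the grid.
loads = {"t1": 80.0, "t2": 90.0, "t3": 60.0, "t4": 50.0}
capacities = {"t1": 100.0, "t2": 100.0, "t3": 100.0, "t4": 100.0}
print(sorted(cascade(loads, capacities, "t2")))  # → ['t1', 't2', 't3', 't4']
```

Run lightly loaded (say, all loads at 10 percent of capacity), the same initial failure stops after one component, which is the qualitative point of self-organized criticality: the same trigger produces very different outcomes depending on how close the system state is to the critical loading.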
In order to achieve this, there is a need to continuously balance generation and load in the power grid, adjusting generation output to match load or reducing load to match available generation. Control actions are limited primarily to adjusting generation output and to opening and closing switches to reconfigure the network. Every action affects all other activities on the grid. The activities of all of the control actions must be coordinated, often across large areas. If not managed properly, failure of a single element can cause the subsequent rapid failure of many additional elements, disrupting interconnected transmission systems over a broad geographic area. Cooperation between electrical utilities has been critical to maintaining reliable transmission of electricity. As a result, utility companies formed voluntary control areas, which also had the benefit of reducing operating cost by aggregating demand in each of their areas of service. This reduced the possibility of large-scale outages because the control areas had better visibility over a larger area of the electricity network. Deregulation of the electrical power industry in the US began in 1992 (FERC 1996) to introduce competition in the market and reduce costs to consumers. This changed the requirement of the industry to not only reliably generate and transmit electricity, but to do so profitably. Under deregulation, profitability, in addition to reliability, became a requirement in the industry. Cooperation became more important with deregulation because transmission of electricity between regions increased and required more careful operation. However, the utilities have different, and sometimes competing, economic interests. This is aggravated by the fact that more than one service provider can be involved in providing electricity service to a customer and, if problems arise, responsibility for the problems might not belong to anyone until a dispute is resolved.
As a result, the electrical power grid SoS that evolved under one set of operating principles is now adapting to a new set of operating principles, with the emergence of new behaviors that include an increase in grid problems and failures and unexpected increasing costs of electricity to consumers.

Approaches to Analyzing Emergent Behavior

Various approaches have been applied to the analysis of emergent behavior in biological, economic, and engineered systems. Reynolds (1987) developed a computer simulation of birds flocking based on simple behavior and interactions of individual birds. The flocking is an example of emergent behavior that could be understood based on simple interactions. Mathematicians at the Santa Fe Institute in New Mexico pioneered techniques that were later used to try to predict movements in trading environments (Waldrop 1993). Industrial equipment manufacturers are applying these techniques to determine how to rapidly adapt manufacturing processes, and telecommunications firms use the techniques to help determine business strategies (Caulkin 2007). In the field of thermodynamics, the macroscopic behavior of complex systems of molecules has been successfully analyzed by considering the form of the system's entropy function (Chassin et al. 2004). Sergeev, Smith and Foley (2002) have adopted this method to analyze the stability of an economic system by considering the form of the system's entropy. Success in the field of thermodynamics suggests that equilibrium models can be developed for any complex system. However, it is much more difficult to develop time-dependent models of such systems using principles of entropy. Parunak and VanderBok (1997) consider emergent behavior in distributed control architectures, stating that a population of asynchronously executing processes without central top-down control can exhibit unexpected or "emergent" behavior. Parunak and VanderBok go on to say that nonlinear systems theory can be used to detect and manage such interactions.
Thus this type of behavior can be considered to be generated by deterministic interactions among control elements. If systems engineers are to be successful in analyzing emergent behavior, an approach is needed that allows analysis of the dynamic behavior of SoS, with the ability to model and simulate the important aspects of the systems, the system elements and, most importantly, the interactions between systems and system elements.

A System of Systems Modeling Approach

In order for an SoS to provide more capability than any of the individual systems, the systems have to interact; therefore, the key to understanding and engineering an SoS, as well as understanding emergent behavior, is clearly defining the system-to-system interfaces. Several approaches to modeling complex systems have been developed recently, including that of Oliver et al. (2007). We have developed an engineering and analysis methodology that aims at providing clear elucidation of interfaces. An SoS systems engineering problem involves analysis of existing and proposed system of systems architectures and analysis of architectures of complex systems of systems (Osmundson and Huynh 2005). A process modeling methodology for performing engineering analyses of an SoS has been developed at the Naval Postgraduate School (Osmundson et al. 2004, Osmundson 2000) over the past several years and has been successfully applied to military systems of systems (Osmundson et al. 2006a). This methodology consists of a sequence of analyses, transformations, model building, and simulation.
The steps, which we have applied to a number of US defense systems, are:
• Development of system of systems scenarios and operational architectures
• Identification of system of systems elements and threads
• Representation of operational architectures in a modeling language format (Huynh and Osmundson 2006)
• Identification of system of systems parameters and factor levels
• Transformation of modeling language representations into executable models
• Application of design of experiments
• Simulation runs and analysis of results

System elements in modeling language representations can be described as executable modeling icons by using modern software applications. This applies to system elements such as people and software agents, as well as hardware elements. These icons can be graphically linked to form models of physically distributed systems of systems. Models of systems of systems can be constructed in a modular manner so that design factors are represented by an association with modeling application objects. System options are represented by rearranging the objects and by varying the object attributes from model to model. Design of experiments guides the development of executable models and the running of simulations. Interactions that might result in emergent behavior occur at interfaces between systems, between systems and operators, and/or between systems and software agents. Following the idea in (Klir 1991), we can consider each of these interface elements as an element that seeks to satisfy a goal governed by a set of rules whose inputs are provided through the SoS interactions with the goal-seeking element. In some cases the goal-seeking element may have probabilistic behavior and/or may adapt to changing input conditions. In normal system design the goals and the performance functions of the system and system elements are well defined. In systems engineering we say that the system is designed to meet some functional requirements to specified levels of performance.
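The design-of-experiments step above can be sketched concretely. The following minimal example (factor names and levels are illustrative, not from the paper) enumerates a full-factorial design over a handful of SoS design factors, yielding one executable-model configuration per combination of factor levels.

```python
from itertools import product

# Illustrative SoS design factors and levels; in the methodology above,
# each combination would drive one executable-model configuration and
# one set of simulation runs.
factors = {
    "num_trading_agents": [0, 3],          # regulated vs. deregulated
    "line_capacity_margin": [1.1, 1.5],    # transmission line headroom
    "agent_strategy": ["normal", "rogue"], # excursion case included
}

def full_factorial(factors):
    """Yield one dict of factor settings per combination of levels."""
    names = list(factors)
    for levels in product(*(factors[n] for n in names)):
        yield dict(zip(names, levels))

runs = list(full_factorial(factors))
print(len(runs))   # 2 * 2 * 2 = 8 simulation configurations
print(runs[0])
```

A full factorial design is only practical for a few factors; fractional designs trade coverage for run count when the factor space is large, which is why the design-of-experiments step guides which executable models are actually built.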
Now we consider a complex system of systems whose elements may include people and software agents whose goals may be the same as, or different from, the original system goals. Unanticipated behavior might then be due to misapplication of the rules by a person. Another source of unanticipated behavior might be that a hardware element, a person, or a software agent correctly applied the rules, but for a set of input conditions that was unanticipated by the system of systems engineers. A third source of unanticipated behavior might be the adaptation of a person or a software agent to sets of inputs. In our approach we model an agent as a rule-based algorithm, with some probability of design error, whose behavior is dependent on its inputs. We model a person as a probabilistic rule-based algorithm whose behavior is dependent on its inputs. We model a system function as a rule-based algorithm, with some probability of design error, whose behavior is dependent on its inputs. Figure 1 shows a simple model of a highly abstracted power grid system, consisting of five power generating nodes, three trading agents, three consumer nodes, and transmission lines linking the power generating nodes and the consumer nodes. The power generating nodes generate given rates of electricity at initial offering prices unique to each node. The consumer nodes have a rate of demand and initial buying prices unique to each node. The transmission lines have unique capacities and probabilities of failure that are dependent on their load relative to their capacity. The trading agents seek to maximize their profit by buying from the lowest-price generator and selling to the highest bidder. If a generating node is successful in selling an increment of power, it raises its selling price during the next interval, and it lowers its price if it was unsuccessful. Consumer nodes lower their buying price if they were successful in purchasing power and raise it if they were not successful.
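The price-adjustment rules just described can be sketched as a single trading interval. This is a hedged illustration with made-up numbers and a fixed adjustment step; it is not the paper's model, only the rule structure: the agent matches the cheapest generator with the highest-bidding consumer, successful generators raise their offers while unsuccessful ones lower them, and consumers adjust their bids in the opposite sense.

```python
STEP = 0.5  # price adjustment per interval (illustrative value)

def trade_interval(offers, bids):
    """Run one trading interval; mutates offers/bids in place.

    offers: generator node -> offering price
    bids:   consumer node  -> buying price
    Returns (seller, buyer, margin) if a trade occurred, else None.
    """
    seller = min(offers, key=offers.get)   # cheapest generator
    buyer = max(bids, key=bids.get)        # highest-bidding consumer
    trade = None
    if offers[seller] <= bids[buyer]:      # profitable match exists
        trade = (seller, buyer, bids[buyer] - offers[seller])
    for g in offers:                       # sellers raise on success, else lower
        offers[g] += STEP if (trade and g == seller) else -STEP
    for c in bids:                         # buyers lower on success, else raise
        bids[c] += -STEP if (trade and c == buyer) else STEP
    return trade

offers = {"gen1": 10.0, "gen2": 12.0}
bids = {"cons1": 11.0, "cons2": 9.0}
print(trade_interval(offers, bids))  # → ('gen1', 'cons1', 1.0)
print(offers, bids)
```

Iterating this rule over many intervals is what produces the price dynamics in the simulation: individually simple, locally rational adjustments interact to yield system-level price behavior that none of the rules states explicitly.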
The emergent behavior of the power grid can be analyzed by comparing prices and transmission system failure rates to those of the same model in the absence of trading agents, where the system is operated to provide reliable delivery of power.
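The comparison described above reduces to computing a metric on each simulation run and reporting the relative change. A minimal sketch (the price series below are illustrative placeholders, not simulation output):

```python
def relative_increase(baseline, excursion):
    """Percentage increase of the mean of one series over another."""
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * (mean(excursion) - mean(baseline)) / mean(baseline)

# Placeholder metric series: mean electricity price at a consumer node
# for the regulated baseline run vs. the deregulated run.
regulated_prices = [10.0, 10.2, 9.8, 10.0]
deregulated_prices = [11.0, 12.5, 13.0, 13.5]

print(round(relative_increase(regulated_prices, deregulated_prices), 1))  # → 25.0
```

The same function applies unchanged to the failure-rate comparison; only the metric series swapped in differs between analyses.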

Figure 1. Representation of a Model of an Abstracted Power Grid System of Systems

In this work only a very limited design of experiments is performed. The power grid is modeled as shown in Figure 1 to represent deregulated operation with trading agents operating normally in order to optimize profit, as well as an excursion case in which one rogue trading agent creates an artificial shortage of electricity at one consumer node in order to artificially raise the cost of electricity at that node and further increase the agent's profit. Figure 2 shows the average cost of electricity at node 2 as a function of simulation time when deregulation is in effect. The average cost increases relative to the cost for a regulated power grid, in which there are no trading agents and the goal is to supply electricity to consumer nodes in a reliable manner at regulated prices.

Figure 2. Average Cost of Electricity at Node 2 for a Deregulated Power Grid as a Function of Simulation Time

Figure 3 shows the effect of a rogue trading agent. Here a trading agent blocks consumer node 2 for a short time in order to create an artificial shortage of electricity at node 2. The effect is to cause the cost of electricity at node 2 to increase much more than in the case represented by Figure 2.


Figure 3. Average Cost of Electricity at Node 2 for a Deregulated Power Grid as a Function of Simulation Time when a Rogue Trading Agent Blocks Node 2 for a Short Interval

The final measure of grid behavior under deregulated operation is shown in Figure 4, where the average increase in system failures due to the change in operating policy is shown. Because of the assumptions made in the model, the numerical results do not reflect a realistic percentage, but they do show a relative increase of about 3% over a regulated grid.


Figure 4. Percentage Increase in Grid Failures for Deregulated Operation vs. Regulated Operation

Results show that even for an extremely simple model the results are consistent with the actual behavior of the deregulated North American power grid, and this suggests that such a systems engineering analysis of proposed changes in the grid operating policy would have been valuable in providing quantitative evidence of unwanted emergent behavior in advance of implementing policy changes. The method described here can be extended to include more realistic and complex agent behavior as well as much larger, and more realistic, system architectures. Different types of systems with different emergent behavior issues can be analyzed. For example, the success or failure of web-based systems such as eBay and Napster can be analyzed in terms of emergent behaviors that result from interactions driven by such system variables as number of nodes, interaction links, and frequency of interactions.

Conclusion

The importance of systems of systems in today's global endeavors requires that systems engineers develop methods for analyzing emergent behavior, in order to predict favorable and unfavorable consequences and in order to architect SoS to better assure desired results. Modeling SoS in executable form, with emphasis on the interacting elements of SoS, and applying design of experiments to guide the development of executable models and the setup of simulation runs shows promise for providing systems engineers with a suitable technique.

References

Boardman, J., and B. Sauser, "The Meaning of System of Systems," Proceedings of the 2006 IEEE International System of Systems Conference, Los Angeles, CA, April 24-26, 2006.

Carreras, B. A., D. E. Newman, I. Dobson and A. B. Poole, "Initial Evidence for Self-Organized Criticality in Electric Power System Blackouts," Proceedings of the Hawaii International Conference on System Sciences, Maui, Hawaii, January 4-7, 2000.

Casazza, J. A. and F. Delea, Understanding Electric Power Systems: An Overview of the Technology and the Marketplace, Wiley, New York, 2003.

Caulkin, Simon, "Chaos Inc.", http://www.co-i-l.com/coil/knowledge-garden/oi/articles/chaosinc.shtml, retrieved Oct. 2007.

Chassin, D. P., N. Lu, J. M. Malard, S. Katipamula, C. Posse, J. V. Mallow and A. Gangopadhyaya, Modeling Power Systems as Complex Adaptive Systems, U.S. Department of Energy Contract DE-AC06-76RL01830 Report, December 2004.

DiMario, M., "System of Systems Interoperability Types and Characteristics in Joint Command and Control," Proceedings of the 2006 IEEE International System of Systems Conference, Los Angeles, CA, April 24-26, 2006.

Dyson, George, Darwin Among the Machines, Addison-Wesley, Reading, MA, 1997.

Federal Energy Regulatory Commission (FERC) Order 888, April 24, 1996.

Fisher, David, "An Emergent Perspective on Interoperation in Systems of Systems," CMU/SEI-2006-TR-003, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 2006.

Huynh, Thomas V., and John S. Osmundson, "An Integrated Systems Engineering Methodology for Analyzing Systems of Systems Architectures", Proceedings of the Asia-Singapore Systems Engineering Conference, Singapore, March 23-24, 2007.

Huynh, Thomas V., and John S. Osmundson, "A Systems Engineering Methodology for Analyzing Systems of Systems Using the Systems Modeling Language (SysML)", Proceedings of the 2nd Annual System of Systems Engineering Conference, Ft. Belvoir, VA, sponsored by the National Defense Industrial Association (NDIA) and OUSD AT&L, July 25-26, 2006.

Klir, George J., Facets of Systems Science, Plenum Press, New York, 1991.

Maier, M. W. and E. Rechtin, The Art of Systems Architecting, CRC Press, Boca Raton, FL, 2002.

Manthrope Jr., W. H., "The Emerging Joint System-of-Systems: A Systems Engineering Challenge and Opportunity for APL", Johns Hopkins APL Technical Digest, Vol. 17, No. 3, 1996, pp. 305-310.

Motter, Adilson E. and Ying-Cheng Lai, "Cascade-based Attacks on Complex Networks," Physical Review E 66, 2002.

Muajumdar, Windy Joy, System of Systems Technology Readiness Assessment, Master's Thesis, Naval Postgraduate School, Monterey, CA, Sept. 2007.

Neutel, Anje-Margriet, Johan A. P. Heesterbeek and Peter C. deRuiter, "Stability in Real Food Webs: Weak Links in Long Loops," Science, Vol. 296, No. 5570, May 2002.

Oliver, David W., Timothy P. Kelliher and James G. Keegan, Jr., Engineering Complex Systems with Models and Objects, McGraw-Hill, New York, 2007.

Osmundson, John S., Nelson Irvine, Gordon Schacher, Jack Jensen, Gary Langford, Thomas Huynh and Richard Kimmel, "Application of System of Systems Engineering Methodology to Study of Joint Military Systems Interoperability", Proceedings of the 2nd Annual System of Systems Engineering Conference, Ft. Belvoir, VA, sponsored by the National Defense Industrial Association (NDIA) and OUSD AT&L, July 25-26, 2006a.

Osmundson, John, and Thomas Huynh, "Systems-of-Systems (SOS) Systems Engineering", Proceedings of the System of Systems Engineering Conference, Johnstown, PA, sponsored by the National Defense Industrial Association (NDIA) and OUSD AT&L, June 13-14, 2005.

Osmundson, John S., Russell Gottfried, Chee Yang Kum, Hui Boon Lau, Wei Lian Lim, Seng Wee Patrick Poh, and Choo Thye Tan, "Process Modeling: A Systems Engineering Tool for Analyzing Complex Systems", Systems Engineering, Vol. 7, No. 4, 2004, pp. 320-337.

Osmundson, John, "A Systems Engineering Methodology for Information Systems", Systems Engineering, Vol. 3, No. 2, July 2000, pp. 68-81.

Parunak, H. Van Dyke, and Raymond S. VanderBok, "Managing Emergent Behavior in Distributed Control Systems," Proceedings of ISA Tech, 1997.

Pei, R. S., "Systems-of-Systems Integration (SoSI) – A Smart Way of Acquiring Army C4I2WS Systems", Proceedings of the Summer Computer Simulation Conference, 2000, pp. 574-579.

Reynolds, C. W., "Flocks, Herds, and Schools: A Distributed Behavioral Model," Computer Graphics, 21(4), SIGGRAPH '87 Conference Proceedings, 1987.

Rollings, Andrew and Ernest Adams, On Game Design, New Riders Publishing, Indianapolis, IN, 2003.

Sage, A. P., and C. D. Cuppan, "On the Systems Engineering and Management of Systems of Systems and Federations of Systems", Information, Knowledge, Systems Management, Vol. 2, No. 4, 2001, pp. 325-345.

Waldrop, M. Mitchell, Chaos, Touchstone, New York, 1993.

Waldrop, M. Mitchell, Complexity, Simon and Schuster, New York, NY, 1992.

Wikipedia, "Emergence", http://en.wikipedia.org/wiki/Emergence, retrieved Oct. 2007.

Biographies

John Osmundson is an associate research professor with a joint appointment in the Systems Engineering and Information Sciences Departments at the Naval Postgraduate School in Monterey, CA. His research interest is applying systems engineering and computer modeling and simulation methodologies to the development of systems of systems architectures, performance models, and system trades of time-critical information systems. Prior to joining the Naval Postgraduate School in 1995, Dr. Osmundson worked for 23 years at Lockheed Missiles and Space Company (now part of Lockheed Martin) in Sunnyvale and Palo Alto, CA, as a systems engineer, systems engineering manager, and manager of advanced studies. Dr. Osmundson received a B.S. from Stanford University and a Ph.D. in physics from the University of Maryland. He is a member of INCOSE.

Tom Huynh is an associate professor of systems engineering at the Naval Postgraduate School in Monterey, CA. His research interests include uncertainty management in systems engineering, complex systems and complexity theory, system scaling, simulation-based acquisition, and system-of-systems engineering methodology. Prior to joining the Naval Postgraduate School, Dr. Huynh was a Fellow at the Lockheed Martin Advanced Technology Center, where he engaged in research in computer network performance, computer timing control, bandwidth allocation, heuristic algorithms, nonlinear estimation, perturbation theory, differential equations, and optimization. He was also a lecturer in the Mathematics Department at San Jose State University. Dr. Huynh obtained simultaneously a B.S. in Chemical Engineering and a B.A. in Applied Mathematics from UC Berkeley and an M.S. and a Ph.D. in Physics from UCLA. He is a member of INCOSE.

Gary Langford is a lecturer in the Systems Engineering Department at the Naval Postgraduate School in Monterey, California. His research interests include the theory of systems engineering and its application to commercial and military competitiveness. Mr. Langford founded and ran five corporations – one NASDAQ listed. He was a NASA Ames Fellow. He received an A.B. in astronomy from UC Berkeley, and an M.S. in physics from Cal State Hayward. He is a member of INCOSE.