Fall 2009 Version of Course 15-359, Computer Science Department, Carnegie Mellon University


Acknowledgments: CMU's course 15-359, Probability and Computing, was originally conceived and designed by Mor Harchol-Balter and John Lafferty. The choice, order, and presentation of topics in the latter half of the course is strongly informed by the work of Mor Harchol-Balter. Indeed, you might like to buy her book! Performance Modeling and Design of Computer Systems: Queueing Theory in Action, http://www.cs.cmu.edu/~harchol/PerformanceModeling/book.html

The choice, order, and presentation of topics in the earlier half of the course is informed by the work of John Lafferty: http://galton.uchicago.edu/~lafferty/

Further, a very great deal of material in these lecture notes was strongly informed by the outstanding book Probability and Computing by Michael Mitzenmacher and Eli Upfal, http://www.cambridge.org/us/academic/subjects/computer-science/algorithmics-complexity-computer-algebra-and-computational-g/probability-and-computing-randomized-algorithms-and-probabilistic-analysis

Many thanks to Mor Harchol-Balter, John Lafferty, Michael Mitzenmacher, Eli Upfal (and many other web sources from which I borrowed)!

15-359: Probability and Computing
Fall 2008
Lecture 1: Probability in Computing; Verifying Matrix Multiplication

1 Administrivia

The course web site is http://15-359.blogspot.com. Please read carefully the Policies document there, which will also be handed out in class. The site is also a blog, so please add it to your RSS reader, or at least read it frequently. Important announcements (and more!) will appear there.

2 Probability and computing

Why is probability important for computing? Why is randomness important for computing? Here are some example applications:

Cryptography. Randomness is essential for all of crypto.
Most cryptographic algorithms involve the parties picking secret keys, and this must be done randomly. If an algorithm deterministically said, "Let the secret key be 8785672057848516," well, of course that would be broken.

Simulation. When writing code to simulate, e.g., physical systems, one often models real-world events as happening randomly.

Statistics via sampling. Today we often work with huge data sets. If one wants to approximate basic statistics, e.g., the mean or mode, it is more efficient to sample a small portion of the data and compute the statistic, rather than read all the data. This idea connects to certain current research topics: "property testing algorithms" and "streaming algorithms".

Learning theory. Much of the most successful work in AI is done via learning theory and other statistical methods, wherein one assumes the data is generated according to certain kinds of probability distributions.

Systems & queueing theory. When studying the most effective policies for scheduling and processor-sharing, one usually models job sizes and job interarrival times as coming from a stochastic process.

Data compression. Data compression algorithms often work by analyzing or modeling the underlying probability distribution of the data, or its information-theoretic content.

Error-correcting codes. A large amount of the work in coding theory is based on the problem of redundantly encoding data so that it can be recovered if there is random noise.

Data structures. When building, e.g., a static dictionary data structure, one can optimize response time if one knows the probability distribution on key lookups. Even more so, the time for operations in hash tables can be greatly improved by careful probabilistic analysis.

Symmetry-breaking. In distributed algorithms, one often needs a way to let one of several identical processors "go first".
In combinatorial optimization algorithms (e.g., for solving TSP or SAT instances) it is sometimes effective to use randomness to decide which city or variable to process next, especially on highly symmetric instances.

Theory of large networks. Much work on the study of large networks (e.g., social networks like Facebook, or physical networks like the Internet) models the graphs as arising from special kinds of random processes. Google's PageRank algorithm is famously derived from modeling the hyperlinks on the internet as a "Markov chain".

Quantum computing. The laws of physics are quantum mechanical, and there has been tremendous recent progress on designing "quantum algorithms" that take advantage of this (even if quantum computers have yet to be built). Quantum computing is inherently randomized; indeed, it's a bit like computing with probabilities that can be both positive and negative.

Statistics. Several areas of computing (e.g., Human-Computer Interaction) involve running experimental studies, often with human subjects. Interpreting the results of such studies, and deciding whether their findings are "statistically significant", requires a strong knowledge of probability and statistics.

Games and gambling. Where would internet poker be without randomness?

Making algorithms run faster. Perhaps surprisingly, there are several examples of algorithmic problems which seem to have nothing to do with randomness, yet which we know how to solve much more efficiently using randomness than without. This is my personal favorite example of probability in computing. We will see an example of using randomness to make algorithms more efficient today, in the problem of verifying matrix multiplication.

3 About this course

This course will explore several of the above uses of probability in computing. To understand them properly, though, you will need a thorough understanding of probability theory.
Probability is traditionally a "math" topic, and indeed, this course will be very much like a math class. The emphasis will be on rigorous definitions, proofs, and theorems. Your homework solutions will be graded according to these "mathematical standards". One consequence of this is that the first part of the course may be a little bit dry, because we will spend a fair bit of time going rigorously through the basics of probability theory. But of course it is essential that you go through these basics carefully so that you are prepared for the more advanced applications. I will be interspersing some "computer science" applications throughout this introductory material, to try to keep a balance between theory and applications. In particular, today we're going to start with an application: verifying matrix multiplication. Don't worry if you don't understand all the details today; it would be a bit unfair of me to insist you do, given that we haven't even done the basic theory yet! I wanted to give you a flavor of things to come, before we get down to the nitty-gritty of probability theory.

4 Verifying matrix multiplication

Let's see an example of an algorithmic task which a probabilistic algorithm can perform much more efficiently than any deterministic algorithm we know. The task is that of verifying matrix multiplication.

4.1 Multiplying matrices

There exist extremely sophisticated algorithms for multiplying two matrices together. Suppose we have two n × n matrices, A and B. Their product, C = AB, is also an n × n matrix, with entries given by the formula

    C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}.    (1)

How long does it take to compute C? Since C has n^2 entries, even just writing it down in memory will take at least n^2 steps. On the other hand, the "obvious" method for computing C given by (1) takes about n^3 steps; to compute each of the n^2 entries it does roughly n arithmetic operations.
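As a concrete illustration (a minimal sketch, not code from the course), the "obvious" algorithm computing (1) is three nested loops:

```python
def matmul_naive(A, B):
    """Multiply two n x n matrices by formula (1): about n^3 steps."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Each entry C[i][j] costs roughly n arithmetic operations.
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

For example, `matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]`.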
Actually, there may be some extra time involved if the numbers involved get very large; e.g., if they're huge, it could take a lot of time just to, say, multiply two of them together. Let us eliminate this complication by just thinking about:

Matrix multiplication mod 2. Here we assume that the entries of A and B are just bits, 0 or 1, and we compute the product mod 2:

    C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}  (mod 2).    (2)

This way, every number involved is just a bit, so we needn't worry about the time of doing arithmetic on numbers. The "obvious" matrix multiplication algorithm therefore definitely takes at most O(n^3) steps.^1

As you may know, there are very surprising, nontrivial algorithms that multiply matrices in time faster than O(n^3). The first and perhaps most famous is Strassen's algorithm:

Theorem 1. (Strassen, 1969.) It is possible to multiply two matrices in time roughly n^{log_2 7} ≈ n^{2.81}.

^1 By the way, if you are not so familiar with big-Oh notation and analysis of running time, don't worry too much. This is not an algorithms course, and we won't be studying algorithmic running time in too much detail. Rather, we're just using it to motivate the importance of probability in algorithms and computing.

Incidentally, I've heard from Manuel Blum that Strassen told him that he thought up his algorithm by first studying the simpler problem of matrix multiplication mod 2. (Strassen's algorithm works for the general, non-mod-2 case too.) There were several improvements on Strassen's algorithm, and the current "world record" is due to Coppersmith and Winograd:

Theorem 2. (Coppersmith-Winograd, 1987.) It is possible to multiply two matrices in time roughly n^{2.376}.

Many people believe that matrix multiplication can be done in time n^{2+ε} for any ε > 0, but nobody knows how to do it.

4.2 Verification

We are not actually going to discuss algorithms for matrix multiplication. Instead we will discuss verification of such algorithms.
Suppose your friend writes some code to implement the Coppersmith-Winograd algorithm (mod 2). It takes as input two n × n matrices of bits A and B and outputs some n × n matrix of bits, D. The Coppersmith-Winograd algorithm is very complicated, and you might feel justifiably concerned about whether your friend implemented it correctly.
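The notes break off here, but the standard randomized check this setup leads to is Freivalds' algorithm (the function names below are my own, and this is a sketch rather than the lecture's code). Instead of recomputing AB from scratch, pick a uniformly random bit vector x and compare Dx with A(Bx), all mod 2. Each comparison costs only O(n^2) time, and if D ≠ AB, a single trial exposes the error with probability at least 1/2, so independent repetitions drive the failure probability down exponentially:

```python
import random

def matvec_mod2(M, x):
    """Multiply an n x n bit matrix M by a bit vector x, mod 2: O(n^2) time."""
    n = len(x)
    return [sum(M[i][k] * x[k] for k in range(n)) % 2 for i in range(len(M))]

def freivalds_check(A, B, D, trials=20):
    """Randomized test of whether D = AB (mod 2).

    If D = AB, always returns True. If D != AB, each trial fails to
    notice with probability at most 1/2, so a wrong D slips through
    all trials with probability at most 2^(-trials).
    """
    n = len(A)
    for _ in range(trials):
        x = [random.randrange(2) for _ in range(n)]  # uniformly random bit vector
        # Compare D x with A (B x): two O(n^2) mat-vec products vs. one.
        if matvec_mod2(D, x) != matvec_mod2(A, matvec_mod2(B, x)):
            return False  # found a witness: D is definitely wrong
    return True
```

Note the asymmetry: a `False` answer is always correct, while a `True` answer is only correct with high probability; that one-sided error is typical of randomized verification.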