Information-Theoretic Aspects in the Control of Dynamical Systems

by Hugo Touchette
B.Sc., Université de Sherbrooke, Sherbrooke, Québec, Canada (1997)

Submitted to the Department of Mechanical Engineering in partial fulfillment of the requirements for the degree of Master of Science (without specification) at the Massachusetts Institute of Technology, February 2000.

© Massachusetts Institute of Technology 2000. All rights reserved.

Signature of Author: Department of Mechanical Engineering, January 15, 2000
Certified by: Seth Lloyd, Associate Professor of Mechanical Engineering, Thesis Supervisor
Accepted by: Ain A. Sonin, Chairman, Departmental Committee on Graduate Students

Submitted to the Department of Mechanical Engineering on January 15, 2000, in partial fulfillment of the requirements for the degree of Master of Science (without specification).

Abstract

Information is an intuitive notion that has been quantified successfully both in physics and in communication theory. In physics, information takes the form of entropy: information that one does not possess. From this connection follows a trade-off, most famously embodied in Maxwell's demon: a device able to gather information about the state of a thermodynamic system could use that information to decrease the entropy of the system. In Shannon's mathematical theory of communication, on the other hand, an entropy-like measure called the mutual information quantifies the maximum amount of information that can be transmitted through a communication channel. In this thesis, we bring together these two different aspects of information, among others, in a theoretical and practical study of control theory. Several observations indicate that such an information-theoretic study of control is possible and can be effective. One of them is the fact that control units can be regarded intuitively as information gathering and using systems (IGUS): controllers gather information from a dynamical system by measuring its state (estimation), and then use that information to conduct a specific control action on that system (actuation). As the thesis demonstrates, in the case of stochastic systems, the information gathered by a controller from a controlled system can be quantified formally using mutual information. Moreover, it is shown that this mutual information is at the source of a limiting result relating, in the form of a trade-off, the availability of information in a control process and its performance. The consequences of this trade-off, which is very similar to the one in thermodynamics mentioned above, are investigated by looking at the practical control of various systems, notably the control of chaotic systems. The thesis also defines and investigates the concept of controllability, central in the classical theory of control, from an information viewpoint. For this part, necessary and sufficient entropic conditions for controllability are proved.

Thesis Supervisor: Seth Lloyd
Title: Assistant Professor of Mechanical Engineering

Acknowledgments

I always had the intention to write this thesis using singular predicates instead of plural ones, and notably to use 'I' instead of 'we', as a thesis is supposed to be a personal work.
In fact, I even tried to convince some people to do the same, until I realized that writing such a piece of work is really impossible without the help of many people, beginning with my advisor, Professor Seth Lloyd. In writing my thesis, I actually, with all sincerity, present our work, which extended over two years of collaboration and, in my case, of fruitful learning. I want to thank him for the friendly research environment, for proofreading this work and, above all, for the freedom I was granted in doing my work from the beginning of the program. I also want to thank Richard Watson for reading the entire manuscript carefully, for suggesting many improvements, and for helping to get my writing 'de-Frenched'. I owe the same to Lorenza Viola, who read a summary article of the work presented in this thesis and raised many important questions related to the physics of all this. Many thanks are also due to Valerie Poulin for interesting discussions on the mathematics of dynamical systems, and on other subjects which have no relation whatsoever with science.

I received help in various other forms. Financially, first, from the Natural Sciences and Engineering Research Council of Canada (NSERC) in the form of an ÉS A scholarship, and from the d'Arbeloff Laboratory for Information Systems and Technology at MIT. The financial support of these organizations has been more than appreciated. Second, I want to thank the Centre de recherche en physique du solide (CRPS) of the Université de Sherbrooke, and especially Prof. André-Marie Tremblay, for permission to access the supercomputing facilities of the CRPS. Finally, I want to thank all the members of my family for their constant support, and for being interested in my work. I hope that, one day, their insatiable thirst for understanding what I am doing will be satisfied. This thesis, un mémoire in fact, is dedicated to them.

Contents

1  Introduction
   1.1  Perspectives on information and control
   1.2  General framework and overview
   1.3  Related works
2  Entropy in dynamical systems
   2.1  Basic results of information theory
   2.2  Interpretations of entropy
   2.3  Dynamics properties and entropy rates
3  Information control
   3.1  Information-theoretic problems of control
   3.2  General reduced models
   3.3  Separation analysis
   3.4  Controllability
   3.5  Entropy reduction
   3.6  Optimality and side information
   3.7  Continuous extensions of the results
   3.8  Thermodynamic aspects of control
4  Applications
   4.1  Controlling a particle in a box: a counterexample?
   4.2  Binary control automata
   4.3  Control of chaotic maps
5  Conclusion
   5.1  Summary and informal extensions
   5.2  Future work

List of Figures

1.1  The signalman's office. Trains are coming from the bottom of the railways.
1.2  (a)-(d) Exclusive N-to-1 junctions with N = 2, 3, 4, 8. (e) Non-exclusive 2-to-1 junction.
2.1  Venn diagrams representing the correspondence between entropy, conditional entropy and mutual information.
2.2  Discrete probability distribution resulting from a regular quantization.
3.1  (a) Deterministic propagation of the state x_n versus (b) stochastic propagation. In (b) both discrete and continuous distributions are illustrated.
     The thick grey line at the base of the distributions gives an indication of the uncertainty associated with X_n.
3.2  Directed acyclic graphs (DAGs) corresponding to (a) open-loop and (b) closed-loop control. The states of the controlled system are represented by X and X', whereas the states of the controller and of the environment are C and E, respectively. (c)-(d) Reduced DAGs obtained by tracing over the random variable of the environment.
3.3  Separation analysis. (a)-(b) Open-loop control and (c) closed-loop control. The size of the sets, or figuratively of the entropy 'bubbles', is an indication of the value of the entropy.
3.4  Illustration of the separation analysis procedure for a binary closed-loop controller acting on a binary state system.
3.5  Entropy bubbles representing optimal entropy reduction in (a) open-loop control and (b) closed-loop control.
4.1  Different stages in the control of a particle in a box of N states. (a) Initial equiprobable state. (b) Exact localization. (c) Open-loop actuation c = 0.
4.2  Closed-loop control stages. (a) Initial equiprobable state. (b) Coarse-grained measurement of the position of the particle. (c) Exact location of the particle in the box. (d) Actuation according to the controller's state.
4.3  Apparent violation of the closed-loop optimality theorem. (•) ΔH_closed as a function of N. (o) ΔH_open + I(X; C) versus N. Outside the dashed region, ΔH_closed > ΔH_open + I(X; C). The dashed line represents the corrected result when ΔH_open is calculated according to the proper definition.
4.4  (a) H(C) as a function of the measurement error e and the initial parameter a. (b) I(X; C) as a function of e and a.
4.5  (a) ΔH_closed as a function of e and a. (b) Comparison of I(X; C) (top surface) and ΔH_closed (partly hidden surface).
4.6  (a) Realization of the logistic map for r = const = 3.78, x* ≈ 0.735. (b) Application of the feedback control algorithm starting at n = 150 with γ = 7.0. (c) Application of the control law from n = 150 with γ = 7.0 in the presence of a uniform partition of size Δ = 0.02 in the estimation of the state.
4.7  (Left) Lyapunov spectrum λ(r) of the logistic map. The numerical calculations used approximately 20 000 iterates of the map. (Right) Spectrum for the noisy logistic map. The definition of the 4 chaotic control regions (0, 1, 2, 3) is illustrated by the dashed boxes. The vertical dashed lines give the position of r_min.
4.8  (a) Entropy H = H(X_n) of the logistic map with r = const and λ(r) > 0. The entropy was calculated by propagating about 10 000 points initially located in a very narrow interval, and then by calculating a coarse-grained distribution. Inside the typical region, where the distribution is away from the initial distribution (lower left part) and the uniform distribution (upper right part), H(X_{n+1}) - H(X_n) ≈ λ(r).
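
The abstract and the captions of Figures 2.1, 4.3, 4.5 and 4.8 refer to a few quantities and to an entropy-reduction trade-off without stating them explicitly in this excerpt. The lines below are a minimal LaTeX sketch of those relations, assuming only the notation used in the captions (H, I(X; C), ΔH_open, ΔH_closed, λ(r)); the precise hypotheses and proofs are those of Chapters 3 and 4 of the thesis and are not reproduced here.

    % Minimal sketch; notation assumed from the abstract and figure captions.
    \begin{align}
      H(X)   &= -\sum_{x} p(x)\,\log p(x)
             && \text{entropy of the controlled state,} \\
      I(X;C) &= H(X) - H(X \mid C)
             && \text{information gathered by the controller,} \\
      \Delta H_{\mathrm{closed}} &\le \Delta H_{\mathrm{open}} + I(X;C)
             && \text{closed-loop optimality bound (cf.\ Fig.~4.3),} \\
      H(X_{n+1}) - H(X_n) &\approx \lambda(r)
             && \text{entropy growth of the chaotic logistic map (cf.\ Fig.~4.8).}
    \end{align}

The third line is the 'closed-loop optimality theorem' whose apparent violation is examined in Figure 4.3: feedback can outperform open-loop control by at most the mutual information acquired about the state.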
