Maximum-Entropy Method Applied to Micro- and Nanolasers


Maximum-Entropy Method Applied to Micro- and Nanolasers

Dissertation for the attainment of the academic degree doctor rerum naturalium (Dr. rer. nat.), approved by the Fakultät für Naturwissenschaften of the Otto-von-Guericke-Universität Magdeburg, by M. Sc. Boris Melcher, born 4 August 1990 in Kara-Balta. Reviewers: Prof. Dr. Jan Wiersig and Jun.-Prof. Dr. Doris Reiter. Submitted: 15 December 2020; defended: 28 April 2021.

Abstract

During the last decades the principle of maximum entropy has established itself as a fundamental principle of consistent reasoning and has been applied successfully in many fields inside and outside of physics. Whenever only partial information is available, it is an ideal tool to derive the full statistical distribution or, in the case of a quantum-mechanical description, the full density matrix in a least-biased manner: the given a-priori information is reproduced exactly, while the result remains maximally non-committal otherwise.

In this thesis the maximum-entropy method is applied to micro- and nanolasers. While tremendous improvements in the miniaturization and efficiency of these lasers have been achieved in recent years, their light is still most often characterized in terms of the first statistical moments of the photon-number distribution, e.g., the light intensity and the autocorrelation function. Lately, however, direct measurement of the full photon distribution has become experimentally feasible, and its theoretical availability is highly desirable, since it contains the full information about the system and therefore enables deeper insight into the underlying physical processes.

The application of the maximum-entropy method in this work is threefold. Firstly, a birth-death model that describes the interaction of a cavity light field with a quantum-dot system is considered as a benchmark. The full photon-number distribution, which is available here via conventional methods, is compared to the maximum-entropy distributions obtained when known photon-moment values are used as input information. Secondly, a combination of the maximum-entropy method and equation-of-motion approaches is proposed, in which output quantities of the latter serve as input for the former. Apart from giving access to the full photon-number distribution, this method also makes it possible to define an unambiguous threshold even in cases where the usual definitions fail. Lastly, the maximum-entropy method is used as a stand-alone approach without the need for moment values from external sources. Instead, knowledge of the stationarity of several distinct observables is used as input to derive the least-biased steady state. In this way, the many-particle hierarchy problem that arises in conventional equation-of-motion techniques is circumvented. Moreover, the numerical determination of the full density matrix becomes feasible, giving access to all relevant expectation values and the full statistics. The results are compared to the numerically exact solution of the von Neumann-Lindblad equation for a four-level system resonantly coupled to a single cavity mode. Finally, the use of the maximum-entropy method as a trial-and-error approach to identify the relevant processes of quantum systems is exemplified.

Contents

1 Introduction
2 Maximum-Entropy Method
  2.1 Historical Overview
  2.2 Maximum-Entropy Method with Moment Constraints
    2.2.1 Shannon Entropy
    2.2.2 The Maximum-Entropy Distribution
    2.2.3 Bounds on the Maximum-Entropy Distribution
    2.2.4 Implementation and Numerical Remarks
  2.3 Maximum-Entropy Method for Quantum Systems
    2.3.1 Von Neumann Entropy
    2.3.2 The Maximum-Entropy Density Matrix
    2.3.3 Many-Particle Hierarchy Problem and Least Biased Steady State
    2.3.4 Implementation and Numerical Remarks
3 Maximum-Entropy Method with Moment Constraints
  3.1 Benchmark Model
    3.1.1 Birth-Death Model
    3.1.2 Photon and Emitter Statistics
    3.1.3 Existence of the Distribution
    3.1.4 Threshold Definition Comparison
    3.1.5 Characterization of the Emitted Light by Entropy
  3.2 Superthermal Photon Bunching
    3.2.1 Superthermal Photon Bunching for Single-Mode Distributions
    3.2.2 Anti-Correlated Two-Mode Distributions
  3.3 Chapter Conclusion
4 Combining Equation-of-Motion Techniques and the Maximum-Entropy Method
  4.1 Semiconductor Model
  4.2 Nanolasers with Extended Gain Media
  4.3 Chapter Conclusion
5 Maximum-Entropy Method for Quantum Systems
  5.1 Benchmark Model
    5.1.1 Four-Level Quantum-Dot Microcavity Laser
    5.1.2 Input Information and Observation Levels
    5.1.3 Numerical Results
  5.2 Outlook
    5.2.1 Bimodal Quantum-Dot Microcavity Laser
    5.2.2 Excitation with a Second Light Field
    5.2.3 General Outlook
  5.3 Chapter Conclusion
6 Conclusion
A Appendix
  A.1 Upper Bound of the Second-Order Maximum-Entropy Distribution
  A.2 Choice of Selfadjoint Operators as Input Information
  A.3 Choice of the Maximum Photon Number for Numerical Calculations
  A.4 Entropy of a Thermal Distribution
Bibliography

1 Introduction

In the middle of the 19th century, a time when heat engines were already widely used, the 28-year-old Sadi Carnot not only laid the foundation for the physical subfield of thermodynamics but was also the first to encounter the concept of entropy (Carnot, 1824; Volkenstein, 2009; Carnot, 2012). Afterwards, the attempt to describe the dynamics of heat phenomena quickly expanded to include entropy's statistical nature as well as its role as a measure of uncertainty. Many well-known scientists, such as Clausius, Boltzmann, Maxwell and Gibbs in the earlier years and Shannon, von Neumann and Jaynes in the later years, were involved in this process, which finally also led to the principle of maximum entropy (Clausius, 1850a; Clausius, 1850b; Clausius, 1865; Gibbs, 1902; Shannon, 1948; Brillouin, 1951; Jaynes, 1957a; Jaynes, 1957b; Jaynes, 1965; Ehrenberg, 1967; Garber, 1970; Tribus & McIrvine, 1971; Von Neumann, 1996; Lieb & Yngvason, 1999; Jaynes, 2003; Downarowicz, 2007; Griffin, 2008; Volkenstein, 2009; Pressé et al., 2013; Uffink, 2017; Chehade & Vershynina, 2019). With the more recent discoveries, the procedure of entropy maximization is no longer seen merely as maximizing uncertainty or minimizing bias but rather as a way to draw consistent inferences from partial data (Griffin, 2008; Volkenstein, 2009; Pressé et al., 2013; Uffink, 2017; Jizba & Korbel, 2019; Jizba & Korbel, 2020). With that, the maximum-entropy principle exceeded the realm of physics and established itself as a form of "extended logic" (Jaynes, 2003, p. xxii).
Because the maximum-entropy method is an ideal tool to handle incomplete information and to draw self-consistent conclusions without artificially introducing bias, it has since been used in various fields inside and outside of physics, such as astronomy, biology, economics, finance, and quantum chemistry, among others (Pressé et al., 2013; Zhou et al., 2013; He & Kolovos, 2018; Jizba & Korbel, 2019; Tsallis, 2019; Chanda et al., 2020).

In the same span of time that was required to arrive at today's notion of the principle of maximum entropy, an enormous transformation in information technology took place. While in 1824, the year of Carnot's publication, great effort was put into the development of the first telegraph (Meadow, 2002), we now live in the digital Zettabyte Era, in which the annual IP traffic exceeds one zettabyte (1 ZB = 10⁹ TB = 10²¹ B) (Cisco, 2019; Ma & Oulton, 2019). Handling this huge amount of digital information accordingly consumes a substantial share of the worldwide electricity (Malmodin & Lundén, 2018; Ma & Oulton, 2019) and has a large impact on the environment, with an estimated 2.3 % of global greenhouse-gas emissions in 2020 stemming from the information and communication technologies sector (Global e-Sustainability Initiative, 2012). Fortunately, the ongoing effort to decrease these emissions has already resulted in a more optimistic trend, with an estimated share of roughly 2.0 % for 2030 (Global e-Sustainability Initiative, 2015). Nevertheless, the search for energy-efficient technologies remains of major interest both ecologically and economically. One of its many aspects is the ongoing miniaturization of lasing devices that are, for instance, used for data transmission between individual chips or between different sections within a chip (Ma & Oulton, 2019).

In laser systems a plethora of different aspects, e.g., the light-matter coupling strength (Vahala, 2003; He et al., 2013; Dovzhenko et al., 2018; Frisk Kockum et al., 2019), the quality factor of the cavity mode (Vahala, 2003; Reitzenstein & Forchel, 2010; He et al., 2013), and the power efficiency (Ma & Oulton, 2019) play a role and are currently exploited to improve the desired properties of these systems. As gain medium, quantum dots are among the structures considered most often (Noda, 2006; Norman et al., 2019). In these nanometer-sized objects the charge carriers are confined locally in all three spatial dimensions, which can lead to strong light-matter interaction and high gain (Noda, 2006; Reitzenstein, 2012; Norman et al., 2019). Apart from that, the confining potential and the discrete energy levels of these artificial atoms can be tailored by, e.g., the choice of material, the quantum-dot size and shape, or the application of strain (Reitzenstein, 2012; Huo et al., 2013; Huo et al., 2014a; Huo et al., 2014b; Goldmann, 2014; Schumann, 2016; Melcher, 2016; Norman et al., 2019). Advances in the area of laser cavities, on the other hand, where improved manufacturing techniques have allowed for increasingly higher quality factors and smaller mode volumes, have further facilitated the possibility of building lasers in which the better part of the spontaneous emission is directed into the laser mode (Vahala, 2003; Noda, 2006; Strauf et al., 2006; Thyrrestrup et al., 2010; He et al., 2013; Ma & Oulton, 2019).
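The moment-constrained variant of the method summarized in the abstract admits a compact numerical illustration. Below is a minimal sketch (my own, not code from the thesis; the cutoff N_MAX, the solver choice, and the Poissonian example moments are all assumptions): maximizing the Shannon entropy of p(n) subject to given values of ⟨n⟩ and ⟨n²⟩ yields p(n) ∝ exp(-λ₁n - λ₂n²), with the multipliers fixed by the constraint equations.

```python
# Minimal sketch (assumed parameters, not the thesis's actual code):
# the maximum-entropy photon-number distribution on n = 0..N_MAX that
# reproduces given first and second moments has the form
#   p(n) ∝ exp(-lam1*n - lam2*n**2),
# with the Lagrange multipliers fixed by the moment constraints.
import numpy as np
from scipy.optimize import fsolve

N_MAX = 200                              # photon-number cutoff (assumption)
n = np.arange(N_MAX + 1, dtype=float)

def weights(lam):
    e = -lam[0] * n - lam[1] * n**2
    return np.exp(e - e.max())           # shift exponent for stability

def maxent_distribution(mean, second_moment):
    """Least-biased p(n) reproducing <n> and <n^2>."""
    def residuals(lam):
        p = weights(lam)
        p = p / p.sum()                  # normalization absorbs lambda_0
        return [p @ n - mean, p @ n**2 - second_moment]
    lam = fsolve(residuals, x0=[0.0, 1e-3])
    p = weights(lam)
    return p / p.sum()

# Example: Poissonian moments <n> = 10, <n^2> = 110, i.e. g(2) = 1,
# the value expected far above the laser threshold.
p = maxent_distribution(10.0, 110.0)
g2 = (p @ (n * (n - 1))) / (p @ n) ** 2
print(f"<n> = {p @ n:.3f}, g(2) = {g2:.3f}")
```

With only ⟨n⟩ constrained, the same ansatz reduces to a thermal (geometric) distribution; adding higher moments tightens the reconstruction, which is the kind of comparison the benchmark study of Chapter 3 performs.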
Recommended publications
  • A Modern History of Probability Theory
A Modern History of Probability Theory. Kevin H. Knuth, Depts. of Physics and Informatics, University at Albany (SUNY), Albany, NY, USA. Bayes Forum, 4/29/2016.

A Long History: The History of Probability Theory, Anthony J.M. Garrett, MaxEnt 1997, pp. 223-238. Hájek, Alan, "Interpretations of Probability", The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2012/entries/probability-interpret/>.

"... la théorie des probabilités n'est, au fond, que le bon sens réduit au calcul ..." ("... the theory of probabilities is basically just common sense reduced to calculation ...") Pierre Simon de Laplace, Théorie Analytique des Probabilités.

From Harold Jeffreys, "Theory of Probability": "The terms certain and probable describe the various degrees of rational belief about a proposition which different amounts of knowledge authorise us to entertain. All propositions are true or false, but the knowledge we have of them depends on our circumstances; and while it is often convenient to speak of propositions as certain or probable, this expresses strictly a relationship in which they stand to a corpus of knowledge, actual or hypothetical, and not a characteristic of the propositions in themselves. A proposition is capable at the same time of varying degrees of this relationship, depending upon the knowledge to which it is related, so that it is without significance to call a proposition probable unless we specify the knowledge to which we are relating it." (John Maynard Keynes)
  • A Brief Overview of Probability Theory in Data Science by Geert Verdoolaege
A brief overview of probability theory in data science. Geert Verdoolaege, Department of Applied Physics, Ghent University, Ghent, Belgium, and Laboratory for Plasma Physics, Royal Military Academy (LPP-ERM/KMS), Brussels, Belgium. Tutorial, 3rd IAEA Technical Meeting on Fusion Data Processing, Validation and Analysis, 27-05-2019.

Overview: 1. Origins of probability; 2. Frequentist methods and statistics; 3. Principles of Bayesian probability theory; 4. Monte Carlo computational methods; 5. Applications (classification, regression analysis); 6. Conclusions and references.

Early history of probability: earliest traces in Western civilization appear in Jewish writings and Aristotle; notions of probability arise in law (based on evidence), in finance, and in gambling, where probability was both used and demonstrated.

Middle Ages: the world is knowable, but uncertainty stems from human ignorance. William of Ockham: Ockham's razor. Probabilis: a supposedly 'provable' opinion; the counting of authorities; later a degree of truth on a scale. Quantification: law and faith lead to the Bayesian notion, gaming to the frequentist notion.

Quantification in the 17th century: Pascal, Fermat, Huygens; comparative testing of hypotheses; population statistics. 1713: Ars Conjectandi by Jacob Bernoulli, with the weak law of large numbers and the principle of indifference. De Moivre (1718): The Doctrine of Chances.

Bayes and Laplace: paper by Thomas Bayes (1763): inversion
  • Maximum Entropy: The Universal Method for Inference
MAXIMUM ENTROPY: THE UNIVERSAL METHOD FOR INFERENCE, by Adom Giffin. A Dissertation Submitted to the University at Albany, State University of New York, in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, College of Arts & Sciences, Department of Physics, 2008.

Abstract: In this thesis we start by providing some detail regarding how we arrived at our present understanding of probabilities and how we manipulate them, via the product and addition rules by Cox. We also discuss the modern view of entropy and how it relates to known entropies such as the thermodynamic entropy and the information entropy. Next, we show that Skilling's method of induction leads us to a unique general theory of inductive inference, the ME method, and show precisely how other entropies such as those of Renyi or Tsallis are ruled out for problems of inference. We then explore the compatibility of Bayes and ME updating. After pointing out the distinction between Bayes' theorem and the Bayes' updating rule, we show that Bayes' rule is a special case of ME updating by translating information in the form of data into constraints that can be processed using ME. This implies that ME is capable of reproducing every aspect of orthodox Bayesian inference and proves the complete compatibility of Bayesian and entropy methods. We illustrate this by showing that ME can be used to derive two results traditionally in the domain of Bayesian statistics, Laplace's succession rule and Jeffrey's conditioning rule. The realization that the ME method incorporates Bayes' rule as a special case allows us to go beyond Bayes' rule and to process both data and expected value constraints simultaneously.
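The special-case claim admits a compact statement. A sketch in generic notation (my own summary of the standard argument, not text from the thesis): to update a joint prior Q(x, D) after observing D = D', maximize the relative entropy subject only to that data constraint.

```latex
% Sketch in generic notation (my summary, not the thesis's equations):
% maximum-entropy updating with a data constraint reproduces Bayes' rule.
\begin{align}
  S[P,Q] &= -\sum_{x,D} P(x,D)\,\ln\frac{P(x,D)}{Q(x,D)}
  \quad \text{maximized subject to} \quad P(D) = \delta_{D,D'} \\
  \Rightarrow\ P(x,D) &= \delta_{D,D'}\,Q(x\mid D)
  \quad\Rightarrow\quad
  P(x) = Q(x\mid D') = \frac{Q(D'\mid x)\,Q(x)}{Q(D')}.
\end{align}
```

Processing an expected-value constraint ⟨f(x)⟩ = F alongside the data multiplies the update by a further factor exp(-λ f(x)), which is the sense in which ME goes beyond Bayes' rule.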
  • UC Berkeley Electronic Theses and Dissertations
UC Berkeley Electronic Theses and Dissertations. Title: Bounds on the Entropy of a Binary System with Known Mean and Pairwise Constraints. Permalink: https://escholarship.org/uc/item/1sx6w3qg. Author: Albanna, Badr Faisal. Publication Date: 2013. Peer reviewed | Thesis/dissertation. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Physics in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Michael R. DeWeese, Chair; Professor Ahmet Yildiz; Professor David Presti. Fall 2013. Copyright 2013 by Badr Faisal Albanna.

Abstract: Maximum entropy models are increasingly being used to describe the collective activity of neural populations with measured mean neural activities and pairwise correlations, but the full space of probability distributions consistent with these constraints has not been explored. In this dissertation, I provide lower and upper bounds on the entropy for both the minimum and maximum entropy distributions over binary units with any fixed set of mean values and pairwise correlations, and construct distributions for several relevant cases. Surprisingly, the minimum entropy solution has entropy scaling logarithmically with system size, unlike the possible linear behavior of the maximum entropy solution, for any set of first- and second-order statistics consistent with arbitrarily large systems.
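For small systems the pairwise maximum-entropy model referenced here can be constructed by brute force. A minimal sketch (my own illustration; the system size, target means, and target correlations are invented toy values, and real neural-population applications use far larger systems and dedicated solvers):

```python
# Minimal sketch: pairwise maximum-entropy (Ising-type) model for N
# binary units, p(s) ∝ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j),
# fitted by exhaustive enumeration of all 2**N states (toy sizes only).
import itertools
import numpy as np
from scipy.optimize import minimize

N = 4                                            # toy size (assumption)
states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]

target_means = np.full(N, 0.3)                   # invented toy statistics
target_corrs = np.full(len(pairs), 0.12)         # target <s_i s_j>

def probs(params):
    h, J = params[:N], params[N:]
    E = states @ h
    for k, (i, j) in enumerate(pairs):
        E += J[k] * states[:, i] * states[:, j]
    w = np.exp(E - E.max())                      # stabilize the exponential
    return w / w.sum()

def constraint_gap(params):
    # Squared mismatch of model moments to targets; driving this to zero
    # within the exponential family yields the maximum-entropy model.
    p = probs(params)
    means = p @ states
    corrs = np.array([p @ (states[:, i] * states[:, j]) for i, j in pairs])
    return np.sum((means - target_means) ** 2) + np.sum((corrs - target_corrs) ** 2)

res = minimize(constraint_gap, np.zeros(N + len(pairs)), method="BFGS")
p = probs(res.x)
entropy = -p @ np.log2(np.clip(p, 1e-300, None))
print(f"entropy of fitted pairwise model: {entropy:.3f} bits")
```

The dissertation's point is precisely that this maximum-entropy pick is only one member of the full family of distributions matching the same moments, whose entropies can range down to logarithmic scaling in system size.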
  • Nature, Science, Bayes' Theorem, and the Whole of Reality
Nature, Science, Bayes' Theorem, and the Whole of Reality. Moorad Alexanian, Department of Physics and Physical Oceanography, University of North Carolina Wilmington, Wilmington, NC 28403-5606. Abstract: A fundamental problem in science is how to make logical inferences from scientific data. Mere data does not suffice, since additional information is necessary to select a domain of models or hypotheses and thus determine the likelihood of each model or hypothesis. Thomas Bayes' Theorem relates the data and prior information to posterior probabilities associated with differing models or hypotheses and thus is useful in identifying the roles played by the known data and the assumed prior information when making inferences. Scientists, philosophers, and theologians accumulate knowledge when analyzing different aspects of reality and search for particular hypotheses or models to fit their respective subject matters. Of course, a main goal is then to integrate all kinds of knowledge into an all-encompassing worldview that would describe the whole of reality. A generous description of the whole of reality would span, in the order of complexity, from the purely physical to the supernatural. These two extreme aspects of reality are bridged by a nonphysical realm, which would include elements of life, man, consciousness, rationality, mental and mathematical abstractions, etc. An urgent problem in the theory of knowledge is what science is and what it is not. Albert Einstein's notion of science in terms of sense perception is refined by defining operationally the data that makes up the subject matter of science. It is shown, for instance, that theological considerations included in the prior information assumed by Isaac Newton are irrelevant in relating the data logically to the model or hypothesis.
  • Distribution Transformer Semantics for Bayesian Machine Learning
Distribution Transformer Semantics for Bayesian Machine Learning. Johannes Borgström (1), Andrew D. Gordon (1), Michael Greenberg (2), James Margetson (1), and Jürgen Van Gael (3). (1) Microsoft Research, (2) University of Pennsylvania, (3) Microsoft FUSE Labs.

Abstract. The Bayesian approach to machine learning amounts to inferring posterior distributions of random variables from a probabilistic model of how the variables are related (that is, a prior distribution) and a set of observations of variables. There is a trend in machine learning towards expressing Bayesian models as probabilistic programs. As a foundation for this kind of programming, we propose a core functional calculus with primitives for sampling prior distributions and observing variables. We define novel combinators for distribution transformers, based on theorems in measure theory, and use these to give a rigorous semantics to our core calculus. The original features of our semantics include its support for discrete, continuous, and hybrid distributions, and observations of zero-probability events. We compile our core language to a small imperative language that in addition to the distribution transformer semantics also has a straightforward semantics via factor graphs, data structures that enable many efficient inference algorithms. We then use an existing inference engine for efficient approximate inference of posterior marginal distributions, treating thousands of observations per second for large instances of realistic models.

1 Introduction. In the past 15 years, statistical machine learning has unified many seemingly unrelated methods through the Bayesian paradigm. With a solid understanding of the theoretical foundations, advances in algorithms for inference, and numerous applications, the Bayesian paradigm is now the state of the art for learning from data.
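The sample/observe core calculus described in this abstract has a simple operational reading that a few lines of code can convey. A toy sketch (my illustration only; the paper's actual semantics is measure-theoretic and its inference runs on factor graphs, not on this importance sampler, and the coin-flip data are invented):

```python
# Toy reading of sample/observe semantics via importance weighting:
# a probabilistic program denotes a distribution over weighted traces,
# and each observe multiplies the trace's log-weight by a likelihood.
import math
import random

def run_model():
    """One weighted trace of a tiny model: infer a coin's bias
    from 8 heads in 10 flips (hypothetical example data)."""
    log_w = 0.0
    bias = random.uniform(1e-9, 1 - 1e-9)   # sample: uniform prior on (0, 1)
    heads, flips = 8, 10
    # observe: condition on the binomial likelihood of the data
    log_w += heads * math.log(bias) + (flips - heads) * math.log(1 - bias)
    return bias, log_w

traces = [run_model() for _ in range(100_000)]
m = max(w for _, w in traces)                # stabilize the exponentials
total = sum(math.exp(w - m) for _, w in traces)
post_mean = sum(b * math.exp(w - m) for b, w in traces) / total
print(f"posterior mean bias ~ {post_mean:.3f}")   # analytic answer: 9/12 = 0.75
```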
  • Report from the Chair by Robert H. Romer
History of Physics Newsletter. A Forum of the American Physical Society. Volume IX, No. 5, Fall 2005.

Report From The Chair, by Robert H. Romer, Amherst College, Forum Chair. 2005, the World Year of Physics, has been a good one for the History Forum. I want to take advantage of this opportunity to describe some of FHP's activities during recent months and to look forward to the coming year. The single most important forum event of 2005 was the presentation of the first Pais Prize in the History of Physics to Martin Klein of Yale University. It was only shortly before the award ceremony, at the Tampa meeting in April, that funding reached the level at which this honor could be promoted from "Award" to "Prize." We are all indebted to the many generous donors and to the members of the Pais Award Committee and the Pais Selection Committee for their hard work over the last several years.

The Forum sponsored several sessions of invited lectures at the March meeting (in Los Angeles) and the April meeting (in Tampa), which are more fully described elsewhere in this Newsletter. At Los Angeles we had two invited sessions under the general rubric of "Einstein and Friends." At Tampa, we had a third such Einstein session, as well as a good session on "Quantum Optics Through the Lens of History" and then a final series of talks on "The Rise of Megascience." A new feature of our invited sessions this year is the "named lecture." The purpose of naming a lecture is to pay tribute to a distinguished physicist while simultaneously encouraging donations to support the travel expenses of speakers.
  • Respect Your Data: Topics in Inference and Modeling in Physics: A Dissertation Presented to the Faculty of the Graduate School
Respect Your Data: Topics in Inference and Modeling in Physics. A Dissertation Presented to the Faculty of the Graduate School in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, by Colin Clement, December 2019. © 2019 Colin Clement. All rights reserved.

Abstract: Respect Your Data: Topics in Inference and Modeling in Physics. Colin Clement, Ph.D., Cornell University, 2019. We discuss five topics related to inference and modeling in physics: image registration, magnetic image deconvolution, effective models of spin glasses, the two-dimensional Ising model, and a benchmark dataset of the arXiv pre-print service. First, we solve outstanding problems with image registration (which aims to infer the rigid shift relating two or more noisy shifted images), obtaining the information-theoretic limit in the precision of image-shift estimation. Then, we use Bayesian inference and develop new physically-motivated priors in order to solve the ill-posed deconvolution problem of reconstructing electric currents from magnetic images. After that, we apply machine learning and information geometry to study a spin glass model, finding that this model of canonical complexity is sloppy and thus allows for lower-dimensional effective descriptions. Next, we address outstanding questions regarding the corrections to scaling of the two-dimensional Ising model by applying Normal Form Theory of dynamical systems to the Renormalization Group (RG) flows, and raise important questions about the RG in various statistical ensembles. Finally, we develop tools and practices to cast the entire arXiv pre-print service into a benchmark dataset for studying models on graphs with multi-modal features.
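For context on the first topic, the classical baseline for inferring a rigid shift is the peak of the cross-correlation. A generic sketch (standard technique, not the thesis's estimator, which analyzes the precision limits of such estimates; image size, noise level, and the true shift are invented):

```python
# Generic baseline for rigid image registration: estimate the integer
# shift between two noisy images from the peak of their circular
# cross-correlation, computed via FFT. (Illustrative only.)
import numpy as np

rng = np.random.default_rng(0)
truth = (3, -5)                                   # assumed true shift

img = rng.random((64, 64))
shifted = np.roll(img, truth, axis=(0, 1)) + 0.05 * rng.standard_normal((64, 64))

# Cross-correlation theorem: corr = IFFT(FFT(b) * conj(FFT(a)))
corr = np.fft.ifft2(np.fft.fft2(shifted) * np.conj(np.fft.fft2(img))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# Map peak indices in [0, N) back to signed shifts
est = tuple(int(d) - s if d > s // 2 else int(d)
            for d, s in zip((dy, dx), corr.shape))
print("estimated shift:", est, "true:", truth)
```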
  • THE ISBA NEWSLETTER Vol
THE ISBA NEWSLETTER, Vol. 5 No. 2, September 1998. The official newsletter of the International Society for Bayesian Analysis. President: Susie Bayarri; President-Elect: John Geweke; Past-President: Steve Fienberg; Treasurer: Rob McCulloch; Executive Secretary: Mike Evans. Board Members: Alan Gelfand, Jay Kadane, Robert Kass, Luis Pericchi, Phil Dawid, Enrique de Alba, Ed George, Malay Ghosh, Alicia Carriquiry, David Rios Insua, Peter Mueller, William Strawderman.

Message from the President: It seems that I can not escape from it any longer! I have been nicely, but insistently, reminded that I should address you at least once. Well, this is it! I am about three-quarters through my term as ISBA President, and it has been a most interesting and rewarding experience. Our society is still very young, so there are many (many, many) things to do. I have done a few of them, following the way so efficiently led by the former President Steve Fienberg (who has a larger and older society to rule these days), but I am leaving most of them to our next President, John Geweke. ... report on all sorts of possible publications for ISBA, including electronic (to be chaired by John Geweke). You can find information on our web page. I am always surprised that people are willing to devote much-needed time and effort to keep ISBA rolling. I am indeed most grateful to all of you who said yes when asked to serve in some way. ISBA has been most lucky in counting on the effort, time and dedication of Mike Evans. He has done a superb job as Executive Secretary, fixing all the millions of details that are always pending. If this was little, he also launched our new web page with plenty of yummy information (I won't tell you about it, so go and ...
  • This Is IT: A Primer on Shannon's Entropy and Information
Information Theory, 49–86. Poincaré Seminar 2018. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021.

This is IT: A Primer on Shannon's Entropy and Information. Olivier Rioul.

"I didn't like the term 'information theory'. Claude [Shannon] didn't like it either. You see, the term 'information theory' suggests that it is a theory about information – but it's not. It's the transmission of information, not information. Lots of people just didn't understand this." (Robert Fano, 2001)

Abstract. What is Shannon's information theory (IT)? Despite its continued impact on our digital society, Claude Shannon's life and work is still unknown to numerous people. In this tutorial, we review many aspects of the concept of entropy and information from a historical and mathematical point of view. The text is structured into small, mostly independent sections, each covering a particular topic. For simplicity we restrict our attention to one-dimensional variables and use logarithm and exponential notations log and exp without specifying the base. We culminate with a simple exposition of a recent proof (2017) of the entropy power inequality (EPI), one of the most fascinating inequalities in the theory.

1. Shannon's Life as a Child. Claude Elwood Shannon was born in 1916 in Michigan, U.S.A., and grew up in the small town of Gaylord. He was a curious, inventive, and playful child, and probably remained that way throughout his life. He built remote-controlled models and set up his own barbed-wire telegraph system to a friend's house [48].
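For orientation, the quantities this primer builds toward can be stated compactly (my summary using standard definitions; the primer leaves the base of log and exp unspecified, whereas the constant 2πe below assumes natural logarithms):

```latex
% Standard definitions (my summary, natural logarithms assumed).
\begin{align}
  h(X) &= -\int p(x)\,\ln p(x)\,\mathrm{d}x
    && \text{(differential entropy)} \\
  N(X) &= \frac{1}{2\pi e}\,\exp\!\bigl(2\,h(X)\bigr)
    && \text{(entropy power)} \\
  N(X+Y) &\ge N(X) + N(Y)
    && \text{(EPI, for independent } X, Y\text{)}
\end{align}
```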
  • Intelligent Machines in the 21st Century: Automating the Processes of Inference and Inquiry by Kevin H. Knuth
Intelligent machines in the 21st century: automating the processes of inference and inquiry. By Kevin H. Knuth, Computational Sciences Div., NASA Ames Research Center, Moffett Field, CA. The last century saw the application of Boolean algebra toward the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have enabled these computing machines, in the last quarter of the twentieth century, to be endowed with the ability to learn by making inferences from data. This revolution is just beginning as new computational techniques continue to make difficult problems more accessible. However, modern intelligent machines work by inferring knowledge using only their pre-programmed prior knowledge and the data provided. They lack the ability to ask questions, or request data that would aid their inferences. Recent advances in understanding the foundations of probability theory have revealed implications for areas other than logic. Of relevance to intelligent machines, we identified the algebra of questions as the free distributive algebra, which now allows us to work with questions in a way analogous to that which Boolean algebra enables us to work with logical statements. In this paper we describe this logic of inference and inquiry using the mathematics of partially ordered sets and the scaffolding of lattice theory, discuss the far-reaching implications of the methodology, and demonstrate its application with current examples in machine learning. Automation of both inference and inquiry promises to allow robots to perform science in the far reaches of our solar system and in other star systems by enabling them to not only make inferences from data, but also decide which question to ask, experiment to perform, or measurement to take given what they have learned and what they are designed to understand.
  • Samenvatting (Summary)
UvA-DARE (Digital Academic Repository). Maximum caliber approach to reweight dynamics of non-equilibrium steady states. Bause, M. Publication date: 2021. Citation: Bause, M. (2021). Maximum caliber approach to reweight dynamics of non-equilibrium steady states. UvA-DARE is a service provided by the library of the University of Amsterdam (https://dare.uva.nl).

Summary: The present thesis adds a new option to the sparse field of reweighting the dynamics of non-equilibrium steady states (NESS). It is built on the basis of Jaynes' Maximum Caliber, an ensemble description of NESS via global balance and local entropy production, with dynamical information drawn from stochastic thermodynamics and dynamics coarse-grained under a Markovian assumption.
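Maximum Caliber is the path-space analogue of the maximum-entropy method that runs through this page. A compact statement in generic notation (my summary, not the thesis's own equations or notation):

```latex
% Maximum Caliber in generic notation (my summary): maximize the path
% entropy ("caliber") over path probabilities P[Gamma], subject to
% normalization and constraints on path observables A_i[Gamma].
\begin{align}
  \mathcal{C}[P] &= -\sum_{\Gamma} P[\Gamma]\,\ln\frac{P[\Gamma]}{Q[\Gamma]}
  \quad \text{s.t.} \quad
  \sum_{\Gamma} P[\Gamma]\,A_i[\Gamma] = a_i \\
  \Rightarrow\quad
  P[\Gamma] &= \frac{Q[\Gamma]}{Z}\,
  \exp\!\Bigl(-\sum_i \lambda_i A_i[\Gamma]\Bigr),
\end{align}
```

where Q is a reference path ensemble, Z normalizes, and the multipliers λᵢ are fixed by the constraints; reweighting a simulated ensemble toward a NESS then amounts to choosing entropy-production-like path observables as the Aᵢ.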