AN ABSTRACT MACHINE BASED EXECUTION MODEL FOR COMPUTER ARCHITECTURE DESIGN AND EFFICIENT IMPLEMENTATION OF LOGIC PROGRAMS IN PARALLEL

Manuel V. Hermenegildo
Department of Computer Sciences
The University of Texas at Austin
Austin, Texas 78712-1188

TR-86-20
August 1986

Copyright © 1986 Manuel V. Hermenegildo

AN ABSTRACT MACHINE BASED EXECUTION MODEL FOR COMPUTER ARCHITECTURE DESIGN AND EFFICIENT IMPLEMENTATION OF LOGIC PROGRAMS IN PARALLEL

by

MANUEL V. HERMENEGILDO, E.E., M.S.

DISSERTATION

Presented to the Faculty of the Graduate School of The University of Texas at Austin in Partial Fulfillment of the Requirements for the Degree of

DOCTOR OF PHILOSOPHY

THE UNIVERSITY OF TEXAS AT AUSTIN

August, 1986

Copyright by Manuel V. Hermenegildo 1986

ACKNOWLEDGEMENT

I would first like to thank my advisor, Professor G. J. Lipovski, for his continuous support and guidance. He brought the subject to my attention and has always made himself available for discussion and advice. I would also like to thank the members of the Committee, Professors J. C. Browne, J. K. Aggarwal, C. K. Leung, and R. M. Jenevein, for their helpful observations, interest, and encouragement, and for the time devoted to the reading of this document.

I am also indebted to Richard Warren and Roger Nasr for many hours of interesting discussion and for their friendship. Richard Warren's implementation of the model described in this document was very useful in proving its viability and in providing performance data. I am grateful to all the other members of the MCC Parallel Processing and Artificial Intelligence groups for their encouragement, and to Steve Lundstrom (MCC) and the Fulbright Foundation for their support.

Special thanks to Professor David H. D. Warren for many interesting discussions, for his perceptive comments and suggestions, and for his hospitality. He has been a continuous source of encouragement and inspiration for me.

Finally, I would like to thank the many friends who have helped and encouraged me during these years at The University of Texas. I am especially indebted to Ian Walker, Pat Cericola, and Evan Tick for their patience in wading through so many earlier drafts of this document and, most importantly, for their invaluable friendship.

Manuel V. Hermenegildo
The University of Texas at Austin
August, 1986

PREFACE

"Sorry to interrupt the festivities," said HAL, "but we have a problem."
Arthur C. Clarke, 2001: A Space Odyssey

A Word for the Non-Initiated

Scientific reports have an unmistakable tendency to be of a very detailed nature and limited scope. This is largely a consequence of the fact that, as our knowledge is broadened, we only seem to further unveil reality's inherent complexity. It is in order to confront this complexity that scientific research has moved more and more towards superspecialization. However, even though superspecialization seems to be here to stay, the activities of the researcher are hard to justify unless they are motivated (secretly or openly) by some higher-level goal. Of course, such higher-level goals often appear obvious to most researchers in their respective fields (in the case in hand, those of Computer Engineering and Computer Science). Computer Scientists and Engineers are therefore urged at this point to skip the rest of this preface and jump with the author into the first chapter. This preface is not intended for them.
Instead, it will attempt to offer the reader "uninitiated" in the mysteries of computers, declarative languages, and parallelism both a simple introduction to the subjects treated in the rest of this document and, hopefully, some justification as to why it may make sense to explore these subjects at all.

Computers Need to be Faster and Easier to Use

It is perhaps the fact that computers offer promise to one day mimic at least some of the simpler functions of the human mind (itself undoubtedly one of the most intriguing "mechanisms" with which we are confronted) that has always drawn our attention towards them. However, computer research is, as in so many areas of science and engineering, still far from its most idealistic goals and, in particular, from that of achieving any kind of "intelligent" behavior from an automaton. Research in "Artificial Intelligence" is faced today with a number of limitations. Firstly, we still lack a clear understanding of how such behavior could be obtained from a machine. Secondly, we do not know how to build computers that are fast enough to provide responses according to that behavior in a reasonable amount of time, and which are at the same time easy enough to use that the associated programming tasks represent feasible endeavors.

The first of these problems is one of the many subjects of artificial intelligence research. Instead, and as the subject of this dissertation, we will be interested in addressing the second of those limitations, i.e. providing computers that are at the same time more powerful and friendlier to the user. Fortunately enough, we do not need to resort to any futuristic quest for intelligence to understand the usefulness of such an endeavor: we already need faster, easier-to-use machines today, not only for the advancement of artificial intelligence, but also in most other current computer application areas.

1.- Making Computers Easier to Use

Let us consider first the issue of making machines easier to use. (We will treat the issue of computational power in the next section.) At our (relatively modest) current state of development in human interaction with computers, our main means of instructing them what to do is by writing a program, i.e. a list of instructions which are to be executed by the machine. These instructions are expressed in a particular language that the computer can understand: a programming language.

Conventional programming languages generally express these instructions as a series of precise "actions" that are to be performed by the computer, one after the other. This is known as an imperative style, a program being a sequence of commands or statements. Programs today come in this format largely as a consequence of the fact that the first computers were no more than the equivalent of one of today's hand-held calculators, and that early programs were just sequences of the basic instructions that a particular machine could directly execute (on an actual hand-held calculator, for example, a "program" for adding 4 and 3 would be: press 3, press +, press 4, press =). Programming languages then emerged as a tool for making it easier for a human being to express the actions required from the computer. However, computers already existed in a particular form before these languages were designed, and this undoubtedly invited a "machine-oriented" style in their design which still lingers in today's programming languages.
These languages are often so far removed from the natural way in which humans think and express themselves that programming a computer is frequently a difficult and error-prone task for any sizeable problem. For example, suppose that we want to program the computer to simply generate the squares of all positive integers. One way of doing this is by specifying the actions involved in obtaining such a list:

1. Start with the number 0,
2. find the square of the number by multiplying it by itself,
3. print the square,
4. compute the next number by adding 1 to the previous number,
5. go to step 2.

This would be expressed in less verbose terms (in a conventional programming language) more or less as follows:

          Number = 0;
    loop: Square = Number * Number;
          print( Square );
          Number = Number + 1;
          goto loop;

The question, of course, is: can we design a computer language which is free from the imperative style? If we avoid any machine-oriented considerations, the first language to come to mind is, of course, the human natural language: the user's mother tongue. Such a language, though, presents a number of serious drawbacks, including its verboseness, only made worse by its vagueness and ambiguity if not provided with a suitable context or a great deal of (normally assumed) knowledge. This fact was already realized by mathematicians long before computers came into being, and they devised "Logic" as a means of clarifying and/or formalizing the human thought process. Logic lets us express facts and rules about the world in a precise and concise way and draw conclusions from them which can be formally proven to be correct. Thus, Logic would tell us, for example, that the assumptions

    Aristotle makes cookies, and
    Plato is a friend of anyone who makes cookies

imply the conclusion

    Plato is a friend of Aristotle.

Symbolic logic is simply a shorthand for expressing conventional Logic: if we agree that makes(X, cookies) means "X makes cookies", ∀X means "for all X", X → Y means "if X then Y", and friend(X, Y) means "X is a friend of Y", then the example above can be expressed in symbolic logic as

    makes(Aristotle, cookies)
    ∀X, makes(X, cookies) → friend(Plato, X)

and the conclusion as

    friend(Plato, Aristotle)

Clearly, one can mechanically translate from Symbolic Logic to natural language by using a "conversion table" for the symbols like the one provided above. It is this ability of symbolic logic to express knowledge in a way that is very precise and compact, while at the same time close to the natural way in which humans express themselves, that led to the concept of using Logic as a means for programming computers. This idea was first proposed in a formal manner by Kowalski [39] not many years ago and has since received wide acceptance as one of the most promising programming paradigms for future computers.
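Executing such logic on a computer is the job of logic programming languages such as Prolog, the kind of language with which this dissertation is concerned. As a small illustration, the cookies example might be written as a Prolog program roughly as follows; this is only a sketch, and the lower-case spelling of the constants (aristotle, plato, cookies) is a Prolog convention assumed here rather than part of the example above:

    % The assumptions, written as a Prolog fact and rule.
    % Constants are written in lower case; variables (X) in upper case.
    makes(aristotle, cookies).
    friend(plato, X) :- makes(X, cookies).   % Plato is a friend of anyone who makes cookies.

    % The conclusion is obtained by asking a query:
    % ?- friend(plato, aristotle).
    % yes

Drawing the conclusion is left to the machine: the query succeeds because the rule reduces it to makes(aristotle, cookies), which matches the stored fact. How such deductions can be carried out efficiently, and in parallel, is the concern of the rest of this document.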