Concurrent Pascal


The Programming Language Concurrent Pascal

PER BRINCH HANSEN

IEEE Transactions on Software Engineering, Vol. SE-1, No. 2, June 1975.

Abstract—The paper describes a new programming language for structured programming of computer operating systems. It extends the sequential programming language Pascal with concurrent programming tools called processes and monitors. Section I explains these concepts informally by means of pictures illustrating a hierarchical design of a simple spooling system. Section II uses the same example to introduce the language notation. The main contribution of Concurrent Pascal is to extend the monitor concept with an explicit hierarchy of access rights to shared data structures that can be stated in the program text and checked by a compiler.

Index Terms—Abstract data types, access rights, classes, concurrent processes, concurrent programming languages, hierarchical operating systems, monitors, scheduling, structured multiprogramming.

(Manuscript received February 1, 1975. This project is supported by the National Science Foundation under Grant DCR74-17331. The author is with the Department of Information Science, California Institute of Technology, Pasadena, Calif. 91125.)

I. THE PURPOSE OF CONCURRENT PASCAL

A. Background

Since 1972 I have been working on a new programming language for structured programming of computer operating systems. This language is called Concurrent Pascal. It extends the sequential programming language Pascal with concurrent programming tools called processes and monitors [1]-[3].

This is an informal description of Concurrent Pascal. It uses examples, pictures, and words to bring out the creative aspects of new programming concepts without getting into their finer details. I plan to define these concepts precisely and introduce a notation for them in later papers. This form of presentation may be imprecise from a formal point of view, but is perhaps more effective from a human point of view.

B. Processes

We will study concurrent processes inside an operating system and look at one small problem only: how can large amounts of data be transmitted from one process to another by means of a buffer stored on a disk?

Fig. 1 shows this little system and its three components: a process that produces data, a process that consumes data, and a disk buffer that connects them. [Fig. 1. Process communication: producer process, disk buffer, consumer process.]

The circles are system components and the arrows are the access rights of these components. They show that both processes can use the buffer (but they do not show that data flows from the producer to the consumer). This kind of picture is an access graph.

The next picture shows a process component in more detail (Fig. 2). [Fig. 2. Process: access rights, private data, sequential program.]

A process consists of a private data structure and a sequential program that can operate on the data. One process cannot operate on the private data of another process. But concurrent processes can share certain data structures (such as a disk buffer). The access rights of a process mention the shared data it can operate on.
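To make the notion of a process concrete, the sketch below shows roughly how a process component could be declared in the notation that Section II of the paper introduces (Section II is not reproduced in this excerpt). The type names diskbuffer and page, the operation readcards, and the send procedure anticipate the disk buffer monitor described in the next section; they are illustrative assumptions rather than quotations from the paper, and comments are written with standard Pascal braces for readability.

type inputprocess =
process(buffer: diskbuffer);   { the parameter list states the access rights of the process }
var block: page;               { private data of this process }
cycle
  readcards(block);            { assumed operation: read one page of input }
  buffer.send(block)           { the only permitted way to use the shared buffer }
end;

The point of writing the access rights in the program text is that a compiler can then check that this process touches the shared buffer only through the operations granted to it.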
C. Monitors

A disk buffer is a data structure shared by two concurrent processes. The details of how such a buffer is constructed are irrelevant to its users. All the processes need to know is that they can send and receive data through it. If they try to operate on the buffer in any other way, it is probably either a programming mistake or an example of tricky programming. In both cases, one would like a compiler to detect such misuse of a shared data structure.

To make this possible, we must introduce a language construct that will enable a programmer to tell a compiler how a shared data structure can be used by processes. This kind of system component is called a monitor. A monitor can synchronize concurrent processes and transmit data between them. It can also control the order in which competing processes use shared, physical resources.

Fig. 3 shows a monitor in detail. [Fig. 3. Monitor: access rights, shared data, synchronizing operations, initial operation.]

A monitor defines a shared data structure and all the operations processes can perform on it. These synchronizing operations are called monitor procedures. A monitor also defines an initial operation that will be executed when its data structure is created.

We can define a disk buffer as a monitor. Within this monitor there will be shared variables that define the location and length of the buffer on the disk. There will also be two monitor procedures, send and receive. The initial operation will make sure that the buffer starts as an empty one. To transfer data to and from the disk, the send and receive procedures will in turn need to use other components, so a monitor can also have access rights to other system components (see Fig. 3).

Processes cannot operate directly on shared data. They can only call monitor procedures that have access to shared data. A monitor procedure is executed as part of a calling process (just like any other procedure).

If concurrent processes simultaneously call monitor procedures that operate on the same shared data, these procedures must be executed strictly one at a time. Otherwise, the results of monitor calls will be unpredictable. This means that the machine must be able to delay processes for short periods of time until it is their turn to execute monitor procedures. We will not be concerned about how this is done, but will just notice that a monitor procedure has exclusive access to shared data while it is being executed.

The (virtual) machine on which concurrent programs run will handle short-term scheduling of simultaneous monitor calls. But the programmer must also be able to delay processes for longer periods of time if their requests for data and other resources cannot be satisfied immediately. If, for example, a process tries to receive data from an empty disk buffer, it must be delayed until another process sends more data.

Concurrent Pascal includes a simple data type, called a queue, that can be used by monitor procedures to control medium-term scheduling of processes. A monitor can either delay a calling process in a queue or continue another process that is waiting in a queue.
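As an illustration of these ideas, here is roughly what such a disk buffer monitor could look like in the notation of Section II (not included in this excerpt). The page type, the virtualdisk component with its read and write operations, the resource parameters (the console and disk resources discussed below), and the exact form of the initial statement are reconstructed assumptions rather than quotations; comments are written with standard Pascal braces for readability.

type diskbuffer =
monitor(consoleaccess, diskaccess: resource;  { access rights granted to the buffer }
        base, limit: integer);                { location and capacity on the disk }
var disk: virtualdisk;                        { assumed component that performs the transfers }
    sender, receiver: queue;                  { each holds at most one delayed process }
    head, tail, length: integer;              { shared state of the ring buffer }

procedure entry send(block: page);
begin
  if length = limit then delay(sender);       { buffer full: wait until a page is removed }
  disk.write(base + tail, block);
  tail := (tail + 1) mod limit;
  length := length + 1;
  continue(receiver)                          { resume a waiting receiver, if any }
end;

procedure entry receive(var block: page);
begin
  if length = 0 then delay(receiver);         { buffer empty: wait until a page arrives }
  disk.read(base + head, block);
  head := (head + 1) mod limit;
  length := length - 1;
  continue(sender)                            { resume a waiting sender, if any }
end;

begin                                         { initial operation: the buffer starts empty }
  init disk(consoleaccess, diskaccess);
  head := 0; tail := 0; length := 0
end;

A process delayed in a queue loses its exclusive access to the shared variables until it is continued, so the other process can still enter the monitor and make the buffer nonempty (or nonfull) again.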
D. System Design

A process executes a sequential program—it is an active component. A monitor is just a collection of procedures that do nothing until they are called by processes—it is a passive component. But there are strong similarities between a process and a monitor: both define a data structure (private or shared) and the meaningful operations on it. The main difference between processes and monitors is the way they are scheduled for execution.

It seems natural therefore to regard processes and monitors as abstract data types defined in terms of the operations one can perform on them. If a compiler can check that these operations are the only ones carried out on the data structures, then we may be able to build very reliable, concurrent programs in which controlled access to data and physical resources is guaranteed before these programs are put into operation. We have then to some extent solved the resource protection problem in the cheapest possible manner (without hardware mechanisms and run time overhead).

So we will define processes and monitors as data types and make it possible to use several instances of the same component type in a system. We can, for example, use two disk buffers to build a spooling system with an input process, a job process, and an output process (Fig. 4). I will distinguish between definitions and instances of components by calling them system types and system components. Access graphs (such as Fig. 4) will always show system components (not system types). [Fig. 4. Spooling system: card reader, input process, disk buffers, job process, output process, line printer, console.]

Fig. 5 refines a disk buffer: the buffer does not operate on the disk directly but uses a virtual disk for the actual transfers. [Fig. 5. Buffer refinement.] Each virtual disk is only used by a single disk buffer. A system component that cannot be called simultaneously by several other components will be called a class. A class defines a data structure and the possible operations on it (just like a monitor). The exclusive access of class procedures to class variables can be guaranteed completely at compile time. The virtual machine does not have to schedule simultaneous calls of class procedures at run time, because such calls cannot occur. This makes class calls considerably faster than monitor calls.

The spooling system includes two virtual disks but only one real disk. So we need a single disk resource monitor to control the order in which competing processes use the disk (Fig. 6). This monitor defines two procedures, request and release access, to be called by a virtual disk before and after each disk transfer. [Fig. 6. Decomposition of virtual disks: virtual disks, disk resource, disk, virtual consoles, console resource.]

It would seem simpler to replace the virtual disks and the disk resource by a single monitor that has exclusive access to the disk and does the input/output. This would certainly guarantee that processes use the disk one at a time. But this would be done according to the built-in short-term scheduling policy of monitor calls, and any attempt to control disk scheduling within such a monitor would be illusory. To give the programmer complete control of disk scheduling, processes should be able to enter the queue for the disk during data transfers. Since arrival and service in the disk queueing system are potentially simultaneous operations, they must be handled by different system components, as shown in Fig. 6.

If the disk fails persistently during input/output, this should be reported on an operator's console. Fig. 6 shows two instances of a class type, called virtual console. They give the virtual disks the illusion that they have their own private consoles. [Fig. 7. Decomposition of virtual consoles.] [Fig. 8. Hierarchical system structure: input process, job process, and output process above the components they depend on.]
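A minimal sketch of what such a disk resource monitor might look like, again in the style of Section II's notation (not reproduced here). The declarations are assumptions rather than quotations, and comments are written with standard Pascal braces for readability.

type resource =
monitor
var free: boolean;
    q: queue;                  { holds at most one delayed process }

procedure entry request;
begin
  if not free then delay(q);   { wait while another virtual disk is using the disk }
  free := false
end;

procedure entry release;
begin
  free := true;
  continue(q)                  { resume the waiting virtual disk, if there is one }
end;

begin                          { initial operation }
  free := true
end;

Because each virtual disk is used by only a single disk buffer, at most two processes compete for the disk at any moment, so a single queue variable (which can hold only one delayed process) is sufficient for this sketch.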
Recommended publications
  • Concurrent Programming Introduction
  • The Programming Language Concurrent Pascal
  • The Programmer As a Young Dog
  • Concurrent Pascal
  • Part III Harnessing Parallelism
  • Pascal Versus C : a Subjective Comparison
  • A Programmer's Story: The Life of a Computer Pioneer
  • Virtual Organization Clusters: Self-Provisioned Clouds on the Grid Michael Murphy Clemson University, [email protected]
  • Downloading from Naur's Website: 19
  • A Secure Computing Platform for Building Automation Using Microkernel-Based Operating Systems Xiaolong Wang University of South Florida, [email protected]
  • BACI Pascal Compiler User Guide
  • Kernel Operating System