ParalleX: An Advanced Parallel Execution Model for Scaling-Impaired Applications

Total Pages: 16

File Type: PDF, Size: 1020 KB

ParalleX: An Advanced Parallel Execution Model for Scaling-Impaired Applications

Hartmut Kaiser, Maciej Brodowicz, Thomas Sterling
Center for Computation and Technology, Louisiana State University
(Thomas Sterling is also with the Department of Computer Science, Louisiana State University)

1. ABSTRACT

High performance computing (HPC) is experiencing a phase change with the challenges of programming and management of heterogeneous multicore systems architectures and large scale system configurations. It is estimated that by the end of the next decade Exaflops computing systems requiring hundreds of millions of cores demanding multi-billion-way parallelism with a power budget of 50 Gflops/watt may emerge. At the same time, there are many scaling-challenged applications that, although taking many weeks to complete, cannot scale even to a thousand cores using conventional distributed programming models. This paper describes an experimental methodology, ParalleX, that addresses these challenges through a change in the fundamental model of parallel computation from that of the communicating sequential processes (e.g., MPI) to an innovative synthesis of concepts involving message-driven work-queue execution in the context of a global address space. The focus of this work is a new runtime system required to test, validate, and evaluate the use of ParalleX concepts for extreme scalability. This paper describes the ParalleX model and the HPX runtime system and discusses how both strategies contribute to the goal of extreme computing through dynamic asynchronous execution. The paper presents the first early experimental results of tests using a proof-of-concept runtime-system implementation. These results are very promising and are guiding future work towards a full scale parallel programming and runtime environment.

2. INTRODUCTION

An important class of parallel applications is emerging as scaling impaired. These are problems that require substantial execution time, sometimes exceeding a month, but which are unable to make effective use of more than a few hundred processors. One such example is numerical relativity used to model colliding neutron stars to simulate gamma ray bursts (GRB) and simultaneously identify the gravitational wave signature for detection with such massive instruments as LIGO (Laser Interferometer Gravitational Observatory). These codes exploit the efficiencies of Adaptive Mesh Refinement (AMR) algorithms to concentrate processing effort at the most active parts of the computation space at any one time. However, conventional parallel programming methods using MPI [1] and systems such as distributed memory MPPs and Linux clusters exhibit poor efficiency and constrained scalability, severely limiting scientific advancement. Many other applications exhibit similar properties. To achieve dramatic improvements for such problems and prepare them for exploitation of Petaflops systems comprising millions of cores, a new execution model and programming methodology is required [2]. This paper briefly presents such a model, ParalleX, and provides early results from an experimental implementation of the HPX runtime system that suggests the future promise of such a computing strategy.

It is recognized that technology trends have forced high performance system architectures into the new regime of heterogeneous multicore structures. With multicore becoming the new Moore's Law, performance advances even for general commercial applications are requiring parallelism in what was once the domain of purely sequential computing. In addition, accelerators including, but not limited to, GPUs are being applied for significant performance gains, at least for certain applications where inner loops exhibit numeric intensive operation on relatively small data. Future high end systems will integrate thousands of "nodes", each comprising many hundreds of cores, by means of system area networks. Future applications like AMR algorithms will involve the processing of large time-varying graphs with embedded meta-data. Also of this general class are informatics problems that are of increasing importance for knowledge management and national security.

Critical bottlenecks to the effective use of new generation HPC systems include:

• Starvation – due to lack of usable application parallelism and means of managing it,
• Overhead – reduction to permit strong scalability, improve efficiency, and enable dynamic resource management,
• Latency – from remote access across system or to local memories,
• Contention – due to multicore chip I/O pins, memory banks, and system interconnects.

The ParalleX model has been devised to address these challenges by enabling a new computing dynamic through the application of message-driven computation in a global address space context with lightweight synchronization. This paper describes the ParalleX model through the syntax of the PXI API and presents a runtime system architecture that delivers the middleware mechanisms required to support the parallel execution, synchronization, resource allocation, and name space management. Section 3 describes the ParalleX model and the performance drivers motivating it. Section 4 describes the HPX runtime system architecture and particular implementation details important to the results. Section 5 describes early experiments that demonstrate key functional attributes of the runtime system. Section 6 presents and discusses the results, with concluding comments and future work presented in Section 7.

… flow, and employ adaptive scheduling and routing to mitigate contention (e.g., memory bank conflicts). Scalability will be dramatically increased, at least for certain classes of problems, through data directed computing using message-driven computation and lightweight synchronization mechanisms that will exploit the parallelism intrinsic to dynamic directed graphs through their meta-data. As a consequence, sustained performance will be dramatically improved both in absolute terms through extended scalability for those applications currently constrained, and in relative terms due to enhanced efficiency achieved. Finally, power reductions will be achieved by reducing extraneous calculations and data movements. Speculative execution and speculative prefetching are largely eliminated while dynamic adaptive methods and multithreading in combination serve many of the purposes these conventionally provide.

ParalleX replaces the conventional communicating sequential processes model to address the challenges imposed by post-Petascale computing, multicore arrays, heterogeneous accelerators, and the specific class of dynamic graph problems, while exploiting the opportunities of multithreaded multicore technologies and architectures. Instead of statically allocated processes using message-passing for communication and synchronization, dynamically scheduled multiple threads using message-driven mechanisms for moving the work to the data and local control objects for synchronization are employed. The result is a new dynamic based on local synchrony and global asynchronous operation. Key to the efficiency and latency hiding of ParalleX is the message-driven work-queue methodology of applying user tasks to physical processing resources. This separates the work from the resources and, given sufficient parallelism, allows the processor cores to continue to do useful work even in the presence of remote service requests and data accesses. This results in system-wide latency hiding intrinsic to the paradigm.

Global barriers are essentially eliminated as the principal means of synchronization, and instead replaced by lightweight Local Control Objects (LCOs) that can be used for a plethora of purposes, from simple mutex constructs to sophisticated futures semantics for anonymous producer-consumer operation. LCOs enable event-driven thread creation and can support in-place data structure protection and on-the-fly scheduling. One consequence of this is the elimination of conventional critical sections, with equivalent functionality achieved local to the data structure contended for by embedded local control objects and dynamic futures. The paral- …

3. THE PARALLEX MODEL OF PARALLEL EXECUTION

A phase change in high performance computing is driven by significant advances in enabling technologies requiring new computer architectures to exploit the performance benefits of the technologies, while compensating for the challenges they present. Fundamental to such effective change, the architectures that reflect them, and the programming models that enable access and user exploitation of them is the transformative execution model that establishes the semantic framework tying all levels together. Historically, HPC has experienced at least five such phase changes in models of computation over the last six decades, each enabling a successive technology generation. Currently significant advances in technology are forcing dramatic changes in their usage as reflected by early multicore heterogeneous architectures suggesting the need for new approaches to application programming.

The goal of the ParalleX model of computation is to address the key challenges of efficiency, scalability, sustained performance, and power consumption with respect to the limitations of conventional programming practices (e.g., MPI). ParalleX …
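To make the futures and LCO ideas described above more concrete, here is a minimal sketch in standard C++. It is an analogy only, assuming nothing about the PXI API or HPX interfaces beyond what the excerpt states: a future plays the role of a lightweight local control object on which a consumer synchronizes locally, so no global barrier is needed for this anonymous producer-consumer exchange.

```cpp
// Illustrative sketch only: standard C++ futures standing in for ParalleX
// LCOs/futures; the real HPX/PXI interfaces are not reproduced here.
#include <functional>
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// "Producer" task: could run on any worker in a message-driven work queue.
static long partial_sum(const std::vector<long>& data, std::size_t begin, std::size_t end) {
    return std::accumulate(data.begin() + begin, data.begin() + end, 0L);
}

int main() {
    std::vector<long> data(1'000'000, 1);

    // Launch two producers; each std::future plays the role of an LCO that
    // the consumer synchronizes on locally, with no global barrier.
    auto lo = std::async(std::launch::async, partial_sum, std::cref(data), 0, data.size() / 2);
    auto hi = std::async(std::launch::async, partial_sum, std::cref(data), data.size() / 2, data.size());

    // The consumer can keep doing useful work here while the producers run...
    long total = lo.get() + hi.get();  // ...and only waits where the values are needed.
    std::cout << "total = " << total << '\n';
}
```

According to the excerpt, ParalleX generalizes this pattern with message-driven work queues and runtime-managed threads, so waiting happens where a value is consumed rather than at a system-wide synchronization point.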
Recommended publications
  • An Execution Model for Serverless Functions at the Edge
    An Execution Model for Serverless Functions at the Edge. Adam Hall and Umakishore Ramachandran, Georgia Institute of Technology, Atlanta, Georgia (ach@gatech.edu, rama@gatech.edu).

    ABSTRACT: Serverless computing platforms allow developers to host single-purpose applications that automatically scale with demand. In contrast to traditional long-running applications on dedicated, virtualized, or container-based platforms, serverless applications are intended to be instantiated when called, execute a single function, and shut down when finished. State-of-the-art serverless platforms achieve these goals by creating a new container instance to host a function when it is called and destroying the container when it completes. This design allows for cost and resource savings when hosting simple applications, such as those supporting IoT devices at the edge of the network. However, the use of containers introduces some overhead which may be unsuitable for applications …

    1 INTRODUCTION: Enabling next generation technologies such as self-driving cars or smart cities via edge computing requires us to reconsider the way we characterize and deploy the services supporting those technologies. Edge/fog environments consist of many micro data centers spread throughout the edge of the network. This is in stark contrast to the cloud, where we assume the notion of unlimited resources available in a few centralized data centers. These micro data center environments must support large numbers of Internet of Things (IoT) devices on limited hardware resources, processing the massive amounts of data those devices generate while providing quick decisions to inform their actions [44]. One solution to supporting emerging technologies at the edge lies in serverless computing.
  • CS 2210: Theory of Computation
    CS 2210: Theory of Computation — Spring, 2019

    Administrative Information
    • Background survey
    • Textbook: E. Rich, Automata, Computability, and Complexity: Theory and Applications, Prentice-Hall, 2008. Book website: http://www.cs.utexas.edu/~ear/cs341/automatabook/
    • My Office Hours: Monday, 6:00-8:00pm, Searles 224; Tuesday, 1:00-2:30pm, Searles 222
    • TAs: Anjulee Bhalla (Hours TBA, Searles 224); Ryan St. Pierre (Hours TBA, Searles 224)

    What you can expect from the course: how to do proofs; models of computation; what's the difference between computability and complexity?; what's the Halting Problem?; what are P and NP?; why do we care whether P = NP?; what are NP-complete problems?; where does this make a difference outside of this class?; how to work the answers to these questions into the conversation at a cocktail party…

    What I will expect from you:
    • Problem Sets (25%): problems given on Mondays and Wednesdays, due the next Monday, graded by the following Monday; a learning tool, not a testing tool; collaboration encouraged (more on this in the next slide)
    • Quizzes (15%)
    • Exams (2 non-cumulative, 30% each): closed book, closed notes, but you can bring in an 8.5 x 11 page with notes on both sides
    • Class participation: tiebreaker

    Other Important Things: go to the TA hours; study and work on problem sets in groups. Collaboration levels: Level 0 (in-class problems) – no restrictions; Level 1 (homework problems) – verbal collaboration, but individual write-ups; Level 2 (not used in this course) – discussion with TAs only; Level 3 (exams) – professor clarifications only.

    Right now… What does it mean to study the "theory" of something? Experience with theory in other disciplines? Relationship to practice? – "In theory, theory and practice are the same. …
  • The Problem with Threads
    The Problem with Threads. Edward A. Lee, Electrical Engineering and Computer Sciences, University of California at Berkeley. Technical Report No. UCB/EECS-2006-1, http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.html, January 10, 2006.

    Copyright © 2006, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

    Acknowledgement: This work was supported in part by the Center for Hybrid and Embedded Software Systems (CHESS) at UC Berkeley, which receives support from the National Science Foundation (NSF award No. CCR-0225610), the State of California Micro Program, and the following companies: Agilent, DGIST, General Motors, Hewlett Packard, Infineon, Microsoft, and Toyota.

    The Problem with Threads. Edward A. Lee, Professor, Chair of EE, Associate Chair of EECS, EECS Department, University of California at Berkeley, Berkeley, CA 94720, U.S.A. January 10, 2006.

    Abstract: Threads are a seemingly straightforward adaptation of the dominant sequential model of computation to concurrent systems. Languages require little or no syntactic changes to support threads, and operating systems and architectures have evolved to efficiently support them. Many technologists are pushing for increased use of multithreading in software in order to take advantage of the predicted increases in parallelism in computer architectures.
  • Toward IFVM Virtual Machine: a Model Driven IFML Interpretation
    Toward IFVM Virtual Machine: A Model Driven IFML Interpretation. Sara Gotti and Samir Mbarki, MISC Laboratory, Faculty of Sciences, Ibn Tofail University, BP 133, Kenitra, Morocco.

    Keywords: Interaction Flow Modelling Language IFML, Model Execution, Unified Modeling Language (UML), IFML Execution, Model Driven Architecture MDA, Bytecode, Virtual Machine, Model Interpretation, Model Compilation, Platform Independent Model PIM, User Interfaces, Front End.

    Abstract: UML is the first international modeling language standardized since 1997. It aims at providing a standard way to visualize the design of a system, but it can't model the complex design of user interfaces and interactions. However, according to the MDA approach, it is necessary to apply the concept of abstract models to user interfaces too. IFML is the OMG-adopted (in March 2013) standard Interaction Flow Modeling Language designed for abstractly expressing the content, user interaction and control behaviour of the software applications front-end. IFML is a platform independent language; it has been designed with an executable semantic and it can be mapped easily into executable applications for various platforms and devices. In this article we present an approach to execute IFML. We introduce an IFVM virtual machine which translates the IFML models into bytecode that will be interpreted by the Java virtual machine.

    1 INTRODUCTION: Software development has been affected by the apparition of the MDA (OMG, 2015) approach, the trend of the 21st century (BRAMBILLA et al., 2014), which has allowed developers to build their … a fundamental standard fUML (OMG, 2011), which is a subset of UML that contains the most relevant part of class diagrams for modeling the data structure and activity diagrams to specify system behavior; it contains all UML elements that are helpful for the execution of the models.
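The execution pipeline the abstract describes — compile an abstract model down to bytecode, then let a virtual machine interpret it — can be illustrated with a toy stack-machine interpreter. The opcodes below are invented for this sketch; they are not the IFVM or JVM instruction sets.

```cpp
// Toy stack-machine interpreter illustrating "compile to bytecode, then
// interpret in a VM". Opcodes are hypothetical, chosen only for this sketch.
#include <cstdint>
#include <iostream>
#include <vector>

enum class Op : uint8_t { Push, Add, Mul, Print, Halt };

struct Instr { Op op; int64_t arg; };

void run(const std::vector<Instr>& code) {
    std::vector<int64_t> stack;
    for (std::size_t pc = 0; pc < code.size(); ++pc) {
        const Instr& in = code[pc];
        switch (in.op) {
            case Op::Push:  stack.push_back(in.arg); break;
            case Op::Add:   { auto b = stack.back(); stack.pop_back(); stack.back() += b; break; }
            case Op::Mul:   { auto b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
            case Op::Print: std::cout << stack.back() << '\n'; break;
            case Op::Halt:  return;
        }
    }
}

int main() {
    // "Bytecode" that a model compiler might emit for the expression (2 + 3) * 4.
    std::vector<Instr> program = {
        {Op::Push, 2}, {Op::Push, 3}, {Op::Add, 0},
        {Op::Push, 4}, {Op::Mul, 0}, {Op::Print, 0}, {Op::Halt, 0},
    };
    run(program);  // prints 20
}
```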
  • A Brief History of Just-In-Time Compilation
    A Brief History of Just-In-Time. John Aycock, University of Calgary.

    Software systems have been using "just-in-time" compilation (JIT) techniques since the 1960s. Broadly, JIT compilation includes any translation performed dynamically, after a program has started execution. We examine the motivation behind JIT compilation and constraints imposed on JIT compilation systems, and present a classification scheme for such systems. This classification emerges as we survey forty years of JIT work, from 1960–2000.

    Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors; K.2 [History of Computing]: Software. General Terms: Languages, Performance. Additional Key Words and Phrases: Just-in-time compilation, dynamic compilation.

    1. INTRODUCTION

    "Those who cannot remember the past are condemned to repeat it." — George Santayana, 1863–1952 [Bartlett 1992]

    This oft-quoted line is all too applicable in computer science. Ideas are generated, explored, set aside—only to be reinvented years later. Such is the case with what is now called "just-in-time" (JIT) or dynamic compilation, which refers to translation that occurs after a program begins execution. Strictly speaking, JIT compilation systems ("JIT systems" for short) are completely unnecessary. …

    … into a form that is executable on a target platform. What is translated? The scope and nature of programming languages that require translation into executable form covers a wide spectrum. Traditional programming languages like Ada, C, and Java are included, as well as little languages [Bentley 1988] such as regular expressions. Traditionally, there are two approaches to translation: compilation and interpretation. Compilation translates one language into another—C to assembly language, for example—with the implication that the translated form will be more amenable …
  • A Parallel Program Execution Model Supporting Modular Software Construction
    A Parallel Program Execution Model Supporting Modular Software Construction. Jack B. Dennis, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A. [email protected]

    Abstract: A watershed is near in the architecture of computer systems. There is overwhelming demand for systems that support a universal format for computer programs and software components so users may benefit from their use on a wide variety of computing platforms. At present this demand is being met by commodity microprocessors together with standard operating system interfaces. However, current systems do not offer a standard API (application program interface) for parallel programming, and the popular interfaces for parallel computing violate essential principles of modular or component-based software construction. Moreover, microprocessor architecture is reaching the limit of what can be done usefully within the framework of superscalar and VLIW processor models. The next step is to put several processors (or the equivalent) on a single chip.

    … as a guide for computer system design—follows from basic requirements for supporting modular software construction. The fundamental theme of this paper is: the architecture of computer systems should reflect the requirements of the structure of programs. The programming interface provided should address software engineering issues, in particular, the ability to practice the modular construction of software. The positions taken in this presentation are contrary to much conventional wisdom, so I have included a question/answer dialog at appropriate places to highlight points of debate. We start with a discussion of the nature and purpose of a program execution model. Our Parallelism …
  • Computer Architecture: Dataflow (Part I)
    Computer Architecture: Dataflow (Part I). Prof. Onur Mutlu, Carnegie Mellon University.

    A Note on This Lecture: these slides are from 18-742 Fall 2012, Parallel Computer Architecture, Lecture 22: Dataflow I. Video: http://www.youtube.com/watch?v=D2uue7izU2c&list=PL5PHm2jkkXmh4cDkC3s1VBB7-njlgiG5d&index=19

    Some Required Dataflow Readings:
    • Dataflow at the ISA level: Dennis and Misunas, "A Preliminary Architecture for a Basic Data Flow Processor," ISCA 1974; Arvind and Nikhil, "Executing a Program on the MIT Tagged-Token Dataflow Architecture," IEEE TC 1990.
    • Restricted Dataflow: Patt et al., "HPS, a new microarchitecture: rationale and introduction," MICRO 1985; Patt et al., "Critical issues regarding HPS, a high performance microarchitecture," MICRO 1985.

    Other Related Recommended Readings:
    • Dataflow: Gurd et al., "The Manchester prototype dataflow computer," CACM 1985; Lee and Hurson, "Dataflow Architectures and Multithreading," IEEE Computer 1994.
    • Restricted Dataflow: Sankaralingam et al., "Exploiting ILP, TLP and DLP with the Polymorphous TRIPS Architecture," ISCA 2003; Burger et al., "Scaling to the End of Silicon with EDGE Architectures," IEEE Computer 2004.

    Today: Start Dataflow

    Data Flow Readings: Data Flow (I):
    • Dennis and Misunas, "A Preliminary Architecture for a Basic Data Flow Processor," ISCA 1974.
    • Treleaven et al., "Data-Driven and Demand-Driven Computer Architecture," ACM Computing Surveys 1982.
    • Veen, "Dataflow Machine Architecture," ACM Computing Surveys 1986.
    • Gurd et al., "The Manchester prototype dataflow computer," CACM 1985.
    • Arvind and Nikhil, "Executing a Program on the MIT Tagged-Token Dataflow Architecture," IEEE TC 1990.
  • ECE 485/585 Lec2.pdf
    ECE 485/585 Microprocessor System Design — Lecture 2: Memory Addressing, 8086 Basics and Bus Timing, Asynchronous I/O Signaling. Zeshan Chishti, Electrical and Computer Engineering Dept, Maseeh College of Engineering and Computer Science. Source: lecture based on materials provided by Mark F.

    Outline for next few lectures: simple model of computation; memory addressing (alignment, byte order); 8088/8086 bus; asynchronous I/O signaling. Review of basic I/O: how is I/O performed (dedicated/isolated/direct I/O ports, memory-mapped I/O); how do we tell when an I/O device is ready or a command is complete (polling, interrupts); how do we transfer data (programmed I/O, DMA).

    Simplified Model of a Computer: [diagram of a microprocessor (fetch, decode, execute), memory, and I/O devices (keyboard, mouse, video display, printer, hard disk drive, audio card, Ethernet, WiFi, CD R/W, DVD) connected by control, data, and address paths]

    Memory Addressing — size of operands: bytes, words, long/double words, quadwords; 16-bit half word (Intel: word); 32-bit word (Intel: doubleword, dword); 64-bit double word (Intel: quadword, qword). Note: the names are non-standard; a SUN Sparc word is 32 bits and a double is 64 bits.

    Alignment: can multi-byte operands begin at any byte address? Yes: non-aligned. No: aligned — the low-order address bit(s) will be zero. [diagram of byte addresses 0x100–0x107 showing aligned and unaligned word, doubleword, and quadword addresses in Intel IA speak (word = 16 bits = 2 bytes): aligned word addresses end in -----0, aligned doubleword addresses in ----00, aligned quadword addresses in ---000]

    Memory Operand Alignment — why do we care? Unaligned memory references can cause multiple memory bus cycles for a single operand and may also span cache lines, requiring multiple evictions and multiple cache line fills, which complicates memory system and cache controller design. Some architectures restrict addresses to be aligned. Even in architectures without alignment restrictions (e.g. …
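The aligned-address rule quoted above (the low-order address bits of an aligned operand are zero) can be checked with a one-line mask. A small illustrative sketch, using the Intel operand sizes from the slide (word = 2 bytes, doubleword = 4, quadword = 8):

```cpp
// Natural alignment check: an address is aligned for an operand of 2^k bytes
// when its k low-order bits are zero, matching the slide's -----0 / ----00 /
// ---000 patterns for word, doubleword, and quadword addresses.
#include <cstdint>
#include <cstdio>

static bool is_aligned(uint64_t address, unsigned operand_bytes) {
    // operand_bytes is assumed to be a power of two (2, 4, 8, ...).
    return (address & (operand_bytes - 1)) == 0;
}

int main() {
    for (uint64_t addr = 0x100; addr <= 0x107; ++addr) {
        std::printf("0x%llx: word %s, dword %s, qword %s\n",
                    static_cast<unsigned long long>(addr),
                    is_aligned(addr, 2) ? "aligned" : "unaligned",
                    is_aligned(addr, 4) ? "aligned" : "unaligned",
                    is_aligned(addr, 8) ? "aligned" : "unaligned");
    }
}
```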
  • CNT6008 Network Programming Module - 11 Objectives
    CNT6008 Network Programming — Module 11 Objectives

    Skills/Concepts/Assignments and objectives:
    • ASP.NET Overview — learn the Framework, understand the different platforms, compare to the Java platform.
    • Final Project — define your final project requirements.
    • Section 21 – Web App Development and ASP.NET; Section 27 – Web App Development with ASP.NET — read Sections 21 and 27, pages 649 to 694 and 854 to 878.

    Overview of ASP.NET — section goals: understanding Windows Foundation Project concepts; understanding the .NET Framework; creating an ASP.NET client and server application; understanding the Visual Studio development environment; creating an ASP project.

    .NET – What Is It? A software platform, language neutral. In other words, .NET is not a language: it is a runtime and a library for writing and executing programs written in any compliant language. .NET is a framework for developing web-based and Windows-based applications within the Microsoft environment. The framework offers a fundamental shift in Microsoft strategy: it moves application development from client-centric to server-centric. [diagram: a .NET application runs on the .NET Framework, which runs on the operating system and hardware; the framework stack comprises languages and tools (VB, VC++, VC#, JScript, …, the Common Language Specification, Visual Studio.NET), ASP.NET (Web Services, Web Forms), Windows Forms, ADO.NET (Data and XML), the Base Class Library, and the Common Language Runtime]

    .NET Framework services: Common Language Runtime; Windows Communication Framework (WCF); Windows® Forms; ASP.NET (Active Server Pages); Web Forms; Web Services; ADO.NET (evolution of ADO); Visual Studio.NET.

    Common Language Runtime (CLR): the CLR works like a virtual machine in executing all languages; all .NET languages must obey the rules and standards imposed by the CLR.
  • The CLR's Execution Model
    Chapter 1: The CLR's Execution Model

    In this chapter: Compiling Source Code into Managed Modules; Combining Managed Modules into Assemblies; Loading the Common Language Runtime; Executing Your Assembly's Code; The Native Code Generator Tool: NGen.exe; Introducing the Framework Class Library; The Common Type System; The Common Language Specification; Interoperability with Unmanaged Code.

    The Microsoft .NET Framework introduces many new concepts, technologies, and terms. My goal in this chapter is to give you an overview of how the .NET Framework is designed, introduce you to some of the new technologies the framework includes, and define many of the terms you'll be seeing when you start using it. I'll also take you through the process of building your source code into an application or a set of redistributable components (files) that contain types (classes, structures, etc.) and then explain how your application will execute.

    Compiling Source Code into Managed Modules: OK, so you've decided to use the .NET Framework as your development platform. Great! Your first step is to determine what type of application or component you intend to build. Let's just assume that you've completed this minor detail; everything is designed, the specifications are written, and you're ready to start development. Now you must decide which programming language to use. This task is usually difficult because different languages offer different capabilities. For example, in unmanaged C/C++, you have pretty low-level control of the system.
  • 10. Assembly Language, Models of Computation
    10. Assembly Language, Models of Computation — 6.004x Computation Structures, Part 2: Computer Architecture. Copyright © 2015 MIT EECS.

    Beta ISA Summary
    • Storage: processor has 32 registers (r31 hardwired to 0) and a PC; main memory is up to 4 GB of 32-bit words, with 32-bit byte addresses and 4-byte-aligned accesses.
    • Instruction formats (32 bits): OPCODE, rc, ra, rb, unused; or OPCODE, rc, ra, 16-bit signed constant.
    • Instruction classes: ALU (two input registers, or register and constant); loads and stores (access memory); branches and jumps (change the program counter).

    Programming Languages: the 32-bit (4-byte) ADD instruction 100000 00100 00010 00011 00000000000 (fields: opcode, rc, ra, rb, unused) means, to the Beta, Reg[4] ← Reg[2] + Reg[3]. We'd rather write in assembly language (today): ADD(R2, R3, R4) — or better yet in a high-level language (coming up): a = b + c;

    Assembly Language: an assembler translates a symbolic representation (a source text file) into an array of bytes to be loaded into memory (binary machine language). It abstracts the bit-level representation of instructions and addresses. We'll learn UASM (a "microassembler"), built into BSim. Main elements: values, symbols, labels (symbols for addresses), and macros.
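The ADD example above can be packed and unpacked mechanically from the field layout the slide gives (a 6-bit opcode in the top bits, then 5-bit rc, ra, and rb fields, with the low 11 bits unused). A small sketch, where the opcode value 0b100000 is read off the example encoding rather than from a full Beta ISA reference:

```cpp
// Encode/decode the Beta ADD example from the slide: fields are
// opcode[31:26], rc[25:21], ra[20:16], rb[15:11]; bits [10:0] are unused.
#include <cstdint>
#include <cstdio>

constexpr uint32_t encode_op(uint32_t opcode, uint32_t rc, uint32_t ra, uint32_t rb) {
    return (opcode << 26) | ((rc & 31u) << 21) | ((ra & 31u) << 16) | ((rb & 31u) << 11);
}

int main() {
    // ADD(R2, R3, R4): Reg[4] <- Reg[2] + Reg[3]; the ADD opcode is 0b100000 in the example.
    uint32_t word = encode_op(0b100000, 4, 2, 3);
    std::printf("encoding: 0x%08x\n", word);  // 0x80821800, matching the slide's bit pattern
    std::printf("opcode=%u rc=%u ra=%u rb=%u\n",
                word >> 26, (word >> 21) & 31u, (word >> 16) & 31u, (word >> 11) & 31u);
}
```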
  • Executing UML Models
    Executing UML Models. Miguel Pinto Luz and Alberto Rodrigues da Silva, Instituto Superior Técnico, Av. Rovisco Pais, 1049-001 Lisboa, Portugal. {miguelluz, alberto.silva}@acm.org

    Abstract: Software development evolution is a history of permanent seeking to raise the abstraction level to new limits, overcoming new frontiers. Executable UML (xUML) comes this way as the expectation to achieve the next level in abstraction, offering the capability of deploying an xUML model in a variety of software environments and platforms without any changes. This paper comes as a first expedition inside xUML, exploring the main aspects of its specification, including the action languages support and the fundamental MDA compliance. In this paper is presented a future new xUML tool called XIS-xModels that gives Microsoft Visio new capabilities of running and debugging xUML models. This paper is an outline of the capabilities and main features of the future application.

    Keywords: UML, executable UML, Model Debugging, Action Language.

    1. Introduction: In a dictionary we find that an engineer is a person who uses scientific knowledge to solve practical problems, planning and directing, but an information technology engineer is someone who spends his time implementing lines of code instead of being focused on planning and projecting. Processes, meetings, models, documents, or even code are superfluous artifacts for organizations: all they need is a well designed and fast implemented working system, which moves them towards a new software development paradigm based on high-level executable models. According to this new paradigm it should be possible to reduce development time and costs, and to bring new product quality warranties, only reachable by executing, debugging and testing early design stages (models).