A Domain-Specific Approach to Heterogeneous Parallelism


A Domain-Specific Approach to Heterogeneous Parallelism

Hassan Chafi, Arvind K. Sujeeth, Kevin J. Brown, HyoukJoong Lee, Anand R. Atreya, Kunle Olukotun
Pervasive Parallelism Laboratory, Stanford University
{hchafi, asujeeth, kjbrown, hyouklee, aatreya, kunle}@stanford.edu

Abstract

Exploiting heterogeneous parallel hardware currently requires mapping application code to multiple disparate programming models. Unfortunately, general-purpose programming models available today can yield high performance but are too low-level to be accessible to the average programmer. We propose leveraging domain-specific languages (DSLs) to map high-level application code to heterogeneous devices. To demonstrate the potential of this approach we present OptiML, a DSL for machine learning. OptiML programs are implicitly parallel and can achieve high performance on heterogeneous hardware with no modification required to the source code. For such a DSL-based approach to be tractable at large scales, better tools are required for DSL authors to simplify language creation and parallelization. To address this concern, we introduce Delite, a system designed specifically for DSLs that is both a framework for creating an implicitly parallel DSL as well as a dynamic runtime providing automated targeting to heterogeneous parallel hardware. We show that OptiML running on Delite achieves single-threaded, parallel, and GPU performance superior to explicitly parallelized MATLAB code in nearly all cases.

Categories and Subject Descriptors: D.1.3 [Programming Techniques]: Concurrent Programming – Parallel programming; D.3.4 [Programming Languages]: Processors – Code generation, Optimization, Run-time environments

General Terms: Languages, Performance

Keywords: Parallel Programming, Domain-Specific Languages, Dynamic Optimizations

PPoPP'11, February 12–16, 2011, San Antonio, Texas, USA. Copyright © 2011 ACM 978-1-4503-0119-0/11/02. $10.00.

1. Introduction

Current industry trends favor chip multiprocessors consisting of simpler cores [18, 29] as well as heterogeneous systems consisting of general-purpose processors, SIMD units and accelerator devices such as GPUs [3, 31]. Existing applications can no longer take advantage of the additional compute power available in these new and emerging systems without a significant parallel programming effort. Writing parallel programs, however, is not straightforward because in contrast to the familiar and standard von Neumann model for sequential programming, a variety of incompatible parallel programming models are available, each with their own set of trade-offs. Emerging heterogeneous systems further complicate this challenge as each accelerator vendor usually provides a distinct driver API and programming model to interface with the device.

It is not realistic to expect the average programmer to deal with all this complexity. Moreover, exposing the programmer directly to the various models supported by each compute device will ultimately be detrimental to application portability, forward scalability and maintenance. As new system configurations emerge, applications will constantly need to be rewritten to take advantage of any new capabilities. It is essential to develop appropriate abstractions so that programmers can write high-level code and not worry about low-level details that negatively impact productivity. Thus, there is a need for parallel heterogeneous programming models that target average programmers who are not interested in becoming parallel/heterogeneous programming experts. This mass market parallel heterogeneous programming model should be driven by the following goals:

• Productivity: the application developer can, ideally, write programs without having to use any explicit parallel or heterogeneous constructs.
• Performance: the application should achieve good performance without sacrificing productivity. The system metric should be performance per man-hour.
• Portability and Forward Scalability: the application should leverage the varying amount of compute resources across different systems, both existing and emerging. The forward scalability goal manifests itself across two dimensions: the number of a particular compute resource and the diversity of compute resource types.
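To make the productivity goal above concrete, the sketch below shows the style of application code these goals imply: bulk domain operations with no explicit threads, locks, or device management. It is an illustrative Scala sketch only and is not OptiML syntax; the Vec type and its operations are hypothetical stand-ins, and the parallelism is hidden inside a bulk operation (here a Java parallel stream) the way a DSL runtime would hide its mapping to parallel or GPU hardware.

// Illustrative sketch only -- NOT OptiML. A toy "Vec" type whose bulk operations
// hide their parallel implementation, so user code stays sequential in appearance.
object ImplicitlyParallelSketch {
  final case class Vec(data: Array[Double]) {
    def +(that: Vec): Vec = Vec(data.zip(that.data).map { case (a, b) => a + b })
    def *(s: Double): Vec = Vec(data.map(_ * s))
    // Parallelism is an implementation detail of the operation, not of the program.
    def sum: Double = java.util.Arrays.stream(data).parallel().sum()
  }

  def main(args: Array[String]): Unit = {
    val x = Vec(Array.tabulate(1000000)(_.toDouble))
    val y = Vec(Array.fill(1000000)(2.0))
    // No explicit parallel or heterogeneous constructs appear in user code.
    println((x + y * 0.5).sum)
  }
}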
There has been a resurgence in research aimed at simplifying parallel programming [8] and delivering on these goals. This paper describes key elements of an ongoing effort to create a development environment that uses a domain-specific approach to solve the issues relating to heterogeneous parallelism. The components of this environment are shown in Figure 1. The environment consists of four main components: applications composed of multiple domain-specific languages (DSLs), DSLs embedded in the Scala programming language [28], a Scala-based framework that simplifies the parallelization of DSLs and a runtime for DSL parallelization and mapping to heterogeneous architectures.

[Figure 1: An environment for domain-specific programming of heterogeneous parallel architectures. The figure stacks: Applications (scientific engineering, virtual worlds, personal robotics, data informatics); Domain-Specific Languages (rendering, physics, scripting, probabilistic, machine learning/OptiML); Domain Embedding Language (Scala); Parallelization Framework (Delite) with static domain-specific optimizations and DSL infrastructure; Parallel Runtime (Delite) with dynamic domain-specific optimizations, task and data parallelism, and locality-aware scheduling; Heterogeneous Hardware (OOO cores, SIMD cores, threaded cores, specialized cores).]

A domain-specific approach to parallel programming can address all of the goals of a mass market parallel heterogeneous programming model. A domain-specific language is a computer programming language of restricted expressiveness focused on a particular domain [35]. DSLs are in widespread use in a variety of domains and are becoming more popular. Examples of widely used DSLs are TeX and LaTeX for typesetting academic papers, SQL for database querying, Rails for web application development and VHDL for hardware design. OpenGL can also be viewed as a DSL. By exposing an interface for specifying polygons and the rules to shade them, OpenGL created a high-level programming model for real-time graphics decoupled from the hardware or software used to render it, allowing for aggressive performance gains as graphics hardware evolves. The use of DSLs can provide significant gains in the productivity and creativity of application developers, the portability of applications, and application performance. We exploit this trend towards DSLs and propose an approach to parallel heterogeneous programming that hides the complexity of the underlying machine behind a collection of DSLs. A programmer using one or more of these DSLs writes her programs using domain-specific notation and constructs. The programs appear sequential and all parallelism and use of the heterogeneous machine resources is implicit. DSLs raise the level of abstraction and can provide a sequential model which satisfies the productivity goal.
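As an illustration of what embedding buys the DSL author, the sketch below uses ordinary Scala features (case classes, implicit enrichment, operator-style methods) to give domain objects their own notation; the user code reads like a small graphics DSL in the spirit of the OpenGL example above, yet it is plain, sequential-looking Scala. The Poly type and its operations are hypothetical and are not part of Delite or of any DSL mentioned in the paper.

// Hypothetical sketch of DSL embedding in Scala -- not the Delite machinery.
object EmbeddingSketch {
  final case class Poly(points: List[(Double, Double)])

  // Domain notation is added with implicit enrichment; the "language" is a library.
  implicit class PolyOps(val p: Poly) extends AnyVal {
    def translate(dx: Double, dy: Double): Poly =
      Poly(p.points.map { case (x, y) => (x + dx, y + dy) })
    def area: Double = {
      // Shoelace formula over the closed polygon.
      val pts = p.points :+ p.points.head
      pts.sliding(2).collect { case List((x1, y1), (x2, y2)) => x1 * y2 - x2 * y1 }.sum / 2.0
    }
  }

  def main(args: Array[String]): Unit = {
    val triangle = Poly(List((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))
    println(triangle.translate(2.0, 0.0).area)   // reads like domain notation, runs as Scala
  }
}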
An additional benefit of using a domain-specific approach is the ability to use domain knowledge to apply static and dynamic optimizations to a program written using a DSL. Most of these domain-specific optimizations would not be possible if the program was written in a general-purpose language. General-purpose languages are limited when it comes to optimization for at least two reasons. First, they must produce correct code across a very wide range of applications. This makes it difficult to apply aggressive optimizations. Compiler developers must err on the side of correctness. Sec-

Since interesting applications might leverage a variety of DSLs, it is critical to not only simplify the development of DSLs by creating a shared infrastructure, but also to allow these DSLs to interoperate. Our current approach is to embed these DSLs in a common embedding language. Scala, our choice for the embedding language, provides features that simplify this task [9, 16]. This approach should be applicable to any sufficiently expressive embedding language.

The ability to easily embed DSLs simplifies the task of a DSL developer. However, assistance in parallelizing and targeting heterogeneous resources is also needed. Delite, our framework and runtime for building and executing parallel DSLs, provides facilities that allow DSL developers to easily parallelize their DSLs. Using Delite, a DSL developer implicitly exposes task-level parallelism by enabling a run-ahead model, similar to recent proposals [13, 19], across each invocation of the DSL's operations. Delite also allows the developer to express data-level parallelism available within DSL operations. Using such a runtime allows us to deliver on our portability and forward scalability goal. We provide details of the Delite framework and runtime in Section 3. Our specific contributions are:

• We present OptiML, a DSL for machine learning, which provides implicitly parallel domain-specific abstractions. We show that such a DSL can be used to simplify programming heterogeneous parallel systems.
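The run-ahead and data-parallel execution described above can be illustrated with a toy sketch in plain Scala: each operation is submitted asynchronously so independent operations can run ahead of one another, and a map-shaped operation additionally splits its input into chunks for data parallelism. This does not reflect the actual Delite API; the names RuntimeSketch and parallelMap are invented for this example.

// Toy illustration of run-ahead task parallelism plus data parallelism -- NOT the Delite API.
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

object RuntimeSketch {
  // A data-parallel, map-shaped operation: the framework decides how to chunk and schedule it.
  def parallelMap[A, B](in: Vector[A], chunks: Int)(f: A => B): Future[Vector[B]] = {
    val groups = in.grouped(math.max(1, in.size / chunks)).toVector
    Future.traverse(groups)(g => Future(g.map(f))).map(_.flatten)
  }

  def main(args: Array[String]): Unit = {
    // Two independent "DSL operations" start immediately; neither blocks the other (run-ahead).
    val squares = parallelMap(Vector.range(0, 100000), chunks = 8)(x => x.toLong * x)
    val roots   = parallelMap(Vector.range(1, 100000), chunks = 8)(x => math.sqrt(x.toDouble))
    // Results are awaited only when the values are actually needed.
    println(Await.result(squares, Duration.Inf).take(3))
    println(Await.result(roots, Duration.Inf).take(3))
  }
}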
Recommended publications
  • A Taxonomy of Accelerator Architectures and Their Programming Models
    A taxonomy of accelerator architectures and their programming models. C. Caşcaval, S. Chatterjee, H. Franke, K. J. Gildea, P. Pattnaik. As the clock frequency of silicon chips is leveling off, the computer architecture community is looking for different solutions to continue application performance scaling. One such solution is the multicore approach, i.e., using multiple simple cores that enable higher performance than wide superscalar processors, provided that the workload can exploit the parallelism. Another emerging alternative is the use of customized designs (accelerators) at different levels within the system. These are specialized functional units integrated with the core, specialized cores, attached processors, or attached appliances. The design tradeoff is quite compelling because current processor chips have billions of transistors, but they cannot all be activated or switched at the same time at high frequencies. Specialized designs provide increased power efficiency but cannot be used as general-purpose compute engines. Therefore, architects trade area for power efficiency by placing in the design additional units that are known to be active at different times. The resulting system is a heterogeneous architecture, with the potential of specialized execution that accelerates different workloads. While designing and building such hardware systems is attractive, writing and porting software to a heterogeneous platform is even more challenging than parallelism for homogeneous multicore systems. In this paper, we propose a taxonomy that allows us to define classes of accelerators, with the goal of focusing on a small set of programming models for accelerators. We discuss several types of currently popular accelerators and identify challenges to exploiting such accelerators in current software stacks.
  • End-User Debugging for E-Commerce
    End-User Debugging for E-Commerce. Henry Lieberman, Earl Wagner. MIT Media Lab, 20 Ames St, Cambridge, MA 02139 USA. {lieber, ewagner}@media.mit.edu. ABSTRACT: One of the biggest unaddressed challenges for the digital economy is what to do when electronic transactions go wrong. Consumers are frustrated by interminable phone menus, and long delays to problem resolution. Businesses are frustrated by the high cost of providing quality customer service. We believe that many simple problems, such as mistyped numbers or lost orders, could be easily diagnosed if users were supplied with end-user debugging tools, analogous to tools for software debugging. These tools can show the history of actions and data, and provide assistance for keeping track of and testing hypotheses. These tools would benefit not only users, but businesses as well by decreasing the need for customer service.
    [From the body of the paper:] ...another phone number to be dialed. It might ask for card numbers or transaction numbers that aren't readily at hand, and have to be looked up offline. If someone in customer service is successfully reached, that person (often a low-paid worker in a high-pressure call center) may specify a tedious process to be performed. They may not be empowered to actually understand or fix the problem themselves. Customers find themselves bounced endlessly from one support person to another. All of us have had these kinds of experiences. Customer service problems are incredibly frustrating. Not only do they cause frustration about the immediate transaction, they also poison the relationship between customers and vendors. Customers feel like they are being deflected, that they are not
  • Relationships Between Category Theory and Functional Programming with an Application
    Turkish Journal of Mathematics, Turk J Math (2019) 43: 1566–1577. http://journals.tubitak.gov.tr/math/ © TÜBİTAK. Research Article, doi:10.3906/mat-1807-189. Relationships between category theory and functional programming with an application. Alper ODABAŞ, Elis SOYLU YILMAZ. Department of Mathematics and Computer Sciences, Faculty of Arts and Sciences, Eskişehir Osmangazi University, Eskişehir, Turkey. Received: 25.07.2018. Accepted/Published Online: 08.04.2019. Final Version: 29.05.2019. Abstract: The most recent studies in mathematics are concerned with objects, morphisms, and the relationship between morphisms. Prominent examples can be listed as functions, vector spaces with linear transformations, and groups with homomorphisms. Category theory proposes and constitutes new structures by examining objects, morphisms, and compositions. The source and target of a morphism in category theory correspond to input and output in a programming language. Thus, a connection can be obtained between category theory and functional programming languages. From this point, this paper constructs a small category implementation in a functional programming language called Haskell. Key words: Category theory, functional programming, Haskell. 1. Introduction: Eilenberg and MacLane ([7]) are the pioneers who built the structures of categories, functors, and natural transformations, which were first revealed in 1945. A broader literature review reveals an important connection between homology and theoretical homology theory. These findings relieve mathematics from theoretical constraint and enable branches of science to involve the above relationship. The most significant transition in computer science is between category theory and computation. One of the most important aspects of computation is composing new functions or modules by using primitive functions, recursive structures, etc.
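    The correspondence described in this abstract (objects and morphisms with identities and associative composition, where source and target play the role of input and output types) is small enough to show directly. The paper's implementation is in Haskell; the following is an analogous minimal sketch written for this excerpt in Scala, with ordinary functions as the morphisms.

// Minimal sketch of a category as an interface, with functions as morphisms.
// Written for this excerpt as an illustration; it is not the paper's Haskell code.
import scala.language.higherKinds

object CategorySketch {
  trait Category[Hom[_, _]] {
    def id[A]: Hom[A, A]
    def compose[A, B, C](g: Hom[B, C], f: Hom[A, B]): Hom[A, C]
  }

  // Types are the objects; plain Scala functions are the morphisms.
  val functionCategory: Category[Function1] = new Category[Function1] {
    def id[A]: A => A = a => a
    def compose[A, B, C](g: B => C, f: A => B): A => C = g.compose(f)
  }

  def main(args: Array[String]): Unit = {
    val length: String => Int = _.length
    val double: Int => Int    = _ * 2
    // Source and target line up the way input and output types do.
    val h = functionCategory.compose(double, length)
    println(h("category"))   // 16
  }
}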
  • WWW 2013 – 22nd International World Wide Web Conference
    WWW 2013 – 22nd International World Wide Web Conference
    General Chairs: Daniel Schwabe (PUC-Rio – Brazil), Virgílio Almeida (UFMG – Brazil), Hartmut Glaser (CGI.br – Brazil)
    Research Track: Ricardo Baeza-Yates (Yahoo! Labs – Spain & Chile), Sue Moon (KAIST – South Korea)
    Practice and Experience Track: Alejandro Jaimes (Yahoo! Labs – Spain), Haixun Wang (MSR – China)
    Developers Track: Denny Vrandečić (Wikimedia – Germany), Marcus Fontoura (Google – USA)
    Demos Track: Bernadette F. Lóscio (UFPE – Brazil), Irwin King (CUHK – Hong Kong)
    W3C Track: Marie-Claire Forgue (W3C Training, USA)
    Workshops Track: Alberto Laender (UFMG – Brazil), Les Carr (U. of Southampton – UK)
    Posters Track: Erik Wilde (EMC – USA), Fernanda Lima (UNB – Brazil)
    Tutorials Track: Bebo White (SLAC – USA), Maria Luiza M. Campos (UFRJ – Brazil)
    Industry Track: Marden S. Neubert (UOL – Brazil)
    Proceedings and Metadata Chair: Altigran Soares da Silva (UFAM – Brazil)
    Local Arrangements Committee: Chair – Hartmut Glaser; Executive Secretary – Vagner Diniz; PCO Liaison – Adriana Góes, Caroline D'Avo, and Renato Costa; Conference Organization Assistant – Selma Morais; International Relations – Caroline Burle; Technology Liaison – Reinaldo Ferraz; UX Designer / Web Developer – Yasodara Córdova, Ariadne Mello; Internet Infrastructure – Marcelo Gardini, Felipe Agnelli Barbosa; Administration – Ana Paula Conte, Maria de Lourdes Carvalho, Beatriz Iossi, Carla Christiny de Mello; Legal Issues – Kelli Angelini; Press Relations and Social Network – Everton T. Rodrigues, S2Publicom and EntreNós; PCO – SKL Eventos
  • On the Cognitive Prerequisites of Learning Computer Programming
    On the Cognitive Prerequisites of Learning Computer Programming. Roy D. Pea and D. Midian Kurland. Technical Report No. 18. Introduction: Training in computer literacy of some form, much of which will consist of training in computer programming, is likely to involve $3 billion of the $14 billion to be spent on personal computers by 1986 (Harmon, 1983). Who will do the training? "Hardware and software manufacturers, management consultants, retailers, independent computer instruction centers, corporations' in-house training programs, public and private schools and universities, and a variety of consultants" (ibid., p. 27). To date, very little is known about what one needs to know in order to learn to program, and the ways in which educators might provide optimal learning conditions. The ultimate success of these vast training programs in programming--especially toward the goal of providing a basic computer programming competency for all individuals--will depend to a great degree on an adequate understanding of the developmental psychology of programming skills, a field currently in its infancy. In the absence of such a theory, training will continue, guided--or to express it more aptly, misguided--by the tacit "folk theories" of programming development that until now have served as the underpinnings of programming instruction. Our paper begins to explore the complex agenda of issues, promise, and problems that building a developmental science of programming entails. Microcomputer Use in Schools: The National Center for Education Statistics has recently released figures revealing that the use of micros in schools tripled from Fall 1980 to Spring 1983.
  • Matrox Imaging Library (MIL) 9.0 Update 58
    -------------------------------------------------------------------------------
    Matrox Imaging Library (MIL) 9.0 Update 58. Release Notes (Whatsnew)
    September 2012
    (c) Copyright Matrox Electronic Systems Ltd., 1992-2012.
    -------------------------------------------------------------------------------
    For more information and what's new in processing, display, drivers, Linux, ActiveMIL, and all MIL 9 updates, consult their respective readme files.
    Main table of contents
    Section 1 : What's new in MIL 9.0 Update 58
    Section 2 : What's new in MIL 9.0 Release 2.
    Section 3 : What's new in MIL 9.0.
    Section 4 : Differences between MIL Lite 8.0 and 7.5
    Section 5 : Differences between MIL Lite 7.5 and 7.1
    Section 6 : Differences between MIL Lite 7.1 and 7.0
    -------------------------------------------------------------------------------
    Section 1: What's new in MIL 9.0 Update 58.
    Table of Contents for Section 1: 1. Overview. 2. Mseq API function definition (2.1 MseqAlloc, 2.2 MseqControl, 2.3 MseqDefine, 2.4 MseqFeed, 2.5 MseqFree, 2.6 MseqGetHookInfo, 2.7 MseqHookFunction, 2.8 MseqInquire, 2.9 MseqProcess). 3. Examples. 4. Operating system information.
    1. Overview. The main goal for MIL 9.0 Update 58 is to add a new module called Mseq, which offers a user-friendly interface for H.264 compression.
    2. Mseq API function definition
    2.1 MseqAlloc
    - Synopsis: Allocate a sequence context.
    - Syntax: MIL_ID MseqAlloc(MIL_ID SystemID, MIL_INT64 SequenceType, MIL_INT64 Operation, MIL_UINT32 OutputFormat, MIL_INT64 InitFlag, MIL_ID* ContextSeqIdPtr)
    - Parameters:
      * SystemID: Specifies the identifier of the system on which to allocate the sequence context. This parameter must be given a valid system identifier.
      * SequenceType: Specifies the type of sequence to allocate. Values: M_DEFAULT - Specifies the sequence as a context in which the related operation should be performed.
  • Should C Replace FORTRAN As the Language of Scientific Programming?
    Should C Replace FORTRAN as the Language of Scientific Programming? Linda Wharton CSCI 5535 Fall 1995 Abstract Anti-FORTRAN sentiment has recently become more prevalent. Where does the attitude originate? The most probable source is academia, where C and C++ are the languages of choice. Is there a fact based justification for the attitude? FORTRAN and C are evaluated to determine whether C is a better language than FORTRAN for scientific programming. The features of FORTRAN 77, FORTRAN 90, C and C++ are compared, and evaluated as to how well they meet the requirements of the scientific programming domain. FORTRAN was designed specifically for numerical programming, and thus better meets the requirements. Three algorithms in the scientific domain are coded in both FORTRAN and C. They are evaluated on performance, readability of the code and optimization potential. In all cases the FORTRAN implementations proved superior. Is there evidence to mandate that all upgrades and new development should be done in C, rather than FORTRAN? A good computer programmer can solve any given problem in any language, however it is best to code in the language specifically designed for the problem domain. In the case of scientific programming, that language is FORTRAN. 1 Introduction In the computer arena related to scientific programming, a prevalent attitude seems to be that FORTRAN is obsolete, and C should be used as a replacement language. I am employed as a programmer that supports meteorological research. Most of the programming code I work with is written in FORTRAN. Within the course of my work, I continually encounter prejudice against FORTRAN.
  • An FPGA-Accelerated Embedded Convolutional Neural Network
    Master Thesis Report: ZynqNet: An FPGA-Accelerated Embedded Convolutional Neural Network. David Gschwend, [email protected]. arXiv:2005.06892v1 [cs.CV] 14 May 2020. Supervisors: Emanuel Schmid, Felix Eberli. Professor: Prof. Dr. Anton Gunzinger. August 2016, ETH Zürich, Department of Information Technology and Electrical Engineering. Abstract: Image understanding is becoming a vital feature in ever more applications, ranging from medical diagnostics to autonomous vehicles. Many applications demand embedded solutions that integrate into existing systems with tight real-time and power constraints. Convolutional Neural Networks (CNNs) presently achieve record-breaking accuracies in all image understanding benchmarks, but have a very high computational complexity. Embedded CNNs thus call for small and efficient, yet very powerful computing platforms. This master thesis explores the potential of FPGA-based CNN acceleration and demonstrates a fully functional proof-of-concept CNN implementation on a Zynq System-on-Chip. The ZynqNet Embedded CNN is designed for image classification on ImageNet and consists of ZynqNet CNN, an optimized and customized CNN topology, and the ZynqNet FPGA Accelerator, an FPGA-based architecture for its evaluation. ZynqNet CNN is a highly efficient CNN topology. Detailed analysis and optimization of prior topologies using the custom-designed Netscope CNN Analyzer have enabled a CNN with 84.5% top-5 accuracy at a computational complexity of only 530 million multiply-accumulate operations. The topology is highly regular and consists exclusively of convolutional layers, ReLU nonlinearities and one global pooling layer.
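    The computational-complexity figure quoted above (530 million multiply-accumulate operations) is the kind of number obtained by summing, over all convolutional layers, outH x outW x kH x kW x inCh x outCh. The sketch below computes that sum for a few made-up layer shapes; the layer sizes are purely illustrative and are not the actual ZynqNet topology.

// Illustrative sketch: standard per-layer MAC count for a convolution.
// The layer shapes below are invented for the example, NOT the ZynqNet topology.
object MacCountSketch {
  final case class ConvLayer(outH: Int, outW: Int, kH: Int, kW: Int, inCh: Int, outCh: Int) {
    def macs: Long = outH.toLong * outW * kH * kW * inCh * outCh
  }

  def main(args: Array[String]): Unit = {
    val toyNet = Seq(
      ConvLayer(112, 112, 3, 3, 3, 64),
      ConvLayer(56, 56, 3, 3, 64, 128),
      ConvLayer(28, 28, 1, 1, 128, 256)
    )
    println(f"total multiply-accumulate operations: ${toyNet.map(_.macs).sum}%,d")
  }
}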
  • AI Chips: What They Are and Why They Matter
    APRIL 2020. AI Chips: What They Are and Why They Matter. An AI Chips Reference. Authors: Saif M. Khan, Alexander Mann.
    Table of Contents: Introduction and Summary (3); The Laws of Chip Innovation (7); Transistor Shrinkage: Moore's Law (7); Efficiency and Speed Improvements (8); Increasing Transistor Density Unlocks Improved Designs for Efficiency and Speed (9); Transistor Design is Reaching Fundamental Size Limits (10); The Slowing of Moore's Law and the Decline of General-Purpose Chips (10); The Economies of Scale of General-Purpose Chips (10); Costs are Increasing Faster than the Semiconductor Market (11); The Semiconductor Industry's Growth Rate is Unlikely to Increase (14); Chip Improvements as Moore's Law Slows (15); Transistor Improvements Continue, but are Slowing (16); Improved Transistor Density Enables Specialization (18); The AI Chip Zoo (19); AI Chip Types (20); AI Chip Benchmarks (22); The Value of State-of-the-Art AI Chips (23); The Efficiency of State-of-the-Art AI Chips Translates into Cost-Effectiveness (23); Compute-Intensive AI Algorithms are Bottlenecked by Chip Costs and Speed (26); U.S. and Chinese AI Chips and Implications for National Competitiveness (27); Appendix A: Basics of Semiconductors and Chips (31); Appendix B: How AI Chips Work (33); Parallel Computing (33); Low-Precision Computing (34); Memory Optimization (35); Domain-Specific Languages (36); Appendix C: AI Chip Benchmarking Studies (37); Appendix D: Chip Economics Model (39); Chip Transistor Density, Design Costs, and Energy Costs (40); Foundry, Assembly, Test and Packaging Costs (41); Acknowledgments (44).
    Introduction and Summary: Artificial intelligence will play an important role in national and international security in the years to come.
  • Is Intermediate-Level Conversation the Key to the Pair Programming Success Story?
    'Talking the talk': Is intermediate-level conversation the key to the pair programming success story? S. Freudenberg (née Bryant), P. Romero, B. du Boulay. IDEAS Laboratory, University of Sussex. [email protected]. Abstract: Pair programming claims to provide benefits over and above those offered by a programmer working alone. In particular, a number of studies have suggested that pair programming improves software quality. The literature speculates that the 'driver' (the programmer currently typing in the code) and 'navigator' work together in a complementary manner, and that the nature of these roles may be key in realizing the reported benefits. Here we dispute two of these existing claims: (i) that the navigator provides a 'continual review' of the driver's work and highlights errors (i.e. acting as a reviewer); (ii) that the navigator is focused on a higher level of abstraction than the driver (i.e. acting as a foreman).
    One possible method of taming the complexity of software development may be to work collaboratively. In fact, one form of collaborative programming has now been formalised as 'pair programming', one of the core practices of the Extreme Programming (XP) methodology. In pair programming, "all production code is written with two people working at one machine, with one keyboard and one mouse" (Beck, 2000). A wide range of studies have considered the benefits of pair programming in terms of its effect on the quality of the resulting software. These studies have taken place in both academic and commercial environments. In the commercial arena two studies are particularly noteworthy: Nosek (1998), who showed
  • A Language and System for Composing Autonomous, Heterogeneous and Distributed Megamodules
    A Language and System for Composing Autonomous, Heterogeneous and Distributed Megamodules. Dorothea Beringer, Catherine Tornabene, Pankaj Jain, Gio Wiederhold. Stanford University, Computer Science Department, Stanford, CA 94306, USA. {beringer, catherin, pjain, gio}@db.stanford.edu. Abstract: New levels of software composition become possible through advances in distributed communication services. In this paper we focus on the composition of megamodules, which are large distributed components or computation servers that are autonomously operated and maintained. The composition of megamodules offers various challenges. Megamodules are not necessarily all accessible by the same distribution protocol (such as CORBA, DCOM, RMI and DCE). Their concurrent nature and potentially long duration of service execution necessitate asynchronous invocation and collection of results. Novel needs and opportunities for optimization arise when composing megamodules. In order to meet these challenges, we have defined a purely compositional language called CHAIMS, and are now developing the
    [From the body of the paper:] ...characteristics: The components and the client program using these components are written in the same language, or at least in languages on the same abstraction level. The components are operated and maintained together with the application using them. Also, the components are often created together, and they form a coherent library. The application and the components share a common ontology and computing infrastructure. In the case of distributed components, one common distribution system is used, e.g. either DCE [2], CORBA [3], RMI [4], or DCOM [5]. Megamodules differ from these kinds of components in various aspects. Since they are larger, and marketed as services by autonomous providers, we must assume that megamodules not only encapsulate data and procedures, they encapsulate data, behavior, knowledge, concurrency and ontology [6].
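    The abstract's point that long-running, autonomously operated services force asynchronous invocation and later collection of results can be illustrated with a few lines of ordinary Scala futures. This sketch only illustrates that invoke-now, collect-later pattern; it is not CHAIMS, and the two "megamodule" stand-ins are invented for the example.

// Sketch of asynchronous invocation and deferred result collection -- NOT CHAIMS.
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object MegamoduleSketch {
  // Stand-ins for large remote services; real megamodules would sit behind CORBA, DCOM, RMI or DCE.
  def planRoute(from: String, to: String): Future[String] =
    Future { Thread.sleep(200); s"route($from -> $to)" }
  def estimateCost(route: String): Future[Double] =
    Future { Thread.sleep(100); 42.0 }

  def main(args: Array[String]): Unit = {
    val route = planRoute("SFO", "SAT")        // invocation returns immediately
    val cost  = route.flatMap(estimateCost)    // composition without blocking
    println(Await.result(cost, 10.seconds))    // results collected only when needed
  }
}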
  • A Little Language for Testing
    A Little Language for Testing. Alex Groce and Jervis Pinto. School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR. Abstract: The difficulty of writing test harnesses is a major obstacle to the adoption of automated testing and model checking. Languages designed for harness definition are usually tied to a particular tool and unfamiliar to programmers; moreover, such languages can limit expressiveness. Writing a harness directly in the language of the software under test (SUT) makes it hard to change testing algorithms, offers no support for the common testing idioms, and tends to produce repetitive, hard-to-read code. This makes harness generation a natural fit for the use of an unusual kind of domain-specific language (DSL). This paper defines a template scripting testing language, TSTL, and shows how it can be used to produce succinct, readable definitions of state spaces. The concepts underlying TSTL are demonstrated in Python but are not tied to it. 1 Introduction: Building a test harness is an often irksome task many users of formal methods or automated testing face from time to time [18,12]. The difficulty of harness generation is one reason for the limited adoption of sophisticated testing and model checking by the typical developer who writes unit tests. This is unfortunate, as even simple random testing can often uncover subtle faults. The "natural" way to write a test harness is as code in the language of the Software Under Test (SUT). This is obviously how most unit tests are written, as witnessed by the proliferation of tools like JUnit [3] and its imitators (e.g., PyUnit, HUnit, etc.).
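    TSTL itself is demonstrated in Python and generates the harness from a declarative definition of the state space. As a language-neutral illustration of what such a harness ultimately does (pick an enabled action at random, apply it to the software under test, check a property), here is a minimal hand-written sketch in Scala; the buffer "SUT", the actions and the invariant are all invented for the example.

// Minimal hand-written harness sketch -- NOT TSTL. It shows the loop a generated
// harness performs: choose a random enabled action, apply it, check an invariant.
import scala.collection.mutable.ArrayBuffer
import scala.util.Random

object HarnessSketch {
  def main(args: Array[String]): Unit = {
    val rng = new Random(0)
    val sut = ArrayBuffer.empty[Int]                              // toy stand-in for the software under test
    val actions: Vector[() => Unit] = Vector(
      () => sut += rng.nextInt(100),                              // add an element
      () => if (sut.nonEmpty) sut.remove(rng.nextInt(sut.size)),  // remove an element
      () => { val s = sut.sorted; sut.clear(); sut ++= s }        // "sort" the buffer
    )
    for (step <- 1 to 1000) {
      actions(rng.nextInt(actions.size))()                            // random action
      assert(sut.size <= step, s"invariant violated at step $step")   // cheap property check
    }
    println(s"ran 1000 random steps, final size = ${sut.size}")
  }
}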