Installing Platform LSF on UNIX and Linux
Total pages: 16
File type: PDF. Size: 1020 KB
Recommended publications
- Reconfigurable Embedded Control Systems: Problems and Solutions
By Dr. rer. nat. habil. Mohamed Khalgui, copyright 2012, Martin Luther University, Germany. Research manuscript for the Habilitation Diploma in Computer Science. Reviewers: Prof. Dr. Hans-Michael Hanisch (Martin Luther University, Germany), Prof. Dr. Georg Frey (Saarland University, Germany), and Prof. Dr. Wolf Zimmermann (Martin Luther University, Germany). Day of the defense: Monday, January 23rd, 2012. Contents shown in the excerpt: 1. General Introduction; 2. Embedded Architectures: Overview on Hardware and Operating Systems, covering embedded hardware components (microcontrollers, digital signal processors, system on chip, programmable logic controllers), real-time embedded operating systems (QNX, RTLinux, VxWorks, Windows CE), and known embedded software solutions (simple control loop, interrupt-controlled system, cooperative multitasking, preemptive multitasking or multi-threading, microkernels, monolithic kernels, additional software components); 3. Embedded Systems: Overview on Software Components, covering basic concepts of components and architecture description languages (Acme language, ...).
- Platform LSF Foundations
Platform LSF Version 9 Release 1.3: Foundations, SC27-5304-03. This edition applies to version 9, release 1 of IBM Platform LSF (product number 5725G82) and to all subsequent releases and modifications until otherwise indicated in new editions. Copyright IBM Corporation 1992, 2014. The excerpt shows the contents of Chapter 1, IBM Platform LSF: An Overview: Introduction to IBM Platform LSF; LSF cluster components; Job submission; Job scheduling and dispatch; Host selection; Job execution environment.
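The Foundations chapters listed above cover the life cycle of an LSF job: submission, scheduling and dispatch, host selection, and the execution environment. The following shell sketch illustrates that cycle with standard LSF commands; the queue name, resource requirement, and application are placeholders rather than values taken from the manual.

# Submit a batch job: 4 slots, an illustrative queue and memory reservation,
# with stdout captured per job (%J expands to the job ID).
bsub -n 4 -q normal -R "rusage[mem=2048]" -o job.%J.out ./my_app input.dat

# Scheduling and dispatch: jobs move from PEND to RUN to DONE (or EXIT).
bjobs -u "$USER"        # current state of your jobs
bqueues normal          # queue status and limits
bhosts                  # per-host status and slot usage (host selection)

# After completion: execution history and accounting.
bhist -a -u "$USER"
bacct -u "$USER"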
- IBM Platform Computing Cloud Service: Ready-to-use Platform LSF & Symphony clusters in the SoftLayer cloud
A Platform Computing presentation dated February 25, 2014. Agenda: mapping client needs to cloud technologies; addressing your pain points; introducing IBM Platform Computing Cloud Service; product features and benefits; use cases; performance benchmarks. The deck argues that HPC cloud characteristics and economics differ from general-purpose computing: high-end hardware and special-purpose devices (e.g., GPUs) are typically used to supply the needed processing, memory, network, and storage capabilities; the performance requirements of technical computing and service-oriented workloads mean that performance may be impacted in a virtualized cloud environment, especially when latency or I/O is a constraint; and HPC cluster/grid utilization is usually in the 70-90% range, removing a major potential advantage of a public cloud service provider for stable workload volumes. Primary HPC workloads are recommended for private cloud, with other workloads having the best potential for virtualized public and hybrid cloud. IBM's HPC cloud strategy spans private clouds (evolve existing infrastructure to enhance responsiveness, flexibility, and capability), hybrid clouds (enable an integrated HPC approach to improve HPC cost and capability), and public clouds (access additional HPC capacity with a variable cost model).
- LogCA: A High-Level Performance Model for Hardware Accelerators
By Muhammad Shoaib Bin Altaf (AMD Research, Advanced Micro Devices, Inc.) and David A. Wood (Computer Sciences Department, University of Wisconsin-Madison). Abstract: With the end of Dennard scaling, architects have increasingly turned to special-purpose hardware accelerators to improve the performance and energy efficiency for some applications. Unfortunately, accelerators don't always live up to their expectations and may underperform in some situations. Understanding the factors which affect the performance of an accelerator is crucial for both architects and programmers early in the design stage. Detailed models can be highly accurate, but often require low-level details which are not available until late in the design cycle. In contrast, simple analytical models can provide useful insights by abstracting away low-level system details. In this paper, we propose LogCA, a high-level performance model for hardware accelerators. LogCA helps both programmers and architects identify performance bounds and design bottlenecks early in the design cycle, and provides insight into which optimizations may alleviate these bottlenecks. We validate our model across a variety of kernels, ranging from sub-linear to super-linear complexities, on both on-chip and off-chip accelerators. We also describe the utility of LogCA using two retrospective case studies. [Figure residue in the excerpt: panel (a) plots unaccelerated versus accelerated execution time on the UltraSPARC T2 against offloaded data size (16 bytes to 64 KB) with a marked break-even point; a second panel plots speedup for the SPARC T4, UltraSPARC T2, and a GPU, also marking break-even points.]
- Foreign Library Interface
By Daniel Adler, from the R Journal's Contributed Research Articles. Abstract: We present an improved Foreign Function Interface (FFI) for R to call arbitrary native functions without the need for C wrapper code. Further we discuss a dynamic linkage framework for binding standard C libraries to R across platforms using a universal type information format. The package rdyncall comprises the framework and an initial repository of cross-platform bindings for standard libraries such as (legacy and modern) OpenGL, the family of SDL libraries and Expat. The package enables system-level programming using the R language; sample applications are given in the article. We outline the underlying automation tool-chain that extracts cross-platform bindings from C headers, making the repository extendable and open for library developers. From the section on foreign function interfaces: FFIs provide the backbone of a language to interface with foreign code; depending on the design of this service, it can largely unburden developers from writing additional wrapper code. The article compares the built-in R FFI with that provided by rdyncall using a simple example that sketches the different work-flow paths for making an R binding to a function from a foreign C library: invoking the C function sqrt of the Standard C Math library, declared in C as double sqrt(double x). The .C function from the base R FFI offers a call gate to C code with very strict conversion rules.
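The running example in the excerpt is the C function sqrt, declared as double sqrt(double x). As a point of contrast with the FFI approach the article advocates, the sketch below shows the conventional UNIX route: write a small C caller and link it against the math library explicitly. The file name is illustrative.

# Conventional route to a foreign library function: a C wrapper, compiled
# and linked against libm (the Standard C Math library).
cat > sqrt_demo.c <<'EOF'
#include <stdio.h>
#include <math.h>      /* declares: double sqrt(double x); */

int main(void) {
    printf("sqrt(2.0) = %f\n", sqrt(2.0));
    return 0;
}
EOF
cc -o sqrt_demo sqrt_demo.c -lm
./sqrt_demo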
Hardware, Firmware, Devices Disks Kernel, Boot, Swap Files, Volumes
hardware, kernel, boot, firmware, disks files, volumes swap devices software, security, patching, networking references backup tracing, logging TTAASSKK\ OOSS AAIIXX AA//UUXX DDGG//UUXX FFrreeeeBBSSDD HHPP--UUXX IIRRIIXX Derived from By IBM, with Apple 1988- 4.4BSD-Lite input from 1995. Based and 386BSD. System V, BSD, etc. on AT&T Data General This table SysV.2.2 with was aquired does not Hewlett- SGI. SVR4- OS notes Runs mainly extensions by EMC in include Packard based on IBM from V.3, 1999. external RS/6000 and V.4, and BSD packages related 4.2 and 4.3 from hardware. /usr/ports. /usr/sysadm ssmmiitt ssaamm /bin/sysmgr (6.3+) ssmmiittttyy ttoooollcchheesstt administrativ /usr/Cadmin/ wwssmm FFiinnddeerr ssyyssaaddmm ssyyssiinnssttaallll ssmmhh (11.31+) e GUI bin/* /usr/sysadm/ useradd (5+) FFiinnddeerr uusseerraadddd aadddduusseerr uusseerraadddd privbin/ addUserAcco userdell (5+) //eettcc//aadddduusseerr uusseerrddeell cchhppaassss uusseerrddeell unt usermod edit rrmmuusseerr uusseerrmmoodd managing (5+) /etc/passwd users llssuusseerr ppww ggeettpprrppww ppaassssmmggmmtt mmkkuusseerr vviippww mmooddpprrppww /usr/Cadmin/ cchhuusseerr ppwwggeett bin/cpeople rmuser usrck TASK \ OS AAIIXX AA//UUXX DDGG//UUXX FFrreeeeBBSSDD HHPP--UUXX IIRRIIXX pprrttccoonnff uunnaammee iioossccaann hhiinnvv dmesg (if (if llssccffgg ssyyssccttll--aa you're lucky) llssaattttrr ddmmeessgg aaddbb ssyyssiinnffoo--vvvv catcat lsdev /var/run/dm model esg.boot stm (from the llssppaatthh ppcciiccoonnff--ll SupportPlus CDROM) list hardware dg_sysreport - bdf (like -
- IBM Platform LSF 9.1.2 Quick Reference
Platform LSF Version 9 Release 1.2: Quick Reference, GC27-5309-02. This edition applies to version 9, release 1 of IBM Platform LSF (product number 5725G82). Copyright IBM Corporation 1992, 2013. The excerpt lists the front-matter contents (Notices, Privacy policy considerations, Trademarks) and opens the reference itself with sample UNIX installation directories and the daemon error log files, which are stored in the directory defined by LSF_LOGDIR in lsf.conf.
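The quick reference points out that daemon error logs live in the directory defined by LSF_LOGDIR in lsf.conf. A hedged sketch for locating and tailing them on a running cluster follows; it assumes the LSF profile has been sourced so that LSF_ENVDIR points at the configuration directory, and the exact set of log file names varies by host and daemon.

# lsf.conf lives in the directory named by $LSF_ENVDIR once the LSF
# profile (e.g. profile.lsf) has been sourced.
grep '^LSF_LOGDIR' "$LSF_ENVDIR/lsf.conf"

# Daemon error logs are typically named <daemon>.log.<hostname>,
# e.g. lim.log.*, res.log.*, sbatchd.log.*, mbatchd.log.*
LOGDIR=$(awk -F= '/^LSF_LOGDIR/ {print $2}' "$LSF_ENVDIR/lsf.conf")
ls -l "$LOGDIR"
tail -f "$LOGDIR"/mbatchd.log.*    # follow the master batch daemon log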
- Basics of UNIX
Dated August 23, 2012. By UNIX, the author means any UNIX-like operating system, including Linux and Mac OS X; on the Mac a UNIX terminal window is available through the Terminal application (under Applications/Utilities), and most modern scientific computing is done on UNIX-based machines, often by remotely logging in to a UNIX-based server. Section 1, connecting to a UNIX machine from UNIX, Mac, or Windows, points to a file on bspace about connecting remotely to SCF and to an SCF help page on logging in to remote machines via ssh without having to type your password every time, which can save a lot of time. Section 2, getting help from SCF, points to the department computing FAQs and asks that equipment or system software problems go to the SCF staff (mail to 'trouble', or report directly to room 498/499), that questions on application packages (e.g., R, SAS, Matlab), programming languages, and libraries go to 'consult', and that account questions go to 'manager'; for the purposes of the class, questions about packages, languages, and libraries can be directed to the instructor. Section 3, files and directories, begins: files are stored in directories (aka folders) arranged in an inverted directory tree, with / as the root; to see where you are, run pwd; to see what is in a directory, run ls, ls -a, or ls -al.
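The handout mentions logging in to remote machines via ssh without having to type your password every time. The usual mechanism is an SSH key pair plus the remote authorized_keys file; the sketch below is a generic illustration with placeholder host and user names, and the SCF help page cited above remains the authoritative local procedure.

# Generate a key pair once (press Enter to accept defaults, optionally set
# a passphrase); the public half ends up in ~/.ssh/id_rsa.pub.
ssh-keygen -t rsa

# Install the public key into ~/.ssh/authorized_keys on the remote host.
ssh-copy-id username@server.example.edu

# Later logins authenticate with the key instead of the account password.
ssh username@server.example.edu
pwd        # where am I?
ls -al     # list everything, including dotfiles, in long format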
- Administering IBM Platform LSF
Platform LSF Version 9 Release 1.3: Administering Platform LSF, SC27-5302-03. This edition applies to version 9, release 1 of IBM Platform LSF (product number 5725G82). Copyright IBM Corporation 1992, 2014. The excerpt shows the start of the contents for Chapter 1, Managing Your Cluster: Working with Your Cluster; Job Directories and Data; Resource ...
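The administration guide opens with managing the cluster as a whole. The sketch below lists a few of the standard LSF administration commands that correspond to those opening tasks; the host name and comment are placeholders, and the guide itself documents the full procedures.

# Overall cluster identity and health.
lsid                  # cluster name, master host, LSF version
lsclusters
badmin showstatus     # summary of hosts, jobs, and users served by mbatchd

# Re-read configuration after editing lsf.conf, lsb.* or lsf.cluster files.
lsadmin reconfig      # LIM (load information manager) configuration
badmin reconfig       # mbatchd (batch system) configuration

# Drain a host for maintenance, then reopen it.
badmin hclose -C "maintenance window" hostA
badmin hopen hostA
bhosts hostA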
- Experimental Study of Remote Job Submission and Execution on LRM through Grid Computing Mechanisms
By Harshadkumar B. Prajapati (Information Technology Department, Dharmsinh Desai University, Nadiad, India) and Vipul A. Shah (Instrumentation & Control Engineering Department, Dharmsinh Desai University, Nadiad, India). Abstract: Remote job submission and execution is a fundamental requirement of distributed computing done using Cluster computing. However, Cluster computing limits usage within a single organization. A Grid computing environment can allow use of resources for remote job execution that are available in other organizations. This paper discusses concepts of batch-job execution using an LRM and using Grid, describes two ways of preparing a test Grid computing environment used for experimental testing of the concepts, and presents experimental testing of remote job submission and execution mechanisms through the LRM-specific way and through Grid computing ways. Moreover, the paper also discusses various problems faced while working with Grid ... The introduction notes that the fundamental unit of work done in Grid computing is the successful execution of a job; a job in Grid is generally a batch job, which does not interact with the user while it is running. Higher-level Grid services and applications use job submission and execution facilities that are supported by a Grid middleware, so it is very important to understand how a job is submitted, executed, and monitored in a Grid computing environment; researchers and scientists working on higher-level Grid services and applications often do not pay much attention to the underlying Grid infrastructure in their published work.
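The paper's subject is the contrast between submitting a batch job the LRM-specific way and submitting it through Grid middleware. The sketch below illustrates the two styles; the excerpt does not name the LRM or middleware used in the experiments, so the LSF commands and the Globus Toolkit GRAM client shown here are assumptions for illustration, with placeholder host names.

# LRM-specific way: log in to the cluster and talk to the local resource
# manager directly (LSF shown; PBS or SGE submissions are analogous).
ssh user@cluster.example.org
bsub -q normal -o out.%J ./analysis.sh
bjobs

# Grid way: submit from outside the organization through Grid middleware,
# which authenticates the user and hands the job to the remote LRM.
grid-proxy-init                                   # create a proxy credential
globus-job-run cluster.example.org /bin/hostname  # quick interactive test
globus-job-submit cluster.example.org ./analysis.sh   # batch-style submission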
- Oracle VM VirtualBox User Manual
Oracle VM VirtualBox User Manual, version 6.1.0_RC1. Copyright 2004-2019 Oracle Corporation, http://www.virtualbox.org. The excerpt shows the contents of the preface and Chapter 1, First Steps: Why is Virtualization Useful?; Some Terminology; Features Overview; Supported Host Operating Systems; Host CPU Requirements; Installing Oracle VM VirtualBox and Extension Packs; Starting Oracle VM VirtualBox; Creating Your First Virtual Machine; Running Your Virtual Machine (Starting a New VM for the First Time; Capturing and Releasing Keyboard and Mouse; Typing Special Characters; Changing Removable Media; Resizing the Machine's Window; Saving the State of the Machine); Using VM Groups; Snapshots (Taking, Restoring, and Deleting Snapshots; Snapshot Contents); Virtual Machine Configuration; Removing and Moving Virtual Machines; Cloning Virtual Machines; Importing and Exporting Virtual Machines ...
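Chapter 1 of the manual walks through creating and running a first virtual machine in the VirtualBox GUI. The same first steps can be scripted with the VBoxManage command-line front end; the sketch below is an illustration only, with placeholder VM name, OS type, disk size, and installer ISO path, and the manual's GUI walkthrough remains the authoritative procedure.

# Create and register a VM, then give it memory, CPUs, a disk, and an ISO.
VBoxManage createvm --name "demo-vm" --ostype Ubuntu_64 --register
VBoxManage modifyvm "demo-vm" --memory 2048 --cpus 2 --nic1 nat
VBoxManage createmedium disk --filename demo-vm.vdi --size 20480
VBoxManage storagectl "demo-vm" --name "SATA" --add sata
VBoxManage storageattach "demo-vm" --storagectl "SATA" --port 0 --type hdd --medium demo-vm.vdi
VBoxManage storageattach "demo-vm" --storagectl "SATA" --port 1 --type dvddrive --medium installer.iso

# Start it without a GUI window, snapshot it, and restore the snapshot later.
VBoxManage startvm "demo-vm" --type headless
VBoxManage snapshot "demo-vm" take "fresh-install"
VBoxManage controlvm "demo-vm" poweroff
VBoxManage snapshot "demo-vm" restore "fresh-install"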