Dependability Mechanisms for Desktop Grids


UNIVERSIDADE DE COIMBRA
FACULDADE DE CIÊNCIAS E DE TECNOLOGIA
DEPARTAMENTO DE ENGENHARIA INFORMÁTICA

DEPENDABILITY MECHANISMS FOR DESKTOP GRIDS

Patrício Rodrigues Domingues

DOUTORAMENTO EM ENGENHARIA INFORMÁTICA
2008

Thesis supervised by Prof. Luís Moura e Silva

This work was partially supported by the Programa de Formação Avançada de Docentes do Ensino Superior Medida 5/Acção 5.3 (PRODEP III), by the Portuguese Foundation for Science and Technology under the POSI programme, by the FEDER programme of the European Union through the R&D unit 326/94 (CISUC), and by the CoreGRID programme funded by the European Commission (Contract IST-2002-004265).

Abstract

It is a well-known fact that most of the computing power spread over the Internet simply goes unused, with CPU and other resources sitting idle most of the time: on average, less than 5% of the CPU time is effectively used. Desktop grids are software infrastructures that aim to exploit this otherwise idle processing power, making it available to users who require computational resources to run long-running applications. The outcome of some efforts to harness idle machines can be seen in public projects such as SETI@home and Folding@home, which boast impressive performance figures, on the order of several hundred TFLOPS each. At the same time, many institutions, both academic and corporate, run their own desktop grid platforms. However, while desktop grids provide free computing power, they must face important issues like fault tolerance and security, two of the main problems that hinder the widespread use of desktop grid computing.
In this thesis, we aim to exploit a set of fault tolerance techniques, such as checkpointing and redundant executions, to promote faster turnaround times. We start with an experimental study, in which we analyze the availability of the computing resources of an academic institution. We then focus on the benefits of sharing checkpoints in both institutional and wide-scale environments. We also explore hybrid schemes, where the traditional centralized desktop grid organization is complemented with peer-to-peer resources.

Another major issue regarding desktop grids is the level of trust that can be placed in the volunteered hosts that carry out the executions. We propose and explore several mechanisms aimed at reducing the waste of computational resources needed to detect incorrect computations. For this purpose, we detail a checkpoint-based scheme for early detection of errors. We also propose and analyze an invitation-based strategy coupled with a credit rewarding scheme, to allow the enrollment and filtering of more trustworthy and more motivated resource donors.

To summarize, we propose and study several fault tolerance methodologies oriented toward a more efficient usage of resources, resorting to techniques such as checkpointing, replication and sabotage tolerance to speed up, and make more reliable, the executions that are carried out over desktop grid resources. The usage of such techniques will be of utmost importance for the wider deployment of applications over desktop grids.

KEYWORDS: Fault tolerance, desktop grids, volunteer computing, checkpointing, scheduling, sabotage-tolerance.

Extended Summary

Introduction

It is a well-known fact that a vast percentage of the computing power of personal computers ends up wasted. In fact, in the course of this dissertation it was found that the average idle percentage of the processors of more than one hundred and fifty computers at an academic institution was 97.93%.
This value confirms the empirical 5/95% rule, which states that a CPU has an average utilization rate of about 5%, with the remaining 95% going to waste. Ironically, the steady growth of computing power means that the amount of wasted computing power increases from year to year. The recent emergence of architectures that integrate several independent cores (multi-cores) on a single processor means that there may be several cores that, for much of the time, are not being used by the machine's user. Computing power is thus lost, since resources such as memory and CPU cycles that go unused at a given moment cannot be stored for later use.

While most personal computer users only draw on the full capabilities of their machines for short periods of time, to meet occasional needs, other users depend heavily on large computational resources to run applications in the most diverse areas of knowledge: computing the properties of chemical compounds, rendering 3-D images, detecting gravitational waves, testing disease propagation models, or any of the many other fields that depend heavily on computing power. For these users, all the computing power they can get is welcome. Obviously, this class of users would prefer access to dedicated computational resources, but such resources either do not exist or are not available due to budget constraints.

The study of techniques oriented toward harvesting non-dedicated computational resources dates back to the late 1980s. The spread of the personal computer and of the Internet further contributed to the emergence of large-scale distributed systems oriented toward harnessing so-called volunteer resources.
This designation stems from the fact that the owners/administrators of these resources make them available voluntarily. These systems, also known by the English term volunteer computing, were popularized by their use in volunteer computing projects such as distributed.net, SETI@home and Folding@home, among many others. Following the enormous success of the SETI@home project, and aware of the technical difficulties they had faced in creating and maintaining it, its promoters implemented the Berkeley Open Infrastructure for Network Computing (BOINC) platform. BOINC was thus developed with the goal of making the installation and maintenance of a volunteer computing project simpler and more flexible. BOINC is today one of the main platforms for large-scale volunteer computing, being employed in around thirty public volunteer computing projects such as Einstein@home, SIMAP@home, Rosetta@home and the SZTAKI Desktop Grid, among others. XtremWeb is another example of a platform for harnessing non-dedicated resources, although it is geared more toward experimenting with concepts and techniques related to desktop grids.

Although the costs associated with using volunteer resources are low relative to the computing power that can be obtained, some limitations restrict their use. The limitations of volunteer computing environments relate to (1) the low execution priority granted to external applications, and (2) the asymmetries of computer networks, which often prevent two or more machines from communicating directly with each other.
Indeed, non-dedicated resources run external applications at a lower priority level than that granted to the applications of interactive users, and, in some cases, external applications are only executed when there is no interactive use of the resources, that is, when no user is at the machine. In those cases, the execution of an external application is suspended as soon as interactive use of the machine is detected. Additionally, a resource may suddenly be switched off for an indeterminate period of time, or even forever (for instance, the owner of the resource may decide to stop sharing it). The combination of these two factors, low-priority access to resources and unpredictable resource availability, means that non-dedicated resources exhibit high volatility, a characteristic to be weighed by anyone adapting external applications to non-dedicated environments. The usual technique for dealing with this kind of situation is to periodically save the application's state to persistent storage (checkpointing). After an interruption, and as soon as the resource becomes available again, the execution of the application resumes from the last saved execution point.

Regarding communication asymmetry, firewalls and address translation mechanisms (Network Address Translation, NAT) give rise to asymmetric networks in which machines cannot communicate directly with one another, or in which direct communication is possible but only in one direction. It is in fact this communication asymmetry that leads desktop grid computing environments to be based on the master-worker model, in which communication is initiated by the worker. Another limitation of the communication channels concerns bandwidth.
Indeed, beyond the heterogeneity in the capacity of the communication channels, with resources having different network access speeds, there is also asymmetry in bandwidth, with upload rates that differ from the download rates.
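The checkpointing technique described above, periodically saving the application's state to persistent storage and, after an interruption, resuming from the last saved point, can be illustrated with a minimal sketch. This is not code from the thesis or from any platform it discusses; the file name, state layout and checkpoint interval are all hypothetical, and a real desktop grid worker would checkpoint through its platform's API rather than raw files.

```python
import os
import pickle
import tempfile

CHECKPOINT = "task.ckpt"  # hypothetical checkpoint file name

def save_checkpoint(state, path=CHECKPOINT):
    """Atomically persist the task state, so an interruption loses at most one interval."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic rename: never leaves a half-written checkpoint

def load_checkpoint(path=CHECKPOINT):
    """Resume from the last saved state, or start from scratch if none exists."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return {"iteration": 0, "partial_sum": 0}

def run_task(total_iterations=1000, interval=100):
    state = load_checkpoint()  # picks up where a previous, interrupted run stopped
    while state["iteration"] < total_iterations:
        state["partial_sum"] += state["iteration"]  # stand-in for the real computation
        state["iteration"] += 1
        if state["iteration"] % interval == 0:
            save_checkpoint(state)  # periodic save to persistent storage
    return state["partial_sum"]
```

If the host is reclaimed by its interactive user or switched off mid-run, a later invocation of `run_task` reloads the last checkpoint and repeats at most `interval` iterations, which is exactly the trade-off the summary describes: smaller intervals waste less work on restart but cost more I/O.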