High Performance Computing and Communications Glossary, Release 1.3a


High Performance Computing and Communications Glossary, Release 1.3a

K.A. Hawick ([email protected])
Northeast Parallel Architectures Center, Syracuse University
24 July 1994

This is a glossary of terms on High Performance Computing and Communications (HPCC) and is essentially a roadmap or information integration system for HPCC technology. The master copy of this document is maintained online as HTML hypertext, where cross references are hypertext links to other entries in the document. Some entries, like that for the ScaLAPACK software for example, point to the software source itself, available on the internet from sites like the National Software Exchange. In this "paper" version, entry headings are in bold, and cross reference entries are in italics.

The maintenance of this glossary is of necessity an ongoing process in an active field like HPCC, and so entries are being updated and augmented periodically. Check the version numbers. The master hypertext copy is stored at the Northeast Parallel Architectures Centre (NPAC) at URL: http://www.npac.syr.edu/nse/hpccgloss and NPAC's main URL is: http://www.npac.syr.edu An additional copy is also available in the Information Databases section of the National Software Exchange, which can be accessed from the URL: http://www.netlib.org/nse/home.html.

Several people have contributed to the entries in this glossary, especially Jack Dongarra and Geoffrey Fox and many of my colleagues at Edinburgh and Syracuse. Any further contributed entries, suggested revisions, or general comments will be gladly received at the email address above. This paper document is also available by anonymous ftp (ftp.npac.syr.edu) as Technical Report SCCS-629, or by request from NPAC at 111 College Place, Syracuse NY 13244-4100, USA.

A

acyclic graph (n.) a graph without any cycles.

adaptive (adj.) Taking available information into account. For example, an adaptive mesh-generating algorithm generates a finer resolution mesh near discontinuities such as boundaries and corners. An adaptive routing algorithm may send identical messages in different directions at different times depending on the local density information it has on message traffic. See also oblivious.

address generation (n.) a cycle during execution of an instruction in which an effective address is calculated by means of indexing or indirect addressing.

address space (n.) A region of a computer's total memory within which addresses are contiguous and may refer to one another directly. A shared memory computer has only one user-visible address space; a disjoint memory computer can have several.

adjacency matrix (n.) a Boolean matrix indicating for each pair of vertices I and J whether there is an edge from I to J.

all-pairs shortest-path problem (n.) given a weighted graph, find the shortest path between every pair of vertices.

alpha-beta pruning (n.) an algorithm used to determine the minimax value of a game tree. The alpha-beta algorithm avoids searching subtrees whose evaluation cannot influence the outcome of the search.

Amdahl's Law (n.) A rule first formalised by Gene Amdahl in 1967, which states that if F is the fraction of a calculation that is serial and 1-F the fraction that can be parallelized, then the speedup that can be achieved using P processors is 1/(F + (1-F)/P), which has a limiting value of 1/F for an infinite number of processors. Thus, no matter how many processors are employed, if a calculation has a 10% serial component, the maximum speedup obtainable is 10.
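As a worked illustration of the formula (a minimal C sketch of our own, not part of the original glossary; the function name amdahl_speedup is ours):

    #include <stdio.h>

    /* Amdahl's Law: speedup = 1 / (F + (1-F)/P), where F is the serial
       fraction of the calculation and P the number of processors. */
    static double amdahl_speedup(double F, int P)
    {
        return 1.0 / (F + (1.0 - F) / P);
    }

    int main(void)
    {
        const double F = 0.10;  /* a 10% serial component */
        const int procs[] = {1, 10, 100, 1000};
        for (int i = 0; i < 4; i++)
            printf("P = %4d  speedup = %5.2f\n",
                   procs[i], amdahl_speedup(F, procs[i]));
        /* The printed speedups approach the limiting value 1/F = 10. */
        return 0;
    }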
AND tree (n.) a search tree whose nonterminal nodes are all AND nodes.

AND-parallelism (n.) A form of parallelism exploited by some implementations of parallel logic programming languages, in which the terms in conjunctions are evaluated simultaneously, and parent computations are allowed to proceed only once their immediate children have completed. See also OR-parallelism.

AND/OR tree (n.) a search tree with both AND and OR nonterminal nodes.

antidependence (n.) See dependence.

applicative language (n.) a language that performs all computations by applying functions to values.

architecture (n.) The basic plan along which a computer has been constructed. Popular parallel architectures include processor arrays, bus-based multiprocessors (with caches of various sizes and structures) and disjoint memory multicomputers. See also Flynn's taxonomy.

arity (n.) The number of arguments taken by a function. Arity is also sometimes used in place of valence.

ARPAnet (n.) The US Advanced Research Project Agency network, which was the first large multi-site computer network.

array constant (n.) an array reference within an iterative loop, all of whose subscripts are invariant.

array processor (n.) Any computer designed primarily to perform data parallel calculations on arrays or matrices. The two principal architectures used are the processor array and the vector processor.

artificial intelligence (n.) A class of problems such as pattern recognition, decision making and learning in which humans are clearly very proficient compared with current computers.

associative address (n.) a method whereby access is made to an item whose key matches an access key, as distinct from an access to an item at a specific address in memory. See associative memory.

associative memory (n.) Memory that can be accessed by content rather than by address; content addressable is often used synonymously. An associative memory permits its user to specify part of a pattern or key and retrieve the values associated with that pattern. The tuple space used to implement the generative communication model is an associative memory.

associative processor (n.) a processor array with associative memory.

asynchronous (adj.) Not guaranteed to enforce coincidence in clock time. In an asynchronous communication operation the sender and receiver may or may not both be engaged in the operation at the same instant in clock time. See also synchronous.

asynchronous algorithm (n.) See relaxation method.

asynchronous message passing (n.) See message passing.

atomic (adj.) Not interruptible. An atomic operation is one that always appears to have been executed as a unit.

attached vector-processor (n.) A specialised processor for vector computations, designed to be connected to a general purpose host processor. The host processor supplies I/O functions, a file system and other aspects of a computing system environment.

automatic vectorization (n.) A compiler transformation that takes code written in a serial language (often Fortran or C) and translates it into vector instructions. The vector instructions may be machine specific or in a source form such as array extensions or as subroutine calls to a vector library. See also compiler optimization.
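As a hedged illustration (our own sketch, not from the glossary): the serial C loop below is the kind an automatic vectorizer can translate into vector instructions, because no iteration depends on the result of another.

    /* y = a*x + y over n elements; every iteration is independent,
       so a vectorizing compiler may emit a vector multiply-add
       (or a call to a vector library routine) for the whole loop. */
    void saxpy(int n, float a, const float *x, float *y)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

The name saxpy follows the conventional BLAS routine of the same shape. Had the body instead read y[i] = a * x[i] + y[i-1], the loop-carried dependence would inhibit vectorization; see dependence.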
auxiliary memory (n.) Memory that is usually large in capacity, but slow and inexpensive, often a rotating magnetic or optical memory, whose main function is to store large volumes of data and programs not being actively used by the processors.

AVL tree (n.) a binary tree having the property that, for any node in the tree, the difference in height between the left and right subtrees of that node is no more than 1.

B

back substitution (n.) See LU decomposition.

banded matrix (n.) a matrix in which the non-zero elements are clustered around the main diagonal. If all the non-zero elements in the matrix are within m columns of the main diagonal, then the bandwidth of the matrix is 2m+1.

bandwidth (n.) A measure of the speed of information transfer, typically used to quantify the communication capability of multicomputer and multiprocessor systems. Bandwidth can express point-to-point or collective (bus) communication rates. Bandwidths are usually expressed in megabytes per second. Another usage of the term bandwidth is literally as the width of a band of elements in a matrix; see banded matrix.

bank conflict (n.) A bank "busy-wait" situation. Memory chip speeds are relatively slow when required to deliver a single word, so supercomputer memories are placed in a large number of independent banks (usually a power of 2). A vector of data laid out contiguously in memory, with one component per successive bank, can be accessed at one word per cycle (despite the intrinsic slowness of the chips) through pipelined delivery of vector-component words at high bandwidth. When the number of banks is a power of two, vectors requiring strides of a power of 2 can run into a bank conflict.

bank cycle time (n.) The time, measured in clock cycles, taken by a memory bank between the honoring of one request to fetch or store a data item and the acceptance of another such request. On most supercomputers this value is either four or eight clock cycles.

barrier (n.) A point in a program at which barrier synchronization occurs. See also fuzzy barrier.

barrier synchronization (n.) An event in which two or more processes belonging to some implicit or explicit group block until all members of the group have blocked. They may then all proceed. No member of the group may pass a barrier until all processes in the group have reached it. See also fuzzy barrier, and the code sketch following the entries below.

basic block (n.) A section of program that does not cross any conditional branches, loop boundaries or other transfers of control.
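Returning to barrier synchronization above, here is a minimal C sketch using MPI (our choice of library, not one prescribed by the glossary; MPI is one message passing system that provides a barrier primitive):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        printf("process %d: before the barrier\n", rank);
        /* No process returns from MPI_Barrier until every process
           in the communicator has called it. */
        MPI_Barrier(MPI_COMM_WORLD);
        printf("process %d: after the barrier\n", rank);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and run on, say, four processes, the barrier guarantees that every process has executed its "before" print before any process executes its "after" print, although the terminal may still interleave the output lines within each phase.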