The Quadrics Network (QsNet): High-Performance Clustering Technology


Proceedings of the 9th IEEE Hot Interconnects (HotI'01), Palo Alto, California, August 2001.

Fabrizio Petrini, Wu-chun Feng, Adolfy Hoisie, Salvador Coll, and Eitan Frachtenberg
Computer & Computational Sciences Division, Los Alamos National Laboratory
{fabrizio,feng,hoisie,scoll,eitanf}@lanl.gov

(This work was supported by the U.S. Department of Energy through Los Alamos National Laboratory contract W-7405-ENG-36. This paper is LA-UR 01-4100.)

Abstract

The Quadrics interconnection network (QsNet) contributes two novel innovations to the field of high-performance interconnects: (1) integration of the virtual-address spaces of individual nodes into a single, global, virtual-address space and (2) network fault tolerance via link-level and end-to-end protocols that can detect faults and automatically re-transmit packets. QsNet achieves these feats by extending the native operating system in the nodes with a network operating system and specialized hardware support in the network interface. As these and other important features of QsNet can be found in the InfiniBand specification, QsNet can be viewed as a precursor to InfiniBand.

In this paper, we present an initial performance evaluation of QsNet. We first describe the main hardware and software features of QsNet, followed by the results of benchmarks that we ran on our experimental, Intel-based, Linux cluster built around QsNet. Our initial analysis indicates that QsNet performs remarkably well, e.g., user-level latency under 2 μs and bandwidth over 300 MB/s.

Keywords: performance evaluation, interconnection networks, user-level communication, InfiniBand, Linux, MPI.

1 Introduction

With the increased importance of scalable system-area networks for cluster (super)computers, web-server farms, and network-attached storage, the interconnection network and its associated software libraries and hardware have become critical components in achieving high performance. Such components will greatly impact the design, architecture, and use of the aforementioned systems in the future.

Key players in high-speed interconnects include Gigabit Ethernet (GigE) [11], GigaNet [13], SCI [5], Myrinet [1], and GSN (HiPPI-6400) [12]. These interconnect solutions differ from one another with respect to their architecture, programmability, scalability, performance, and ease of integration into large-scale systems. While GigE resides at the low end of the performance spectrum, it provides a low-cost solution. GigaNet, GSN, Myrinet, and SCI add programmability and performance by providing communication processors on the network interface cards and implementing different types of user-level communication protocols.

The Quadrics network (QsNet) surpasses the above interconnects in functionality by including a novel approach to integrate the local virtual memory of a node into a globally shared, virtual-memory space; a programmable processor in the network interface that allows the implementation of intelligent communication protocols; and an integrated approach to network fault detection and fault tolerance. Consequently, QsNet already possesses many of the salient aspects of InfiniBand [2], an evolving standard that also provides an integrated approach to high-performance communication.

2 QsNet

QsNet consists of two hardware building blocks: a programmable network interface called Elan [9] and a high-bandwidth, low-latency, communication switch called Elite [10]. With respect to software, QsNet provides several layers of communication libraries that trade off between performance and ease of use. These hardware and software components combine to enable QsNet to provide the following: (1) efficient and protected access to a global virtual memory via remote DMA operations and (2) enhanced network fault tolerance via link-level and end-to-end protocols that can detect faults and automatically re-transmit packets.
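To make the programming model just described concrete, the sketch below shows what a user-level remote-DMA put into the globally shared virtual address space could look like. The qs_* names and signatures are hypothetical illustrations, not the actual Quadrics library API; the point is only that a remote write targets a global virtual address and that fault detection and retransmission are hidden below the call.

```c
/*
 * Minimal sketch of a user-level remote-DMA "put" against a globally
 * shared virtual address space.  The qs_* names and types are hypothetical
 * illustrations of the ideas in Section 2, not the actual Quadrics API.
 */
#include <stddef.h>
#include <stdint.h>

typedef struct qs_dma_handle qs_dma_handle;    /* opaque per-transfer handle  */

/* Hypothetical calls: start a remote write and wait for the end-to-end ack. */
qs_dma_handle *qs_rdma_put(int dest_vp,        /* destination virtual process */
                           void *global_dst,   /* address in the global space */
                           const void *src,    /* local source buffer         */
                           size_t len);
int qs_rdma_wait(qs_dma_handle *h);            /* 0 on success                */

int send_block(int dest_vp, void *global_dst, const void *buf, size_t len)
{
    /* The NIC performs the transfer; the link-level and end-to-end protocols
     * detect faults and re-transmit, so the caller only sees success or a
     * hard failure. */
    qs_dma_handle *h = qs_rdma_put(dest_vp, global_dst, buf, len);
    if (!h)
        return -1;
    return qs_rdma_wait(h);
}
```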
2.1 Elan Network Interface

The Elan network interface connects the high-performance, multi-stage Quadrics network to a processing node containing one or more CPUs. In addition to generating and accepting packets to and from the network, the Elan provides substantial local processing power to implement high-level, message-passing protocols such as MPI. The internal functional structure of the Elan, shown in Figure 1, centers around two primary processing engines: the microcode processor and the thread processor.

[Figure 1. Elan Functional Units: link mux with two input FIFOs, microcode processor, thread processor, inputter, DMA buffers, 64-bit data bus at 100 MHz, MMU and TLB with table-walk engine, clock and statistics registers, 4-way set-associative cache, SDRAM interface, and 64-bit PCI interface at 66 MHz.]

The 32-bit microcode processor supports four threads of execution, where each thread can independently issue pipelined memory requests to the memory system. Up to eight requests can be outstanding at any given time. The scheduling for the microcode processor enables a thread to wake up, schedule a new memory access on the result of a previous memory access, and go back to sleep in as few as two system-clock cycles.

The four microcode threads are described below: (1) inputter thread: handles input transactions from the network; (2) DMA thread: generates DMA packets to be written to the network, prioritizes outstanding DMAs, and time-slices large DMAs so that small DMAs are not adversely blocked; (3) processor-scheduling thread: prioritizes and controls the scheduling and descheduling of the thread processor; (4) command-processor thread: handles operations requested by the host (i.e., "command") processor at user level.
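As a rough illustration of how the host and the DMA thread might divide the work described above, the following sketch posts a made-up DMA descriptor and models the time-slicing of a large transfer. The descriptor layout, slice size, and function names are assumptions for illustration only, not the real Elan formats or register interface.

```c
/*
 * Illustrative sketch only: a made-up DMA descriptor and a model of the DMA
 * thread's time-slicing loop, showing why short DMAs are not starved behind
 * long ones.  This is NOT the real Elan descriptor layout.
 */
#include <stdint.h>

#define DMA_SLICE_BYTES  (4 * 1024)    /* assumed slice size, for illustration */

typedef struct {
    uint64_t src_vaddr;     /* source address in the sender's virtual space   */
    uint64_t dst_vaddr;     /* destination address in the global space        */
    uint32_t length;        /* total transfer length in bytes                 */
    uint32_t dest_vp;       /* destination virtual process number             */
} dma_desc;

/* Model of the DMA thread servicing one descriptor in bounded slices. */
static void dma_thread_service(dma_desc *d,
                               void (*emit_packet)(const dma_desc *slice))
{
    uint32_t done = 0;
    while (done < d->length) {
        uint32_t chunk = d->length - done;
        if (chunk > DMA_SLICE_BYTES)
            chunk = DMA_SLICE_BYTES;   /* yield between slices so small DMAs
                                          queued behind this one can proceed  */
        dma_desc slice = *d;
        slice.src_vaddr += done;
        slice.dst_vaddr += done;
        slice.length     = chunk;
        emit_packet(&slice);           /* hand one packet's worth to the link */
        done += chunk;
    }
}
```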
The thread processor is a 32-bit RISC processor that aids in the implementation of higher-level messaging libraries without explicit intervention from the main CPU. To better support such implementations, its instruction set was augmented with extra instructions to construct network packets, manipulate events, efficiently schedule threads, and block save and restore a thread's state when scheduling.

The MMU translates 32-bit virtual addresses into either 28-bit local SDRAM physical addresses or 48-bit PCI physical addresses. To translate these addresses, the MMU contains a 16-entry, fully-associative translation lookaside buffer (TLB) and a small data path and state machine used to perform table walks to fill the TLB and to save trap information when the MMU faults.

The Elan contains routing tables that translate every virtual processor number into a sequence of tags that determine the network route. Several routing tables can be loaded in order to support different routing strategies.

The Elan has 8 KB of cache memory, organized as 4 sets of 2 KB, and 64 MB of SDRAM memory. The cache line size is 32 bytes. The cache performs pipelined fills from SDRAM and can issue a number of cache fills and write-backs for different units while still being able to service accesses for units that hit in the cache. The interface to SDRAM is 64 bits wide, with 8 check bits added to provide error-code correction. The memory interface also contains a 32-byte write buffer and a 32-byte read buffer.

2.2 Elite Switch

The Elite provides (1) eight bidirectional links supporting two virtual channels in each direction, (2) an internal 16 x 8 full crossbar switch (the crossbar has two input ports for each input link, to accommodate the two virtual channels), (3) a nominal transmission bandwidth of 400 MB/s in each link direction and a flow-through latency of 35 ns, (4) packet error detection and recovery, with routing and data transactions CRC-protected, (5) two priority levels combined with an aging mechanism to ensure fair delivery of packets in the same priority level, (6) hardware support for broadcasts, and (7) adaptive routing. The switches are interconnected in a quaternary fat-tree topology, which belongs to the more general class of k-ary n-trees [7, 6].

Elite networks are source-routed, and the transmission of each packet is pipelined into the network using wormhole flow control. At the link level, each packet is partitioned into smaller units called flits (flow control digits) [3] of 16 bits. Every packet is closed by an End-Of-Packet (EOP) token, but this is normally only sent after receipt of a packet-acknowledge token. This implies that every packet transmission creates a virtual circuit between source and destination.

Packets can be sent to multiple destinations using the broadcast capability of the network. For a broadcast packet to be successfully delivered, a positive acknowledgment must be received from all the recipients of the broadcast group. All Elans connected to the network are capable of receiving the broadcast packet but, if desired, the broadcast set can be limited to a subset of physically contiguous Elans.

2.3 Global Virtual Memory

The Elan can transfer information directly between the address spaces of groups of cooperating processes while maintaining hardware protection between the process groups. This capability is a sophisticated extension of the conventional virtual-memory mechanism and is known as virtual operation. Virtual operation is based on two concepts: (1) the Elan virtual memory and (2) the Elan context.

2.3.1 Elan Virtual Memory

The Elan contains an MMU to translate the virtual memory addresses issued by the various on-chip functional units (thread processor, DMA engine, and so on) into physical addresses. These physical memory addresses may refer to either Elan local memory (SDRAM) or the node's main memory. To support main-memory accesses, the configuration tables for the Elan MMU are synchronized with the main processor's MMU tables so that the virtual address space can be accessed by the Elan. The synchronization of the MMU tables is the responsibility of the system code and is invisible to the user programmer.

[Figure (labels only): the main-processor virtual address space and the Elan virtual address space shown side by side, each with stack, text, data, bss, and heap regions; the Elan space additionally contains the system and memory-allocator heap mappings.]
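The sketch below models the translation step described in Sections 2.1 and 2.3.1: a 32-bit Elan virtual address is resolved, through a 16-entry fully associative TLB, to either a 28-bit local-SDRAM physical address or a 48-bit PCI (main-memory) physical address. The entry count and address widths come from the text; the page size, table format, and helper names are assumptions for illustration only, not the hardware's actual table-walk algorithm.

```c
/*
 * Schematic model of the Elan address-translation path.  Field widths match
 * the paper's description; the page size and entry layout are assumptions.
 */
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 16
#define PAGE_SHIFT  13            /* assumed 8 KB pages, for illustration */

typedef enum { MEM_SDRAM, MEM_PCI } mem_space;

typedef struct {
    bool      valid;
    uint32_t  vpage;              /* virtual page number                  */
    uint64_t  ppage;              /* physical page number                 */
    mem_space space;              /* which physical space the page is in  */
} tlb_entry;

/* Returns true on a TLB hit; a miss would trigger the table-walk engine. */
static bool elan_translate(const tlb_entry tlb[TLB_ENTRIES],
                           uint32_t vaddr, uint64_t *paddr, mem_space *space)
{
    uint32_t vpage  = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

    for (int i = 0; i < TLB_ENTRIES; i++) {          /* fully associative */
        if (tlb[i].valid && tlb[i].vpage == vpage) {
            *space = tlb[i].space;
            *paddr = (tlb[i].ppage << PAGE_SHIFT) | offset;
            /* SDRAM targets fit in 28 bits, PCI targets in 48 bits. */
            return true;
        }
    }
    return false;                 /* miss: table walk fills the TLB */
}
```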