Bluecat Linux Board Support Guide
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Linux Scheduler Documentation
Linux Scheduler Documentation. The kernel development community, Jul 14, 2020.
CHAPTER ONE: COMPLETIONS - "WAIT FOR COMPLETION" BARRIER APIS
1.1 Introduction: If you have one or more threads that must wait for some kernel activity to have reached a point or a specific state, completions can provide a race-free solution to this problem. Semantically they are somewhat like a pthread_barrier() and have similar use cases. Completions are a code synchronization mechanism which is preferable to any misuse of locks/semaphores and busy-loops. Any time you think of using yield() or some quirky msleep(1) loop to allow something else to proceed, you probably want to look into using one of the wait_for_completion*() calls and complete() instead. The advantage of using completions is that they have a well-defined, focused purpose, which makes it very easy to see the intent of the code; they also result in more efficient code, as all threads can continue execution until the result is actually needed, and both the waiting and the signalling are highly efficient, using low-level scheduler sleep/wakeup facilities. Completions are built on top of the waitqueue and wakeup infrastructure of the Linux scheduler. The event the threads on the waitqueue are waiting for is reduced to a simple flag in 'struct completion', appropriately called "done". As completions are scheduling related, the code can be found in kernel/sched/completion.c.
1.2 Usage: There are three main parts to using completions:
• the initialization of the 'struct completion' synchronization object,
• the waiting part through a call to one of the variants of wait_for_completion(),
• the signaling side through a call to complete() or complete_all().
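A minimal kernel-module sketch of those three parts, assuming an illustrative worker thread (the names setup_done, worker_fn, and demo_init are invented for this example):

```c
#include <linux/completion.h>
#include <linux/kthread.h>
#include <linux/module.h>

/* 1. Initialization of the synchronization object (static form). */
static DECLARE_COMPLETION(setup_done);

static int worker_fn(void *data)
{
	/* ... perform the setup work the other thread is waiting for ... */
	complete(&setup_done);            /* 3. Signaling side. */
	return 0;
}

static int __init demo_init(void)
{
	kthread_run(worker_fn, NULL, "completion-demo");
	wait_for_completion(&setup_done); /* 2. Waiting part: sleeps until complete(). */
	pr_info("completion-demo: setup finished\n");
	return 0;
}
module_init(demo_init);
MODULE_LICENSE("GPL");
```

For a dynamically allocated object, init_completion() would be used instead of the static DECLARE_COMPLETION().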
Advances in Mobile Cloud Computing and Big Data in the 5G Era (Studies in Big Data)
Studies in Big Data, Volume 22. Constandinos X. Mavromoustakis, George Mastorakis, Ciprian Dobre (Editors): Advances in Mobile Cloud Computing and Big Data in the 5G Era. Series editor: Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland, e-mail: [email protected].
About this Series: The series "Studies in Big Data" (SBD) publishes new developments and advances in the various areas of Big Data, quickly and with high quality. The intent is to cover the theory, research, development, and applications of Big Data, as embedded in the fields of engineering, computer science, physics, economics, and the life sciences. The books of the series address the analysis and understanding of large, complex, and/or distributed data sets generated from recent digital sources, coming from sensors or other physical instruments as well as simulations, crowdsourcing, social networks, or other internet transactions such as emails or video click streams. The series contains monographs, lecture notes, and edited volumes in Big Data spanning the areas of computational intelligence (including neural networks, evolutionary computation, soft computing, and fuzzy systems), as well as artificial intelligence, data mining, modern statistics, operations research, and self-organizing systems. Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution, which enable both wide and rapid dissemination of research output. More information about this series at http://www.springer.com/series/11970.
On the Defectiveness of SCHED_DEADLINE w.r.t. Tardiness and Affinities, and a Partial Fix, by Stephen Tang, Luca Abeni, and James H. Anderson
On the Defectiveness of SCHED_DEADLINE w.r.t. Tardiness and Affinities, and a Partial Fix. Stephen Tang and James H. Anderson ({sytang,anderson}@cs.unc.edu), University of North Carolina at Chapel Hill, Chapel Hill, USA; Luca Abeni ([email protected]), Scuola Superiore Sant'Anna, Pisa, Italy.
ABSTRACT: SCHED_DEADLINE (DL for short) is an Earliest-Deadline-First (EDF) scheduler included in the Linux kernel. A question motivated by DL is how EDF should be implemented in the presence of CPU affinities to maintain optimal bounded tardiness guarantees. Recent works have shown that under arbitrary affinities, DL does not maintain such guarantees. Such works have also shown that repairing DL to maintain these guarantees would likely require an impractical overhaul of the existing code. In this work, we show that for the special case where affinities are semi-partitioned, DL can be modified to maintain tardiness guarantees with minor changes. We also [...]
From a later passage on admission control (AC): ... create new DL tasks until existing DL tasks reduce their bandwidth consumption. Note that preventing over-utilization is only a necessary condition for guaranteeing that tasks meet deadlines. Instead of guaranteeing that deadlines will be met, AC satisfies two goals [3]. The first goal is to provide performance guarantees. Specifically, AC guarantees to each DL task that its long-run consumption of CPU bandwidth remains within a bounded margin of error from a requested rate. The second goal is to avoid the starvation of lower-priority non-DL workloads. Specifically, AC guarantees that some user-specified amount of CPU bandwidth will remain for the execution of non-DL workloads.
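For context on how a task obtains the CPU bandwidth that admission control accounts for: a thread requests SCHED_DEADLINE from user space through the sched_setattr() system call, passing its runtime, deadline, and period in nanoseconds. A minimal sketch follows; struct sched_attr is declared locally because older libc headers may not provide it, and the 10/30/100 ms values are chosen purely for illustration.

```c
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

/* Declared here because older libc headers may not ship struct sched_attr. */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	/* SCHED_DEADLINE parameters, in nanoseconds. */
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size           = sizeof(attr);
	attr.sched_policy   = SCHED_DEADLINE;
	attr.sched_runtime  = 10ULL  * 1000 * 1000;  /* up to 10 ms of CPU time...        */
	attr.sched_deadline = 30ULL  * 1000 * 1000;  /* ...by a 30 ms relative deadline...  */
	attr.sched_period   = 100ULL * 1000 * 1000;  /* ...in every 100 ms period.          */

	/* No glibc wrapper on older systems; invoke the raw syscall.
	 * Admission control rejects the request if the bandwidth does not fit. */
	if (syscall(SYS_sched_setattr, 0 /* this thread */, &attr, 0) < 0) {
		perror("sched_setattr");
		return 1;
	}

	/* ...periodic real-time work would run here under EDF scheduling... */
	return 0;
}
```

If the requested bandwidth cannot be admitted on the CPUs the task is allowed to run on, the call fails and the task keeps its previous scheduling policy.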
RTEMS C User Documentation, Release 4.11.3. © 2016, RTEMS Project (built 15th February 2018)
RTEMS C User Documentation, Release 4.11.3. Copyright 2016, RTEMS Project (built 15th February 2018).
Contents, Part I: RTEMS C User's Guide
1 Preface
2 Overview: Introduction; Real-time Application Systems; Real-time Executive; RTEMS Application Architecture; RTEMS Internal Architecture; User Customization and Extensibility; Portability; Memory Requirements; Audience; Conventions; Manual Organization
3 Key Concepts: Introduction; Objects (Object Names; Object IDs: Thirty-Two Object ID Format, Sixteen Bit Object ID Format; Object ID Description); Communication and Synchronization; Time; Memory Management
4 RTEMS Data Types: Introduction; List of Data Types
Program Review Department of Computer Science
PROGRAM REVIEW, DEPARTMENT OF COMPUTER SCIENCE, UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL, JANUARY 13-15, 2009.
Table of Contents: 1 Introduction; 2 Program Overview (2.1 Mission; 2.2 Demand; 2.3 Interdisciplinary activities and outreach; 2.4 Inter-institutional perspective; 2.5 Previous evaluations); 3 Curricula (3.1 Undergraduate Curriculum; 3.1.1 Bachelor of Science; 3.1.2 Bachelor of Arts (proposed))
A Modeling Framework for Network Processor Systems
A Modeling Framework for Network Processor Systems. Patrick Crowley and Jean-Loup Baer, Department of Computer Science & Engineering, University of Washington, Seattle, WA 98195-2350, {pcrowley, baer}@cs.washington.edu.
Abstract: This paper introduces a modeling framework for network processing systems. The framework is composed of independent application, system, and traffic models which describe router functionality, system resources/organization, and packet traffic, respectively. The framework uses the Click Modular router to describe router functionality. Click modules are mapped onto an object-based description of the system hardware and are profiled and simulated to determine maximum packet flow through the system and aggregate resource utilization for a given traffic model. This paper presents several modeling examples of uniprocessor and multiprocessor systems executing IPv4 routing and IPSec VPN encryption/decryption. Model-based performance estimates are compared to the measured performance of the real systems being modeled; the estimates are found to be accurate within 10%.
The framework is driven by questions such as: can a system S support an application W at the target line rate and number of interfaces? If not, where are the bottlenecks? If yes, can S support application W' (that is, application W plus some new task) under the same constraints? Will a given hardware assist improve system performance relative to a software implementation of the same task? How sensitive is system performance to input traffic?
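As a rough illustration of the kind of bottleneck estimate such a profile-driven model can produce (a generic back-of-the-envelope sketch, not the paper's actual framework): if each packet-processing element has a profiled per-packet cycle cost on a processor with a known clock rate, the slowest element caps the achievable packet rate. All element names and numbers below are invented.

```c
#include <stdio.h>

/* Hypothetical profiled costs, in processor cycles per packet. */
struct element { const char *name; double cycles_per_pkt; };

int main(void)
{
	const double clock_hz   = 1.4e9;    /* assumed processor clock rate          */
	const double target_pps = 1.488e6;  /* e.g. 1 Gb/s of minimum-size packets   */

	struct element pipeline[] = {
		{ "FromDevice",  180.0 },
		{ "IPv4 lookup", 520.0 },
		{ "IPsec ESP",  2600.0 },
		{ "ToDevice",    160.0 },
	};

	double worst = 0.0;
	const char *bottleneck = "none";
	for (size_t i = 0; i < sizeof(pipeline) / sizeof(pipeline[0]); i++) {
		if (pipeline[i].cycles_per_pkt > worst) {
			worst = pipeline[i].cycles_per_pkt;
			bottleneck = pipeline[i].name;
		}
	}

	/* Single processor handling one packet at a time. */
	double max_pps = clock_hz / worst;
	printf("bottleneck: %s, max rate: %.2f Mpps (target %.2f Mpps)\n",
	       bottleneck, max_pps / 1e6, target_pps / 1e6);
	return 0;
}
```

Comparing max_pps against the packet rate implied by the target line rate answers the first question above; the element with the largest cycle count is the bottleneck.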
Upgrading and Repairing Pcs, 21St Edition Editor-In-Chief Greg Wiegand Copyright © 2013 by Pearson Education, Inc
Contents at a Glance: Introduction; 1 Development of the PC; 2 PC Components, Features, and System Design; 3 Processor Types and Specifications; 4 Motherboards and Buses; 5 BIOS; 6 Memory; 7 The ATA/IDE Interface; 8 Magnetic Storage Principles; 9 Hard Disk Storage; 10 Flash and Removable Storage; 11 Optical Storage; 12 Video Hardware; 13 Audio Hardware; 14 External I/O Interfaces; 15 Input Devices; 16 Internet Connectivity; 17 Local Area Networking; 18 Power Supplies; 19 Building or Upgrading Systems; 20 PC Diagnostics, Testing, and Maintenance; Index.
Scott Mueller. 800 East 96th Street, Indianapolis, Indiana 46240.
Upgrading and Repairing PCs, 21st Edition. Copyright © 2013 by Pearson Education, Inc. All rights reserved. No part of this book shall be reproduced, stored in a retrieval system, or transmitted by any means, electronic, mechanical, photocopying, recording, or otherwise, without written permission from the publisher. No patent liability is assumed with respect to the use of the information contained herein. Although every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions. Nor is any liability assumed for damages resulting from the use of the information contained herein. ISBN-13: 978-0-7897-5000-6; ISBN-10: 0-7897-5000-7. Library of Congress Cataloging-in-Publication Data is on file.
Staff: Editor-in-Chief Greg Wiegand; Acquisitions Editor Rick Kughen; Development Editor Todd Brakke; Managing Editor Sandra Schroeder; Project Editor Mandie Frank; Copy Editor Sheri Cain.
Network Store: Exploring Slicing in Future 5G Networks
Network Store: Exploring Slicing in Future 5G Networks. Navid Nikaein (EURECOM), Eryk Schiller (University of Bern), Romain Favraud (EURECOM, DCNS), Kostas Katsalis (University of Thessaly), Donatos Stavropoulos (University of Thessaly), Islam Alyafawi (University of Bern), Zhongliang Zhao (University of Bern), Torsten Braun (University of Bern), and Thanasis Korakis (University of Thessaly). Contact: [email protected], [email protected], {kkatsalis,dostavro,korakis}@uth.gr.
ABSTRACT: In this paper, we present a revolutionary vision of 5G networks, in which SDN programs wireless network functions, and where Mobile Network Operators (MNO), Enterprises, and Over-The-Top (OTT) third parties are provided with an NFV-ready Network Store. The proposed Network Store serves as a digital distribution platform of programmable Virtualized Network Functions (VNFs) that enable 5G application use-cases. Currently existing application stores, such as Apple's App Store for iOS applications, Google's Play Store for Android, or Ubuntu's Software Center, deliver applications to user-specific software platforms. Our vision is to provide a digital marketplace, gathering 5G-enabling Network Applications and Network Functions, written to run on top of commodity cloud infrastructures, connected to remote radio heads (RRH).
From a later passage: ... coupling between control and data planes, stateful design, and dependency on dedicated hardware. The exploitation of cloud technologies, Software-Defined Networking (SDN), and Network Function Virtualization (NFV) can provide the necessary tools to break down the vertical system organization into a set of horizontal micro-service network architectures. These can be used to combine relevant network functions together, assign the target performance parameters, and map them onto infrastructure resources. In this paper, we present the design of a 5G-ready architecture and an NFV-based Network Store that can serve as a digital distribution platform for 5G application use-cases. The goal of the Network Store is to provide programmable pieces of code that on-the-fly reserve required resources, deploy and run the necessary software components, configure [...]
Embedded Operating Systems
7 Embedded Operating Systems. Claudio Scordino (Evidence SRL, Italy), Errico Guidieri (Evidence SRL, Italy), Bruno Morelli (Evidence SRL, Italy), Andrea Marongiu (Swiss Federal Institute of Technology in Zurich (ETHZ), Switzerland, and University of Bologna, Italy), Giuseppe Tagliavini (University of Bologna, Italy), and Paolo Gai (Evidence SRL, Italy).
In this chapter, we will provide a description of existing open-source operating systems (OSs) which have been analyzed with the objective of providing a porting for the reference architecture described in Chapter 2. Among the various possibilities, the ERIKA Enterprise RTOS (Real-Time Operating System) and Linux with preemption patches have been selected. A description of the porting effort on the reference architecture has also been provided.
7.1 Introduction: In the past, OSs for high-performance computing (HPC) were based on custom-tailored solutions to fully exploit all performance opportunities of supercomputers. Nowadays, instead, HPC systems are moving away from in-house OSs to more generic OS solutions like Linux. Such a trend can be observed in the TOP500 list [1], which includes the 500 most powerful supercomputers in the world and in which Linux dominates the competition. In fact, in around 20 years, Linux has managed to conquer the entire TOP500 list from scratch (achieving this for the first time in November 2017). Each manufacturer, however, still implements specific changes to the Linux OS to better exploit specific computer hardware features. This is especially true for computing nodes, in which lightweight kernels are used to speed up the computation. (Figure 7.1: Number of Linux-based supercomputers in the TOP500 list.) Linux is a full-featured OS, originally designed to be used in server or desktop environments.
CHAPTER 4: Motherboards and Buses
Motherboard Form Factors. Without a doubt, the most important component in a PC system is the main board or motherboard. Some companies refer to the motherboard as a system board or planar. The terms motherboard, main board, system board, and planar are interchangeable, although I prefer the motherboard designation. This chapter examines the various types of motherboards available and those components typically contained on the motherboard and motherboard interface connectors. Several common form factors are used for PC motherboards. The form factor refers to the physical dimensions (size and shape) as well as certain connector, screw hole, and other positions that dictate into which type of case the board will fit. Some are true standards (meaning that all boards with that form factor are interchangeable), whereas others are not standardized enough to allow for interchangeability. Unfortunately, these nonstandard form factors preclude any easy upgrade or inexpensive replacement, which generally means they should be avoided. The more commonly known PC motherboard form factors include the following:
Obsolete Form Factors: Baby-AT; Full-size AT; LPX (semiproprietary); WTX (no longer in production); ITX (flex-ATX variation, never produced); NLX.
Modern Form Factors: ATX; micro-ATX; Flex-ATX; Mini-ITX (flex-ATX variation).
All Others: Fully proprietary designs (certain Compaq, Packard Bell, Hewlett-Packard, notebook/portable systems, and so on).
Motherboards have evolved over the years from the original Baby-AT form factor boards used in the original IBM PC and XT to the current ATX and NLX boards used in most full-size desktop and tower systems.
DeepMatch: Practical Deep Packet Inspection in the Data Plane Using Network Processors
DeepMatch: Practical Deep Packet Inspection in the Data Plane using Network Processors. Joel Hypolite (The University of Pennsylvania), John Sonchack (Princeton University), Shlomo Hershkop (The University of Pennsylvania), Nathan Dautenhahn (Rice University), André DeHon (The University of Pennsylvania), and Jonathan M. Smith (The University of Pennsylvania).
ABSTRACT: Restricting data plane processing to packet headers precludes analysis of payloads to improve routing and security decisions. DeepMatch delivers line-rate regular expression matching on payloads using Network Processors (NPs). It further supports packet reordering to match patterns in flows that cross packet boundaries. Our evaluation shows that an implementation of DeepMatch, on a 40 Gbps Netronome NFP-6000 SmartNIC, achieves up to line rate for streams of unrelated packets and up to 20 Gbps when searches span multiple packets within a flow. In contrast with prior work, this throughput is data-independent and adds no burstiness. DeepMatch opens new opportunities for programmable data planes.
CCS CONCEPTS: • Networks → Deep packet inspection; Programming interfaces; Programmable networks.
KEYWORDS: Network processors, Programmable data planes, P4, SmartNIC.
Figure 1: DPI today (left and network) is limited by performance, scalability, and programming/management complexities (shaded red) inherent to the underlying deployment models. DeepMatch (right) pushes DPI into the commodity SmartNICs currently used to accelerate header filtering and integrates DPI with P4, which improves performance and scalability while enabling a simpler deployment model.
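DeepMatch itself compiles rules for the NFP's flow-processing cores, but the operation it performs per flow (scanning an in-order, reassembled payload for a rule's regular expression) can be illustrated with a small host-side sketch using POSIX regex; the rule and packet contents below are hypothetical.

```c
#include <regex.h>
#include <stdio.h>
#include <string.h>

/* Scan a contiguous payload buffer for one IDS-style pattern.
 * Returns 1 on match, 0 on no match, -1 on a bad pattern. */
static int payload_matches(const char *pattern, const char *payload)
{
	regex_t re;
	int hit;

	if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0)
		return -1;
	hit = (regexec(&re, payload, 0, NULL, 0) == 0);
	regfree(&re);
	return hit;
}

int main(void)
{
	/* Hypothetical rule and a payload split across two packets. */
	const char *rule = "GET /.*[.]php[?]id=[0-9]+";
	const char *pkt1 = "GET /index.ph";
	const char *pkt2 = "p?id=42 HTTP/1.1\r\n";

	char flow[256];
	snprintf(flow, sizeof(flow), "%s%s", pkt1, pkt2);  /* in-order reassembly */

	/* Neither packet matches alone; only the reassembled flow does,
	 * which is why matching across packet boundaries matters. */
	printf("per-packet match: %d %d\n",
	       payload_matches(rule, pkt1), payload_matches(rule, pkt2));
	printf("flow match:       %d\n", payload_matches(rule, flow));
	return 0;
}
```

A data-plane implementation cannot simply buffer whole flows on a host; DeepMatch's contribution is performing this kind of matching at line rate on the SmartNIC, including patterns that straddle packet boundaries.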
Software Self-Healing Using Error Virtualization
Software Self-healing Using Error Virtualization. Stylianos Sidiroglou. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences, Columbia University, 2008. © 2008 Stylianos Sidiroglou. All Rights Reserved.
ABSTRACT: Despite considerable efforts in both research and development strategies, software errors and subsequent security vulnerabilities continue to be a significant problem for computer systems. The accepted wisdom is to approach the problem with a multitude of tools, such as diligent software development strategies, dynamic bug finders, and static analysis tools, in an attempt to eliminate as many bugs as possible. Unfortunately, history has shown that it is very hard to achieve bug-free software. The situation is further exacerbated by the exorbitant cost of system downtime, which some estimates place at six million dollars per hour. In the absence of perfect software, retrofitting error toleration and recovery techniques onto systems not designed to deal with failures becomes a necessary complement to proactive approaches. Towards this goal, this dissertation introduces and evaluates a set of techniques for recovering program execution in the presence of faults by effectively retrofitting legacy applications with exception handling techniques: Error Virtualization and ASSURE. The main premise of the approach is that there is a mapping between faults that may occur during program execution and a finite set of errors that are explicitly handled by the program's code. Experimental results are presented to support our hypothesis. The results demonstrate that our techniques can recover program execution in the case of failures in 80 to 90% of the examined cases, based on the technique used.
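A toy sketch of that premise (mapping a fault inside a routine onto an error code its callers already handle), using a signal handler and sigsetjmp; this illustrates the idea only and is not the dissertation's actual rescue-point or ASSURE mechanism, and the fault in main() is deliberately contrived.

```c
#include <setjmp.h>
#include <signal.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static sigjmp_buf recovery_point;

static void fault_handler(int sig)
{
	(void)sig;
	/* Unwind to the most recent recovery point instead of crashing. */
	siglongjmp(recovery_point, 1);
}

/* A legacy routine whose callers already check for a -1 error return. */
static int parse_request(const char *req, char *out, size_t outlen)
{
	if (req == NULL)
		return -1;                   /* error path the program already handles */
	strncpy(out, req, outlen - 1);   /* imagine a latent fault lurking here     */
	out[outlen - 1] = '\0';
	return 0;
}

/* Error-virtualized wrapper: a crash-inducing fault inside parse_request()
 * is mapped onto the same -1 that its existing error path returns. */
static int parse_request_ev(const char *req, char *out, size_t outlen)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = fault_handler;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGSEGV, &sa, NULL);

	if (sigsetjmp(recovery_point, 1) != 0)
		return -1;                   /* fault "virtualized" into a known error */
	return parse_request(req, out, outlen);
}

int main(void)
{
	char buf[64];

	/* A wild pointer would normally crash the process; here it degrades
	 * into the ordinary error path that callers already know how to handle. */
	if (parse_request_ev((const char *)0x1, buf, sizeof(buf)) < 0)
		fprintf(stderr, "request rejected, service keeps running\n");
	return 0;
}
```

In the dissertation's terms, the fault is "virtualized" into one of the finite set of errors the program's code already handles explicitly.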