9.5(0.1) Open Automation Guide

Total pages: 16

File type: PDF, size: 1,020 KB

Open Automation Guide: Configuration and Command Line Reference. August 2014.

Notes, Cautions, and Warnings
NOTE: A NOTE indicates important information that helps you make better use of your computer.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Information in this publication is subject to change without notice.
Copyright © 2014 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. Dell and the Dell logo are trademarks of Dell Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. August 2014.

Contents

1 About this Guide
  Objectives
  Audience
  Supported Platforms and Required Dell Networking OS Versions
  Conventions
  Information Symbols
  Related Documents

2 Open Automation Framework
  Bare Metal Provisioning
  Smart Scripting
  Virtual Server Networking
  REST API
  Web Server with HTTP Support

3 Bare Metal Provisioning
  Introduction
  How it Works
  Prerequisites
  Standard Upgrades with BMP
  BMP Process Overview
  BMP Operations
  Configuring BMP
  Reload Modes
    BMP Mode
    Normal Mode
  BMP Commands and Examples
  System Boot and Set-Up Behavior in BMP Mode
    BMP Mode: Boot and Set-Up Behavior
    Reload Without a DHCP Server Offer
    Reload with a DHCP Server Offer Without a Dell Networking OS Offer
    Reload with a DHCP Server Offer and No Configuration File
    Reload with a DHCP Server Offer Without a DNS Server
  DHCP Offer Vendor-Specific Option for BMP
  Software Upgrade Using BMP
  Applying Configurations Using BMP Scripts
    Preconfiguration Scripts
    Post Configuration Scripts
    Auto-Execution Scripts
    Preconfiguration Process for Scripts
    Running Scripts
    Using the Post Configuration Script (BMP Mode Only)
    Reload Using the Auto-Execution Script (Normal Mode Only)
  Script Examples
    Auto-Execution Script - Normal Mode
    Preconfiguration Script - BMP Mode
    Post Configuration Script - BMP Mode
  BMP Operations on Servers Overview
    DHCP Server
    DHCP Server Settings
    DHCP Server IP Blacklist
    MAC-Based Configuration
    MAC-Based IP Address Assignment
    Class-Based Configuration
    File Server Settings
    Domain Name Server Settings

4 Bare Metal Provisioning CLI
  Overview
  Commands

5 Smart Scripting
  Overview
  Downloading the Smart Scripting Package
  Installing Smart Scripting
  Displaying Installed Packages
  Uninstalling Smart Scripting
  Limits on System Usage
  Supported UNIX Utilities
  Smart Utils
  Creating a User Name and Password for Smart Scripting
  Logging in to a NetBSD UNIX Shell
  Downloading Scripts to a Switch
  Setting a Search Path for Scripts
  Scheduling Time / Event-based Scripts
  Triggering a Script to Run
  SQLite
  NET SNMP Client ...
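The BMP chapters listed above cover applying configurations through preconfiguration, post-configuration, and auto-execution scripts, and Smart Scripting provides a NetBSD UNIX shell and common utilities in which such scripts run. To give a rough flavour of what a script of this kind does, below is a minimal, hypothetical post-configuration-style check written in Python. The file server address, file paths, port, and the "!" comment convention are all assumptions made for illustration; none of this is taken from the guide itself.

```python
#!/usr/bin/env python
"""Hypothetical post-configuration-style check, loosely in the spirit of the
BMP script examples listed in the guide's table of contents. All hosts, paths,
and file names below are invented for illustration."""

import socket
import sys
import time

FILE_SERVER = "192.0.2.10"              # assumed file server from the DHCP offer
CONFIG_FILE = "/tmp/switch-config.txt"  # assumed location of the downloaded config
LOG_FILE = "/tmp/bmp-postconfig.log"


def log(msg):
    """Append a timestamped line to the script's log file."""
    with open(LOG_FILE, "a") as fh:
        fh.write("%s %s\n" % (time.strftime("%Y-%m-%d %H:%M:%S"), msg))


def server_reachable(host, port=21, timeout=5):
    """Return True if a TCP connection to the file server succeeds (assumed FTP port)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def main():
    log("post-configuration script started")
    if not server_reachable(FILE_SERVER):
        log("file server %s unreachable, aborting" % FILE_SERVER)
        sys.exit(1)
    try:
        with open(CONFIG_FILE) as fh:
            # Skip blank lines and '!' separator/comment lines (typical of Dell Networking configs).
            lines = [l for l in fh if l.strip() and not l.startswith("!")]
        log("downloaded configuration has %d non-comment lines" % len(lines))
    except OSError as exc:
        log("could not read %s: %s" % (CONFIG_FILE, exc))
        sys.exit(1)
    log("post-configuration checks passed")


if __name__ == "__main__":
    main()
```

The guide's "Script Examples" chapter contains the actual preconfiguration, post-configuration, and auto-execution scripts supported in BMP and normal modes.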
Recommended publications
  • DMFS - A Data Migration File System for NetBSD
    DMFS - A Data Migration File System for NetBSD. William Studenmund, Veridian MRJ Technology Solutions, NASA Ames Research Center.
    Abstract: I have recently developed DMFS, a Data Migration File System, for NetBSD [1]. This file system provides kernel support for the data migration system being developed by my research group at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. This paper will first describe our data migration system to provide a context for DMFS, then it will describe DMFS. It also will describe the changes to NetBSD needed to make DMFS work. Then it will give an overview of the file archival and restoration procedures, and describe how some typical ...
    It was designed to support the mass storage systems deployed here at NAS under the NAStore 2 system. That system supported a total of twenty StorageTek NearLine tape silos at two locations, each with up to four tape drives. Each silo contained upwards of 5000 tapes, and had robotic pass-throughs to adjoining silos. The volman system is designed using a client-server model, and consists of three main components: the volman master, possibly multiple volman servers, and volman clients. The volman servers connect to each tape silo, mount and unmount tapes at the direction of the volman master, and provide tape services to clients. The volman master maintains a database of known tapes and locations, and directs the tape servers to move and mount tapes to service client requests.
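The division of labour in the volman system described above (clients ask the master, the master consults its tape database and directs a per-silo server to mount the tape) can be pictured with a toy sketch. Everything below is invented for illustration and is not the NAStore/volman interface or code:

```python
"""Toy sketch of the volman division of labour: the master keeps the tape
database and directs servers; servers drive a silo's drives; clients only
talk to the master. Purely illustrative -- not the NAStore/volman API."""


class VolmanServer:
    """Stands in for the per-silo server that mounts and unmounts tapes."""

    def __init__(self, silo):
        self.silo = silo

    def mount(self, tape_id, drive):
        print("silo %s: mounting tape %s on drive %d" % (self.silo, tape_id, drive))
        return "/dev/rmt/%s-%d" % (self.silo, drive)   # invented device path


class VolmanMaster:
    """Stands in for the master: tracks where each tape lives and picks a server."""

    def __init__(self):
        self.servers = {}      # silo name -> VolmanServer
        self.locations = {}    # tape id  -> silo name

    def add_server(self, silo, server, tapes):
        self.servers[silo] = server
        for tape in tapes:
            self.locations[tape] = silo

    def request_tape(self, tape_id, drive=0):
        """Client-facing call: find the tape's silo and ask that server to mount it."""
        silo = self.locations[tape_id]
        return self.servers[silo].mount(tape_id, drive)


# A client only ever asks the master for a tape:
master = VolmanMaster()
master.add_server("siloA", VolmanServer("siloA"), tapes=["T00042", "T00043"])
print(master.request_tape("T00042"))
```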
  • Review der Linux Kernel Sourcen von 4.9 auf 4.10 (Review of the Linux Kernel Sources from 4.9 to 4.10)
    Review der Linux Kernel Sourcen von 4.9 auf 4.10 (review of the Linux kernel sources from 4.9 to 4.10).
    Reviewed by: stecan. Tested by: stecan. Period of review and test: from Thursday, 11 January 2018, 07:26:18 (UTC+01) to Thursday, 11 January 2018, 07:44:27 (UTC+01). Report automatically generated with LxrDifferenceTable, V0.9.2.548. Account: stecan. Date: Friday, 4 May 2018, 13:43:07 CEST.
    Entries (no., line link, description):
    1. .mailmap#0140: Repo 9ebf73b275f0, Stephen, Tue Jan 10 16:57:57 2017 -0800. Description: mailmap: add codeaurora.org names for nameless email commits. Some codeaurora.org emails have crept in but the names don't exist for them. Add the names for the emails so git can match everyone up. Link: http://lkml.kernel.org/r/[email protected]
    2. .mailmap#0154
    3. .mailmap#0160
    4. CREDITS#2481: Repo 0c59d28121b9, Arnaldo, Mon Feb 13 14:15:44 2017 -0300. Description: MAINTAINERS: Remove old e-mail address. The ghostprotocols.net domain is not working, remove it from CREDITS and MAINTAINERS, and change the status to "Odd fixes", and since I haven't been maintaining those, remove my address from there. CREDITS: Remove outdated address information. This address hasn't been accurate for several years now.
  • The Virtual Filesystem Interface in 4.4BSD
    The Virtual Filesystem Interface in 4.4BSD. Marshall Kirk McKusick, Consultant and Author, Berkeley, California.
    ABSTRACT: This paper describes the virtual filesystem interface found in 4.4BSD. This interface is designed around an object-oriented virtual file node or "vnode" data structure. The vnode structure is described along with its method for dynamically expanding its set of operations. These operations have been divided into two groups: those to manage the hierarchical filesystem name space and those to manage the flat filestore. The translation of pathnames is described, as it requires a tight coupling between the virtual filesystem layer and the underlying filesystems through which the path traverses. This paper describes the filesystem services that are exported from the vnode interface to its clients, both local and remote. It also describes the set of services provided by the vnode layer to its client filesystems. The vnode interface has been generalized to allow multiple filesystems to be stacked together. After describing the stacking functionality, several examples of stacking filesystems are shown.
    (To appear in The Design and Implementation of the 4.4BSD Operating System, by Marshall Kirk McKusick et al., © 1995 by Addison-Wesley Publishing Company, Inc. Reprinted with the permission of the publisher. © 1995 The USENIX Association, Computing Systems, Vol. 8, No. 1, Winter 1995.)
    1. The Virtual Filesystem Interface. In early UNIX systems, the file entries directly referenced the local filesystem inode, see Figure 1 [Leffler et al. 1989]. This approach worked fine when there was a single filesystem implementation. However, with the advent of multiple filesystem types, the architecture had to be generalized.
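The stacking functionality mentioned at the end of the abstract is easy to picture with a toy model: each layer exports the full set of vnode operations and forwards the ones it does not implement to the vnode below it. The sketch below is a loose Python analogy with invented class and method names; the real 4.4BSD interface is a C vnode structure with an operations vector, not Python objects.

```python
"""Loose analogy of vnode stacking: a pass-through layer wraps a lower vnode
and forwards operations it does not override. Names are invented for
illustration only."""


class MemVnode:
    """A trivial leaf 'filestore' vnode: flat data plus a name-space table."""

    def __init__(self, data=b""):
        self.data = data
        self.entries = {}              # name -> child vnode (hierarchical name space)

    def lookup(self, name):
        return self.entries[name]

    def read(self):
        return self.data


class NullLayerVnode:
    """A stacked layer that bypasses every operation to the vnode below it."""

    def __init__(self, lower):
        self.lower = lower

    def lookup(self, name):
        # Re-wrap the result so the whole subtree stays inside this layer.
        return type(self)(self.lower.lookup(name))

    def read(self):
        return self.lower.read()


class UppercaseLayerVnode(NullLayerVnode):
    """A layer overriding one operation; everything else is inherited bypass."""

    def read(self):
        return self.lower.read().upper()


root = MemVnode()
root.entries["motd"] = MemVnode(b"stacked vnode layers\n")
stacked = UppercaseLayerVnode(root)            # stack the layer over the leaf tree
print(stacked.lookup("motd").read())           # b'STACKED VNODE LAYERS\n'
```

In 4.4BSD the same pass-through effect is obtained with a per-filesystem operations vector and a generic bypass routine rather than inheritance.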
  • Installing BSD Operating Systems on IBM Netvista S40
    INSTALLING BSD OPERATING SYSTEMS ON IBM NETVISTA S40. MICHO DURDEVICH. (Version 5.2005.)
    Abstract. We present several ways of setting up the FreeBSD operating system on the IBM Netvista S40, a so-called 'legacy free' computer. The difficulty arises because the machine has no standard AT keyboard controller, and the existing FreeBSD subroutines for the gate A20 and for the keyboard controller probes turn out to be inadequate. We discuss a replacement bootstrap code, which deals more carefully with the A20 issue. Some simple modifications to the FreeBSD kernel code are considered, too. A manual method for preparing a bootable installation CD, suitable for both the Netvista and all standard configurations, is examined. Installations of DragonFly, NetBSD, OpenBSD and OS/2 are also discussed.
    1. Introduction. 1.1. The Machine. In this note we shall talk about installing FreeBSD on a very interesting and elegant machine: the IBM Netvista S40. In its creator's own terminology, it is 'legacy-free'. The computer has no parallel, serial, AT keyboard, or PS/2 mouse ports. No floppy controller either. Instead, it has 5 USB ports (2 frontal and 3 rear) connected to a single USB controller. Besides these USB ports, the system has only standard video and audio connectors. The video controller is an Intel 82810E SVGA and the audio chip is an Intel ICH 82801AA, both integrated onboard. The CPU is an Intel PIII at 866 MHz. The machine is further equipped with a fast Intel Pro PCI network adapter containing a PXE/RIPL boot PROM. A quiet 20 GB Quantum Fireball HDD and a Liteon ATAPI CD-ROM, both connected as masters, constitute the storage subsystem.
  • Dell EMC Networking Open Automation Guide 9.12.1.0 October 2017
    Dell EMC Networking Open Automation Guide, 9.12.1.0, October 2017. Regulatory Model: Open Automation.
    Notes, cautions, and warnings. NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem. WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
    Copyright © 2017 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners. 2017 - 10.
    Contents
    1 About this Guide
      Audience
      Open Automation Features and Supported Platforms
      Conventions
      Related Documents
    2 Open Automation Framework ...
  • The Design of the NetBSD I/O Subsystems
    The Design of the NetBSD I/O Subsystems. SungWon Chung, Pusan National University.
    This book is dedicated to the open-source code developers in the NetBSD community. The original copy of this publication is available in an electronic form and it can be downloaded for free from http://arXiv.org. Copyright (c) 2002 by SungWon Chung. For non-commercial and personal use, this publication may be reproduced, stored in a retrieval system, or transmitted in any form by any means, electronic, mechanical, photocopying, recording or otherwise. For commercial use, no part of this publication can be reproduced by any means without the prior written permission of the author. NetBSD is the registered trademark of The NetBSD Foundation, Inc.
    Contents
    Preface
    Part I: Basics to Learn Filesystem
    1 Welcome to the World of Kernel!
      1.1 How Does a System Call Dive into Kernel from User Program?
        1.1.1 Example: write system call
        1.1.2 Ultra SPARC 0x7c CPU Trap
        1.1.3 Jump to the File Descriptor Layer
        1.1.4 Arriving at Virtual Filesystem Operations
      1.2 General Data Structures in Kernel such as List, Hash, Queue, ...
        1.2.1 Linked-Lists
        1.2.2 Tail Queues
        1.2.3 Hash
      1.3 Waiting and Sleeping in Kernel
      1.4 Kernel Lock Manager
        1.4.1 simplelock and lock
        1.4.2 Simplelock Interfaces
        1.4.3 Lock Interfaces
      1.5 Kernel Memory Allocation
      1.6 Resource Pool Manager ...
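The book's opening example, following a write system call from a user program down into the kernel, starts from nothing more exotic than an ordinary write call in user space. A trivial illustration of that starting point (not taken from the book):

```python
"""The user-space side of the book's first example. Every os.* call below is a
thin wrapper around a system call, and the write() is where execution crosses
into the kernel (on the UltraSPARC port, via the CPU trap the book describes)."""

import os

fd = os.open("/tmp/syscall-demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello from user space\n")   # write(2): traps into the kernel
os.fsync(fd)                               # fsync(2): pushes the data to stable storage
os.close(fd)                               # close(2)
```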
  • Performance and Protection in the ZoFS User-Space NVM File System (Mingkai Dong, Heng Bu, Jifei Yi, Benchao Dong, Haibo Chen)
    Performance and Protection in the ZoFS User-space NVM File System. Mingkai Dong, Heng Bu, Jifei Yi, Benchao Dong, Haibo Chen. Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University. October 2019, SOSP '19.
    Non-volatile memory (NVM) is coming with attractive features: fast (near-DRAM performance), persistent (durable data storage), and byte-addressable (CPU load/store access).
    File systems are designed for NVM. NVM file systems in kernel: BPFS [SOSP '09], PMFS [EuroSys '14], NOVA [FAST '16, SOSP '17], SoupFS [USENIX ATC '17]. User-space NVM file systems[1]: Aerie [EuroSys '14], Strata [SOSP '17]. [1] These file systems also require kernel-part support.
    User-space NVM file systems have benefits: easier to develop, port, and maintain[2]; flexible[3]; high performance due to kernel bypass[3,4]. [2] To FUSE or Not to FUSE: Performance of User-Space File Systems, FAST '17. [3] Aerie: Flexible File-System Interfaces to Storage-Class Memory, EuroSys '14. [4] Strata: A Cross Media File System, SOSP '17.
    Metadata is indirectly updated in user space: updates to metadata are performed by trusted components (the trusted FS service in Aerie, the kernel FS in Strata). Aerie writes data directly but updates metadata via IPCs; Strata appends a log in user space and digests it in the kernel. Indirect updates! Indirect updates are important but limit performance, e.g., when creating empty files in a shared directory ...
  • CS615 - System Administration: Filesystem and OS Boot Process Concepts
    CS615 - System Administration. Filesystem and OS Boot Process Concepts. Department of Computer Science, Stevens Institute of Technology. Jan Schaumann, [email protected], https://stevens.netmeister.org/615/. Lecture 03: Software Installation Concepts, February 10, 2020.
    Basic Filesystem Concepts: The UNIX Filesystem. The filesystem is responsible for storing the data on the disk. So to read/write data, it needs to know in which physical blocks the actual data is located; i.e., how to map files to the disk blocks.
    Let's pretend we're a filesystem...
      aws ec2 create-volume --size 1 --availability-zone us-east-1d
      aws ec2 attach-volume --volume-id XXX --instance-id XXX --device /dev/sda3
      dmesg
      for i in $(seq $((1024 * 128))); do
          printf '\0\0\0\0\0\0\0\0'
      done | dd of=/dev/xbd2d
    (The remaining slides in this excerpt repeat the "Basic Filesystem Concepts: The UNIX Filesystem" heading and "Let's pretend we're a filesystem..." over figures that are not reproduced here.)
  • Rethinking File Mapping for Persistent Memory
    Rethinking File Mapping for Persistent Memory. Ian Neal, Gefei Zuo, Eric Shiple, and Tanvir Ahmed Khan, University of Michigan; Youngjin Kwon, School of Computing, KAIST; Simon Peter, University of Texas at Austin; Baris Kasikci, University of Michigan. https://www.usenix.org/conference/fast21/presentation/neal
    This paper is included in the Proceedings of the 19th USENIX Conference on File and Storage Technologies, February 23-25, 2021. ISBN 978-1-939133-20-5. Open access to the Proceedings of the 19th USENIX Conference on File and Storage Technologies is sponsored by USENIX.
    Abstract: Persistent main memory (PM) dramatically improves IO performance. We find that this results in file systems on PM spending as much as 70% of the IO path performing file mapping (mapping file offsets to physical locations on storage media) on real workloads. However, even PM-optimized file systems perform file mapping based on decades-old assumptions. PM presents a number of challenges to consider for file mapping.
    From the introduction: "... on the underlying device—comprises up to 70% of the IO path of real workloads in a PM-optimized file system (as we show in §4.8). Even for memory-mapped files, file mapping is still involved in file appends. Yet, existing PM-optimized file systems either simply reuse mapping approaches [11, 14, 24] originally designed for slower block devices [29], or devise new approaches without rigorous analysis [15, 57]."
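For context, "file mapping" here simply means translating a file offset into a physical location on the device. A minimal, invented extent-style mapping (not one of the structures the paper analyses or proposes) makes the operation concrete:

```python
"""Minimal illustration of file mapping: translate a byte offset in a file to a
physical block on the device. The extent list is invented for illustration."""

BLOCK_SIZE = 4096

# Each extent maps a contiguous run of logical blocks to physical blocks:
# (first logical block, first physical block, number of blocks).
extents = [
    (0,  1000, 8),     # logical blocks 0-7   -> physical 1000-1007
    (8,  5000, 4),     # logical blocks 8-11  -> physical 5000-5003
    (12, 2200, 16),    # logical blocks 12-27 -> physical 2200-2215
]


def map_offset(offset):
    """Return (physical block, offset within block) for a byte offset in the file."""
    logical_block, in_block = divmod(offset, BLOCK_SIZE)
    for first_logical, first_physical, count in extents:
        if first_logical <= logical_block < first_logical + count:
            return first_physical + (logical_block - first_logical), in_block
    raise ValueError("offset %d is past the end of the mapped file" % offset)


print(map_offset(0))          # -> (1000, 0)
print(map_offset(9 * 4096))   # -> (5001, 0)
```

The paper's point is that this translation, however it is structured, sits on the IO path of every operation, so its design matters far more on PM than it did on slower block devices.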
  • Flexible Operating System Internals: The Design and Implementation of the Anykernel and Rump Kernels (Aalto University)
    Aalto University publication series, Doctoral Dissertations 171/2012. Antti Kantee: Flexible Operating System Internals: The Design and Implementation of the Anykernel and Rump Kernels.
    A doctoral dissertation completed for the degree of Doctor of Science in Technology, to be defended, with the permission of the Aalto University School of Science, at a public examination held in lecture hall T of the school in December at noon.
    Supervising professor: Professor Heikki Saikkonen. Preliminary examiners: Dr. Marshall Kirk McKusick (USA) and Professor Renzo Davoli (University of Bologna, Italy). Opponent: Dr. Peter Tröger (Hasso Plattner Institute, Germany). Antti Kantee, pooka@iki.fi.
    ISBN 978-952-60-4916-8 (printed), ISBN 978-952-60-4917-5 (pdf), ISSN-L 1799-4934, ISSN 1799-4934 (printed), ISSN 1799-4942 (pdf).
    Aalto University, School of Science, Department of Computer Science and Engineering. www.aalto.fi.
    Permission to use, copy, and/or distribute this document with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. Distributing modified copies is prohibited.
  • Kernel/User-Level Collaborative Persistent Memory File System with Efficiency and Protection
    Kernel/User-level Collaborative Persistent Memory File System with Efficiency and Protection. Youmin Chen, Youyou Lu, Bohong Zhu, Jiwu Shu. Tsinghua University.
    Abstract: Emerging high performance non-volatile memories recall the importance of efficient file system design. To avoid the virtual file system (VFS) and syscall overhead as in these kernel-based file systems, recent works deploy file systems directly in user level. Unfortunately, a user level file system can easily be corrupted by a buggy program with misused pointers, and is hard to scale on multi-core platforms because it incorporates a centralized coordination service. In this paper, we propose KucoFS, a Kernel and user-level collaborative file system. It consists of two parts: a user-level library with direct-access interfaces, and a kernel thread, which performs metadata updates and enforces write protection by toggling the permission bits in the page table.
    File systems have long been part of an operating system, and are placed in the kernel level to provide data protection from arbitrary user writes. System calls (syscalls) are used for the communication between the kernel and userspace. In the kernel, the virtual file system (VFS) is an abstraction layer that hides concrete file system designs to provide uniform accesses. However, both syscall and VFS incur non-negligible overhead in file systems for NVMs. Our evaluation on NOVA [33] shows that even the highly scalable and efficient NVM-aware file system still suffers great overhead in the VFS layer and fails to scale on some file operations (e.g., creat/unlink). For syscalls, the context switch overhead occupies up to 34% of the file system accessing time.
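A schematic way to picture the collaboration the abstract describes: the user-level library touches file data directly, while every metadata change is funnelled to a single trusted worker that alone may modify metadata (the role KucoFS gives to its kernel thread, which additionally toggles page-table permission bits). The sketch below is purely illustrative, uses invented names, and is not the KucoFS design or code:

```python
"""Schematic sketch of a kernel/user split: the user side writes data directly,
and only the trusted worker (standing in for a kernel thread) applies metadata
updates, in the order they were requested. Illustration only."""

import threading
import queue

data_pages = {}                 # user-writable: file name -> bytes (direct access)
metadata = {}                   # only the trusted worker below ever writes this dict
metadata_requests = queue.Queue()


def trusted_worker():
    """Plays the role of the kernel thread: the sole writer of metadata."""
    while True:
        op, name, value = metadata_requests.get()
        if op == "stop":
            break
        if op == "create":
            metadata[name] = {"size": 0}
        elif op == "resize":
            metadata[name]["size"] = value


def user_create(name):
    """User library: metadata changes are only requested, never applied directly."""
    metadata_requests.put(("create", name, None))


def user_write(name, payload):
    data_pages[name] = payload                          # direct, kernel-bypass data write
    metadata_requests.put(("resize", name, len(payload)))


worker = threading.Thread(target=trusted_worker)
worker.start()
user_create("log.txt")
user_write("log.txt", b"hello")
metadata_requests.put(("stop", None, None))
worker.join()
print(metadata["log.txt"])      # -> {'size': 5}
```

The protection argument in the paper rests on the fact that a buggy user program can at worst scribble on data it already owns, never on the metadata structures, since those are updated only by the trusted side.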
  • GNU/Linux Magazine No. 159: Virtualization with Linux Containers (LXC)
    Contents, issue No. 159. Editorial: "Prosel time!"
    I don't know whether you have noticed, but over the past few months we at Éditions Diamond have tried something new. If you cannot guess what I am talking about, dear GLMF reader, don't worry, that is normal, because the novelty concerned another title. The link with your favourite magazine is threefold. First, it involves a sister publication, none other than Linux Essentiel, and more precisely a special issue (that's one). Second, if you have been following us for a long time, you will know some of my background in image retouching since, I confess, I have "committed" several tutorials in previous GLMF special issues devoted to Gimp. And that Linux Essentiel special issue was precisely about Gimp (that's two - if one considers that there is a somewhat murky relationship between Gimp and GLMF). Beyond the simple fact that this special issue was aimed at readers new to Gimp, this "something new" was (and still is, but I will come back to that) physically different.
    News: What's new in Python 3.3; Commands and Daemons.
    NetAdmin: Backing up to the cloud with duplicity!
    SysAdmin: Reusing Nagios plugins with Zabbix.
    Cover feature: Virtualization based on LinuX Containers, aka LXC. This article aims to present the LXC virtualization technology [0.1]. The term "virtualization" is not really accurate here, since no virtual machine as such is created.
    Reference: Variadic templates in C++11: 1 - Definitions and syntax; The old greybeard's corner.