Network Learnings on 66 Bare Metal Nodes


White Paper: Cloud Computing, OpenStack*

Manjeet Singh Bhatia, Ganesh Mahalingam, Nathaniel Potter, Malini Bhandaru, Isaku Yamahata, Yih Leong Sun
Open Source Technology Center, Intel Corporation

Executive summary

OSIC hosts the world's largest OpenStack* [1] developer cloud, comprising 2,000 nodes, to enable community-wide development and testing of OpenStack features and functionality at an unmatched scale. Cloud and Software Defined Networking (SDN) are front and center in today's enterprise and telecommunications landscape. OpenStack, a leading open source cloud operating system, has a following of thousands and over two hundred production deployments [2]. Intel® and Rackspace* founded the OpenStack Innovation Center* (OSIC) [3] to accelerate the development and innovation of OpenStack for enterprises around the globe.

In this whitepaper, we share our learnings from a grant of 66 nodes for a three-week period in the OSIC developer cloud. The goal of this experiment was to compare the performance of two Layer 2 software switching solutions: Linux bridge (LB) and Open vSwitch (OVS). We share how to deploy a cloud on the OSIC cluster, discuss the OpenStack neutron Modular Layer 2 (ML2) architecture, and present our experimental results. In the course of the work we identified and fixed some deployment bugs. Last, but not least, we also share how to submit an experiment proposal to OSIC.

Problem statement: OpenStack networking performance

Several events occur when a user launches a virtual machine in an OpenStack cloud. Multiple Application Programmer Interface (API) calls are made, including one to create a virtual network interface card (vNIC). Behind the scenes, there are database transactions, messages exchanged between various services over a message queue, plenty of logging, and much more. Each element in this diverse set of components can impact performance and might need to be tuned differently for different cloud usage scenarios.

In general, OpenStack is designed to support various vendor plugins. Its networking component, neutron, is no different: neutron allows users to choose their switching solution. The Modular Layer 2 (ML2) plugin was introduced in neutron in the Havana release to replace the monolithic switching implementations for LB and OVS. ML2 enables different switching solutions to coexist and interoperate in a single OpenStack cloud instance. The purpose of our experiment was to compare the performance of these two switching solutions in various scenarios. Figure 1 shows a high-level design of ML2 with a switch agent.

Figure 1. Overview of the ML2 design: each controller runs the neutron server with its plugins and the ML2 layer, while each network node runs either a Linux bridge agent or an OVS agent.
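To make the choice of switching solution concrete, the following is a minimal sketch of an ML2 configuration (typically /etc/neutron/plugins/ml2/ml2_conf.ini) that selects the software switch through the mechanism_drivers option. The values shown are illustrative assumptions, not the exact settings used in our deployments.

```ini
# Minimal illustrative ml2_conf.ini sketch; values are assumptions, not our test settings.
[ml2]
# Network types that may be created; tenant networks default to the type listed below.
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
# Select the Layer 2 switching solution: Linux bridge, Open vSwitch, or both side by side.
mechanism_drivers = linuxbridge
# mechanism_drivers = openvswitch
# mechanism_drivers = linuxbridge,openvswitch

[ml2_type_vxlan]
# VXLAN network identifier range available for tenant networks.
vni_ranges = 1:1000
```

Because both mechanism drivers can be listed together, a deployment can, in principle, run Linux bridge agents on some hosts and OVS agents on others, which is the coexistence property described above.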
Deploying on the OSIC developer cloud

We received 66 bare metal nodes on the OSIC cluster with no operating system installed. The physical connectivity cabling was preprovisioned, and no action was required from our end. We received a document that detailed the switch and network layout.

We used iLO* [4], the remote server management tool from HP*, to manage and build the servers. Each server comes with a unique IP address for iLO management. We were given the iLO credentials needed to access the servers via a browser or the command line. Each server has multiple network interfaces, and two of them are usable for PXE booting [5] in conjunction with your favorite provisioning tool.

Deployment was a two-phase process. First, we installed a minimal operating system, and second, we deployed the cloud itself on the hosts. Some of the hosts were configured as controller nodes and the rest as compute hosts. It took around 40 minutes for the first phase and about another 21 minutes in phase two to complete the deployment of OpenStack. Note that the details might change in the future, and the deployment process might become even simpler.

Phase I: Server provisioning

We first manually installed Ubuntu* 14.04 Server as our operating system on one of the hosts, and then used Cobbler* [6] to provision all the others. Other open source tools are available to provision servers, such as bifrost* [7] with ironic* [8].

Phase II: OpenStack Deployment

We used OpenStack kolla [9] to deploy OpenStack on the cluster. kolla was simple to use and, with its vibrant, responsive community, quick to provide bug fixes, which eased our deployment task.

We wrote some scripts and OpenStack-Ansible* [10] playbooks for predeployment work, such as configuring network interfaces on all the servers, installing software dependencies, and injecting ssh public keys. kolla uses Docker* containers and OpenStack-Ansible playbooks to install OpenStack services. For large deployments, we recommend running a Docker registry on the local network that contains all the container images for the services you anticipate running in your cloud.

Our configuration consisted of three control nodes, three network nodes, and 58 compute nodes. We reserved one node to serve as the deployment and monitoring host. Please note that this setup is configurable in most deployment tools; it typically varies based on use case and anticipated workload. In a production system, TLS is used for security and REST API calls are made over HTTPS. However, we did not use TLS in our deployment. Thus, all times should only be used for their trends and not their absolute values.

Experiments

Most cloud deployments choose either LB or OVS, with the choice possibly depending on the administrators' prior experience with one or the other. Few, if any, deployments have tried to change switching technologies in their production clouds. We were curious to compare and contrast the two solutions in terms of their ease of deployment, control plane performance (network setup latency), and data path performance (speed of handling packets). We compared performance along these dimensions by varying the number of virtual machines being launched.

Test plan

We spawned 100, 200, 500, and 1000 virtual machines (VMs) for both LB and OVS deployments. All the VMs were on the same network, and each VM was created with one virtual network interface hooked up to a single virtual switch port. Spinning up a VM involves some communication between nova and neutron to create virtual switch ports, which are then hooked up to the VMs. A small script facilitated creating these VMs and measuring the time it took them to become active. We did not mix switching technologies within our deployments.
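The measurement script itself is not reproduced in this paper; the bash sketch below illustrates the kind of tool we mean, assuming the openstack command-line client and placeholder image, flavor, and network names. It times both the launch phase (all instances reported ACTIVE by nova) and the ping phase (all instances answering ICMP).

```bash
#!/bin/bash
# Illustrative sketch only: boot N VMs on one network and record launch and ping times.
# Image, flavor, and network names are placeholders, not the values used in our tests.
set -euo pipefail

N=${1:-100}
IMAGE=cirros
FLAVOR=m1.tiny
NETWORK=testnet

start=$(date +%s)

# Ask nova to boot N instances with a single API call.
openstack server create --image "$IMAGE" --flavor "$FLAVOR" \
    --network "$NETWORK" --min "$N" --max "$N" vmtest >/dev/null

# Launch time: wait until nova reports every instance ACTIVE.
until [ "$(openstack server list --name vmtest --status ACTIVE -f value -c ID | wc -l)" -ge "$N" ]; do
    sleep 2
done
echo "launch time: $(( $(date +%s) - start )) s for $N VMs"

# Ping time: wait until every fixed IP answers ICMP (run from a host with access
# to the tenant network, for example via the network node's DHCP namespace).
ips=$(openstack server list --name vmtest -f value -c Networks \
      | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}')
for ip in $ips; do
    until ping -c 1 -W 1 "$ip" >/dev/null 2>&1; do sleep 1; done
done
echo "ping time:   $(( $(date +%s) - start )) s for $N VMs"
```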
VM launch results

In Table 1, we share the results obtained from our VM spawn tests with LB and OVS. Launch time is measured as the period between the point at which nova's launch API is called and the point at which nova marks the task as accomplished in the database and returns all the VM IDs. However, many other activities happen behind the scenes before one can successfully ping the new VMs. We refer to the period during which these activities occur as the "ping time".

Note that the times indicated refer to the total number of VMs launched in a test. Interestingly, the launch and ping times seem to grow sublinearly, as seen in Figure 2.

#VMs    Total launch time (seconds)    Total ping time (seconds)
        LB          OVS                LB          OVS
100     12          13                 35          43
200     21          21                 53          53
500     49          51                 210         115
1000    61          63                 268         263

Table 1. VM launch results, including both launch time and ping time.

Figure 2. VM launch and ping time growth: total launch and ping times in seconds for LB and OVS, plotted against the number of VMs (100 to 1000).

The ping times are consistent across the two solutions, with one notable exception: 500 VMs with OVS. Further investigation is required to determine the reason for this finding.

Our study of the logs indicates that further performance gains are possible by addressing issues in the use of RabbitMQ* for interprocess communication, and by removing database connection bottlenecks with an active-active database.

Port create/destroy stress results

We launched and destroyed virtual machines repeatedly over ten runs. There was little difference in latency for port creation between OVS and LB. However, the delete operation exposed some differences. In LB, port cleanup time was the same across multiple runs. In OVS, the performance degraded halfway through our ten runs. This might be a database effect, possibly caused by too many rows in the table.

Discussion

In the following section, we examine issues spanning ease of deployment and experimental results, and also propose further work. A quick look at the VM launch times indicates that OpenStack handles them quickly: launch times grow more slowly than linearly with the number of VMs launched. However, to develop a deeper insight, we analyzed logs, CPU utilization, and the various running processes. See Table 2 for a list of uncovered issues.

Database bottleneck

The number of connections required to handle the volume of API calls was too large to be handled by the CPU, given that the database was configured in active-passive mode.

RabbitMQ

RabbitMQ was used for message queuing on all three control nodes. The RabbitMQ process consumed a lot of CPU cycles on all three control nodes, sometimes using 90% of the CPU by itself, leaving few resources for other processes. Some Erlang-level optimization of RabbitMQ would save CPU cycles for other processes. ciao [15] opted for a lightweight messaging protocol to reduce the messaging burden by reducing both message size and the need to ensure persistence.

Neutron-server

In our experiments we used a single network. For each VM, a virtual switch port is created and associated with the network as a tap. At some points, the neutron-server process used over 90% of the CPU, which in turn led to delays in RabbitMQ message handling and other issues. Can the neutron-server process be made more efficient and less chatty?

Conclusions

In the course of deploying the cloud with OVS and LB we uncovered some issues and chased down their fixes, and in so doing we helped improve OpenStack for the community.