
Introduction:

Project Atrium develops an open SDN Distribution - a vertically integrated set of open source components which together form a complete SDN stack.

Motivation

The current state of SDN technology suffers from two significant gaps that interfere with the development of a vibrant ecosystem. First, there is a large gap in the integration of the elements needed to build an SDN stack. While there are multiple choices at each layer, there are missing pieces and poor or no integration. Second, there is a gap in interoperability. This exists both at a product level, where existing products from different vendors have limited compatibility, and at a protocol level, where interfaces between the layers are either over- or under-specified. For example, differences in implementations of OpenFlow v1.3 make it difficult to connect an arbitrary switch and controller. On the other hand, the interface for writing applications on top of a controller platform is mostly under-specified, making it difficult to write a portable application.

Project Atrium attempts to address these challenges, not by working in the specification space, but by generating code that integrates a set of production-quality components. Their successful integration then allows alternative component instances (switches, controllers, applications) to be integrated into the stack. Most importantly, we wish to work closely with network operators on deployable use cases, so that they can download near-production-quality code from one location and trial functioning software defined networks on real hardware. Atrium is the first fully open source SDN distribution (akin to a Linux distribution). We believe that with operator input, requirements and deployment scenarios in mind, Atrium can be useful as distributed, while also providing the basis for future extensions and alternative distributions which focus on requirements different from those in the original release.

Atrium Release 2015/A

In the first release (2015/A), Atrium is quite simply an open-source router that speaks BGP to other routers and forwards packets received on one port/VLAN to another, based on the next hop learnt via BGP peering. Atrium creates a vertically integrated stack to produce an SDN-based router. This stack can take one of two forms.

In the first form, the stack includes a controller (ONOS) with a peering application (called BGP Router) integrated with an instance of Quagga BGP. The controller also includes a device driver written specifically to control an OF-DPA based OpenFlow switch (more specifically, it is meant to be used with OF-DPA v2.0). The controller uses OpenFlow v1.3.4 to communicate with the hardware switch. The hardware switch can be a bare-metal switch from either Accton (5710) or Quanta (LY2). On the bare-metal switch we run an open switch operating system (ONL) and an open install environment (ONIE) from the Open Compute Project. In addition, we run the Indigo OpenFlow Agent on top of OF-DPA, contributed by Big Switch Networks and Broadcom.

In the second form, the control plane stack remains the same. The one change is that we use a different device driver in the controller, depending on the vendor equipment we work with. Currently Atrium release 2015/A works with equipment from NoviFlow (1132), Centec (v350), Corsa (6410), Pica8 (P-3295), and Netronome. The vendor equipment exposes the underlying switch capabilities necessary for the peering application via an OpenFlow agent (typically OVS) to the control plane stack. Details can be found in the Atrium 2015/A release contents.
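As a rough illustration of that last point (this is not part of the Atrium configuration itself), an OVS-based OpenFlow agent on a vendor switch is typically pointed at the control plane stack with commands along the following lines; the bridge name br0 and the controller address 10.1.9.140 are placeholders:

$ ovs-vsctl set bridge br0 protocols=OpenFlow13        # speak OpenFlow v1.3 towards the controller
$ ovs-vsctl set-controller br0 tcp:10.1.9.140:6633     # controller address and OpenFlow port (6633)

The vendor-specific details of this step are covered in the per-switch pages listed in the Installation Guide below.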
Installation Guide:

Distribution VM

To get started with Atrium Release 2015/A, download the distribution VM (Atrium_2015_A.ova, size ~2GB) from here:

https://dl.orangedox.com/TfyGqd73qtcm3lhuaZ/Atrium_2015_A.ova

login: admin
password: bgprouter

NOTE: This distribution VM is NOT meant for development. Its sole purpose is to have a working system up and running for test/deployment as painlessly as possible. A developer guide using mechanisms other than this VM will be available shortly after the release.

The VM can be run on any desktop/laptop or server with virtualization software (VirtualBox, Parallels, VMware Fusion, VMware Player, etc.). We recommend using VirtualBox for non-server uses. For running on a server, see the subsection below. Get a recent version of VirtualBox to import and run the VM. We recommend the following:

1) Use 2 cores and at least 4GB of RAM.

2) For networking, you can "Disable" the 2nd Network Adapter. We only need the 1st network adapter for this release.

3) You can choose the primary networking interface (Adapter 1) for the VM to be NATted or "bridged". If you choose to NAT, you will need to create two port-forwarding rules. The first rule allows you to ssh into your VM, with a command from a Linux or Mac terminal like this:

$ ssh -X -p 3022 admin@localhost

The second rule allows you to connect an external switch to the controller running within the VM (the guest machine) using the IP address of the host machine (in the example it is 10.1.9.140) on host port 6633.

If you chose to bridge (with DHCP) instead of NAT, then log in to the VM to see what IP address was assigned by your DHCP server (on the eth0 interface). Then use ssh to get in to the VM from a terminal:

$ ssh -X admin@<assigned-ip-addr>

You can log in to the VM with the following credentials --> login: admin, password: bgprouter

Once in, try to ping the outside world as a sanity check (ping www.cnn.com).
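If you chose the NAT option in step 3, the two port-forwarding rules described above can also be created from the host's command line with VBoxManage (while the VM is powered off) instead of through the VirtualBox GUI. This is only a sketch - the VM name "Atrium_2015_A" is an assumption and should match whatever name the import gave your VM:

$ VBoxManage modifyvm "Atrium_2015_A" --natpf1 "ssh,tcp,,3022,,22"         # host port 3022 -> guest sshd on port 22
$ VBoxManage modifyvm "Atrium_2015_A" --natpf1 "openflow,tcp,,6633,,6633"  # host port 6633 -> controller in the guest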
Running the Distribution VM on a Server

The Atrium_2015_A.ova file is simply a tar file containing the disk image (vmdk file) and some configuration (ovf file). Most server virtualization software can directly run the vmdk file. However, most people prefer to run the qcow2 format on servers. First untar the ova file:

$ tar xvf Atrium_2015_A.ova

Then use the following command to convert the vmdk file to qcow2. You can then use your server's virtualization software to create a VM using the qcow2 image.

$ qemu-img convert -f vmdk Atrium_2015_A-disk1.vmdk -O qcow2 Atrium_2015_A-disk1.qcow2

Running the Distribution VM on the Switch

While it should be possible to run the controller and other software that is part of the distribution VM directly on the switch CPU in a Linux-based switch OS, it is not recommended. This VM has not been optimized for such an installation, and it has not been tested in such a configuration.

Installation Steps

Once you have the VM up and running, the following steps will help you bring up the system. You have two choices:

A) You can bring up the Atrium Router completely in software, entirely self-contained in this VM. In addition, you get a complete test infrastructure (other routers to peer with, hosts to ping from, etc.) that you can play with (via the router-test.py script). Note that in this setup we emulate hardware pipelines using software switches. Head over to the "Running Manual Tests" section on the Test Infrastructure page.

B) Or you can bring up the Atrium Router in hardware, working with one of the seven OpenFlow switches we have certified to work for Project Atrium. Basically you need to configure the controller/app, bring up Quagga and connect it to ONOS (via the router-deploy.py script), and then configure the switch you are working with to connect it to the controller - 3 easy steps! The following pages will help you do just that:

1. Configure and run ONOS
2. Configure and run Quagga
3. Configure and connect your Switch
   Accton 5710
   Centec v350
   Corsa 6410
   Netronome
   NoviFlow 1132
   Pica8 P-3295
   Quanta LY2

User Guide:

Control Plane User Guide

In this section we introduce some of the CLI commands available on ONOS and Quagga, at least the ones that are relevant to Atrium. We will use the following network example.

The Atrium router comprises a dataplane OpenFlow switch (dpid 1), ONOS, Quagga BGP, and a control plane switch (OVS with dpid aa) that shuttles BGP and ARP traffic between ONOS and Quagga. More details of the internals can be found in the System Architecture section. Normally the dataplane switch would be the hardware switch of your choice, but for the purposes of this guide we have chosen a software switch (also OVS) that we use to emulate a hardware pipeline.

The Atrium Router has AS number 65000. It peers with two other traditional routers (peers 1 and 2) which have their own AS numbers (65001 and 65002 respectively). Hosts 1 and 2 are reachable via peers 1 and 2 respectively, and these peers advertise those networks to our Atrium router. The traditional routers, peers 1 and 2, could be any regular router that speaks BGP - we have tried Vyatta and Cisco - but in this example they are Linux hosts behaving as routers with Quagga running in them.

Here is a look at the BGP instances in the Atrium Router as well as the peers. The BGP instance in the Atrium Router control plane is simply called "bgp1" (you can change it if you want to, in the router-deploy.py script). The "show ip bgp summary" command shows the peering session status for the Atrium Router's Quagga instance. We see that there are 3 peering sessions that are Up (for roughly 2 minutes when this screenshot was taken). The 1.1.1.1 peering session is with ONOS - there is a lightweight implementation of I-BGP within ONOS which is used by ONOS to pull best-route information out of Quagga BGP.
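As a rough sketch of the Quagga configuration behind this example (illustrative only - the peer addresses below are made up; only the AS numbers and the 1.1.1.1 I-BGP session to ONOS come from the description above), the bgpd configuration for the "bgp1" instance would contain entries along these lines:

! bgpd.conf fragment for the Atrium Router (example peer addresses)
router bgp 65000
 ! I-BGP session used by ONOS to pull best routes out of Quagga
 neighbor 1.1.1.1 remote-as 65000
 ! E-BGP sessions to the two traditional peers
 neighbor 192.168.10.1 remote-as 65001
 neighbor 192.168.20.1 remote-as 65002

With a configuration like this in place, "show ip bgp summary" on bgp1 reports the three sessions shown above; the bgpd CLI listens by default on vty port 2605 (e.g. telnet localhost 2605 from wherever bgp1 runs).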