
CERN Data Centre Network Architecture: proposed evolution and implementation plan

Authors: Edoardo Martelli (CERN, IT-CS-PO), Tony Cass (CERN, IT-CS), with input from the members of the DCNA Working Group and the CERN IT-CS-CE section.

Last updated: 26th of June 2018

Abstract

In 2017 the IT-CS group decided to take advantage of the planned hardware upgrade of the data centre routers to deliver more advanced network features to the CERN Data Centre user community. In order to involve all relevant IT groups in the selection and prioritisation of the desired features, a Data Centre Network Architecture (DCNA) working group was formed. This working group met several times throughout 2017. This document summarises the conclusions of the Working Group and details the features that will be implemented, giving an indication of the technologies that can be used. An implementation plan, with proposed deadlines dictated by existing constraints, is also included.

Table of Contents

1 Terminology and Acronyms ...... 2
2 2017 Data Centre Network ...... 2
  2.1 Domains ...... 2
  2.2 Features ...... 3
  2.3 Limitations ...... 3
  2.4 Network diagram ...... 4
3 Designing a new Data Centre Network ...... 5
  3.1 Design objectives ...... 5
  3.2 Design constraints ...... 5
  3.3 User requirements ...... 5
4 The New Data Centre Network Architecture ...... 6
  4.1 Addressing the objectives, constraints and requirements ...... 6
    4.1.1 Security ...... 7
    4.1.2 Agile domain membership ...... 7
    4.1.3 Router redundancy ...... 8
    4.1.4 ToR switch redundancy ...... 8
    4.1.5 Virtual machine mobility ...... 8
    4.1.6 Second NIC for storage servers ...... 9
    4.1.7 Jumbo frames ...... 9
    4.1.8 Faster DNS and DHCP updates ...... 9
    4.1.9 OpenStack information in LANDB ...... 9
5 Implementation plan ...... 10
  5.1 Dependencies ...... 10
  5.2 Constraints ...... 10
  5.3 Proposed schedule ...... 11
6 References ...... 11

1 Terminology and Acronyms

BF = Blocking Factor (uplink bandwidth over-subscription: aggregate downlink bandwidth divided by aggregate uplink bandwidth; see the illustrative calculation after this list)
CT = Container
HV = Hypervisor
HC = Host for Containers
LANDB = IT-CS network database
MLAG = Multichassis Link Aggregation Group
NIC = Network Interface Card
NAT = Network Address Translation
SS = Storage Server
TN = Technical Network: LHC accelerator control and management network
ToR = Top of Rack (switch)
VRRP = Virtual Router Redundancy Protocol
VM = Virtual Machine
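As a purely illustrative example of the Blocking Factor definition above, the short Python sketch below computes the BF for a hypothetical ToR switch; the port counts and link speeds are invented for illustration and are not actual CERN figures.

    # Illustrative Blocking Factor (BF) calculation.
    # Port counts and speeds are hypothetical, not actual CERN data centre values.

    def blocking_factor(downlinks: int, downlink_gbps: float,
                        uplinks: int, uplink_gbps: float) -> float:
        """Aggregate downlink bandwidth divided by aggregate uplink bandwidth."""
        return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

    # Example: a ToR switch with 48 x 10 Gb/s server ports and 4 x 40 Gb/s uplinks
    bf = blocking_factor(downlinks=48, downlink_gbps=10, uplinks=4, uplink_gbps=40)
    print(f"BF = {bf:.1f}:1")   # BF = 3.0:1, i.e. the uplinks are oversubscribed 3:1

A BF of 1:1 would mean a non-blocking uplink; larger values mean the uplinks are over-subscribed by that factor.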
2 2017 Data Centre Network

This chapter briefly sets out the architecture of the data centre networks in Building 513 (Geneva) and Building 9918 (Budapest), describing the features available at the time the DCNA working group was established in 2017.

2.1 Domains

We talk of data centre networks since distinct network domains coexist to address four distinct classes of network requirements.

The LCG network, with direct access to the LHCOPN and LHCONE network connections to Tier1 and Tier2 sites and extensive support for high-bandwidth connections, is provided to support physics computing services.

Non-physics services are connected to the ITS network, which includes a zone (in the “Barn”) configured for router redundancy with connections to the diesel-backed power supply.

A Technical Network (TN) presence (in B513, but not B9918) provides connectivity to this network for relevant IT-managed servers. Access to this network is restricted to authorised servers by a “gate”.

A low-bandwidth, non-redundant MGMT network is provided to support connections to dedicated server management interfaces (mostly IPMI interfaces). As for the TN, access to this network is gate protected.

Whilst the TN and MGMT networks are physically distinct infrastructures, the LCG and ITS domains are implemented by using virtual routing (VRF) over a common infrastructure.

These network domains are shown schematically in this picture:
[Figure: schematic view of the LCG, ITS, TN and MGMT network domains]
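As a compact, purely illustrative summary of the four domains described in this section, the sketch below (plain Python, not CERN tooling; the attribute values simply restate the prose above) records which domains are realised as VRFs on the shared routed infrastructure and which are gate protected.

    # Illustrative summary of the 2017 data centre network domains.
    # This merely restates the prose of section 2.1 as data; it is not CERN tooling.

    from dataclasses import dataclass

    @dataclass
    class Domain:
        name: str
        purpose: str
        shared_fabric_vrf: bool   # implemented as a VRF on the common infrastructure
        gate_protected: bool      # access restricted to authorised servers by a "gate"

    DOMAINS = [
        Domain("LCG",  "physics computing; direct LHCOPN/LHCONE access",            True,  False),
        Domain("ITS",  "non-physics IT services (incl. the redundant 'Barn' zone)", True,  False),
        Domain("TN",   "LHC accelerator control and management (B513 only)",        False, True),
        Domain("MGMT", "server management interfaces (mostly IPMI)",                False, True),
    ]

    for d in DOMAINS:
        fabric = "VRF on shared fabric" if d.shared_fabric_vrf else "physically separate"
        gate = "gate protected" if d.gate_protected else "no gate"
        print(f"{d.name:5s} {fabric:22s} {gate:15s} {d.purpose}")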
2.2 Features

The key features of the data centre network architecture in 2017 were:

- line-rate performance (subject to the agreed Blocking Factor);
- no NAT, no encapsulation, dual-stack IPv4 and IPv6;
- switch redundancy for critical services, implemented with the Switch Stacking feature;
- router redundancy for critical services, implemented with VRRP;
- Internet firewall protection with LANDB-driven automatic updates every 15 minutes, to support dynamic virtual machine creation;
- LANDB-driven automatic updates of the DNS and DHCP services (every 10 and 5 minutes respectively).

Although VLAN extensions for live VM migration were also supported, this was only on an ad-hoc and temporary basis.

2.3 Limitations

Key drawbacks of this data centre network architecture, as perceived by clients, are as follows, in roughly decreasing order of importance.

- The separation between the LCG and ITS domains is inflexible and of low granularity. Domain membership is decided at the switch level, so it is not possible to move a single machine between domains, and a network renumbering is required when domain assignment changes. In practice, machines are therefore rarely moved between domains, leading to many machines supporting general IT services being directly exposed to LHCOPN and LHCONE, which is not ideal from a security standpoint.
- The blocking factors are mostly unknown to users and, linked to the point above, may be different if logically comparable servers are in different network domains.
- There is no integration of network domains with AI availability zones, so machines that are supposed to be in different availability zones may in fact be behind a single router.
- It is not possible for the OpenStack virtual machine orchestrator to ensure that dynamic network changes are correctly recorded in LANDB. For example, the real hosting hypervisor for a particular virtual machine may not be the one declared in LANDB.
- Lack of full support for IP mobility restricts the ability to deliver load-balanced or high-availability solutions.
- The latency for DNS and DHCP updates (up to 10 and 5 minutes respectively) does not match the speed at which virtual machines and containers can be provisioned.

A further drawback, from the network architecture point of view, is that since VRRP is used to provide router redundancy, the backup links are idle.

2.4 Network diagram

This diagram shows the most important interconnections of the data centre routers:
[Figure: interconnections of the data centre routers]

3 Designing a new Data Centre Network

Evidently, the main aim for the redesign of the data centre network architecture was to address the drawbacks set out just above, i.e. to increase flexibility, improve support for virtual-machine and container-based services, and to improve the integration between network management and virtual machine orchestration. Certain general objectives, constraints and requirements also had to be taken into account, and these are set out in the sections that follow.

3.1 Design objectives

The