
Article

Software Defined Wireless Mesh Network Flat Distribution Control Plane

Hisham Elzain and Yang Wu *

College of Computer Science and Technology, Harbin Engineering University, Harbin 150001, China

* Correspondence: [email protected]

 Received: 5 May 2019; Accepted: 18 July 2019; Published: 25 July 2019 

Abstract: Wireless Mesh Networks (WMNs) have the potential to offer relatively stable broadband Internet access. The rapid development and growth of WMNs attracts ISPs seeking to provide users with coverage anywhere, anytime. To achieve this goal, the network architecture must be addressed carefully. Software Defined Networking (SDN) proposes a new network architecture for wired and wireless networks. Software Defined Wireless Networking (SDWN) has great potential to increase efficiency, ease the complexity of control and management, and accelerate the rate of technology innovation in wireless networking. An SDN controller is the core component of an SDN network. It needs updated reports of network status changes, such as topology and quality of service (QoS) information, in order to effectively configure and manage the network it controls. In this paper, we propose a Flat Distributed Software Defined Wireless Mesh Network architecture in which the controller aggregates entire topology discovery and monitors QoS properties of extended WMN nodes using the Link Layer Discovery Protocol (LLDP), which is not possible in ordinary multi-hop architectures. The proposed architecture has been implemented on top of the POX controller and the Advanced Message Queuing Protocol (AMQP). The experiments were conducted in the Mininet-wifi emulator; the results demonstrate the consistency of the architecture's control plane and two application cases: topology discovery and QoS monitoring. The current results push us to study QoS for video streaming over WMN.

Keywords: SDWN; SDWMN; SDN distributed control plane; topology discovery; QoS monitoring

1. Introduction

Wireless Mesh Networks (WMNs) are multi-hop networks that are regarded as a potential key wireless architecture for providing wide Internet coverage to specific areas. They play a significant role in various application scenarios such as public safety, transportation, enterprise networks, mining fields and emergency response. The deployment and use of WMNs is relatively quick and low-cost because no wired network is needed as a backbone [1]. A WMN consists of three types of nodes: mesh clients (any end-user wireless device, e.g., smart phone, laptop, etc.), mesh routers (which build a backbone), and gateways (special routers that can connect to the Internet). In infrastructure WMNs, mesh routers are static and can be equipped with several technologies, such as WiFi (IEEE 802.11), ZigBee (IEEE 802.15.4) and WiMAX (IEEE 802.16) [2,3], for high coverage of the targeted area. Despite the steadily evolving wireless technology standards, there are structural barriers that still prevent wireless networks' infrastructure from being open enough to fulfill technical innovation requirements [4]. Software Defined Wireless Networking (SDWN) [5,6] is the technical term for applying Software Defined Networking (SDN) concepts to wireless networks. SDWN attempts to inherit (wired) SDN flexibility: it decouples radio control and radio data forwarding functions from each other, to assist wireless networks in becoming more flexible by abstracting the underlying wireless infrastructure from network services and applications through higher-level APIs.

So, wireless networks under SDN architecture become more innovative ecosystems and break down the structural barriers mentioned above. As a consequence, wireless networks face applied challenges such as bootstrapping [7] (i.e., network control plane initialization), topology discovery, and link quality monitoring, among others [8]. In this work, we propose solutions to these challenges in an SDN-based WMN, which is called a Software Defined Wireless Mesh Network (SDWMN) [9].

SDN can create resilient network architectures. It can create a physically centralized control architecture (a single controller, or controllers in a master/slave approach) or a physically distributed, logically centralized architecture that relies on distributed controllers organized either hierarchically or flatly to control multiple domains and achieve network reliability and scalability [10,11]. The distributed control plane approach incurs additional complexity in developing and managing SDN controllers. However, such solutions are more responsive to network status changes and the handling of related events, because the controllers are distributed through the network, closer to the sources of events, than in a centralized architecture. Several studies [12–14] distribute SDN controllers to handle the network control plane workload based on topology. They follow topological structures in the controllers' distribution over the network architecture, in flat and hierarchical designs.

An orchestration framework requires maintaining the topology of all network domains [15]. So, an effective network topology discovery mechanism is essential for managing the network and deploying end-to-end applications and services on top of the orchestration architecture among multiple distributed domains. The network global view is a critical factor in SDN architecture, and the controller provides such a view through a topology discovery service. Thus, the information offered by topology discovery is crucial for network applications such as routing, network resource allocation and management, and fault recovery that reside on top of the SDN controller. In this context, the topology discovery process time and load are fundamental for a timely and lightweight response. Therefore, up-to-date network topology discovery must use efficient mechanisms in SDN architecture, and it becomes one of the significant design metrics for SDN scalability [16]. Keeping an overall view of a large network surely generates a huge amount of information about the physical plane state. Furthermore, the massive volume of control flow would impose a heavy load on the SDN controller, which could degrade controller performance [17]. As a consequence, network scalability problems arise. The current SDN topology discovery scheme, which is designed for wired networks, does not satisfy wireless network requirements. Also, it does not collect any quality of service (QoS) properties of the underlying network elements.

This paper addresses a distributed multiple-domain SDWMN, as presented in Figure 1, which can be deployed for various WMN applications. Generally, such networks are decomposed into geographical or administrative interconnected domains; each domain can consist of mesh routers equipped with various wireless interface standard technologies, generally any IEEE 802.11 standard with at least two frequency bands (one for the control plane and the other for the data plane).
The Ad-hoc On-Demand Distance Vector (AODV) routing protocol is used as a QoS routing protocol with delay as the main metric, but this work focuses on finding an appropriate architecture that allows using state-of-the-art topology discovery and QoS monitoring based on the Link Layer Discovery Protocol (LLDP) to offer topological and QoS information to any QoS routing protocol that can reside as an application on the application plane of the SDN controller in a WMN. Surely this is the heterogeneity and distributed nature that a WMN calls for in order to be more robust to failure and adaptable to user requirements. The state-of-the-art distributed SDWN solutions are not sufficient for WMN scalability, as it needs a fine-grained control plane that depends on an efficient communication system for inter-controller exchange. We propose FD-SDWMN, a Flat Distributed SDWMN architecture. It distributes the control plane among flat distributed SDN controllers; we show how the control plane initializes dynamically to solve the SDN bootstrap problem. After that, the architecture starts providing the main functionalities such as network traffic engineering and disruption and attack survival.

Figure 1. Flat Distributed-Software Defined Wireless Mesh Network (FD-SDWMN) flat architecture.

Contrary to currently distributed SDN architectures, a WMN based on FD-SDWMN is resilient enough to discriminate links with the best characteristics (bandwidth, latency, packet loss, etc.) for data forwarding. FD-SDWMN is implemented on top of the POX [18] OpenFlow controller and the Advanced Message Queuing Protocol (AMQP) [19]. For FD-SDWMN architecture performance evaluation, we present its functionalities on the Mininet-wifi [20] emulator for SDWN according to a control plane consistency test and two application cases: the topology discovery mechanism for multi-hop networks (the state-of-the-art topology discovery cannot be applied to a multi-hop network such as a WMN) and QoS monitoring.

The rest of the paper is organized as follows: Section 2 describes the FD-SDWMN architecture and presents the modules and agents that compose the controller. Section 3 presents the proposed architecture implementation. A detailed description of the topology discovery and QoS monitoring is presented in Sections 4–6. In Section 7 this work evaluates the experimental results. Section 8 concludes the paper and presents our future work.

2. FD-SDWMN Architecture

A centralized controller in SDWMN manages all the switches (mesh routers) of the network (the controller–router connection is multi-hop). That means each router in the network needs to rely on one controller for forwarding decisions. Thus, for every new flow, each router generates a request to the controller, which responds with appropriate flow entry messages. Obviously, this scenario costs the controller more load and decreases network performance. On the contrary, in the FD-SDWMN architecture, the network control is logically divided via network slicing into flat distributed controllers, each of which controls a local domain (the controller–router connection is single-hop).

2.1. Overall Architecture

FD-SDWMN is a distributed, multi-domain SDWMN control plane that enables efficient delivery of end-to-end network services. The FD-SDWMN control plane is composed of distributed controllers that are in charge of network domains; they communicate with each other to exchange their local information and aggregate a network-wide view for efficient end-to-end management. Figure 2 shows that the controller architecture is composed of two parts: the local level, where the main functionalities of a local domain are gathered, and the global level, which aggregates other domains' control information such as topology state, QoS properties, etc. Also, southbound interfaces are used to push forwarding policies to data plane elements and gather their status. Finally, the northbound interfaces provide controllers with applications' management policies and requirements.


Figure 2. FD-SDWMN controller architecture. AMQP: Advanced Message Queuing Protocol.

Multiple modules compose the controller functionalities; they are managed (started, stopped, updated and communicated with) by the core component. The FD-SDWMN architecture leverages existing SDN controller modules, such as the OpenFlow driver for OpenFlow protocol implementation, the router and host managers for keeping track of network elements, and the link discovery module that implements LLDP (Link Layer Discovery Protocol) for topology discovery. Furthermore, new modules are developed to enhance architecture functionality, as displayed in Figure 2. Controllers must communicate with each other to gather and accumulate overall network status information. In order to accomplish this task, each controller uses two essential parts: (i) a coordinator module that discovers neighboring domain controllers to establish and maintain a reliable and secure distributed publish/subscribe channel, and (ii) different agents that exchange the needed information among controllers via this channel.

2.2. Local Functionalities

The local modules in charge of network topology discovery, QoS monitoring, etc., manage flow prioritization, so according to network parameters the controller computes routes for the priority flow. Also, the modules react to network state (broken links, high latency, bandwidth degradation, etc.) dynamically by stopping and/or redirecting traffic according to its criticality. This work is different from our previous work [21], which presented a hierarchical architecture.

• The aggregated-DB: a central database in which a controller stores its local domain's and other domains' knowledge on topology, QoS monitoring and ongoing data flows. Other modules and agents use this information to reach the ultimate goals, i.e., taking suitable action on flows within a local domain and the entire network.
• The QoS monitoring module: gathers QoS information such as bandwidth, delay, jitter and packet loss of domain switches and the links in between using customized LLDP packets, and aggregates other domains' QoS information.


• The path computation module: computes routes from source to destination in the local domain and the entire network using the Dijkstra algorithm, taking into account the QoS metrics of the links in between (a minimal sketch follows this list). This module consults the aggregated-DB for network topology and QoS monitoring information.
• The service manager module: supports end-to-end provisioning by receiving requests from northbound APIs for efficient network SLA (Service Level Agreement) management. It verifies SLA feasibility and compliance by consulting other modules.
• The virtual manager module: resides on top of the other modules. It offers a Graphical User Interface (GUI) through network virtualization to interact with the other modules. It gathers information from many lower modules and displays it to the network operator. Also, it provides them with essential network parameters such as flow priorities, new routes, etc.
• The coordinator: this module establishes a control channel between controllers. It manages status information that is frequently exchanged (such as link state and host presence) and requests between controllers. Hence, this communication channel should be reliable to guarantee the messaging process. For delivery security and reliability, AMQP is used for the coordinator implementation because it ensures reliable message delivery, rapid and assured delivery of messages, and message acknowledgements.
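As an illustration of how such a module could combine the aggregated-DB information with the Dijkstra algorithm, the following minimal Python sketch computes a QoS-aware route; the composite weighting formula, field names and topology structure are illustrative assumptions, not the actual module code.

```python
# Minimal sketch (not the authors' code): Dijkstra over an aggregated-DB-style
# topology with a composite QoS link weight. All names and the weighting
# formula are illustrative assumptions.
import heapq

def link_weight(qos):
    """Turn the QoS properties stored per link into a single additive cost."""
    # Higher bandwidth lowers the cost; delay, jitter and loss raise it.
    return (qos['delay_ms'] + qos['jitter_ms']
            + 100.0 * qos['loss_ratio']
            + 1000.0 / max(qos['bandwidth_mbps'], 0.001))

def dijkstra(topology, src, dst):
    """topology: {node: {neighbor: qos_dict}} as it could be read from the aggregated-DB."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, qos in topology.get(u, {}).items():
            nd = d + link_weight(qos)
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Rebuild the path from dst back to src.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path))

# Example: two candidate routes between mesh routers s1 and s3.
topo = {
    's1': {'s2': {'bandwidth_mbps': 54, 'delay_ms': 2, 'jitter_ms': 1, 'loss_ratio': 0.01},
           's3': {'bandwidth_mbps': 11, 'delay_ms': 20, 'jitter_ms': 5, 'loss_ratio': 0.05}},
    's2': {'s3': {'bandwidth_mbps': 54, 'delay_ms': 2, 'jitter_ms': 1, 'loss_ratio': 0.01}},
    's3': {},
}
print(dijkstra(topo, 's1', 's3'))  # -> ['s1', 's2', 's3'] (the higher-quality two-hop route)
```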

2.3. Global Functionality

To collect and maintain a global network view in each controller, supporting efficient network QoS routing and reservation functionalities, the following agents are defined and implemented.

• The connectivity agent: is in charge of maintaining peering links between the controllers. This agent works in an event-driven fashion; it only sends information if a new controller is discovered or an existing peering link between controllers changes. This information is grouped and maintained in the aggregated-DB and, like other information, it is accumulated by all neighboring connectivity agents. Eventually, the connectivity information is used by the path computation module to make global routing decisions.
• The monitoring agent: periodically receives information (QoS properties) about available links and their capabilities, in order to support traffic transmission among network domains.
• The reachability agent: advertises the presence of hosts on local domain margins so they become reachable. It offers a roadmap between domain switches and hosts.
• The reservation agent: is concerned with overall network flow setup, e.g., link teardown, and updates requests and application requirements such as QoS. It is similar to the Resource Reservation Protocol (RSVP). Each controller handles these requests through the service manager, and the reservation agents of the controllers along the path communicate to create and maintain the needed path.

In order to achieve FD-SDWMN control plane consistency, the agents publish and consume messages on the required topics. The aggregated information concerns reachability (the reachable hosts listed in the network), connectivity (a list of peering controllers) and monitoring (peering transit path status in terms of QoS metrics, etc.). This way, a controller can build a view of the entire network. Thus, it has the capability to perform path reservation and routing and to manage SLAs.

3. Architecture Implementation

We implemented the FD-SDWMN architecture on top of POX, an open source controller. Some modules were taken from POX's Python code with little modification; other modules were developed in Python to manage the controllers' communication. The agents located in each controller notify link states, device locations and path reservation requests using this control channel.


3.1. Coordinator Implementation

The coordinator is implemented as any other POX application. It receives Packet-In messages from the Core module after subscription, sends its Packet-Out messages, reads and writes information from/to the aggregated-DB, and at startup it reads the POX controller configuration file. For coordinator operation, there are configuration parameters which should be determined: agents_list, messaging_server_type and messaging_server_listening_port. Also, other optional parameters can be specified. An agent is a small class that manages inter-domain exchanges to/from a module to handle intra-domain communication. The coordinator activates a specific agent from the agents_list by specifying the port it reaches through the messaging_server_listening_port. Moreover, from the messaging_server_type parameter, the coordinator determines the messaging driver that it uses; for the current implementation the RabbitMQ driver [21] is used to implement the federation mode of AMQP, but the FD-SDWMN architecture allows the use of other AMQP implementations such as ActiveMQ.

The coordinator implements an extended LLDP (Link Layer Discovery Protocol) version, called Coordinator-LLDP (C-LLDP), that is used in the discovery functionality. The C-LLDP message contains an OpenFlow option added to the regular LLDP message. Moreover, IEEE has allocated an Organizationally Unique Identifier (OUI) to OpenFlow. The coordinator sends these messages announcing the controller's existence in order to discover other domains' controllers. When it receives a reply to a discovery message, it establishes a connection via AMQP to its peer in that controller and stops sending discovery messages. Otherwise, the coordinator continues sending discovery messages periodically. A C-LLDP message, as shown in Figure 3, contains the required information to reach a controller.

Figure 3. The discovery message. LLDP: Link Layer Discovery Protocol; OUI: Organizationally Unique Identifier.
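For illustration, the coordinator parameters named above (agents_list, messaging_server_type, messaging_server_listening_port) could be supplied to a POX-style launch() entry point as sketched below; the launch syntax and default values are assumptions rather than the exact configuration used in the implementation.

```python
# Illustrative only: how the coordinator parameters described above could be
# passed when launching POX. The option names come from the text; the launch
# syntax and defaults are assumptions, not the authors' exact configuration.
#
#   ./pox.py openflow.discovery coordinator \
#       --agents_list=connectivity,monitoring,reachability,reservation \
#       --messaging_server_type=rabbitmq \
#       --messaging_server_listening_port=5672

def launch(agents_list="connectivity,monitoring",
           messaging_server_type="rabbitmq",
           messaging_server_listening_port=5672):
    """POX-style launch() entry point for a hypothetical 'coordinator' component."""
    agents = [a.strip() for a in agents_list.split(",") if a.strip()]
    port = int(messaging_server_listening_port)
    print("Coordinator: driver=%s port=%d agents=%s"
          % (messaging_server_type, port, agents))
    # Here the real coordinator would pick the AMQP driver (e.g., RabbitMQ)
    # and activate one agent object per entry in agents_list.
```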

The coordinator offers a communication channel (publish/subscribe) for inter-domain exchanges. Its essential operation is done via two particular topics. The first is the C_ID.*.* topic, where C_ID is a controller identifier; via this topic, other controllers can directly send messages to this controller. For instance, this is used for a local domain topology discovery update. The other topic is general.*.*, which enables a controller to communicate with all other controllers in the network, such as when a controller decides to leave its domain.

The coordinator communicates with the AMQP implementation via drivers. An AMQP driver should support a set of functions:

(1) Subscribe (topic)/unsubscribe (topic): add or remove a topic to/from the topic list.
(2) Send (topic, message): send a specific message on a particular topic.

(3) Pair (C_ID)/unPair (C_ID): create a control channel to another controller; the opposite function removes this channel when a controller fails and is no longer able to receive information from its neighbors.
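The interface implied by functions (1)–(3) can be summarized in a short Python sketch; it is an illustration only, and a concrete RabbitMQ driver would implement these methods on top of an AMQP client library.

```python
# Sketch of the driver interface implied by functions (1)-(3) above. This is an
# illustration, not the paper's code; a concrete RabbitMQ driver would wrap an
# AMQP client library such as pika.
class AMQPDriver(object):
    """Abstract messaging driver used by the coordinator."""

    def subscribe(self, topic):
        """Add a topic to the list of topics this controller consumes."""
        raise NotImplementedError

    def unsubscribe(self, topic):
        """Stop consuming a topic."""
        raise NotImplementedError

    def send(self, topic, message):
        """Publish a specific message on a particular topic."""
        raise NotImplementedError

    def pair(self, c_id):
        """Create a control channel to the controller identified by c_id."""
        raise NotImplementedError

    def unpair(self, c_id):
        """Tear the channel down, e.g., when the peer controller has failed."""
        raise NotImplementedError
```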

The coordinator also maintains controller liveness by sending periodic Keep-Alive messages. If three contiguous Keep-Alive messages receive no response, a controller infers that its neighbor controller has failed and triggers a mitigation procedure for this failure. The coordinator application is flexible enough to be extended without altering the main classes of POX. Developers can provide a new driver for an AMQP implementation by extending the coordinator class, or add a new agent function.
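A minimal sketch of this Keep-Alive bookkeeping is given below; the threshold of three unanswered messages comes from the text, while the class and callback names are assumptions.

```python
# Minimal sketch of the keep-alive bookkeeping described above: after three
# contiguous unanswered Keep-Alive messages a neighbor is declared failed.
# Class and callback names are illustrative assumptions.
class KeepAliveMonitor(object):
    MAX_MISSED = 3

    def __init__(self, on_failure):
        self.missed = {}            # C_ID -> consecutive unanswered keep-alives
        self.on_failure = on_failure

    def sent_keepalive(self, c_id):
        """Called each time a Keep-Alive is sent without a response pending."""
        self.missed[c_id] = self.missed.get(c_id, 0) + 1
        if self.missed[c_id] >= self.MAX_MISSED:
            # Trigger the mitigation procedure for this controller failure.
            self.on_failure(c_id)

    def got_response(self, c_id):
        """A response arrived, so the neighbor is alive again."""
        self.missed[c_id] = 0
```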

3.2. Agents Implementation

Agents use the coordinator to exchange information between the distributed controllers of neighboring domains. In this architecture, four agents were developed: reachability, connectivity, monitoring and reservation (see Section 2.3). They publish on particular topics, e.g., monitoring.C_ID.bandwidth.2s, where the monitoring agent at the controller identified by C_ID advertises the remaining bandwidth it can offer for traffic transmission. Any information received from neighboring domains' agents is stored in the aggregated-DB of the local agent. This information is then used by the modules of the local controller to take decisions on flows over the network. For end-to-end provisioning, the reservation agents implement a mechanism like the RSVP reservation protocol; they exchange reservation requests responding to flow descriptors. The development of the coordinator and its assistants (drivers, agents, etc.) adds extra lines of code, and their operation consumes extra memory of the POX controller.
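As a hedged example of how such a topic could be published through the RabbitMQ driver, the following sketch uses the pika client (1.x API); the exchange name and message encoding are assumptions and do not correspond to the actual agent code.

```python
# Illustrative sketch (assumes the RabbitMQ driver is built on the pika 1.x
# client; exchange name and message encoding are assumptions): a monitoring
# agent publishing the remaining bandwidth on the topic pattern used in the text.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="fd_sdwmn", exchange_type="topic")

c_id = "C1"  # this controller's identifier
topic = "monitoring.%s.bandwidth.2s" % c_id
payload = json.dumps({"controller": c_id, "remaining_bandwidth_mbps": 37.5})

channel.basic_publish(exchange="fd_sdwmn", routing_key=topic, body=payload)

# A peer controller would bind a queue to e.g. "monitoring.#" (or "C1.*.*" for
# direct messages) and store whatever it consumes in its aggregated-DB.
connection.close()
```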

4. Topology Discovery

Topology discovery is critical for the efficient operation of SDN-based network services and applications, which need updated information that describes the network state. An SDN controller is responsible for offering this information using an efficient and reliable approach. There is no standard topology discovery for OpenFlow devices. However, most current SDN controller systems use the same approach implemented by the original SDN controller (i.e., the NOX controller) [22]. This mechanism, named the OpenFlow Discovery Protocol (OFDP) [23], is used as the de facto topology discovery scheme. OFDP leverages the Link Layer Discovery Protocol (LLDP) [24], which allows switches in a LAN (IEEE 802 Local Area Network) to advertise their capabilities to each other. Figure 4 depicts a simple scenario for topology discovery; the process is detailed in [22].

Although the OFDP mechanism was adopted by the majority of the current controller platforms, the controller suffers from the number of messages loaded during the process. For one cycle of topology discovery, the controller sends a Packet-Out message for every active port of the switches and receives two Packet-In messages for each link between them. Since topology discovery is a periodic process, the OFDP mechanism affects controller performance because the number of packets in/out to/from a controller depends on the number of network switches and their active ports. For that reason, the heavy load of the topology discovery process in a controller increases with the network scale.

Figure 4. OpenFlow Discovery Protocol (OFDP) topology discovery simple scenario. SDN: Software Defined Networking.

However, due to the operation mechanism of the LLDP protocol, it is not able to discover multi-hop links between switches in pure or hybrid OpenFlow networks. Therefore, there are two main problems with the current topology discovery approach: one is the controller overload, and the other is that this approach only works in single-hop networks. To solve the first problem, i.e., controller overload, the authors in [22] proposed a new version of OFDP, which they called OpenFlow Discovery Protocol version 2 (OFDPv2). It reduces the number of Packet-Out messages to one message per switch, instead of one per active port of the switch. The work proved that the modified version is identical in discovery functionality to the original version. Furthermore, it achieves this aim with a noticeable reduction in exchanged messages and reduces the discovery-induced CPU load of the controller by around 45% without any consequent delay compared to OFDP.

The second problem is the inability to apply OFDP or even OFDPv2 to multi-hop networks, because both OFDP and OFDPv2 leverage LLDP packets as mentioned above, and LLDP packets use a "bridge-filtered multicast address", so they are single-hop and not forwarded across switches [22]. In hybrid networks where there are one or more traditional switches (which do not support OpenFlow) between OpenFlow switches, these switches process the LLDP packets and drop them. Therefore, the controller should use a combination of LLDP and BDDP (Broadcast Domain Discovery Protocol) to discover indirect links between OpenFlow switch ports in the same broadcast domain [25]. However, this work does not present any solution for pure OF switches in multi-hop networks.

Adapting the OFDP topology discovery approach to SDN based wireless multi-hop networks is a challenge, due to its limitations in collecting wireless node and link characteristics, such as node related attributes (localization and QoS properties) and link nature (not point-to-point and without fixed connection capabilities like wired links). The authors of [26] proposed "A Generic and Configurable Topology Discovery" to analyze the general topology discovery representation required by SDN applications and how SDN controllers offer it.

We consider our work a direct adaptation of the state-of-the-art topology discovery to wireless multi-hop networks.



4.1. Aggregated Topology Discovery Mechanism

Within the FD-SDWMN architecture, the aggregated topology discovery mechanism makes it possible to apply the topology discovery protocol (OFDP) in multi-hop networks such as a wireless mesh network. The mechanism is based on OFDPv2 and benefits from breaking the multi-hop control channel into single hops by dividing the process into two phases: local domain topology discovery and global network topology aggregation. The proposed mechanism reduces the heavy load of topology discovery in the controller.

4.1.1. Local Domain Topology Discovery

Each controller starts the topology discovery service using its link discovery module to discover the local domain's nodes. Precisely, in this process the local controller is concerned with link discovery, and it does not need to rediscover the domain nodes (switches) since they have already initiated a connection to the controller. A controller sends individual Packet-Out messages, each of which contains an LLDP packet, with a rule to send the specific packet out on the corresponding interface. Then, via Packet-In messages, the LLDP packets are sent back to the controller obeying the pre-installed rule that says, "Forward any LLDP packet received from any interface except the CONTROLLER interface to the controller". The process is repeated for every router in the domain to discover the active links between them. The entire domain topology discovery process is performed periodically every 5 s, the default interval size of the NOX controller. Ultimately, each controller maintains up-to-date local domain topology information in its aggregated-DB, and the connectivity agents exchange other domains' topology information. To this end, this work proposes an initial appropriate SDN based architecture; additional issues such as channel sharing and propagation modes will be addressed later.
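The per-port LLDP emission described above can be sketched with POX's packet library as follows; the TTL value, multicast address and helper name are illustrative choices rather than the exact discovery module code.

```python
# A condensed sketch of the per-port LLDP emission described above, modeled on
# POX's packet library (pox.lib.packet) and OpenFlow 1.0 messages. Field choices
# such as the TTL value and the LLDP multicast address follow common practice;
# treat the exact construction as illustrative rather than the authors' code.
import pox.openflow.libopenflow_01 as of
import pox.lib.packet as pkt
from pox.lib.addresses import EthAddr

def build_lldp_packet_out(dpid, port_no, port_hw_addr):
    lldp = pkt.lldp()
    cid = pkt.chassis_id(subtype=pkt.chassis_id.SUB_LOCAL)
    cid.id = ('dpid:%x' % dpid).encode()
    lldp.add_tlv(cid)
    pid = pkt.port_id(subtype=pkt.port_id.SUB_PORT, id=str(port_no))
    lldp.add_tlv(pid)
    lldp.add_tlv(pkt.ttl(ttl=120))
    lldp.add_tlv(pkt.end_tlv())

    eth = pkt.ethernet(type=pkt.ethernet.LLDP_TYPE)
    eth.src = port_hw_addr
    eth.dst = EthAddr('01:80:c2:00:00:0e')   # LLDP multicast
    eth.payload = lldp

    po = of.ofp_packet_out()
    po.actions.append(of.ofp_action_output(port=port_no))
    po.data = eth.pack()
    return po

# For every router connection held by the local controller, one such Packet-Out
# is sent per active port; the pre-installed rule forwards received LLDP frames
# back to the controller as Packet-In, which yields the local link map.
```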

4.1.2. Entire Network Topology Aggregation

Using the coordinator module, each local controller can discover other neighboring domains' controllers and maintain a control channel to them; the connectivity agents can then exchange and aggregate the topology discovery information of the surrounding domains in each controller, accumulating this information about the entire network topology in its aggregated-DB. By using this hierarchically aggregated mechanism over the FD-SDWMN architecture, the heavy topology discovery computation burden on a controller can be reduced and the slow convergence time problem of the distributed nodes of high scale WMNs can be addressed. Figure 5 depicts the control message flow of the aggregated topology discovery mechanism.

[Figure 5 depicts, for Local Domain A (Controller 1) and Local Domain B (Controller 2): 1. Packet_Out with LLDP packet; 2. Packet_In with LLDP packet; 3. Update topology information; each controller then aggregates neighboring topology information via AMQP.]

Figure 5. Message flow for the aggregated topology discovery mechanism.
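A simple sketch of the aggregation phase is shown below: the connectivity agent merges the topology advertisements received from neighboring controllers into the aggregated-DB, from which a network-wide view can be flattened. The data structure and field names are assumptions, not the paper's schema.

```python
# Sketch of phase two (aggregation): merging topology advertisements received
# from neighboring controllers into the local aggregated-DB. Structure and
# field names are illustrative assumptions.
class AggregatedDB(object):
    def __init__(self):
        self.domains = {}   # domain/controller id -> {'switches': [...], 'links': [...]}

    def update_local(self, switches, links):
        self.domains['local'] = {'switches': switches, 'links': links}

    def update_remote(self, c_id, advert):
        """Called by the connectivity agent when a neighbor publishes its topology."""
        self.domains[c_id] = advert

    def global_view(self):
        """Flatten all known domains into one network-wide switch and link set."""
        switches, links = [], []
        for domain in self.domains.values():
            switches.extend(domain.get('switches', []))
            links.extend(domain.get('links', []))
        return switches, links

db = AggregatedDB()
db.update_local(['s1', 's2'], [('s1', 's2')])
db.update_remote('C2', {'switches': ['s3', 's4'], 'links': [('s3', 's4'), ('s2', 's3')]})
print(db.global_view())
```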



5. QoS Monitoring

Quality of service of network forwarding devices is particularly crucial for real-time applications like video streaming. The current SDN topology discovery service does not monitor or collect the QoS of the underlying network elements, in spite of the fact that it is based on the LLDP protocol, which can collect QoS properties [27]. As shown in Figure 6a, in addition to the mandatory TLV (Type/Length/Value, i.e., key-value pair with length information) fields that are used in topology discovery, LLDP has optional Type Length fields that can be customized to discover other features such as QoS properties. In [27] four optional fields were identified to carry bandwidth, delay, jitter and packet loss, with an 8 byte size for each property, as shown in Figure 6b. Therefore, a customized LLDP packet for QoS collection is 38 bytes longer than the original LLDP packet. Since the aggregated mechanism allows applying LLDP-based topology discovery to FD-SDWMN, it can also monitor and collect QoS.

Figure 6. LLDP packet format for quality of service (QoS) monitoring. (a) LLDP frame structure; (b) customized QoS-aware structure.
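The following sketch shows one possible packing of the customized TLV of Figure 6b as an LLDP organizationally specific TLV (type 127) whose 36-byte value holds a 3-byte organization code, a 1-byte subtype and the four 8-byte QoS fields, giving the 38 extra bytes mentioned above; the OUI and subtype values and the field encoding are assumptions.

```python
# Illustrative packing of the customized QoS TLV from Figure 6b. The OUI and
# subtype values and the per-field encoding are assumptions, not the format
# used in [27] or in the implementation.
import struct

QOS_TLV_TYPE = 127            # LLDP "organizationally specific" TLV type
OUI = b'\x4f\x4e\x46'         # placeholder 3-byte organization code (assumption)
QOS_SUBTYPE = 0x01            # hypothetical subtype for "QoS properties"

def pack_qos_tlv(bandwidth_bps, loss_ratio, delay_us, jitter_us):
    # 3-byte OUI + 1-byte subtype + four 8-byte fields = 36-byte TLV value,
    # matching the LENGTH = 36 shown in Figure 6.
    value = OUI + struct.pack('!B', QOS_SUBTYPE)
    value += struct.pack('!Q', int(bandwidth_bps))   # bandwidth
    value += struct.pack('!d', float(loss_ratio))    # packet loss
    value += struct.pack('!Q', int(delay_us))        # delay
    value += struct.pack('!Q', int(jitter_us))       # jitter
    # LLDP TLV header: 7-bit type and 9-bit length packed into two bytes.
    header = struct.pack('!H', (QOS_TLV_TYPE << 9) | len(value))
    return header + value

tlv = pack_qos_tlv(54000000, 0.01, 1500, 200)
print(len(tlv))   # 38 bytes: 2-byte header + 36-byte value
```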

6. The Controller Traffic Overhead

Controller load and performance is a critical factor in SDN scalability. Since the topology discovery service in the SDN controller runs periodically, it is important to calculate the load this service places on the controller. As mentioned above, the FD-SDWMN architecture offers the appropriate environment for applying topology discovery based on LLDP. With the current topology discovery mechanism, the controller load depends on the number of Packet-Out messages that the controller should send in addition to the number of Packet-In messages received. Discovering a local domain topology is no different from the state-of-the-art mechanism. In every discovery cycle, the number of LLDP Packet-In (PKT_IN) messages received by a controller depends on the number of nodes the local domain has. Actually, it is twice the number of active inter-router links within the domain, one packet per link direction. On the other hand, the total number of LLDP Packet-Out (PKT_OUT) messages sent by the controller every cycle is equal to the total number of switches.

With N being the number of switches the domain has and L the number of inter-switch links, similarly to a single-controller single-hop architecture, the number of messages in/out of the controller can be expressed as follows.

For the current mechanism (OFDPv2):

PKT_IN = 2L (1)

PKT_OUT = N (2)

A significant reduction of the LLDP Packet-In and Packet-Out message numbers for the entire network topology discovery can be achieved in every controller of the FD-SDWMN architecture when discovering the entire network topology using the aggregated topology discovery mechanism. The controller load can be calculated in two stages: firstly, the load of discovering links between local domain switches, which depends on the number of domain members; secondly, the load of aggregating the other neighbors' local domain topology information as an event-driven message per domain using the connectivity agents of AMQP, which depends on the performance of the AMQP system.




With N_Local being the number of local domain switches, L_Local the number of inter-local domain links, and M(AMQP) the number of messages exchanged by the connectivity agents, the number of messages in/out of the controller can be expressed as follows:

PKT_IN = 2L_Local + M(AMQP)   (L_Local < L) (3)

PKT_OUT = N_Local + M(AMQP)   (N_Local < N) (4)

Together with the FD-SDWMN architecture, this provides a suitable environment to apply current topology discovery in WMN. Moreover, it reduces the load on the controller. It reduces the number of direct topology discovery messages, i.e., Packet-In and Packet-Out messages, and after AMQP starts, the connectivity agents send the topological information messages on demand. The load calculation of direct discovery messages in the controller then depends only on the number of switches of the domain it controls instead of the entire network's switches, in addition to the load of messages exchanged by the connectivity agents between a particular controller and the peering controllers distributed over the WMN. Since the number of switches and their ports is the key parameter that impacts the controller load in OFDP [22], the aggregated mechanism can be compared with OFDPv2 to see how much it improves the topology discovery load on the controller. We calculate the gained efficiency G, in terms of the reduction of the number of direct Packet-In and Packet-Out messages, for a mesh network with N switches, N_Local local domain switches and f_i interfaces per router, as follows:

G_PKT-IN = (PKT_IN-OFDPv2 - PKT_IN-aggregated) / PKT_IN-OFDPv2 = (2L - 2L_Local) / 2L = 1 - L_Local/L (5)

G_PKT-OUT = (PKT_OUT-OFDPv2 - PKT_OUT-aggregated) / PKT_OUT-OFDPv2 = (N - N_Local) / N = 1 - N_Local/N (6)

Notice that the reduction gained will be higher for WMNs with a large total number of switches according to Equation (6), but the number of nodes within the same local domain sharing the same broadcast must be addressed. We verified this with experiments using Packet-Out, because its calculation only depends on the number of mesh routers and local controllers. On the other hand, notice that although customizing and extending the LLDP packet for QoS monitoring extends the LLDP packet size, the traffic flow is only slightly different from that of the topology discovery service.
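A quick numerical illustration of Equations (5) and (6) follows; the switch counts match the evaluation setup of Section 7.3 (five domains of 25 switches), while the link counts are hypothetical values chosen only to exercise the formulas.

```python
# Quick numerical check of Equations (5) and (6). The switch counts follow the
# evaluation setup (five domains of 25 switches); the link counts are
# hypothetical, chosen only to illustrate the formulas.
def gain(total, local):
    return 1.0 - float(local) / total

N, N_local = 125, 25          # switches: whole network vs. one local domain
L, L_local = 300, 55          # inter-switch links (assumed values)

print("G_PKT-OUT = %.2f" % gain(N, N_local))          # 1 - 25/125 = 0.80
print("G_PKT-IN  = %.2f" % gain(2 * L, 2 * L_local))  # 1 - 55/300 ~= 0.82
```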

7. Evaluation

This section presents how we assessed FD-SDWMN capabilities. The main functions added to this architecture, such as QoS routing and reservation, aim to make it resilient to disruptions in the control plane (controller failure, inter-controller communication failure) or the data plane (inter-switch link failure). Thus, we present a control plane adaptation test and two use cases to evaluate FD-SDWMN features.

7.1. Experimental Setup

For experimental evaluation, we used Mininet-wifi, the Software Defined Wireless Network emulator, which creates a network of virtual SDN switches (routers), stations (hosts) and wireless channels (links). Specifically, we used Open vSwitch [28], a software-based virtual SDN switch with OpenFlow support, to create mesh routers equipped with two non-overlapping interfaces. Mininet-wifi was run in VirtualBox as virtualization software, and Linux was used as the hosting operating system. As mentioned previously, the POX controller was used as our experimental SDN controller platform, and the code implementing the proposed architecture was written in Python. All the software used for the experiment prototype implementation is summarized in Table 1.

Table 1. Software used in experiment.

Software         Purpose                 Version
Ubuntu Linux     Hosting OS              16.04
Mininet-wifi     SDWN Emulator           2.2.1d1
POX              SDN Controller          Dart branch
FlowVisor [29]   Network slicing         1.2.0
Open vSwitch     SDN Virtual Switch      2.0.2
Python           Programming Language    2.7

All experiments were performed on a Dell laptop (Inspiron 15 5000 Series) with an Intel i7-8550U CPU running at 1.8 GHz and 12 GB DDR4 2400 MHz RAM.
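A sketch of a small emulation script in the spirit of this setup is shown below, loosely following Mininet-wifi's published mesh example; module paths follow current Mininet-wifi releases (they may differ in version 2.2.1d1), and the positions, channel and controller address are assumptions.

```python
#!/usr/bin/env python
# Sketch of a small FD-SDWMN-style emulation in Mininet-wifi, loosely following
# the project's published mesh example. Module paths follow current Mininet-wifi
# releases (they differ slightly from the 2.2.1d1 version used in the paper),
# and all positions, the channel and the controller address are assumptions.
from mininet.node import RemoteController
from mn_wifi.net import Mininet_wifi
from mn_wifi.link import mesh
from mn_wifi.cli import CLI

def topology():
    net = Mininet_wifi()

    # Mesh routers of one local domain (stations acting as OVS-backed routers).
    sta1 = net.addStation('sta1', position='10,10,0')
    sta2 = net.addStation('sta2', position='30,10,0')
    sta3 = net.addStation('sta3', position='50,10,0')

    # Local domain controller (e.g., a POX instance) reached out-of-band.
    c1 = net.addController('c1', controller=RemoteController,
                           ip='127.0.0.1', port=6633)

    net.configureWifiNodes()

    # Put every station's wireless interface into the same 802.11s mesh.
    for sta in (sta1, sta2, sta3):
        net.addLink(sta, cls=mesh, ssid='meshNet',
                    intf='%s-wlan0' % sta.name, channel=5)

    net.build()
    c1.start()
    CLI(net)
    net.stop()

if __name__ == '__main__':
    topology()
```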

7.2. Control Plane Adaptation

This scenario shows how FD-SDWMN's controllers exchange control information and how the control plane self-adapts to network state. To mitigate congestion on the control message exchange links between controllers' agents, traffic is offloaded from a weak direct control connection (a dedicated out-of-band control channel) by identifying alternative routes, or by reducing the control message frequency on it if there is no alternative route. For instance, the monitoring agents usually send information every 3 s, and this period changes to 10 s on weak links. Also, the connectivity and reachability agents can send their information via alternative routes; however, they are event-driven agents, contrary to the monitoring agent. Upon control plane bootstrapping and controller discovery, C1 and C2 are connected to C3 as depicted in Figure 7. In this scenario, the control link between C1 and C2 is congested (its latency = 50 ms). Thus, the monitoring agent relays control messages between C1 and C2 via C3 to offload the congested link.


Figure 7. The control plane adaptation: (Left) the congested situation: C1 and C3 use the forwarding link via C2; (Right) link disruption: C1 and C3 use the congested control link while adapting the information exchange frequency.
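The adaptation rule just described (relay via a third controller when possible, otherwise stretch the monitoring period from 3 s to 10 s on the weak link) can be sketched as follows; the latency threshold and function names are assumptions.

```python
# Minimal sketch of the adaptation rule described above: pick an alternative
# relay for monitoring traffic when the direct control link is weak, otherwise
# keep the direct link but stretch the reporting interval from 3 s to 10 s.
# The latency threshold and function names are assumptions.
NORMAL_PERIOD_S = 3
DEGRADED_PERIOD_S = 10
WEAK_LATENCY_MS = 50

def monitoring_plan(direct_latency_ms, relay_available):
    if direct_latency_ms < WEAK_LATENCY_MS:
        return ('direct', NORMAL_PERIOD_S)
    if relay_available:
        return ('relay', NORMAL_PERIOD_S)       # offload via e.g. C3
    return ('direct', DEGRADED_PERIOD_S)        # adapt frequency on the weak link

print(monitoring_plan(50, True))    # ('relay', 3)   -- congested, relay exists
print(monitoring_plan(50, False))   # ('direct', 10) -- congested, no alternative
```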

Figure 8 presents the evaluation conducted to show FD-SDWMN control plane adaptation to network conditions, e.g., the C2↔C3 link failure. The figures show the utilization of the link in both directions right after C3 discovers C1 and C2, when the AMQP system starts exchanging messages between them. The TCP payload of received packets splits into three stages:


• Local controller discovery: takes the first 9 s, where controllers exchange their capabilities to advertise themselves. The AMQP communication system is active during this period because the brokers should establish the control channel and subscribe to available topics. During this stage, the monitoring process has already started but is not yet adapted to the weak link.
• Monitoring adaptation: from the end of the previous stage until t = 33 s. In this period the weak link is discovered by the monitoring agents and they start the adaptation behavior. As observed in Figure 8a,d, the monitoring stopped from t = 10 s because the C1↔C3 link is congested (weak); at the same time the monitoring traffic increased over the C1↔C2 and C2↔C3 links, as in Figure 8b,c,e,f.
• Failure recovery: starts right after the alternative link C2↔C3 is cut at t = 33 s. Thus, the monitoring information is forwarded via the congested link C1↔C3 but at an adapted frequency, as in Figure 8b.


Figure 8. Monitoring information exchange adaptation. Different agent messages' packets; the bootstrap and discovery stage ends at t = 9 s, and the C2↔C3 link is cut off at t = 33 s. (a) C1→C2; (b) C1→C3; (c) C2→C3; (d) C2→C1; (e) C3→C1; (f) C3→C2.

7.3. Aggregated Topology Discovery

We implemented the aggregated topology discovery mechanism after performing the necessary modification of the POX link discovery module (discovery.py), written in Python, to apply the enhanced version of OFDP (i.e., OFDPv2). Extensive tests were performed on the architecture to establish the OFDPv2 functionality; as expected, it worked in the WMN as one of the multi-hop networks. The main goal of our evaluation is to show that the FD-SDWMN architecture allows using the same approach as the current SDN topology discovery: as in Section 4.1, there is no difference between the current mechanism and the first phase of the aggregated mechanism with regard to the topology discovery of the local domains. Our focus is on the entire network topology discovery load produced by the LLDP packets that the controller sends and receives. The advantage of the aggregated mechanism is that every controller gathers the topology information of the other domains from their controllers through the distributed connectivity agents of AMQP in dynamic, event-driven communication.
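For reference, the core idea of the OFDPv2 enhancement applied to discovery.py is to emit a single LLDP Packet-Out per switch whose action list rewrites the source MAC before each output action, so that the sending port can still be identified at the receiving side. The POX-flavoured sketch below only illustrates that idea; it is not the authors' code, and the MAC encoding scheme is an assumption.

import pox.openflow.libopenflow_01 as of
from pox.lib.addresses import EthAddr
from pox.lib.packet.ethernet import ethernet
from pox.lib.packet.lldp import lldp, chassis_id, port_id, ttl, end_tlv

def send_ofdpv2_probe(connection):
    """Send one LLDP Packet-Out that covers every physical port of a switch."""
    frame = lldp()
    frame.add_tlv(chassis_id(subtype=chassis_id.SUB_LOCAL,
                             id=("dpid:%x" % connection.dpid).encode()))
    frame.add_tlv(port_id(subtype=port_id.SUB_PORT, id=b"0"))  # placeholder
    frame.add_tlv(ttl(ttl=120))
    frame.add_tlv(end_tlv())

    eth = ethernet(type=ethernet.LLDP_TYPE,
                   src=EthAddr("02:00:00:00:00:00"),
                   dst=EthAddr("01:80:c2:00:00:0e"))   # LLDP multicast address
    eth.payload = frame

    po = of.ofp_packet_out(data=eth.pack())
    for phy in connection.features.ports:
        p = phy.port_no
        if p > of.OFPP_MAX:                 # skip OpenFlow virtual ports
            continue
        # Encode the egress port number in the source MAC, then emit on that port.
        mac = EthAddr("02:00:00:00:%02x:%02x" % ((p >> 8) & 0xff, p & 0xff))
        po.actions.append(of.ofp_action_dl_addr.set_src(mac))
        po.actions.append(of.ofp_action_output(port=p))
    connection.send(po)

Compared with the standard per-port probing, this keeps the number of Packet-Out messages per discovery cycle at one per switch instead of one per port.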


We created a wireless mesh network topology of 105 switches and five controllers, each of them controlling a local domain of 25 switches (NLocal = 25). We observed and collected the statistics of each controller's outflow (Packet-Out) per topology discovery cycle. The experiment was replicated 10 times with similar outcomes, as expected. If we take a particular controller, we find it sends five Packet-Out messages to discover its local domain switches. Then the controller saves this information in its aggregated-DB and sends requests via the connectivity agent to collect the neighboring domains' topology information, and it also replies to any other controllers' requests. So, we could calculate the controller load if we estimated the number of messages produced by AMQP, but this is out of the scope of this paper. In contrast, if we suppose one controller manages the entire network (N = 125), we can calculate the number of Packet-Out messages = 125 according to Equation (2). Also, we can calculate the efficiency gain G of the aggregated mechanism over OFDPv2 as in Equation (6). As can be seen, the experimental results correspond to Equations (2) and (4) and the parameters of the topology. The reduction of Packet-Out messages results in a direct enhancement of the controller CPU performance. The aggregated mechanism, which reduces the number of both Packet-In and Packet-Out messages rather than only the Packet-In reduction achieved by OFDPv2, will therefore positively impact the controller CPU load. Such a reduction of the CPU load makes the FD-SDWMN architecture a significant improvement compared to the state-of-the-art. There are additional benefits of the aggregated mechanism that we mention here without evaluation, such as the traffic reduction on the controller-router channel achieved by reducing the Packet-In and Packet-Out message flow, particularly in SDN architectures that adopt an in-band controller.
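The numbers above can be cross-checked with a back-of-the-envelope calculation. The snippet below assumes, consistently with OFDPv2, one Packet-Out per switch per discovery cycle; Equations (2) and (6) are defined earlier in the paper, so the gain computed here is only one illustrative reading of them, not their exact form.

def packet_out_per_cycle(num_switches):
    """OFDPv2-style count: one LLDP Packet-Out per switch per discovery cycle."""
    return num_switches

N_TOTAL = 125   # switches under a single controller managing the whole network
N_LOCAL = 25    # switches per local domain in the FD-SDWMN experiment

single_controller = packet_out_per_cycle(N_TOTAL)    # 125 Packet-Out messages
per_domain = packet_out_per_cycle(N_LOCAL)           # 25 per local controller

# Illustrative gain: how many times fewer Packet-Out messages each distributed
# controller sends compared with one controller covering the entire network.
gain = single_controller / per_domain                 # 5.0
print(single_controller, per_domain, gain)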

7.4. QoS Monitoring Scenario

Here, we have tested the capability of QoS monitoring over FD-SDWMN. Figure 9 depicts a simple scenario of a video streaming application over a two-local-domain architecture, where A.H1 is the server that streams a video (size = 400 MB, length = 10 min) to the client B.H2. Initially, the QoS properties of the switches' ports were set as follows: 5 Mbit for bandwidth and 2500 µs for the delay. During the video streaming, the QoS properties (bandwidth, delay, jitter and packet loss) were changing due to the consumption of resources. The properties gathered by the monitoring agent at controller A prove that the aggregated mechanism can also effectively monitor QoS changes over LLDP. Figure 10 shows fluctuation curves (QoS over LLDP frontend GUI snapshots) of the QoS property changes. Figure 10a–d present the real-time statistics of the QoS properties bandwidth, delay, jitter and packet loss, respectively, of interface B.S4-eth2 as an instance (i.e., the second Ethernet interface eth2 of router S4 in local domain B), monitored from controller B; controller A then aggregates this information in its aggregated-DB, and the monitoring process continues for all switch interfaces of the domain. A duration of 15 s was used as the default LLDP interval.
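The initial per-link QoS values quoted above (5 Mbit/s bandwidth, 2500 µs delay) can be reproduced in an emulated topology with traffic-controlled links. The paper's experiments use the Mininet-WiFi emulator; the plain-Mininet sketch below, with hypothetical node names and a local controller address, only shows how such link parameters are typically set via TCLink (which Mininet-WiFi also supports).

from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.link import TCLink

# Hypothetical two-switch slice of the Figure 9 scenario (names are illustrative).
net = Mininet(controller=RemoteController, link=TCLink)
net.addController("c0", ip="127.0.0.1", port=6633)
h1 = net.addHost("h1")        # A.H1, the video streaming server
h2 = net.addHost("h2")        # B.H2, the client
s1 = net.addSwitch("s1")
s4 = net.addSwitch("s4")

# Initial QoS settings used in the scenario: 5 Mbit/s bandwidth, 2500 us delay.
net.addLink(h1, s1, bw=5, delay="2500us")
net.addLink(s1, s4, bw=5, delay="2500us")
net.addLink(s4, h2, bw=5, delay="2500us")

net.start()
net.pingAll()
net.stop()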


[Figure 9 diagram: two local domains, A and B (switches S1–S4), each managed by its own controller; controllers A and B communicate through a RabbitMQ broker; the streaming server H1 resides in domain A, and interface eth2 of S4 in domain B.]

Figure 9. QoS monitoring during video streaming scenario.


Figure 10. QoS monitoring over LLDP frontend GUI at controller A. (a) B.S4-eth2: Bandwidth; (b) B.S4-eth2: Delay; (c) B.S4-eth2: Jitter; (d) B.S4-eth2: Packet loss.

Despite QoS monitoring over LLDP adding extra bytes to customize the optional TLVs of LLDP packets for collecting QoS properties, it makes no significant difference in network traffic compared to that caused by the original packets, such as those used for topology discovery. To evaluate this increase in network traffic, we compared pure LLDP packets (used for topology discovery) with customized LLDP packets (used for QoS monitoring). Both scenarios ran separately during video streaming in the topology shown in Figure 9. Network traffic was captured using Wireshark to measure the total network traffic and the LLDP packets only (packet filter set to "LLDP") for 15 min. The evaluation results showed only slightly different network traffic caused by QoS monitoring: the LLDP packets for QoS monitoring make up about 0.80% of the total packet flow, compared to 0.77% for topology discovery. This proves that QoS monitoring over LLDP does not cause network traffic performance deterioration; Table 2 shows the evaluation results.

Table 2. QoS monitoring vs. topology discovery over LLDP.

LLDP Packets for        Network Traffic
                        Total Packets    LLDP Packets    Total Bytes        LLDP Bytes
QoS monitoring          459,315          3720 (0.80%)    2,303,289,200      479,700 (0.021%)
Topology discovery      459,250          3551 (0.77%)    2,319,671,750      401,500 (0.017%)
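To illustrate where the extra bytes counted in Table 2 come from, the sketch below packs one organizationally specific LLDP TLV (type 127) carrying QoS values. The OUI, subtype, and field layout are placeholders invented for this example; the paper does not specify its exact TLV encoding here.

import struct

def lldp_tlv(tlv_type, value):
    """Pack one LLDP TLV: 7-bit type and 9-bit length header, then the value."""
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

def qos_tlv(bandwidth_kbps, delay_us, jitter_us, loss_permille):
    oui = b"\xaa\xbb\xcc"   # placeholder organizationally unique identifier
    subtype = 1             # placeholder subtype meaning "QoS report"
    payload = oui + struct.pack("!BIIHH", subtype, bandwidth_kbps,
                                delay_us, jitter_us, loss_permille)
    return lldp_tlv(127, payload)   # 127 = organizationally specific TLV

# Example: 5 Mbit/s bandwidth, 2500 us delay, 300 us jitter, 0.4% packet loss.
tlv = qos_tlv(5000, 2500, 300, 4)
print(len(tlv), tlv.hex())   # the handful of extra bytes added per LLDP frame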


8. Conclusions

In this paper, we have proposed FD-SDWMN, a Flat Distributed Software Defined Wireless Mesh Network multi-domain architecture. Its organization relies on peering domains, each managed by a local controller. The controllers establish a lightweight, manageable control channel between the domains, which is used by agents developed to share and aggregate network-wide information in order to enhance end-to-end network services. Distributing the controllers around the considerable number of mesh devices that cover large geographical areas supports WMN scalability, since SDN controllers need to be close to the network edges to collect and monitor the network status. We demonstrated how FD-SDWMN works and how it resiliently responds and survives when network disruption occurs. FD-SDWMN is a suitable architecture for multi-hop networks such as WMNs, letting them benefit from SDN controller services that are otherwise only applicable to single-hop networks, e.g., topology discovery and QoS monitoring. We have implemented the FD-SDWMN architecture on top of the POX OpenFlow controller and the AMQP protocol. The architecture functionalities were evaluated through a control plane adaptation test and two use cases: the aggregated topology discovery and QoS monitoring mechanisms. As future work, we plan to enrich the FD-SDWMN architecture with additional self-healing and resilient recovery mechanisms to make it more reliable for supporting QoS routing for video streaming over WMNs.

Author Contributions: Conceptualization, H.E.; Data curation, H.E.; Formal analysis, H.E.; Methodology, H.E.; Resources, H.E.; Software, H.E.; Supervision, Y.W.; Validation, H.E.; Visualization, H.E.; Writing—original draft, H.E.; Writing—review & editing, H.E.

Funding: This work was funded by the International Exchange Program of Harbin Engineering University for Innovation-oriented Talents Cultivation.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Pakzad, F. Towards Software Defined Wireless Mesh Networks. Ph.D. Thesis, The University of Queensland, Brisbane, Australia, 2017.
2. Akyildiz, I.F.; Wang, X.; Wang, W. Wireless mesh networks: A survey. Comput. Netw. 2005, 47, 445–487. [CrossRef]
3. Bertsekas, D.P. Network Optimization: Continuous and Discrete Models; Athena Scientific: Belmont, MA, USA, 1998.
4. Yap, K.-K.; Sherwood, R.; Kobayashi, M.; Huang, T.-Y.; Chan, M.; Handigol, N.; McKeown, N.; Parulkar, G. Blueprint for introducing innovation into wireless mobile networks. In Proceedings of the Second ACM SIGCOMM Workshop on Virtualized Infrastructure Systems and Architectures, New Delhi, India, 3 September 2010; pp. 25–32.
5. Jagadeesan, N.A.; Krishnamachari, B. Software-defined networking paradigms in wireless networks: A survey. ACM Comput. Surv. 2014, 47, 27. [CrossRef]
6. Costanzo, S.; Galluccio, L.; Morabito, G.; Palazzo, S. Software Defined Wireless Networks: Unbridling SDNs. In Proceedings of the 2012 European Workshop on Software Defined Networking (EWSDN), Darmstadt, Germany, 25–26 October 2012; pp. 1–6.
7. Niephaus, C.; Ghinea, G.; Aliu, O.G.; Hadzic, S.; Kretschmer, M. SDN in the wireless context-towards full programmability of wireless network elements. In Proceedings of the 2015 1st IEEE Conference on Network Softwarization (NetSoft), London, UK, 13–17 April 2015; pp. 1–6.
8. Chaudet, C.; Haddad, Y. Wireless software defined networks: Challenges and opportunities. In Proceedings of the 2013 IEEE International Conference on Microwaves, Communications, Antennas and Electronics Systems (COMCAS 2013), Tel Aviv, Israel, 21–23 October 2013; pp. 1–5.
9. Abujoda, A.; Dietrich, D.; Papadimitriou, P.; Sathiaseelan, A. Software-defined wireless mesh networks for internet access sharing. Comput. Netw. 2015, 93, 359–372. [CrossRef]
10. Bannour, F.; Souihi, S.; Mellouk, A. Distributed SDN control: Survey, taxonomy, and challenges. IEEE Commun. Surv. Tutor. 2017, 20, 333–354. [CrossRef]
11. Karakus, M.; Durresi, A. A survey: Control plane scalability issues and approaches in Software-Defined Networking (SDN). Comput. Netw. 2017, 112, 279–293. [CrossRef]

12. Bari, M.F.; Roy, A.R.; Chowdhury, S.R.; Zhang, Q.; Zhani, M.F.; Ahmed, R.; Boutaba, R. Dynamic controller provisioning in software defined networks. In Proceedings of the 9th International Conference on Network and Service Management (CNSM 2013), Zurich, Switzerland, 14–18 October 2013.
13. Phemius, K.; Bouet, M.; Leguay, J. Disco: Distributed multi-domain sdn controllers. In Proceedings of the 2014 IEEE Network Operations and Management Symposium (NOMS), Krakow, Poland, 5–9 May 2014.
14. Berde, P.; Gerola, M.; Hart, J.; Higuchi, Y.; Kobayashi, M.; Koide, T.; Lantz, B.; O'Connor, B.; Radoslavov, P.; Snow, W.; et al. ONOS: Towards an open, distributed SDN OS. In Proceedings of the third workshop on Hot topics in software defined networking, Chicago, IL, USA, 22 August 2014.
15. SDN Architecture Overview, Version 1.1; Document TR-504; Open Networking Foundation: Palo Alto, CA, USA, 2014.
16. Ochoa Aday, L.; Cervelló Pastor, C.; Fernández Fernández, A. Discovering the network topology: An efficient approach for SDN. Adv. Distrib. Comput. Artif. Intell. J. 2016, 5, 101–108. [CrossRef]
17. Aslan, M.; Matrawy, A. On the Impact of Network State Collection on the Performance of SDN Applications. IEEE Commun. Lett. 2016, 20, 5–8. [CrossRef]
18. Kaur, S.; Singh, J.; Ghumman, N.S. Network programmability using POX controller. In Proceedings of the ICCCS International Conference on Communication, Computing & Systems, Punjab, India, 8–9 August 2014.
19. AMQP. Available online: http://www.amqp.org (accessed on 23 July 2019).
20. Fontes, R.R.; Afzal, S.; Brito, S.H.B.; Santos, M.A.S.; Rothenberg, C.E. Mininet-WiFi: Emulating software-defined wireless networks. In Proceedings of the 2015 11th International Conference on Network and Service Management (CNSM), Barcelona, Spain, 9–13 November 2015.
21. Hisham, E.; Yang, W. Decentralizing Software-Defined Wireless (D-SDWMN) Control Plane. In Proceedings of the World Congress on Engineering (WCE 2018), London, UK, 4–6 July 2018; Volume 1.
22. Pakzad, F.; Portmann, M.; Tan, W.L.; Indulska, J. Efficient topology discovery in software defined networks. In Proceedings of the 2014 8th International Conference on Signal Processing and Communication Systems (ICSPCS), Gold Coast, Australia, 15–17 December 2014; pp. 1–8.
23. GENI Wiki. Available online: http://groups.geni.net/geni/wiki/OpenFlowDiscoveryProtocol (accessed on 23 July 2019).
24. Attar, V.Z.; Chandwadkar, P. Network discovery protocol lldp and lldp-med. Int. J. Comput. Appl. 2010, 1, 93–97. [CrossRef]
25. Ochoa Aday, L.; Cervelló Pastor, C.; Fernández Fernández, A. Current Trends of Topology Discovery in OpenFlow-Based Software Defined Networks. Available online: https://upcommons.upc.edu/bitstream/handle/2117/77672/Current%20Trends%20of%20Discovery%20Topology%20in%20SDN.pdf (accessed on 23 July 2019).
26. Chen, L.; Abdellatif, S.; Berthou, P.; Nougnanke, K.B.; Gayraud, T. A Generic and Configurable Topology Discovery Service for Software Defined Wireless Multi-Hop Network. In Proceedings of the 15th ACM International Symposium on Mobility Management and Wireless Access, Miami, FL, USA, 21–25 November 2017; pp. 101–104.
27. Chen, X.; Wu, J.; Wu, T. The Top-K QoS-aware Paths Discovery for Source Routing in SDN. KSII Trans. Internet Inf. Syst. 2018, 12, 2534–2553.
28. Open vSwitch. Available online: http://openvswitch.org (accessed on 23 July 2019).
29. Flowvisor Wiki. Available online: https://github.com/OPENNETWORKINGLAB/flowvisor/wiki (accessed on 23 July 2019).

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).